
Optimal Regulation Processes

L. S. PONTRYAGIN

The maximum principle that had such a dramatic effect on


the development of the theory of control was introduced to the
mathematical and engineering communities through this paper,
and a series of other papers [3], [8], [2] and the book [15]. The
paper selected for this volume was the first to appear (in 1961)
in an English translation. The maximum principle, sometimes
referred to as the Pontryagin maximum principle because of
Pontryagin's role as leader of the research group at the Steklov
Institute, rose to prominence largely through the book. The
first proof of the maximum principle is attributed by some to
Boltyanskii [2]. This paper by Pontryagin commences with these
words: "In this paper will be found an account of results obtained by my students V. G. Boltyanskii, R. V. Gamkrelidze and
myself."
Surprisingly, the initial impact on mathematicians of the new
theory was small. The unenthusiastic reception at the 1958
Congress of Mathematicians to the announcement of the maximum principle by the Soviet group is described by Markus in
[12]. Many believed that the new theory was, through its introduction of inequality constraints, a minor addition to the calculus
of variations. Sussmann and Willems [17], on the other hand,
emphasize the conceptual advance made with the discovery of
the maximum principle; this advance was impeded in the calculus of variations literature where the differential equation for
the 'dynamic system' has the very special form ẋ = u, which
eliminates the adjoint variable in the Euler-Lagrange equation
and obscures the fact that the Hamiltonian is maximized.
In marked contrast, the impact on the control engineering
community was dramatic. The emergence, almost simultaneously, of the maximum principle, dynamic programming, and
Kalman filtering created no less than a revolutionary change in
the way control problems were formulated and solved. A graduate student at the time was acquainted with the classical frequency response approach to control, with selection of controller
parameters to minimize a quadratic performance index subject,
possibly, to a quadratic constraint on control energy [14] and
with Wiener filtering theory (sometimes used, via reformulation, to solve linear, quadratic optimal control problems). Time-optimal control problems had already been solved by Bushaw

[5] who obtained, inter alia, the switching curves for second-order oscillatory systems. But generally the field was static,
constrained perhaps by inadequate tools, an illustrative example being the introduction of time-varying transforms to analyze
time-varying linear systems. In this atmosphere the impact of the
maximum principle, dynamic programming, and Kalman filtering enabling a powerful time-domain perspective of nonlinear
and time-varying systems, was overwhelming. A whole set of
new tools, and new problems, was suddenly available, injecting new life into graduate schools. Conferences and workshops
multiplied to comprehend, use, and extend these new results.
Every researcher who lived in this period was invigorated, and
every research student had a stimulating and open field in which
to work.
The maximum principle excited considerable attention. This,
after all, was the aerospace era in which open-loop problems
suddenly became meaningful; Goddard's problem of maximizing altitude given a fixed amount of fuel, for example, posed in
1919, was solved in 1951 by Tsien and Evans [18] using the
calculus of variations. Like all revolutions, there were excesses.
Linear, quadratic optimal control problems were solved in textbooks and papers via the maximum principle instead of using the
sufficiency conditions provided by Hamilton-Jacobi (dynamic
programming) theory. The importance of robustness, and other
lessons from the past, were forgotten. But optimal control prospered. In aerospace, algorithms for determining optimal flight
paths were developed, linear quadratic optimal control became
a powerful design tool for a wide range of problems, and H∞
optimal control was developed after a concern for robustness
re-emerged. Model predictive control, a widely applied form of
control in the process industries where constraints are significant, makes direct use of open-loop optimal control. Many books
were written for the engineering community; that by Bryson and
Ho [4] captured well the excitement and breadth of the new theories. The initial slow impact on the mathematical community
disappeared with the entry into the field of Western mathematicians who wrote many influential papers and authoritative texts
such as [1], [7], [10], [11], [20] that helped cement the field. The
paper by Halkin [9] was particularly widely read. There was also


a voluminous Russian literature the flavor of which is conveyed


in the recent book by Milyutin and Osmolovskii [13].
The approach adopted by Pontryagin and his collaborators,
and developed in the literature quoted above, may be roughly
summarized as follows. Consider a fixed-time optimal control problem where the system is ẋ = f(x, u), x := (y, z), with given initial state x(0) = x₀ = (0, z₀). The control objective is to minimize y(1) subject to z(1) = 0 and u(t) ∈ U for all t ∈ [0, 1]. It is the terminal equality constraint that makes determination of necessary conditions of optimality difficult. The set of reachable states at t = 1 is W. Suppose (u⁰(·), x⁰(·)) is an optimal control-state pair. The set of 'better' terminal states is S = {x : y < y⁰(1), z = 0}. Optimality implies W and S are disjoint. A local approximation W̃ to W at x⁰(1) is obtained using an approximation to the differential equation (that is, linear in x and nonlinear in u) and a set of variations of the optimal control; the set W̃ is convex. It is then shown (and this is the hard part) that disjointness of W and S implies linear separation of W̃ and S. The latter is easily shown to imply the maximum principle. This powerful, and intuitively appealing, approach is still being extended [16].
It is a sign of the continuing vitality of the field that, in addition, a very different approach has been recently developed. The
starting point is that the maximum principle is easily derived for
an optimal control with no terminal equality constraints. Clarke
[6] uses an exact penalty function (which is non-smooth) to remove the troublesome constraints, formulates a perturbed problem (which is smooth), and obtains necessary conditions for this
problem. Necessary conditions are then obtained for the original
problem by passage to the limit. An exposition of the many ramifications of this approach appears in the forthcoming book [19].
With interest in optimal control, from both theoretical and
applied researchers, as high as ever, the substantial impact that
the introduction of the maximum principle has had and continues
to have can be in no doubt.

REFERENCES

[1] L.D. BERKOVITZ, Optimal Control Theory, Springer-Verlag (New York), 1974.
[2] V.G. BOLTYANSKII, "The maximum principle in the theory of optimal processes," Dokl. Akad. Nauk SSSR, 119:1070-1073, 1958.
[3] V.G. BOLTYANSKII, R.V. GAMKRELIDZE AND L.S. PONTRYAGIN, "On the theory of optimal processes," Dokl. Akad. Nauk SSSR, 110:7-10, 1956.
[4] A.E. BRYSON AND Y.-C. HO, Applied Optimal Control: Optimization, Estimation, and Control, Hemisphere Publishing Corp. (Washington), 1969.
[5] D.W. BUSHAW, "Optimal discontinuous forcing terms," in Contributions to the Theory of Nonlinear Oscillations, 4:29-52, Princeton Univ. Press, 1958.
[6] F.H. CLARKE, "The maximum principle under minimal hypotheses," SIAM J. Contr. Optimiz., 14:1078-1091, 1976.
[7] W.H. FLEMING AND R.W. RISHEL, Deterministic and Stochastic Optimal Control, Springer-Verlag (New York, Heidelberg, Berlin), 1975.
[8] R.V. GAMKRELIDZE, "On the theory of optimal processes in linear systems," Dokl. Akad. Nauk SSSR, 116:9-11, 1957.
[9] H. HALKIN, "On the necessary condition for the optimal control of nonlinear systems," J. Analyse Math., 12:1-82, 1964.
[10] H. HERMES AND J.P. LASALLE, Functional Analysis and Time Optimal Control, Academic Press (New York), 1969.
[11] E.B. LEE AND L. MARKUS, Foundations of Optimal Control Theory, Wiley (New York), 1967.
[12] L. MARKUS, in Differential Equations, Dynamical Systems, and Control Science: A Festschrift in Honor of Lawrence Markus, K.D. Elworthy, W.N. Everitt, and E.B. Lee, editors, Marcel Dekker, Inc. (New York), 1994.
[13] A.A. MILYUTIN AND N.P. OSMOLOVSKII, Calculus of Variations and Optimal Control, American Mathematical Society (Providence, RI), 1998. (Translations of Mathematical Monographs.)
[14] G. NEWTON, L.A. GOULD AND J.F. KAISER, Analytical Design of Linear Feedback Controls, Wiley (New York), 1957.
[15] L.S. PONTRYAGIN, V.G. BOLTYANSKII, R.V. GAMKRELIDZE AND E.F. MISHCHENKO, The Mathematical Theory of Optimal Processes, Interscience, Wiley (New York), 1962.
[16] H.J. SUSSMANN, "Geometry and optimal control," in Mathematical Control Theory, J. Baillieul and J.C. Willems, editors, pp. 140-194, Springer-Verlag (New York), 1999.
[17] H.J. SUSSMANN AND J.C. WILLEMS, "300 years of optimal control: from the brachystochrone to the maximum principle," IEEE Control Systems Magazine, 17(3):32-44, 1997.
[18] H.S. TSIEN AND R.C. EVANS, "Optimum thrust programming for a sounding rocket," J. American Rocket Soc., 21(5):99-107, 1951.
[19] R.B. VINTER, Optimal Control, Birkhäuser (Boston), 2000.
[20] J. WARGA, Optimal Control of Differential Equations and Functional Equations, Academic Press (New York), 1972.

D.Q.M.


OPTIMAL REGULATION PROCESSES

L. S. PONTRYAGIN

In this paper will be found an account of results obtained by my students V. G. Boltyanskii and R. V. Gamkrelidze and myself (see [1], [2], [3]).

1. Statement of the problem. Let Ω be some topological space. We shall say that a control process is given, if one has a system of ordinary differential equations

$$\frac{dx^i}{dt} = f^i(x^1, \dots, x^n; u) = f^i(x; u) \qquad (i = 1, \dots, n) \tag{1}$$

or, in vector form,

$$\frac{dx}{dt} = f(x; u), \tag{2}$$

where x¹, ..., xⁿ are real functions of the time t, x = (x¹, ..., xⁿ) is a vector of the n-dimensional vector space R, u ∈ Ω, and the fⁱ(x; u) (i = 1, ..., n) are functions given and continuous for all values of the pairs (x, u) ∈ R × Ω. We assume further that the partial derivatives

$$\frac{\partial f^i}{\partial x^j} \qquad (i, j = 1, \dots, n)$$

are also defined and continuous in the entire space R × Ω.

In order to find a solution of equation (2), defined on the interval t₀ ≤ t ≤ t₁, it suffices to exhibit a control function u(t) on the segment t₀ ≤ t ≤ t₁ and the initial value x₀ of the solution for t = t₀. In accordance with this we shall say that we are given a control

$$U = (u(t), t_0, t_1, x_0) \tag{3}$$

of equation (2), if we are given a function u(t), the segment of its definition t₀ ≤ t ≤ t₁, and the initial value x₀ of the solution x(t). In what follows we shall consider piecewise-continuous control functions u(t), admitting discontinuities of the first kind, and continuous solutions of equation (2). Here we shall suppose that the controls u(t) are continuous at the initial point t₀ and semicontinuous from the left, i.e., that the condition u(t − 0) = u(t), t > t₀, is satisfied. We shall say that the control (3) carries the point x₀ into the point x₁, if the corresponding solution x(t) of equation (2), satisfying the initial condition x(t₀) = x₀, satisfies as well the end condition x(t₁) = x₁.

In cases interesting in applications, Ω is a closed region of a finite-dimensional linear space.

Reprinted with permission from American Mathematical Society Translations, L. S. Pontryagin, "Optimal Regulation Processes," Series 2, Vol. 18, 1961, pp. 321-339.

Now suppose that f⁰(x¹, ..., xⁿ; u) = f⁰(x; u) is a function defined and continuous, along with its partial derivatives ∂f⁰/∂xⁱ (i = 1, ..., n), on the whole space R × Ω. To each control (3) there corresponds then the number

$$L(U) = \int_{t_0}^{t_1} f^0(x(t), u(t))\, dt.$$

Thus, L is a functional of the control (3). The control U will be said to be optimal, if, for any control Ũ which carries the point x₀ into the point x₁, the inequality L(U) ≤ L(Ũ) holds.

Remark 1. If (3) is an optimal control of the equation (2), x(t) the solution of equation (2) corresponding to it, and t₂ < t₃ are two points of the interval t₀ ≤ t ≤ t₁, then U' = (u(t), t₂, t₃, x(t₂)) is also an optimal control.

Remark 2. If (3) is an optimal control of the equation (2), carrying the point x₀ into the point x₁, and τ is any number, then

$$U' = (u(t - \tau),\; t_0 + \tau,\; t_1 + \tau,\; x_0)$$

is also an optimal control carrying the point x₀ into the point x₁.

A particularly important case is that of a function f⁰(x; u) which satisfies the equation

$$f^0(x, u) \equiv 1. \tag{4}$$

In this case we have L(U) = t₁ − t₀, and the optimality of the control U means that the time of transition from the position x₀ to the position x₁ is minimal.

A case which is important in the applications is that in which Ω is a closed region of some r-dimensional Euclidean space E; then u = (u¹, ..., uʳ), and the one controlling parameter u resolves itself into a system of numerical parameters u¹, ..., uʳ.

In the case that Ω is an open set of the space E, the variational problem formulated here turns out to be a particular case of the problem of Lagrange ([4], p. 225), and the fundamental result presented below (the maximum principle) coincides with the known Weierstrass criterion. However it is necessary in the applications to consider the case in which the controlling parameters satisfy inequalities, including the possibility of equality, for example: |uⁱ| ≤ 1 (i = 1, ..., r). In this case the Weierstrass criterion obviously does not hold, and the result presented below is new.
2. Necessary conditions for optimality (maximum principle).

In order to formulate a necessary condition for optimality we introduce the vector

$$\tilde x = (x^0, x^1, \dots, x^n)$$

of the (n+1)-dimensional Euclidean space S into the discussion, and we consider the control process

$$\frac{d\tilde x^i}{dt} = \tilde f^i(\tilde x, u) = f^i(x, u) = f^i(x^1, \dots, x^n; u) \qquad (i = 0, 1, \dots, n), \tag{5}$$

or, in vector form,

$$\frac{d\tilde x}{dt} = \tilde f(\tilde x, u), \tag{6}$$

where f⁰(x, u) is the function which defines the functional L. In order, knowing the control (3) of equation (2), to obtain the control of equation (6), it suffices, beginning with the initial value x₀ = (x₀¹, ..., x₀ⁿ), to set down the initial value x̃₀ of equation (6). We define the vector x̃₀, writing

$$\tilde x_0 = (0, x_0^1, \dots, x_0^n).$$

In this way the control (3) of equation (2) uniquely defines a control of equation (6), and we will say for simplicity that (3) is a control of equation (6). If now the control (3) carries the initial value x̃₀ into the terminal value

$$\tilde x_1 = (x_1^0, x_1^1, \dots, x_1^n),$$

then we have

$$L(U) = x_1^0,$$

and thus is determined a connection between equation (6) and the variational problem formulated above.

Along with the contravariant vector x̃ of the space S we consider an auxiliary covariant vector ψ̃ of that space, and we set up the function

$$K(\tilde\psi, \tilde x, u) = (\tilde\psi, \tilde f(\tilde x, u))$$

(the right side is the scalar product of the vectors ψ̃ and f̃).

For fixed values of the quantities ψ̃ and x̃, the function K is a function of the parameter u; the upper bound of the values of this function will be denoted by N(ψ̃, x̃). We set up, further, the Hamiltonian system of equations

$$\frac{d\tilde x^i}{dt} = \frac{\partial K}{\partial \psi_i} \qquad (i = 0, \dots, n), \tag{7}$$

$$\frac{d\psi_i}{dt} = -\frac{\partial K}{\partial \tilde x^i} \qquad (i = 0, \dots, n). \tag{8}$$

It is immediately evident that the system (7) coincides with (5), while the system (8) is:

$$\frac{d\psi_0}{dt} = 0, \qquad \frac{d\psi_j}{dt} = -\sum_{i=0}^{n} \psi_i\, \frac{\partial f^i(x, u)}{\partial x^j} \qquad (j = 1, \dots, n). \tag{9}$$
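The passage from K to the adjoint system (9) is mechanical differentiation and can be checked numerically. The sketch below uses a hypothetical two-dimensional system with f¹ = x², f² = u − x¹ (not taken from the paper) and approximates −∂K/∂xʲ by central differences; for this K the analytic right-hand sides of (9) are dψ₁/dt = ψ₂ and dψ₂/dt = −ψ₁.

```python
# Hypothetical 2-dimensional example: f(x, u) = (x2, u - x1).
def f(x, u):
    return [x[1], u - x[0]]

def K(psi, x, u):
    # K(psi, x, u) = (psi, f(x, u)), the scalar product used in (7)-(9)
    fx = f(x, u)
    return psi[0] * fx[0] + psi[1] * fx[1]

def adjoint_rhs(psi, x, u, h=1e-6):
    # dpsi_j/dt = -dK/dx_j, approximated by central differences
    out = []
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        out.append(-(K(psi, xp, u) - K(psi, xm, u)) / (2 * h))
    return out

psi, x, u = [0.3, -1.2], [0.5, 2.0], 0.7
print(adjoint_rhs(psi, x, u))   # analytically [psi2, -psi1] = [-1.2, -0.3]
```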

Theorem 1. Suppose that (3) is an optimal control of equation (2) and x(t) is the solution of equation (2) corresponding to it. We complete the vector x(t) to a vector x̃(t), writing

$$x^0(t) = \int_{t_0}^{t} f^0(x(\tau), u(\tau))\, d\tau.$$

There exists then a non-zero continuous vector-function ψ̃(t), such that

$$K(\tilde\psi(t_0), \tilde x(t_0), u(t_0)) = 0, \qquad \psi_0(t_0) \le 0, \tag{10}$$

and the functions ψ̃(t), x̃(t), u(t) constitute a solution of the Hamiltonian system (7), (8), while

$$K(\tilde\psi(t), \tilde x(t), u(t)) = N(\tilde\psi(t), \tilde x(t)); \tag{11}$$

furthermore it turns out that the function K(ψ̃(t), x̃(t), u(t)) is constant, so that

$$K(\tilde\psi(t), \tilde x(t), u(t)) \equiv 0. \tag{12}$$

In order to formulate the necessary condition in the case when one is dealing with the problem of minimum time, we set up the Hamiltonian function

$$H(\psi, x, u) = (\psi, f(x, u)).$$

For fixed values of ψ and x the function H(ψ, x, u) is a function of the parameter u. The upper bound of the values of this function will be denoted by M(ψ, x). We set up, further, the Hamiltonian system

$$\frac{dx^j}{dt} = \frac{\partial H}{\partial \psi_j} \qquad (j = 1, \dots, n); \tag{13}$$

$$\frac{d\psi_j}{dt} = -\frac{\partial H}{\partial x^j} \qquad (j = 1, \dots, n). \tag{14}$$

Obviously the system (13) coincides with the system (1), and the system (14) is:

$$\frac{d\psi_j}{dt} = -\sum_{k=1}^{n} \psi_k\, \frac{\partial f^k(x, u)}{\partial x^j} \qquad (j = 1, \dots, n). \tag{15}$$

Theorem 2. Suppose that (3) is a control of the equation (2) which is optimal for the functional (4), and that x(t) is the solution of equation (2) corresponding to this control. Then there exists a non-zero continuous vector function ψ(t) = (ψ₁(t), ..., ψₙ(t)) such that the functions ψ(t), x(t), u(t) satisfy the Hamiltonian system of equations (13), (14), while

$$H(\psi(t), x(t), u(t)) = M(\psi(t), x(t)). \tag{16}$$

It turns out, moreover, that the function H(ψ(t), x(t), u(t)) is constant, so that

$$H(\psi(t), x(t), u(t)) \ge 0. \tag{17}$$

Theorem 2 follows immediately from Theorem 1.

The substance of Theorems 1 and 2 is the equalities (11) and (16). Therefore Theorem 2, which was first published as a hypothesis in the note [1], is called a maximum principle. In the same sense, it is natural to confer upon Theorem 1 the designation of maximum principle.
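As a concrete illustration of Theorem 2 (a hypothetical example, not from the paper), consider the double integrator dx¹/dt = x², dx²/dt = u with |u| ≤ 1. Here H(ψ, x, u) = ψ₁x² + ψ₂u, so the maximum condition (16) gives u(t) = sign ψ₂(t), while system (14) gives dψ₁/dt = 0, dψ₂/dt = −ψ₁; hence ψ₂ is linear in t and the extremal control switches at most once. For x₀ = (1, 0) the choice ψ₁ = −1, ψ₂(t) = t − 1 produces a switch at t = 1 and transfer to the origin at t = 2, with H ≡ 1 ≥ 0 as (17) requires. A minimal simulation of this extremal:

```python
# Minimum-time extremal for the double integrator dx1/dt = x2, dx2/dt = u,
# |u| <= 1 (hypothetical illustration of Theorem 2, not from the paper).
# The maximum principle gives u(t) = sign(psi2(t)); with psi1 = -1 the
# adjoint psi2(t) = t - 1 switches the control once, at t = 1.
dt = 1e-4
x1, x2 = 1.0, 0.0
t = 0.0
while t < 2.0:
    psi2 = t - 1.0
    u = 1.0 if psi2 > 0 else -1.0   # bang-bang control from the maximum condition
    x1 += x2 * dt                    # explicit Euler step for the state
    x2 += u * dt
    t += dt

print(x1, x2)   # both close to 0: the extremal reaches the origin at t = 2
```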


3. Proof of the maximum principle (Theorems 1 and 2).

We shall prove Theorem 1. In the proof we shall use certain constructions of McShane [5]. Suppose that (3) is some control of equation (6) and that x̃(t) is the solution of equation (6) corresponding to it. The system of equations in the variations for system (5) near the solution x̃(t) is written, as is well known, in the form

$$\frac{dy^i}{dt} = \sum_{j=0}^{n} \frac{\partial \tilde f^i(\tilde x(t), u(t))}{\partial \tilde x^j}\, y^j \qquad (i = 0, 1, \dots, n). \tag{18}$$

Writing the solution of the system (18) in vector form, we obtain the vector y(t) = (y⁰(t), ..., yⁿ(t)). In what follows we shall consider only continuous solutions y(t).


The system of equations in the variations, as is well known, may be interpreted in the following way. Let y₀ be an arbitrary vector of the space S. We set down the initial condition

$$\tilde x_0 + \varepsilon y_0 + \varepsilon o(\varepsilon)$$

for the solution of the equation (6). Then the solution of equation (6) with this initial value may be written in the form

$$\tilde x(t) + \varepsilon y(t) + \varepsilon o(\varepsilon),$$

where y(t) is a solution of the system (18), taken with the initial value y₀. We shall say that the solution y(t) of system (18) is the transport of the vector y₀, given at the initial point x̃₀ of the trajectory x̃(t), along the whole trajectory. In the same sense we may say that the solution y(t) is the transport of the vector y(τ), given at the point x̃(τ) of the trajectory x̃(t), along the whole trajectory.

Along with the contravariant vector y(t) which is the solution of the system (18), we consider the covariant vector ψ̃(t), which is the solution of the system (8). One verifies immediately that

$$\frac{d}{dt}\,(\tilde\psi(t), y(t)) = 0,$$

so that

$$(\tilde\psi(t), y(t)) = \text{const}. \tag{19}$$

If one interprets the covariant vector ψ̃(t) as a plane passing through the point x̃(t), then one may say that the plane ψ̃(t) is the transport of the plane ψ̃(τ), given at the point x̃(τ) of the trajectory x̃(t), along the whole trajectory.
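The constancy asserted in (19) is easy to check numerically. The sketch below (a hypothetical scalar system dx/dt = −x³, not from the paper) integrates the variational equation (18) and the adjoint equation (8) side by side with the same Euler steps and verifies that the scalar product (ψ(t), y(t)) stays near its initial value:

```python
# Hypothetical scalar check of (19): for dx/dt = -x**3 the variational
# equation (18) is dy/dt = -3*x**2 * y and the adjoint equation (8) is
# dpsi/dt = +3*x**2 * psi, so the product psi*y should remain constant.
dt, T = 1e-4, 1.0
x, y, psi = 1.0, 1.0, 1.0
c0 = psi * y                 # initial value of the invariant (psi, y)
t = 0.0
while t < T:
    a = -3.0 * x * x         # df/dx evaluated along the trajectory
    x, y, psi = x + dt * (-x**3), y + dt * a * y, psi - dt * a * psi
    t += dt

print(psi * y)   # stays close to c0 = 1 up to the Euler discretization error
```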
A variation of the control (3) will be a control

$$U^* = U^*(\varepsilon, a) = (u^*(t),\; t_0,\; t_1 + \varepsilon a,\; \tilde x_0),$$

depending on the parameter ε and the real number a, defined for all sufficiently small positive values of the parameter ε and satisfying the following condition:* the solution x̃*(t) of equation (6), corresponding to the control U*, may be written, at the point t = t₁ + εa, in the form

$$\tilde x(t_1) + \varepsilon\,\delta(U^*) + \varepsilon o(\varepsilon),$$

where δ(U*) does not depend on ε.

*In what follows the symbol o(ε) is used as a typical notation for quantities tending to zero along with ε.

The family Λ of variations of one and the same control (3) will be said to be admissible, if along with each two variations U₁*(ε, a₁) and U₂*(ε, a₂) in it, there exists for any nonnegative γ₁, γ₂ a third variation U*(ε, γ₁a₁ + γ₂a₂), satisfying the condition:

$$\delta(U^*) = \gamma_1\,\delta(U_1^*) + \gamma_2\,\delta(U_2^*). \tag{20}$$

We now construct a special variation

$$U^*(\varepsilon, a) = V(\varepsilon, a, \tau, \delta t, u^*),$$

depending on the point τ of the half-interval t₀ < t ≤ t₁ (while in the case a < 0 we have to take τ < t₁), on the nonnegative number δt, and the point u* of the space Ω. We define the variation V(ε, a, τ, δt, u*) by giving the function u*(t) by the relations

$$u^*(t) = \begin{cases} u(t) & \text{for } t_0 \le t \le \tau - \varepsilon\,\delta t, \\ u^* & \text{for } \tau - \varepsilon\,\delta t < t \le \tau, \\ u(t) & \text{for } \tau < t \le t_1, \\ u(t_1) & \text{for } t_1 < t \le t_1 + \varepsilon a \quad (\text{if } a > 0). \end{cases} \tag{21}$$

It is easy to construct an admissible family Λ containing all the variations of type (21). This family will be taken as fundamental in the further constructions.

To each variation U* of the admissible family Λ there corresponds a vector δ(U*) issuing from the point x̃₁. The set of all these vectors fills out a convex cone Π with vertex at the point x̃₁ (see (20)). Let

$$\tilde v = (-1, 0, \dots, 0)$$

be a vector issuing from the point x̃₁ and going in the direction of the negative x⁰-axis in the space S. If the cone Π contains the end of the vector ṽ as an interior point, then the control U is not optimal. Suppose, indeed, that U* ∈ Λ is that variation of the control U for which

$$\delta(U^*) = \tilde v.$$

We denote by x̃₁* the point into which the point x̃₀ moves under the control U*. We obtain:

$$\tilde x_1^* = \tilde x_1 + \varepsilon \tilde v + \varepsilon o(\varepsilon).$$

Breaking this equation up into a scalar equation for the null coordinate and a vector equation for the remaining coordinates, we obtain:

$$L(U^*) = x_1^{0*} = x_1^0 - \varepsilon + \varepsilon o(\varepsilon) = L(U) - \varepsilon + \varepsilon o(\varepsilon), \qquad x_1^* = x_1 + \varepsilon o(\varepsilon).$$

Thus, the functional goes down by a quantity of order ε, and the endpoint of the trajectory differs from that desired by the quantity εo(ε). By making this construction more exact, we are led to a variation U# for which the endpoint x₁# of the trajectory satisfies the exact equality x₁# = x₁, and this contradicts the assumption that the control U was optimal.

So, assuming that the control U is optimal, we will suppose in what follows that the vector ṽ is not interior to the cone Π. Since the cone Π is convex, there exists a supporting plane Γ, such that the whole cone lies in one (closed) halfspace defined by this plane, and the vector ṽ in the other. Denoting by ψ̃₁ the covariant vector corresponding to the plane Γ, taken with the appropriate sign, we obtain:

$$(\tilde\psi_1, \delta(U^*)) \le 0, \qquad U^* \in \Lambda, \tag{22}$$

$$(\tilde\psi_1, \tilde v) \ge 0. \tag{23}$$

From inequality (23) there follows at once the inequality

$$\psi_{10} \le 0, \tag{24}$$

where ψ₁₀ is the null coordinate of the vector ψ̃₁. We denote by ψ̃(t) the covariant vector obtained by the transport of the vector ψ̃₁, given at the point x̃₁, along the whole trajectory x̃(t). We shall show that the vector-function ψ̃(t) is the one whose existence was asserted in Theorem 1.

Let V(ε, 0, τ, δt, u*) be any special variation (see (21)) of the family Λ, and x̃*(t) the solution of equation (6) corresponding to it. A simple calculation yields:

$$\tilde x^*(\tau) = \tilde x(\tau) + \varepsilon\,\delta t\,[\tilde f(\tilde x(\tau), u^*) - \tilde f(\tilde x(\tau), u(\tau))] + \varepsilon o(\varepsilon).$$

We denote by y(t) the vector obtained from the vector

$$y(\tau) = \tilde f(\tilde x(\tau), u^*) - \tilde f(\tilde x(\tau), u(\tau)),$$

given at the point x̃(τ), by transport along the trajectory x̃(t). Then we have:

$$\tilde x^*(t_1) = \tilde x_1 + \varepsilon\,\delta t\, y(t_1) + \varepsilon o(\varepsilon).$$

Since the vector y(t₁) belongs to the cone Π, then from inequality (22) we obtain:

$$(\tilde\psi_1, y(t_1)) \le 0.$$

Hence, from (19), we get:

$$(\tilde\psi(\tau),\; \tilde f(\tilde x(\tau), u^*) - \tilde f(\tilde x(\tau), u(\tau))) \le 0.$$

Rewriting the last inequality in the notation of the function K, we obtain the inequality

$$K(\tilde\psi(\tau), \tilde x(\tau), u^*) \le K(\tilde\psi(\tau), \tilde x(\tau), u(\tau)),$$

equivalent to the equality (11).

Now let U* = V(ε, a, τ, 0, u). The solution of equation (6) corresponding to this control U* will be denoted by x̃*(t). We have obviously:

$$\tilde x^*(t_1 + \varepsilon a) = \tilde x_1 + \varepsilon\,\delta(U^*) + \varepsilon o(\varepsilon),$$

where

$$\delta(U^*) = a\,\tilde f(\tilde x_1, u(t_1)).$$

Since the vector δ(U*) belongs to the cone Π, then from inequality (22) we obtain:

$$a\,(\tilde\psi_1, \tilde f(\tilde x_1, u(t_1))) \le 0.$$

Taking account of the fact that a is an arbitrary real number, the last inequality is possible only under the condition

$$(\tilde\psi_1, \tilde f(\tilde x_1, u(t_1))) = 0,$$

i.e., only for

$$K(\tilde\psi(t_1), \tilde x(t_1), u(t_1)) = 0. \tag{25}$$

We shall prove finally that the function K(t) = K(ψ̃(t), x̃(t), u(t)) of the variable t is constant. Suppose that t₀ ≤ t₂ < t₃ ≤ t₁, while on the semi-interval t₂ < t ≤ t₃ the function u(t) is continuous. We shall show that the function K(t) is constant on that semi-interval. Choose two arbitrary points τ₀ and τ₁ of the semi-interval t₂ < t ≤ t₃. In view of (11), we have:

$$K(\tilde\psi(\tau_0), \tilde x(\tau_0), u(\tau_0)) - K(\tilde\psi(\tau_0), \tilde x(\tau_0), u(\tau_1)) \ge 0,$$

$$-K(\tilde\psi(\tau_1), \tilde x(\tau_1), u(\tau_1)) + K(\tilde\psi(\tau_1), \tilde x(\tau_1), u(\tau_0)) \le 0.$$

Adding the difference K(τ₁) − K(τ₀) to both sides of these inequalities, we obtain the inequality

$$K(\tilde\psi(\tau_1), \tilde x(\tau_1), u(\tau_0)) - K(\tilde\psi(\tau_0), \tilde x(\tau_0), u(\tau_0)) \le K(\tau_1) - K(\tau_0) \le K(\tilde\psi(\tau_1), \tilde x(\tau_1), u(\tau_1)) - K(\tilde\psi(\tau_0), \tilde x(\tau_0), u(\tau_1)). \tag{26}$$

Further, since the function K(ψ̃(t), x̃(t), u(τ)) of the variable t on the segment t₂ ≤ t ≤ t₃ is continuous and has a derivative equal to zero in view of (7) and (8), the outside terms of the inequality (26) vanish. Thus, K(τ₁) − K(τ₀) = 0, i.e., K(t) = const on the semi-interval t₂ < t ≤ t₃.
Now suppose that τ₀ is a jump point for the function u(t) and that τ₁ > τ₀ is a point close to τ₀. If K(τ₀) > K(τ₁), then for sufficiently small τ₁ − τ₀ we have:

$$K(\tilde\psi(\tau_1), \tilde x(\tau_1), u(\tau_0)) > K(\tilde\psi(\tau_1), \tilde x(\tau_1), u(\tau_1)),$$

which contradicts equality (11). If now K(τ₀) < K(τ₁), then for sufficiently small τ₁ − τ₀ we have:

$$K(\tilde\psi(\tau_0), \tilde x(\tau_0), u(\tau_1)) > K(\tilde\psi(\tau_0), \tilde x(\tau_0), u(\tau_0)),$$

which also contradicts equality (11). Thus, K(τ₀) = K(τ₀ + 0).

It follows from what has been proved (see (25)) that equality (12) holds for the entire interval t₀ ≤ t ≤ t₁, which, in particular, proves the first of relations (10). The second of relations (10) follows from inequality (24) on taking account of the first equation of (9).

Thus, Theorem 1 is completely proved.

Remark to Theorem 1. Theorem 1 remains valid also in the case that the class of admissible control functions is taken to be the class of measurable bounded functions; here equality (11) for the optimal control is satisfied almost everywhere.

4. Optimality in the sense of fast action of linear controls. As an important system in the applications, and an excellent illustration of the general results, one considers the example of a linear control system

$$\frac{dx^i}{dt} = \sum_{j=1}^{n} a_j^i x^j + \sum_{k=1}^{r} b_k^i u^k \qquad (i = 1, \dots, n),$$

where u = (u¹, ..., uʳ) is a point of a convex closed bounded polyhedron Ω lying in a linear space E with coordinates u¹, ..., uʳ. In vector form this system may be written as follows:

$$\frac{dx}{dt} = Ax + Bu, \tag{27}$$

where A is a linear operator in the space R of the variables x¹, ..., xⁿ and B a linear operator from the space E into the space R. We shall consider here only the problem of minimizing the functional (4), i.e., the problem of the minimization of the time of passage.

In order to obtain certain results of a uniqueness character we shall impose on the control equation (27) the conditions A) and B) following below, whose roles will be clear in what follows:

A) Let w be some vector whose direction is that of one of the edges of the polyhedron Ω; then the vector Bw does not lie in any proper subspace of the space R which is invariant under the operator A; thus, the vectors

$$Bw,\; ABw,\; \dots,\; A^{n-1}Bw \tag{28}$$

are linearly independent in the space R whenever w is a vector having as its direction one of the directions of the edges of the polyhedron Ω.

B) The origin of coordinates of the space E is an interior point of the polyhedron Ω.

The function H(ψ, x, u) in our case has the form

$$H = (\psi, Ax) + (\psi, Bu), \tag{29}$$

and the system (15) may be written in the form

$$\frac{d\psi_j}{dt} = -\sum_{i=1}^{n} \psi_i\, a_j^i \qquad (j = 1, \dots, n),$$

or in vector form

$$\frac{d\psi}{dt} = -A^*\psi. \tag{30}$$

Obviously the function H, considered as a function of the variable u ∈ Ω, admits a maximum simultaneously with the function (ψ, Bu). Accordingly we shall denote the maximum of the function (ψ, Bu) by P(ψ), the function being considered as a function of the variable u ∈ Ω. It follows from Theorem 2 therefore that if

$$U = (u(t), t_0, t_1, x_0)$$

is an optimal control of (27), then there exists a solution ψ(t) of equation (30) such that

$$(\psi(t), Bu(t)) = P(\psi(t)). \tag{31}$$

Since equation (30) does not contain the unknown functions x(t) and u(t), all the solutions of equation (30) may be found easily, and then, under condition (31), one may easily find also all optimal controls u(t) of the equation (27). The question as to how uniquely condition (31) determines the control u(t) in terms of the function ψ(t) is solved by the theorem following below.

Theorem 3. If condition A) is satisfied, then for any given nontrivial solution ψ(t) of equation (30), the relation (31) uniquely determines the control function u(t); here it results that the function u(t) is piecewise-continuous and its values can be only vertices of the polyhedron Ω.

Proof. Since the function

$$(\psi(t), Bu), \tag{32}$$

considered as a function of the vector u, is linear, it is either constant, or it takes on its maximum on the boundary of the polyhedron Ω. The same consideration may be applied to each face of the polyhedron Ω. Thus, either the function (32) takes on its maximum only at one vertex of the polyhedron Ω, or else it attains it on a whole face of the polyhedron Ω. We shall show that it follows from condition A) that the latter case is possible only for a finite number of values t. Suppose that the function (32) takes on its maximum (or is constant) on some face Γ of the polyhedron Ω. Let w be a vector whose direction is that of some edge of the face Γ. Since the function (32) is constant on the face Γ, we have:

$$(\psi(t), Bw) = 0.$$

If now this relation were to hold for an infinite set of values of the variable t, then it would hold identically in t, and, differentiating it successively with respect to t, we would obtain

$$(\psi(t), Bw) = 0,$$
$$(A^*\psi(t), Bw) = (\psi(t), ABw) = 0,$$
$$(A^{*2}\psi(t), Bw) = (\psi(t), A^2 Bw) = 0,$$
$$\dots$$
$$(A^{*\,n-1}\psi(t), Bw) = (\psi(t), A^{n-1}Bw) = 0; \tag{33}$$

and, since by condition A) the vectors (28) form a basis of the space R, it would follow from relations (33) that ψ(t) ≡ 0, which contradicts the supposition that the solution ψ(t) was nontrivial.
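Theorem 3 suggests a direct synthesis procedure: given a nontrivial solution ψ(t) of (30), evaluate (ψ(t), Bu) at the vertices of Ω and select the maximizer. A sketch for the hypothetical double-integrator example (Ω = [−1, 1], so (30) gives ψ₁ = const and ψ₂(t) = ψ₂(0) − ψ₁t, and (ψ(t), Bu) = ψ₂(t)·u):

```python
# Extremal control synthesis via (31) for a hypothetical double-integrator
# example: (psi(t), Bu) = psi2(t)*u is maximized over the polyhedron [-1, 1].
vertices = [-1.0, 1.0]
psi1, psi20 = 1.0, 0.5          # a nontrivial solution of (30)

def u_of_t(t):
    psi2 = psi20 - psi1 * t     # psi2(t) is linear in t by (30)
    return max(vertices, key=lambda v: psi2 * v)

controls = [u_of_t(0.1 * k) for k in range(20)]   # sample t in [0, 1.9]
print(controls)
# u(t) is piecewise constant, takes only vertex values, and switches once,
# where psi2(t) changes sign, as Theorem 3 asserts.
```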
5. Theorem of uniqueness for linear controls. We solve equation (27), as an inhomogeneous equation, by the method of variation of constants. For this purpose we denote by

$$\varphi_1(t), \dots, \varphi_n(t) \tag{34}$$

a fundamental system of solutions of the homogeneous equation

$$\frac{dx}{dt} = Ax,$$

satisfying the initial conditions φⱼⁱ(t₀) = δⱼⁱ, and by

$$\psi^1(t), \dots, \psi^n(t)$$

a fundamental system of solutions of the homogeneous equation (30), satisfying the initial conditions ψⱼⁱ(t₀) = δⱼⁱ. We shall seek the general solution of equation (27) in the form

$$x(t) = \sum_{i=1}^{n} \varphi_i(t)\, c^i(t).$$

Substituting this solution in equation (27), we obtain:

$$\sum_{i=1}^{n} \varphi_i(t)\, \frac{dc^i(t)}{dt} = Bu(t).$$

Multiplying this last relation scalarly by ψⁱ(t) and taking account of the fact that (ψⁱ(t), φⱼ(t)) = δⱼⁱ, we obtain:

$$\frac{dc^i(t)}{dt} = (\psi^i(t), Bu(t)). \tag{35}$$

Thus, the solution of equation (27) for an arbitrary control U = (u(t), t₀, t₁, x₀) may be written in the form:

$$x(t) = \sum_{i=1}^{n} \varphi_i(t) \left( x_0^i + \int_{t_0}^{t} (\psi^i(\tau), Bu(\tau))\, d\tau \right).$$
Theorem 4. Suppose that equation (27) satisfies condition A), and let

$$U_1 = (u_1(t), t_0, t_1, x_0), \qquad U_2 = (u_2(t), t_0, t_2, x_0) \tag{36}$$

be two optimal controls for equation (27), carrying the point x₀ into the same point x₁. Then these controls coincide:

$$t_1 = t_2, \qquad u_1(t) \equiv u_2(t).$$

Proof. Since both controls U₁ and U₂ are optimal, then t₁ = t₂; otherwise, if, for example, t₁ < t₂, then the control U₂ would not be optimal. We thus have the equality

$$x_1 = \sum_{i=1}^{n} \varphi_i(t_1) \left( x_0^i + \int_{t_0}^{t_1} (\psi^i(t), Bu_1(t))\, dt \right) = \sum_{i=1}^{n} \varphi_i(t_1) \left( x_0^i + \int_{t_0}^{t_1} (\psi^i(t), Bu_2(t))\, dt \right).$$

Since the vectors φ₁(t₁), ..., φₙ(t₁) are linearly independent, it follows from the last equation that

$$\int_{t_0}^{t_1} (\psi^i(t), Bu_1(t))\, dt = \int_{t_0}^{t_1} (\psi^i(t), Bu_2(t))\, dt \qquad (i = 1, \dots, n). \tag{37}$$

From Theorem 3, it follows that there corresponds to the optimal control Uj a vector function

1/1<&),

which is a solution of equation (30). The initial value of this

function for t = to will be denoted by


tPO = (t/J 10' .pnO);
thea the solutioa .p{t) may be written in the form
n

tfI (1) =- ~ f1iCl~i (t).


i=l

Multiplying relation (37) by .piO and sUliuDing 011 it we obtain:


I

~ (4' (I), Budi)) cit = ~ (~(/). Bu! (I}) dt.

(:iH)

to

10

From Theorem 3 the function a 1(1,) satisfies the condition


<1/I(t), Bo1(t == P{f/At)

and is determined uniquely by this coaditicm. If DOW the funcuoo u 2(t) did not coincide with the functioD Ul(t), thea the coaditioD

(y,{e), Bu 2(t e P(<t,


would not be satisifed, and dJerefore the function (.p{t), Bu2(t , oot exeeedleg the

Bul(l anywhere, will on some interval be less thaD it. Thus, if OD


the segmeot to ~ ,. .:S tIthe identity u let) == u 2(t) did Dot bold, thea equality (39)
would be impossible.
function (<t),

Thus, Theorem 4 is proved.
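The condition (ψ(t), Bu(t)) ≡ P(ψ(t)), which pins down the optimal control pointwise, is easy to evaluate in the special case where the polyhedron Ω is the cube |u^i| ≤ 1: the maximum of (ψ, Bu) over the cube is attained at u^i = sign((B*ψ)^i), so P(ψ) is the l1-norm of B*ψ and the control switches where a coordinate of B*ψ(t) changes sign. A sketch under that box-constraint assumption; the data are illustrative:

```python
import numpy as np

def P(psi, B):
    """P(psi) = max over u in the cube |u^i| <= 1 of (psi, Bu) = ||B^T psi||_1."""
    return np.abs(B.T @ psi).sum()

def u_max(psi, B):
    """A maximizing control: u^i = sign((B^T psi)^i).
    Where a coordinate of B^T psi is zero, the maximizer is not unique."""
    return np.sign(B.T @ psi)

# Illustrative data: one control parameter (r = 1), n = 2.
B = np.array([[0.0], [1.0]])
psi = np.array([0.3, -2.0])

u = u_max(psi, B)
# The maximum condition (psi, Bu) = P(psi) holds for this u.
assert np.isclose(psi @ (B @ u), P(psi, B))
```

This is the computation behind the bang-bang character of the control: u_max is piecewise constant along any nontrivial solution ψ(t) of (30).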


We shall call the control

$$U = (u(t), t_0, t_1, x_0)$$

extremal if it satisfies condition (31), where ψ(t) is some nontrivial solution of equation (30). In order to find all optimal controls carrying the point x_0 into the point x_1, one may find first all extremal controls carrying the point x_0 into the point x_1, and then choose from their number the unique one which carries out this passage in the shortest time. The question arises as to whether there may be several extremal controls carrying the point x_0 into the point x_1. Generally speaking, there may be several of them. The theorem following below indicates an important case of unicity.

Theorem 5. Suppose that equation (27) satisfies conditions A) and B), and let

$$U_1 = (u_1(t), t_0, t_1, x_0) \quad \text{and} \quad U_2 = (u_2(t), t_0, t_2, x_0)$$

be two extremal controls, carrying the point x_0 into the origin of coordinates x_1 = 0 of the space R; then the controls U_1 and U_2 coincide:

$$t_1 = t_2, \qquad u_1(t) \equiv u_2(t).$$

Proof. By hypothesis, we have the equality

$$\sum_{i=1}^{n} \varphi_i(t_1)\left(x_0^i + \int_{t_0}^{t_1} (\psi^i(t), Bu_1(t))\, dt\right) = 0, \qquad \sum_{i=1}^{n} \varphi_i(t_2)\left(x_0^i + \int_{t_0}^{t_2} (\psi^i(t), Bu_2(t))\, dt\right) = 0. \tag{40}$$

Since the vectors (34) are linearly independent for any t, it follows from equation (40) that

$$-x_0^i = \int_{t_0}^{t_1} (\psi^i(t), Bu_1(t))\, dt = \int_{t_0}^{t_2} (\psi^i(t), Bu_2(t))\, dt. \tag{41}$$

We assume for definiteness that t_1 > t_2, and take ψ(t) to be that solution of equation (30) for which the identity

$$(\psi(t), Bu_1(t)) \equiv P(\psi(t))$$

holds, defining the function u_1(t). As in the proof of Theorem 4, the function ψ(t) may be written in the form (38). Multiply relation (41) by ψ_{i0} and sum on i. We obtain:

$$\int_{t_0}^{t_1} (\psi(t), Bu_1(t))\, dt = \int_{t_0}^{t_2} (\psi(t), Bu_2(t))\, dt. \tag{42}$$

We now observe that it follows from condition B) that

$$P(\psi(t)) \ge 0.$$

Indeed, since zero is an interior point of the convex body Ω, the function (ψ(t), Bu), as a function of the variable u, is either identically zero or else may take on both negative and positive values.

In view of (42) we have the inequality

$$\int_{t_0}^{t_2} (\psi(t), Bu_1(t))\, dt \le \int_{t_0}^{t_2} (\psi(t), Bu_2(t))\, dt.$$

Hence, just as in the proof of Theorem 4, we obtain:

$$u_1(t) \equiv u_2(t) \quad \text{for} \quad t_0 \le t \le t_2.$$

Further, since the equation P(ψ(t)) = 0 holds only for isolated values of t, we must have t_1 = t_2.

Thus, Theorem 5 is proved.

§6. Existence of optimal controls for linear systems.

Theorem 6. If there exists at least one control of the equation (27), carrying the point x_0 into the point x_1, then there exists an optimal control of the equation (27), carrying the point x_0 into the point x_1.

Proof. The set of all controls of the form

$$U = (u(t), 0, t, x_0), \tag{43}$$

carrying the point x_0 into the point x_1, will be denoted by Λ_{x_0 x_1}. To each control (43) there corresponds a time of passage t. We denote the lower bound of all such times for U ∈ Λ_{x_0 x_1} by t*, and we shall prove that there exists a control U* = (u*(t), 0, t*, x_0), carrying the point x_0 into the point x_1.

We choose from the set Λ_{x_0 x_1} an infinite sequence of controls

$$U_k = (u_k(t), 0, t_k, x_0) \qquad (k = 1, 2, \ldots),$$

for which the equality lim_{k→∞} t_k = t* holds. Obviously, the equality

$$\lim_{k \to \infty} \sum_{i=1}^{n} \varphi_i(t^*)\left(x_0^i + \int_0^{t^*} (\psi^i(t), Bu_k(t))\, dt\right) = x_1 \tag{44}$$

holds.

Consider the Hilbert space L_2 of all measurable functions with integrable squares, given on the interval 0 ≤ t ≤ t*. The control u_k(t) is a vector-function; the i-th coordinate of this function will be denoted by u_k^i(t). The function u_k^i(t), considered on the segment 0 ≤ t ≤ t*, lies in the space L_2. The set of all functions u_k^i(t) (k = 1, 2, …) obviously lies in some sphere of the space L_2, and therefore we may select a weakly convergent subsequence from it. For simplicity we suppose that the original sequence

$$u_1^i(t),\ u_2^i(t),\ \ldots,\ u_k^i(t),\ \ldots \tag{45}$$

converges weakly to some function u^i(t) (i = 1, …, r).

We shall show that the vector-function

$$u^*(t) = (u^1(t), \ldots, u^r(t))$$

satisfies, for almost all values t, the condition

$$u^*(t) \in \Omega.$$

Let

$$b(u) = \sum_{i=1}^{r} b_i u^i = b$$

be the equation of a hyperplane carrying one of the (r − 1)-dimensional faces of the polyhedron Ω, while the polyhedron Ω lies in the halfspace

$$b(u) \le b.$$

Let m be the set of all values t of the segment [0, t*] for which b(u*(t)) > b, and ν(t) the characteristic function of the set m. We then have

$$\lim_{k \to \infty} \int_0^{t^*} \nu(t)\,[\,b(u^*(t)) - b(u_k(t))\,]\, dt = 0$$

in view of the weak convergence of the sequence (45), and since

$$b(u^*(t)) - b(u_k(t)) > 0$$

on the set m, the measure of m is zero.

Thus, changing the vector-function u*(t) on a set of measure zero, we obtain a new function, which we shall again denote by u*(t), which satisfies the condition

$$u^*(t) \in \Omega, \qquad 0 \le t \le t^*.$$

It follows from relation (44) and the weak convergence of the sequence (45) that:

$$\sum_{i=1}^{n} \varphi_i(t^*)\left(x_0^i + \int_0^{t^*} (\psi^i(t), Bu^*(t))\, dt\right) = x_1.$$

Thus, U* = (u*(t), 0, t*, x_0) is a measurable optimal control, carrying the point x_0 into the point x_1.

In view of the remark to Theorem 1, by changing the control u*(t) on a set of measure zero, we may convert it into a control satisfying the maximum principle, i.e., in our case the condition

$$(\psi(t), Bu^*(t)) \equiv P(\psi(t)).$$

It follows obviously from this condition that the function u*(t) is piecewise continuous.

Thus, Theorem 6 is proved.


Theorem 7. If equation (27) satisfies conditions A) and B) and the operator A is stable, i.e., if all of its eigenvalues have negative real parts, then to each point x_0 ∈ R there corresponds an optimal control carrying that point into the origin of coordinates 0 ∈ R.

Proof. We shall first of all prove that there exists a neighborhood V of the point 0 in R, each point x_0 of which may be carried by some control into 0.

Choose in Ω a vector v such that the vector −v belongs to Ω and such that the vector

$$b = Bv$$

does not belong to any proper subspace of the space R which is invariant under the operator A. From conditions A) and B) it follows that there is such a vector v. For a sufficiently small positive ε the operators A and e^{−εA} have coincident invariant subspaces, and therefore the vectors

$$e^{-\varepsilon A} b,\ e^{-2\varepsilon A} b,\ \ldots,\ e^{-n\varepsilon A} b$$

are linearly independent.

Let χ(t) be any real function, defined on some segment 0 ≤ t ≤ t_1 and not exceeding unity in absolute value; then

$$U = (v\chi(t), 0, t_1, x_0)$$

is a control of equation (27), and that control carries the point x_0 into the point (see (36))

$$x_1 = e^{t_1 A}\left(x_0 + \int_0^{t_1} e^{-tA} b\, \chi(t)\, dt\right). \tag{46}$$

Now we choose the function χ(t), depending on the parameters ξ^1, …, ξ^n, in such a way that the point (46), which we denote by x_1(x_0; ξ^1, …, ξ^n), satisfies the following conditions:

$$x_1(0;\ 0, \ldots, 0) = 0,$$

and the Jacobian

$$\left.\frac{\partial(x_1^1, \ldots, x_1^n)}{\partial(\xi^1, \ldots, \xi^n)}\right|_{x_0 = 0,\ \xi^1 = 0,\ \ldots,\ \xi^n = 0}$$

is nonzero. Constructing such a function χ(t), we show that the equation x_1(x_0; ξ^1, …, ξ^n) = 0 may be solved for ξ^1, …, ξ^n for all values x_0 lying in some neighborhood V of the origin 0.

First of all we define a function u(t, τ; ξ) of the variable t, 0 ≤ t ≤ t_1, where 0 < τ < t_1 and ξ is a parameter. The function u(t, τ; ξ), as a function of the variable t, is equal to zero everywhere outside the interval [τ, τ + |ξ|], and on that interval it is equal to sign ξ. Now put

$$\chi(t) = \sum_{k=1}^{n} u(t, k\varepsilon;\ \xi^k).$$

A simple calculation shows that the point x_1(x_0; ξ^1, …, ξ^n), with this choice of the function χ(t), satisfies the stated conditions.

Now let x_0 be any point of the space R. Suppose that it first moves under the control u(t) ≡ 0. Since all eigenvalues of the operator A have negative real parts, after the expiration of some time the point will come into the neighborhood V, after which, as we have proved, it may be carried into the origin of coordinates. Hence, from Theorem 6, it follows that there exists an optimal control, carrying the point x_0 into the origin.

Thus Theorem 7 is proved.
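The linear independence of the vectors e^{−εA}b, …, e^{−nεA}b used in the proof can be observed numerically: if b lies in no proper A-invariant subspace, the determinant of the matrix with these columns is nonzero for small ε > 0. A quick check on illustrative data (the particular A and b are assumptions for the example):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable A (eigenvalues -1, -2) and a vector b = Bv that lies
# in no proper invariant subspace of A (here: b is not an eigenvector).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
n = 2

eps = 0.1
# Columns e^{-eps A} b, e^{-2 eps A} b, ..., e^{-n eps A} b.
M = np.column_stack([expm(-k * eps * A) @ b for k in range(1, n + 1)])
assert abs(np.linalg.det(M)) > 1e-6   # the vectors are linearly independent
```

For ε → 0 the determinant tends to zero (all columns tend to b), which is why the proof requires ε small but positive rather than ε = 0.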

7. Synthesis of a linear optimal control. The problem of synthesizing an optimal control has a sense for any control system (1); however, here we shall treat it only in the case of a linear control system (27), satisfying conditions A) and B), with a stable operator A. For such a system one has the theorems on existence and uniqueness (Theorems 7 and 5), thanks to which the problem of synthesis is in principle solved. The considerations presented here give a constructive method of solution of the problem. The practicability of this method in each concrete case requires, however, a series of constructions. The synthesis of an optimal control for the linear system (27) was carried out, in its entirety, by other methods up until now only for the case of one control parameter (i.e., for r = 1): by Fel'dbaum [6], in the case that the operator A had real roots, and by Bushaw [7], in the case that n = 2 and the eigenvalues of the operator A were complex.

We shall suppose that the equation (27) satisfies conditions A) and B) and has a stable operator A. Then for any point x_0 ∈ R there exists one (and only one) optimal control

$$U_{x_0} = (u_{x_0}(t), t_0, t_1, x_0), \tag{47}$$

which carries the point x_0 into the origin of coordinates 0 ∈ R. There is uniqueness, finally, up to a time translation (see remark 2 to the statement of the problem). The quantity u_{x_0}(t_0) depends, thus, only on the point x_0, and not on the particular origin of the time reading t_0, and therefore one may put v(x_0) = u_{x_0}(t_0). Let x(t) be a solution of equation (27), corresponding to the control (47); then U_{x(τ)} = (u_{x_0}(t), τ, t_1, x(τ)) (see remark 1 to the statement of the problem), and therefore u_{x_0}(τ) = v(x(τ)). Thus,

$$\frac{d}{dt} x(t) = Ax(t) + Bv(x(t)),$$

and we see that the solution of the equation

$$\frac{dx}{dt} = Ax + Bv(x) \tag{48}$$

for arbitrary initial conditions x(t_0) = x_0 gives a law of optimal motion of the point x_0 into the origin of coordinates 0. In this sense the function v(x) synthesizes an optimal solution, carrying any point x_0 into the origin.

We now present a method of construction of the function v(x). Let ψ(t) be that solution of equation (30) which under Theorem 2 corresponds to the control (47), so that

$$\frac{d\psi(t)}{dt} = -A^*\psi(t), \tag{49}$$

and the function u_{x_0}(t) is defined from the equation

$$(\psi(t), Bu_{x_0}(t)) = P(\psi(t)). \tag{50}$$

Suppose further that x(t) is a solution of equation (27), satisfying the initial condition

$$x(t_0) = x_0 \tag{51}$$

and the end condition

$$x(t_1) = 0, \tag{52}$$

so that

$$\frac{dx(t)}{dt} = Ax(t) + Bu_{x_0}(t). \tag{53}$$

Then the function v(x) satisfies the condition:

$$(\psi(t_0), Bv(x(t_0))) = P(\psi(t_0)). \tag{54}$$

From the existence and uniqueness theorems it follows that there exists one, and only one (up to a translation of time), pair of functions u_{x_0}(t), x(t), given on the segment t_0 ≤ t ≤ t_1 and satisfying conditions (49)–(53). In view of the possibility of translating the time, the numbers t_0 and t_1 are not determined uniquely by these conditions, but the number t_1 − t_0 is.

It is not perfectly clear how to find the functions u_{x_0}(t) and x(t) satisfying all the conditions (49)–(53), but it is easy to find all the functions u_{x_0}(t), x(t) satisfying only conditions (49), (50), (52), and (53). To do this we proceed as follows: in view of the possibility of an arbitrary translation of time, we fix the number t_1, putting t_1 = 0. Now let χ be any covariant vector, different from zero, and ψ(t, χ) the solution of equation (49), satisfying the initial condition

$$\psi(0, \chi) = \chi$$

and defined for t ≤ 0. Further, we define a function u(t, χ) from the condition

$$(\psi(t, \chi), Bu(t, \chi)) = P(\psi(t, \chi)), \qquad t \le 0,$$

and the function x(t, χ) from the equation

$$\frac{dx(t, \chi)}{dt} = Ax(t, \chi) + Bu(t, \chi), \qquad x(0, \chi) = 0.$$

From what has been said above, the function v(x) is defined by the relation:

$$(\psi(t, \chi), Bv(x(t, \chi))) = P(\psi(t, \chi)). \tag{55}$$

It follows from the existence theorem (Theorem 7) that the point x(t, χ) sweeps out the whole space R, as t runs through negative values and the vector χ changes arbitrarily. Thus, relation (55) defines the value of the function v(x) for any point x of the space R.
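The backward construction just described can be sketched numerically for the classical r = 1 example of the double integrator (dx^1/dt = x^2, dx^2/dt = u, |u| ≤ 1, the kind of system treated by Fel'dbaum and Bushaw): integrate ψ(t, χ) backward from t = 0 under (49), take the control from (50) as u = sign(B*ψ), and integrate x(t, χ) backward from the origin; each point of the trajectory yields one value v(x(t, χ)) = u(t, χ). This is a rough illustrative sketch, not the paper's general construction: the system, step sizes, and grids are assumptions, this A is not stable, and the closed-form switching law used only as a cross-check is the well-known one for this example, quoted rather than derived here.

```python
import numpy as np

# Double integrator: dx1/dt = x2, dx2/dt = u, |u| <= 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def backward_trajectory(chi, t_min=-2.0, dt=1e-3):
    """Integrate psi from (49) with psi(0) = chi, pick u from (50) as
    u = sign(B^T psi), and integrate x backward from x(0) = 0 as in (53).
    Returns sampled pairs (x, u); each one gives a value v(x) = u."""
    psi, x, t = chi.astype(float), np.zeros(2), 0.0
    samples = []
    while t > t_min:
        u = np.sign((B.T @ psi).item()) or 1.0   # break ties arbitrarily
        # explicit Euler step backward in time (dt > 0, subtract derivatives)
        psi = psi - dt * (-(A.T @ psi))
        x = x - dt * (A @ x + (B * u).ravel())
        t -= dt
        samples.append((x.copy(), u))
    return samples

# Tabulate v on a fan of backward trajectories and compare with the classical
# switching law v(x) = -sign(x1 + x2*|x2|/2), valid away from the switching
# curve (quoted from the literature on this example, not derived here).
ok = True
for chi in [np.array([1.0, -0.5]), np.array([-1.0, 0.5]), np.array([0.5, -1.0])]:
    for x, u in backward_trajectory(chi):
        sigma = x[0] + 0.5 * x[1] * abs(x[1])
        if abs(sigma) > 1e-2:                    # skip points on/near the curve
            ok = ok and (u == -np.sign(sigma))
assert ok
```

Each covariant vector χ generates one bang-bang extremal ending at the origin; sweeping χ over a fine grid tabulates v(x) on the whole plane, which is exactly how relation (55) is meant to be used.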

BIBLIOGRAPHY

[1] V. G. Boltyanskii, R. V. Gamkrelidze and L. S. Pontryagin, On the theory of optimal processes, Dokl. Akad. Nauk SSSR 110 (1956), 7-10. (Russian)

[2] R. V. Gamkrelidze, On the theory of optimal processes in linear systems, Dokl. Akad. Nauk SSSR 116 (1957), 9-11. (Russian)

[3] V. G. Boltyanskii, The maximum principle in the theory of optimal processes, Dokl. Akad. Nauk SSSR 119 (1958), 1070-1073. (Russian)

[4] G. A. Bliss, Lectures on the calculus of variations, University of Chicago Press, Chicago, Ill., 1946. [The reference in the text is to the Russian edition, and probably corresponds to the first few pages of Chapter VII in the English edition.]

[5] E. J. McShane, On multipliers for Lagrange problems, Amer. J. Math. 61 (1939), 809-819.

[6] A. A. Fel'dbaum, On the design of optimal systems by means of phase space, Avtomat. i Telemeh. 16 (1955), 129-149. (Russian)

[7] D. W. Bushaw, Experimental towing tank, Stevens Institute of Technology, Report 469, Hoboken, N. J., 1953.

Translated by:
J. M. Danskin, Jr.
