
Journal of Optimization Theory and Applications: Vol. 123, No. 3, pp. 595–617, December 2004 (© 2004)

Second-Order Sufficient Conditions for State-Constrained Optimal Control Problems

K. Malanowski,1 H. Maurer,2 and S. Pickenhain3

Communicated by H. J. Pesch

Abstract. Second-order sufficient optimality conditions (SSC) are derived for an optimal control problem subject to mixed control-state and pure state constraints of order one. The proof is based on a Hamilton-Jacobi inequality and it exploits the regularity of the control function as well as the associated Lagrange multipliers. The obtained SSC involve the strict Legendre-Clebsch condition and the solvability of an auxiliary Riccati equation. They are weakened by taking into account the strongly active constraints.

Key Words. Nonlinear optimal control, mixed control-state constraints, first-order state constraints, second-order sufficient optimality conditions, constraint qualifications, Legendre-Clebsch condition, solvability of a Riccati equation.

1. Introduction

We shall derive second-order sufficient optimality conditions (SSC) for an optimal control problem with nonlinear ODEs, which is subject to mixed control-state and pure state constraints. We follow the approach based on the Hamilton-Jacobi inequality which was used in Pickenhain and Tammer (Ref. 1) for multidimensional control problems. For one-dimensional problems, we are able to obtain more constructive results than those in Ref. 1. We continue the line of Maurer and Pickenhain (Ref. 2), where pure state constraints were absent.

1 Professor, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland.
2 Professor, Institut für Numerische Mathematik, Westfälische Wilhelms-Universität, Münster, Germany.
3 Professor, Institut für Mathematik, Brandenburgische Technische Universität, Cottbus, Germany.

As in Ref. 2, we try to obtain SSC in a possibly weak form which takes into account the strongly active constraints. In the analysis, the regularity of the control as well as of the associated Lagrange multipliers plays a crucial role. We consider the case where the control is a continuous function and we impose constraint qualifications which ensure existence, uniqueness, and regularity of the normal Lagrange multipliers. By these constraint qualifications, the analysis is confined to first-order state constraints (cf. Hartl, Sethi, and Vickson,
Ref. 3). The relevant regularity results in Malanowski (Ref. 4) are recalled
in Section 2. Note that, in the regularity analysis, a Lagrangian in the
indirect form is more convenient, while in the analysis of sufficiency in
Sections 3 and 4, a Lagrangian in the direct form is used; see Ref. 3 for
the definitions.
Following the approach in Refs. 1–2, we formulate in Section 3 a
second-order sufficient optimality condition in terms of the solution to a
Hamilton-Jacobi inequality. A quadratic form is chosen as a candidate for
the solution to the HJ inequality and conditions are derived under which
this quadratic form is the needed solution. These conditions involve the
previous constraint qualifications as well as some pointwise coercivity con-
ditions with respect to the control (Legendre-Clebsch condition) and state
variable. The latter is expressed in terms of the solvability of an auxiliary
matrix Riccati equation. These conditions have to be satisfied uniformly
with respect to the time, which is treated as a parameter. An important
point is that the obtained coercivity conditions are weakened by taking
into account all the strongly active constraints. At that point, for mixed
constraints, we follow the approach used in Maurer and Pickenhain (Ref.
2); see also Zeidan (Ref. 5) and Malanowski and Maurer (Ref. 6). How-
ever, this approach is not applicable to pure state-space constraints. To
cope with them, a different concept is used, which leads in Section 4 to a
modified form of the Riccati equation. We believe that this form is orig-
inal; it constitutes the main contribution of this paper to the sufficiency
analysis for optimal control problems. In Section 5, two numerical exam-
ples are presented.

2. Preliminaries

We consider the following optimal control problem:

(O) Find (x0 , u0 ) ∈ W 1,∞ (0, 1; Rn ) × L∞ (0, 1; Rm ) such that


F(x0, u0) = min_{(x,u)} { F(x, u) := ∫₀¹ ϕ(x(t), u(t)) dt + ψ(x(0), x(1)) }

s.t.  ẋ(t) − f(x(t), u(t)) = 0,   a.a. t ∈ [0, 1],
      ξ(x(0), x(1)) = 0,
      θ(x(t), u(t)) ≤ 0,   a.a. t ∈ [0, 1],
      ϑ(x(t)) ≤ 0,   all t ∈ [0, 1],

where

ϕ : Rn × Rm → R, ψ : Rn × Rn → R, f : Rn × Rm → Rn ,
ξ : Rn × Rn → Rd , θ : Rn × Rm → Rk , ϑ : Rn → Rl .

Denote by I = {1, . . . , k} and J = {1, . . . , l} the sets of indices of the mixed and state inequality constraints. Throughout the paper, we assume that the data satisfy the following regularity conditions: the functions ϕ, ψ, f, ξ, θ, ϑ are twice Fréchet differentiable and all differentials are Lipschitz continuous. For p ∈ [1, ∞], let us introduce the spaces

Xp := W1,p(0, 1; Rn) × Lp(0, 1; Rm),   (1a)
Yp := W1,p(0, 1; Rn) × Rd × Lp(0, 1; Rk) × W1,p(0, 1; Rl),   (1b)
Zp := W1,p(0, 1; Rn) × Rd × Lp(0, 1; Rk) × Lp(0, 1; Rl) × Rl × Rl.   (1c)

Define the following Lagrangian and Hamiltonians associated with problem (O):

L̃ : X∞ × Y∞ → R,   H̃ : Rn × Rm × Rn × Rk × Rl → R,   (2a)

L̃(x, u, q, ρ, κ, µ) = F(x, u) − (q, ẋ − f(x, u)) + ⟨ρ, ξ(x(0), x(1))⟩ + (κ, θ(x, u))
                      + ⟨µ(0), ϑ(x(0))⟩ + (µ̇, Dxϑ(x)f(x, u)),   (2b)

H(x, u, q) = ϕ(x, u) + ⟨q, f(x, u)⟩,   (3a)

H̃(x, u, q, κ, µ̇) = H(x, u, q) + ⟨κ, θ(x, u)⟩ + ⟨µ̇, Dxϑ(x)f(x, u)⟩.   (3b)

Remark 2.1. The Lagrangian (2) is in normal form; i.e., the Lag-
range multiplier corresponding to the functional F (x, u) is equal to one.
The Lagrangian is in the so-called indirect or Pontryagin form with an
absolutely continuous adjoint function q; cf. Section 7 in Hartl, Sethi, and
Vickson (Ref. 3) as well as Hager (Ref. 7) and Neustadt (Ref. 8). The state
constraints are considered in W1,∞(0, 1; Rl), rather than in C(0, 1; Rl). This form is convenient in the analysis of existence and regularity of the Lagrange multipliers (cf. Malanowski, Ref. 4). In the sequel, we will introduce yet another form of the Lagrangian.
Let (x0 , u0 ) with u0 ∈ C(0, 1; Rm ) be a fixed pair that is admissible for
problem (O). We are going to analyze conditions under which (x0 , u0 ) is
a locally isolated local solution to problem (O). For the sake of simplic-
ity, the functions evaluated at (x0 , u0 ) will be denoted by the subscript 0,
e.g.,

θ0(t) = θ(x0(t), u0(t)),   ϑ0(t) = ϑ(x0(t)).

Remark 2.2. Similar techniques can be used in the more general sit-
uation of piece-wise continuous control. In that case, jumps can appear in
the solutions of the adjoint equation and the Hamilton-Jacobi inequality.
Accordingly, the assumed regularity of the matrix function Q in (26) has
to be modified.

First, we will consider the stationarity conditions of the Lagrangian (2) at (x0, u0). To get existence and uniqueness of a Lagrange multiplier associated with (x0, u0), we need some constraint qualifications. To this end, we introduce the following sets of active constraints for t ∈ [0, 1]:

I0(t) = {i ∈ I | θ0^i(t) = 0},   J0(t) = {j ∈ J | ϑ0^j(t) = 0},   (4)

and define the following matrices:

Θ0(t) := [Dxθ0^i(t)]_{i∈I0(t)},   Φ0(t) := [Duθ0^i(t)]_{i∈I0(t)},   ϒ0(t) := [Dxϑ0^j(t)]_{j∈J0(t)}.   (5)

We make the following assumptions:


(A1) Linear Independence. There exists β > 0 such that

| [ Φ0(t)*   Duf0(t)*ϒ0(t)* ] χ | ≥ β|χ|,   ∀χ of appropriate dimension and ∀t ∈ [0, 1].   (6)

(A2) Controllability. For any g ∈ Rd, there exists a solution (y, v) ∈ X∞ of the following equations:

ẏ(t) − Dxf0(t)y(t) − Duf0(t)v(t) = 0,   (7a)
D0ξ0 y(0) + D1ξ0 y(1) = g,   (7b)
Θ0(t)y(t) + Φ0(t)v(t) = 0,   (7c)
ϒ0(t)y(t) = 0.   (7d)

For the sake of simplicity, here and in the sequel we denote

Di ξ := Dx(i) ξ, i = 0, 1.

Similarly,

D²ij ξ := D²x(i)x(j) ξ,   i, j = 0, 1.

Note that Condition (A1) implies in particular that

| [ Θ0(t)*   ϒ0(t)* ]     |
| [ Φ0(t)*     0    ]  η  |  ≥ β|η|,   ∀η of appropriate dimension and all t ∈ [0, 1];   (8)

i.e., pointwise along the trajectory (x0 , u0 ), the gradients of all the active
constraints (control-state and state) are linearly independent, uniformly on
[0,1].
The following result is proved in Ref. 4, Theorem 4.3.

Proposition 2.1. If Assumptions (A1) and (A2) hold, then there exists a unique normal Lagrange multiplier λ0 := (q0, ρ0, κ0, µ0) ∈ Y∞ for problem (O) associated with (x0, u0); i.e., the following first-order optimality conditions are satisfied:

Dx L̃(x0, u0, q0, ρ0, κ0, µ0) = 0,   (9)
Du L̃(x0, u0, q0, ρ0, κ0, µ0) = 0,   (10)
κ0 ∈ K+,   (κ0, θ(x0, u0)) = 0,   (11a)
⟨µ0(0), ϑ(x0(0))⟩ + (µ̇0, Dxϑ(x0)f(x0, u0)) = 0,   µ0 ∈ L̃+,   (11b)

where

K+ = {κ ∈ L∞(0, 1; Rk) | κ^i(t) ≥ 0, ∀i ∈ I and a.a. t ∈ [0, 1]},   (12a)
L̃+ = {µ ∈ W1,∞(0, 1; Rl) | 0 ≤ µ̇^j(t) ≤ µ^j(t), µ̇^j nonincreasing, ∀j ∈ J and a.a. t ∈ [0, 1]}.   (12b)

Remark 2.3. In Ref. 4, Assumptions (A1) and (A2) were formulated


for α-active rather than for active constraints. However, for continuous
control functions, both formulations are equivalent. Note that Condition
(A1) restricts the analysis to problems subject to first-order state con-
straints (see Ref. 3).

In addition to (A1) and (A2), we assume the following condition:

(B) Legendre-Clebsch Condition. There exists γ > 0 such that

⟨v, D²uu H̃0(t) v⟩ ≥ γ|v|²,   ∀v ∈ ker Φ0(t) and a.a. t ∈ [0, 1].   (13)

In the same way as in Theorem 5.2 in Ref. 4, we obtain the corollary


below.

Corollary 2.1. If (A1), (A2), (B) hold, then ẋ0 , u0 , q̇0 , κ0 , µ̇0 are Lips-
chitz continuous on (0, 1).

Taking advantage of the above regularity, we can define a Lagrangian L : X∞ × Z∞ → R for problem (O) in the so-called direct form (cf. Ref. 3):

L(x, u, p, ρ, κ, ν, σ^0, σ^1) = F(x, u) − (p, ẋ − f(x, u)) + ⟨ρ, ξ(x(0), x(1))⟩ + (κ, θ(x, u))
                               + (ν, ϑ(x)) + ⟨σ^0, ϑ(x(0))⟩ + ⟨σ^1, ϑ(x(1))⟩.   (14)

We also introduce yet another Hamiltonian Ĥ : Rn × Rm × Rn × Rk × Rl → R,

Ĥ(x, u, p, κ, ν) = H(x, u, p) + ⟨κ, θ(x, u)⟩ + ⟨ν, ϑ(x)⟩.   (15)

The stationarity of the Lagrangian (14) yields the following first-order optimality conditions:

0 = ṗ0 + DxĤ(x0, u0, p0, κ0, ν0)
  = ṗ0 + Dxf(x0, u0)*p0 + Dxϕ(x0, u0) + Dxθ(x0, u0)*κ0 + Dxϑ(x0)*ν0,   (16)
0 = p0(0) + Dx(0)ψ(x0(0), x0(1)) + Dx(0)ξ(x0(0), x0(1))*ρ0 + Dxϑ(x0(0))*σ0^0,   (17)
0 = −p0(1) + Dx(1)ψ(x0(0), x0(1)) + Dx(1)ξ(x0(0), x0(1))*ρ0 + Dxϑ(x0(1))*σ0^1,   (18)
0 = DuĤ(x0, u0, p0, κ0, ν0) = Duϕ(x0, u0) + Duf(x0, u0)*p0 + Duθ(x0, u0)*κ0,   (19)
(κ0, θ(x0, u0)) = 0,   (ν0, ϑ(x0)) = 0,   ⟨σ0^0, ϑ(x0(0))⟩ = 0,   ⟨σ0^1, ϑ(x0(1))⟩ = 0,

where

κ0 ∈ K+,   σ0^0 ∈ Rl+,   σ0^1 ∈ Rl+,
ν0 ∈ L+ := {ν ∈ L∞(0, 1; Rl) | ν^j(t) ≥ 0, ∀j ∈ J and a.a. t ∈ [0, 1]}.

In view of Corollary 2.1, simple calculations show that

ν0 = −µ̈0 ∈ L∞(0, 1; Rl),   p0 = Dxϑ(x0)*µ̇0 + q0 ∈ W1,∞(0, 1; Rn),
σ0^0 = µ0(0) − µ̇0(0),   σ0^1 = µ̇0(1).

In particular,

H̃(x0, u0, q0, κ0, µ̇0) = Ĥ(x0, u0, p0, κ0, ν0) − ⟨ν0, ϑ(x0)⟩,   (20a)

and hence,

Du H̃(x0, u0, q0, κ0, µ̇0) = Du Ĥ(x0, u0, p0, κ0, ν0),   (20b)
D²uu H̃(x0, u0, q0, κ0, µ̇0) = D²uu Ĥ(x0, u0, p0, κ0, ν0).   (20c)

3. SSC via the Hamilton-Jacobi Inequality

To derive second-order sufficient optimality conditions for problem


(O), we follow the approach used in Pickenhain and Tammer (Ref. 1) and
Maurer and Pickenhain (Ref. 2), which is based on the Hamilton-Jacobi
inequality. By

B^ε_X(x) := {y ∈ X | ‖y − x‖_X ≤ ε},

we denote a closed ball of radius ε in a normed space X centered at a point x. For a given ε > 0 and t ∈ [0, 1], denote

Uε(t) := {(x, u) ∈ B^ε_{Rn+m}((x0(t), u0(t))) | θ(x, u) ≤ 0, ϑ(x) ≤ 0}.   (21)

The following theorem can be proved in the same way as Theorem 3.1
in Ref. 2; see also Assertion 2 in Ref. 1.

Theorem 3.1. Suppose that there exists a function V : [0, 1] × Rn → R that is of class C¹ with respect to x and Lipschitz continuous with respect to t, such that the following conditions hold for suitable ε > 0 and c > 0:

DtV(t, x) + H(x, u, DxV(t, x)) ≥ c{|x − x0(t)|² + |u − u0(t)|²},
    ∀(x, u) ∈ Uε(t) and a.a. t ∈ [0, 1],   (22)
DtV(t, x0(t)) + H(x0(t), u0(t), DxV(t, x0(t))) = 0,   a.a. t ∈ [0, 1],   (23)
[V(1, x0(1)) − V(0, x0(0))] − [V(1, x^1) − V(0, x^0)] + ψ(x^0, x^1) − ψ(x0(0), x0(1))
    ≥ c{|x^0 − x0(0)|² + |x^1 − x0(1)|²},
    ∀x^i ∈ B^ε_{Rn}(x0(i)) with ϑ(x^i) ≤ 0, i = 0, 1, and ξ(x^0, x^1) = 0.   (24)

Then, the estimate

F(x, u) ≥ F(x0, u0) + c‖(x, u) − (x0, u0)‖²_{X²}   (25)

holds for all admissible pairs (x, u) ∈ B^ε_X((x0, u0)); i.e., (x0, u0) is a locally isolated weak local minimizer.

As in Refs. 1, 2, the function V : [0, 1] × Rn → R used in Theorem 3.1


is defined in the form of the following quadratic expression:

V (t, x) = e(t) + p0 (t)∗ (x − x0 (t)) + (1/2)(x − x0 (t))∗ Q(t)(x − x0 (t)),


(26)

where e ∈ W 1,∞ (0, 1; R), p0 ∈ W 1,∞ (0, 1; Rn ) is the adjoint function given in
(16), while Q(t) is a symmetric matrix and Q ∈ W 1,∞ (0, 1; Rn×n ). To sim-
plify notation, introduce the function F0 : R1+n+m → R defined by

F0 (t, x, u) = Dt V (t, x) + H(x, u, Dx V (t, x))


= ė(t)+ṗ0 (t)∗ (x−x0 (t))−p0 (t)∗ x˙0 (t)−x˙0 (t)∗ Q(t)(x−x0 (t))
+(1/2)(x − x0 (t))∗ Q̇(t)(x − x0 (t)) + H(x, u, p0 (t)
+Q(t)(x − x0 (t))). (27)

The function e is chosen in such a way that the HJ equality (23) holds
along (x0 , u0 ):

0 = F0 (t, x0 (t), u0 (t)) = ė(t) − p0 (t)∗ ẋ0 (t) + H0 (t),

i.e.,
ė(t) = p0 (t)∗ x˙0 (t) − H0 (t).

The HJ inequality (22) amounts to

F0 (t, x, u) ≥ c(|x − x0 (t)|2 + |u − u0 (t)|2 ),


∀(x, u) ∈ U (t) and a.a. t ∈ [0, 1]. (28)

We are going to find conditions under which (28) is satisfied. To this end,
for almost all t ∈ [0, 1], consider the following mathematical program:

(P(t))   min_{(x,u)∈Rn+m} F0(t, x, u),   s.t. θ(x, u) ≤ 0 and ϑ(x) ≤ 0.

Let us introduce the following standard Lagrangian for problem (P(t)):

l(t, x, u, κ, ν) = F0(t, x, u) + ⟨κ, θ(x, u)⟩ + ⟨ν, ϑ(x)⟩.   (29)

Note that it follows from (27) that

Dx F0 (t, x0 (t), u0 (t)) = ṗ0 (t) + Dx H0 (t),


Du F0 (t, x0 (t), u0 (t)) = Du H0 (t).

Hence, choosing

κ = κ0 (t) ≥ 0 and ν = ν0 (t) ≥ 0,

we find that conditions (16) and (19) can be rewritten as

Dx l(t, x0 (t), u0 (t), κ0 (t), ν0 (t)) = 0,


Du l(t, x0 (t), u0 (t), κ0 (t), ν0 (t)) = 0,

i.e., the Lagrangian l has a stationary point at (x0(t), u0(t), κ0(t), ν0(t)).
On the other hand, condition (8) ensures that the gradients of all con-
straints of (P(t)) active at (x0 (t), u0 (t)) are linearly independent, uniformly
with respect to t ∈ [0, 1]. Hence, the multipliers κ0 (t) and ν0 (t) are defined
uniquely. For a fixed α ≥ 0, introduce the following sets:

Iα+(t) = {i ∈ I0(t) | κ0^i(t) > α},   (30a)
Jα+(t) = {j ∈ J0(t) | ν0^j(t) > α},   (30b)

where I0 (t) and J0 (t) are given in (4). These are the sets of those active
constraints for which pointwise strict complementarity is satisfied with
margin α. In view of definitions (27) and (29), we obtain
D²(x,u),(x,u) l(t, x0(t), u0(t), κ0(t), ν0(t))

   = [ Q̇ + QDxf0 + Dxf0*Q + D²xxĤ0     D²xuĤ0 + QDuf0 ]
     [ D²uxĤ0 + Duf0*Q                   D²uuĤ0         ] (t).   (31)

We assume that the following coercivity condition holds:

(C) There exist constants α > 0 and δ > 0, independent of t ∈ [0, 1], such that

⟨(y, v), D²(x,u),(x,u) l(t, x0(t), u0(t), κ0(t), ν0(t))(y, v)⟩ ≥ δ(|y|² + |v|²),   (32)

for a.a. t ∈ [0, 1] and all (y, v) ∈ Rn+m such that

⟨Dxθ0^i(t), y⟩ + ⟨Duθ0^i(t), v⟩ = 0,   ∀i ∈ Iα+(t),   (33a)
⟨Dxϑ0^j(t), y⟩ = 0,   ∀j ∈ Jα+(t).   (33b)

By a well-known second-order sufficient optimality condition for mathematical programs [see, e.g., Fiacco and McCormick (Ref. 9) and Maurer and Zowe (Ref. 10)], condition (32) together with (8) ensures that, for almost each t ∈ [0, 1], there exist ε(t) > 0 and c(t) > 0 such that

F0(t, x, u) = F0(t, x, u) − F0(t, x0(t), u0(t)) ≥ c(t){|x − x0(t)|² + |u − u0(t)|²},
    ∀(x, u) ∈ B^{ε(t)}_{Rn+m}((x0(t), u0(t))).

Moreover, since the constants β, δ, α in (8), (32), (33), respectively, are independent of t, we can choose ε = ε(t) > 0 and c = c(t) > 0 independent of t; thus, (28) is satisfied. It remains to work out conditions that ensure the boundary condition (24) in Theorem 3.1. To simplify notation, introduce the function F : R2n → R given by the left-hand side of (24),

F(x^0, x^1) := [V(1, x0(1)) − V(0, x0(0))] − [V(1, x^1) − V(0, x^0)] + ψ(x^0, x^1) − ψ(x0(0), x0(1)).   (34)

Obviously,

F(x0 (0), x0 (1)) = 0.

Consider now the following mathematical program:

(P(0,1))   min_{(x^0,x^1)∈R2n} F(x^0, x^1),   s.t. ξ(x^0, x^1) = 0, ϑ(x^0) ≤ 0, ϑ(x^1) ≤ 0.

Introduce the Lagrangian for (P(0,1)),

Λ(x^0, x^1, ρ, σ^0, σ^1) := F(x^0, x^1) + ⟨ρ, ξ(x^0, x^1)⟩ + ⟨σ^0, ϑ(x^0)⟩ + ⟨σ^1, ϑ(x^1)⟩.   (35)

Notice that (17) and (18) can be interpreted as stationarity conditions of the Lagrangian Λ at (x0(0), x0(1), ρ0, σ0^0, σ0^1):

D0Λ(x0(0), x0(1), ρ0, σ0^0, σ0^1) = 0,   D1Λ(x0(0), x0(1), ρ0, σ0^0, σ0^1) = 0.

Moreover, Conditions (A1) and (A2) imply (see Lemma 4.1 in Ref. 4) that the matrix

[ D0ξ(x0(0), x0(1))   D1ξ(x0(0), x0(1)) ]
[ ϒ0(0)               0                 ]
[ 0                   ϒ0(1)             ]

has full row rank;   (36)

i.e., the gradients of all constraints of (P(0,1)) active at (x0(0), x0(1)) are linearly independent. Note that, in view of (34) and (35), we have (writing Λ0 := Λ(x0(0), x0(1), ρ0, σ0^0, σ0^1))
D²00Λ0 = Q(0) + D²00(ψ(x0(0), x0(1)) + ξ(x0(0), x0(1))*ρ0 + ϑ(x0(0))*σ0^0),   (37a)
D²01Λ0 = D²01(ψ(x0(0), x0(1)) + ξ(x0(0), x0(1))*ρ0),   (37b)
D²11Λ0 = −Q(1) + D²11(ψ(x0(0), x0(1)) + ξ(x0(0), x0(1))*ρ0 + ϑ(x0(1))*σ0^1).   (37c)

As in the case of problem (P(t)), introduce the following second-order sufficient optimality condition for problem (P(0,1)):

(A3)  ⟨(y^0, y^1), D²(0,1),(0,1)Λ(x0(0), x0(1), ρ0, σ0^0, σ0^1)(y^0, y^1)⟩ > 0
      for all (y^0, y^1) ∈ R2n, (y^0, y^1) ≠ (0, 0), such that
      ⟨Dxϑ^j(x0(0)), y^0⟩ = 0,   j ∈ J0+(0),
      ⟨Dxϑ^j(x0(1)), y^1⟩ = 0,   j ∈ J0+(1),
      D0ξ(x0(0), x0(1))y^0 + D1ξ(x0(0), x0(1))y^1 = 0.

In view of (36), Condition (A3) implies that there exist ε > 0 and c > 0 such that

F(x^0, x^1) = F(x^0, x^1) − F(x0(0), x0(1)) ≥ c{|x^0 − x0(0)|² + |x^1 − x0(1)|²},
    ∀(x^0, x^1) ∈ B^ε_{R2n}((x0(0), x0(1))) such that ϑ(x^i) ≤ 0, i = 0, 1, and ξ(x^0, x^1) = 0;

i.e., (24) holds.

All results obtained so far are summarized in the following theorem.

Theorem 3.2. Suppose that (A1), (A2), (B) hold and there exists a symmetric matrix function Q ∈ W1,∞(0, 1; Rn×n) such that Conditions (C) and (A3), together with (31) and (37), are satisfied. Then, there exist c > 0 and ε > 0 such that

F(x, u) − F(x0, u0) ≥ c‖(x, u) − (x0, u0)‖²_{X²},

for all admissible (x, u) ∈ B^ε_X((x0, u0)); i.e., (x0, u0) is a locally isolated strong local minimizer of problem (O).

4. Checking Positive-Definiteness with Riccati Equations

In this section, we are going to show that the positive-definiteness Conditions (C) and (A3) can be expressed in terms of the solvability of an auxiliary matrix Riccati differential equation along with appropriate boundary conditions. In view of (31), conditions (32) and (33) in (C) take the form

(y*, v*) [ Q̇ + QDxf0 + Dxf0*Q + D²xxĤ0     D²xuĤ0 + QDuf0 ] (y)
         [ D²uxĤ0 + Duf0*Q                   D²uuĤ0         ] (v) (t) ≥ δ(|y|² + |v|²),

∀(y, v) ∈ Rn+m such that   (38)

Θα(t)y + Φα(t)v = 0,   (39)
ϒα(t)y = 0,   (40)

where

Θα(t) := [Dxθ0^i(t)]_{i∈Iα+(t)},   (41a)
Φα(t) := [Duθ0^i(t)]_{i∈Iα+(t)},   (41b)
ϒα(t) := [Dxϑ0^j(t)]_{j∈Jα+(t)}.   (41c)

We will derive a weak form of the Riccati equation, where the con-
straints (39) and (40) are taken into account. For this purpose, we need
the following lemma; see Theorem 1 in Haynsworth (Ref. 11). Here, π(S)
denotes the number of positive eigenvalues of a symmetric matrix S.

Lemma 4.1. Let A, B, C be n × n, m × n, m × m dimensional matrices, respectively, where A and C are symmetric and C is invertible. Define

D = [ A   B* ]
    [ B   C  ].

Then,

π(D) = π(C) + π(A − B*C⁻¹B),

where A − B*C⁻¹B is the Schur complement of C.
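
As a quick numerical illustration of Lemma 4.1 (added here as a sketch, not part of the original paper), the inertia additivity can be checked on a random symmetric block matrix with NumPy; the sizes n, m below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

# Build a random symmetric block matrix D = [[A, B*], [B, C]] with C invertible.
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)
B = rng.standard_normal((m, n))
C = rng.standard_normal((m, m)); C = 0.5 * (C + C.T) + m * np.eye(m)  # shift makes C invertible

D = np.block([[A, B.T], [B, C]])

def n_pos(S, tol=1e-10):
    """Number of positive eigenvalues of a symmetric matrix."""
    return int(np.sum(np.linalg.eigvalsh(S) > tol))

schur = A - B.T @ np.linalg.solve(C, B)       # Schur complement of C in D
print(n_pos(D), n_pos(C) + n_pos(schur))       # the two counts agree, as Lemma 4.1 states
```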

First, let us recall the strong form of the Riccati equation, which is obtained if we require that the inequality (38) be valid for all (y, v) ∈ Rn+m, ignoring the constraints (39) and (40). To this end, we need the following strong form of the Legendre-Clebsch condition:

⟨v, D²uuĤ0(t)v⟩ ≥ γ|v|²,   for some γ > 0 and ∀v ∈ Rm, t ∈ [0, 1].   (42)

Choosing the matrices

A = Q̇ + QDxf0 + Dxf0*Q + D²xxĤ0,
B* = D²xuĤ0 + QDuf0,
C = D²uuĤ0,

and applying Lemma 4.1, we find that (32) is satisfied for all (y, v) ∈ Rn+m if there exists δ > 0 such that

A(t) − B*(t)C(t)⁻¹B(t) ≥ δI,   ∀t ∈ [0, 1].

But this holds if the following Riccati equation has a bounded solution:

Q̇ + QDxf0 + Dxf0*Q + D²xxĤ0 − (D²xuĤ0 + QDuf0)(D²uuĤ0)⁻¹(D²uxĤ0 + Duf0*Q) = 0.   (43)
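
In practice, the strong test amounts to integrating (43) along the reference trajectory and checking that Q remains bounded. The following Python sketch (an illustration added here, not taken from the paper) shows the pattern; the coefficient functions are placeholders that would have to be assembled from Dxf0, Duf0, and the Hessian blocks of Ĥ along (x0, u0).

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
# Placeholder time-varying coefficients of (43); replace with the concrete problem data.
A_  = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])   # stands for Dxf0(t)
Bu  = lambda t: np.array([[0.0], [1.0]])               # stands for Duf0(t)
Hxx = lambda t: np.eye(n)                              # stands for D2xx H-hat along the trajectory
Hxu = lambda t: np.zeros((n, 1))
Huu = lambda t: np.array([[2.0]])

def riccati_rhs(t, q):
    Q = q.reshape(n, n)
    G = Hxu(t) + Q @ Bu(t)                             # D2xu H-hat + Q Du f0
    dQ = -(Q @ A_(t) + A_(t).T @ Q + Hxx(t) - G @ np.linalg.solve(Huu(t), G.T))
    return dQ.ravel()

# integrate backward from Q(1) = 0; bounded values on [0, 1] pass the strong test
sol = solve_ivp(riccati_rhs, (1.0, 0.0), np.zeros(n * n), max_step=1e-2)
print(sol.success, np.abs(sol.y).max())
```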

This property follows from a well-known stability result for ODEs (see the proof of Proposition 4.1 below). Essentially, this strong form of the Riccati equation has been used in all the examples where a test for SSC has been performed and also in sensitivity analysis (Refs. 12–15). To get a weak form of the Riccati equation, we take into account the constraints (39) and (40). This can be done by projecting the matrix in (38) onto the subspace defined by (39) and (40). Note that, in view of (A1), the matrices Φα(t) and ϒα(t) are of full row rank. Denote

ı(t) = card Iα+(t),   ȷ(t) = card Jα+(t).

In the same way as in Section 5 of Ref. 2, we choose the submatrices Φα^1(t) and Φα^2(t) of dimensions ı(t) × ı(t) and ı(t) × (m − ı(t)), respectively, such that Φα^1(t) is invertible and the submatrices are measurable functions of t. Denote by v1 and v2 the subvectors of v corresponding to Φα^1(t) and Φα^2(t), respectively. Then, we can use (39) to express v1 ∈ R^{ı(t)} in terms of v2 ∈ R^{m−ı(t)} and y:

v1(t) = R1(t)y + R2(t)v2,   (44)

where

R1(t) = −Φα^1(t)⁻¹ Θα(t),   R2(t) = −Φα^1(t)⁻¹ Φα^2(t).


Moreover, denote by P(t) : Rn → Rn the projection map onto the (n − ȷ(t))-dimensional subspace {y ∈ Rn | ϒα(t)y = 0}. Then, the inequality (38) subject to (39) and (40) reduces to

(y*, v2*) M(t) (y*, v2*)* ≥ δ(|P(t)y|² + |v2|²),   ∀y ∈ Rn, v2 ∈ R^{m−ı(t)} and a.a. t ∈ [0, 1],   (45)

where

M = [ P*[Q̇ + QA11 + A11*Q + B11]P     P*[QA12 + B12] ]
    [ [A12*Q + B12*]P                  C22            ],   (46a)
A11 = Dxf0 + Du1f0 R1,   (46b)
B11 = D²xxĤ0 + D²xu1Ĥ0 R1 + R1*D²u1xĤ0 + R1*D²u1u1Ĥ0 R1,   (46c)
A12 = Du1f0 R2 + Du2f0,   (46d)
B12 = D²xu1Ĥ0 R2 + D²xu2Ĥ0 + R1*D²u1u1Ĥ0 R2 + R1*D²u1u2Ĥ0,   (46e)
C22 = R2*D²u1u1Ĥ0 R2 + R2*D²u1u2Ĥ0 + D²u2u1Ĥ0 R2 + D²u2u2Ĥ0.   (46f)

We have to strengthen the Legendre-Clebsch Condition (B) and assume the following condition:

(A4) Strengthened Legendre-Clebsch Condition. There exist α > 0 and γ > 0 such that

⟨v, D²uuĤ0(t)v⟩ ≥ γ|v|²,   ∀v ∈ ker Φα(t) and a.a. t ∈ [0, 1].

Note that (A4) implies

⟨v2, C22(t)v2⟩ ≥ γ|v2|²,   ∀v2 ∈ R^{m−ı(t)} and a.a. t ∈ [0, 1].   (47)

Proposition 4.1. Suppose that (A1), (A2), (A4) hold. Let S ∈ L∞(0, 1; Rn×n) be a symmetric matrix function such that

P(t)* S(t) P(t) = 0,   a.a. t ∈ (0, 1).   (48)

Assume that the matrix Riccati equation

Q̇ + QA11 + A11*Q + B11 − (QA12 + B12) C22⁻¹ (A12*Q + B12*) + S = 0   (49)

has a solution uniformly bounded on (0, 1). Then, (45) is satisfied for a.a. t ∈ (0, 1).

Proof. By well-known stability results for ODEs, it follows from (47) and from the existence of a solution to (49) that, for δ ∈ (0, γ) sufficiently small, the equation

Q̇ + QA11 + A11*Q + B11 − (QA12 + B12)(C22 − δI)⁻¹(A12*Q + B12*) + S = 2δI   (50)

has a bounded solution. Here, I denotes the unit matrix of appropriate dimension. Multiplying (50) from the left and right by P(t)* and P(t), respectively, and taking into account (48), we get

R := P*[Q̇ + QA11 + A11*Q + B11 − (QA12 + B12)(C22 − δI)⁻¹(A12*Q + B12*)]P − δP*P = δP*P.   (51)

Simple calculations show that R(t) is the Schur complement of the submatrix C22 − δI of the matrix M(t) − δ diag(P(t)*P(t), I), where M(t) is defined in (46). Clearly, (51) yields that

π(R(t)) = n − ȷ(t).

On the other hand, in view of (47), we have

π[C22(t) − δI] = m − ı(t).

Hence, by Lemma 4.1,

π[M(t) − δ diag(P(t)*P(t), I)] = n + m − ı(t) − ȷ(t).

This implies that (45) holds.

Using Theorem 3.2 and Proposition 4.1, we arrive at the principal result of this paper.

Theorem 4.1. Sufficient Optimality Condition. Suppose that Conditions (A1), (A2), (A4) are satisfied and that, moreover:

(A5) there exists a symmetric matrix function S ∈ L∞(0, 1; Rn×n) satisfying (48), such that the Riccati equation (49) has a solution Q uniformly bounded on [0, 1] for which Condition (A3) holds.

Then, there exist c > 0 and ε > 0 such that

F(x, u) − F(x0, u0) ≥ c‖(x, u) − (x0, u0)‖²_{X²},

for all admissible (x, u) ∈ B^ε_X((x0, u0)); i.e., (x0, u0) is a locally isolated strong local minimizer of problem (O).

5. Numerical Examples

Example 5.1. Weak Riccati Test for SSC. Consider the following variational problem on a time interval [0, T] with fixed endtime T > 0 but free endpoint x(T):

min F(x, u) = 0.5 ∫₀ᵀ [u(t)² − x(t)²] dt,   (52)
s.t. ẋ(t) = u(t),   x(0) = x0,   t ∈ [0, T],   (53)
     −a ≤ x(t) ≤ b,   t ∈ [0, T],   0 < a ≤ b.   (54)

The initial value x0 is supposed to satisfy −a < x0 < b.

This is the same control problem as in Example 7.1 of Kawasaki and Zeidan (Ref. 16), except that we consider a free endpoint. The Hamiltonian (15) takes the form

Ĥ(x, u, p, νa, νb) = 0.5(u² − x²) + pu + νa(−x − a) + νb(x − b),   (55)

with two multipliers νa, νb associated with the state constraints −x − a ≤ 0 and x − b ≤ 0. The adjoint equation (16) is

ṗ = x + νa − νb.

In view of

DuĤ = u + p = 0,

the optimal control is given by u = −p.

It is obvious that the strengthened Legendre–Clebsch condition (A4)


holds and that the state constraint is of order one. Depending on the ini-
tial value x0 ∈ (−a, b), the optimal solution is composed of an interior arc
with −a < x(t) < b in [0, t1 ) and a boundary arc either on the lower bound-
ary x(t) ≡ −a or on the upper boundary x(t) ≡ b, for t1 ≤ t ≤ T . The solu-
tions on the boundary arcs are as follows:

x(t) ≡ −a : u = 0, p = 0, νa = a > 0, (56a)


x(t) ≡ b : u = 0, p = 0, νb = b > 0. (56b)

To determine the unknown entry time t1, we use the continuity of the control and obtain the entry conditions

x(t1) ∈ {−a, b},   p(t1) = 0.

Thus, the following boundary value problem has to be solved on the interior arc:

ẋ = −p,   ṗ = x,   x(0) = x0,   x(t1) ∈ {−a, b},   p(t1) = 0.   (57)

The solution is

x(t) = α sin(t) + β cos(t),

with

β = x0 , α = x0 tan(t1 ).

The entry time is determined by the condition cos(t1 ) = x0 /b on the upper


boundary [resp., cos(t1 ) = −x0 /a on the lower boundary]. Thus, for initial
values with −a < x0 < a, the last equation is solvable with 0 < t1 < π for
both the upper and the lower boundary. Hence, we have found two extre-
mal solutions. For x0 = a, we also get two extremals with x(t1 ) = −a for
t1 = π and x(t1 ) = b with t1 < π. Finally, for all initial values a < x0 < b,
there exists only one extremal joining the upper boundary with x(t1 ) = b
for t1 < π .
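
As an aside (not part of the original paper), the entry-time relations cos(t1) = x0/b and cos(t1) = −x0/a are easy to evaluate numerically; the short Python sketch below does so for illustrative data a, b, x0, which are assumed values only.

```python
import numpy as np

def entry_times(x0, a, b):
    """Candidate entry times t1 on the lower (x = -a) and upper (x = b) boundary,
    from cos(t1) = -x0/a and cos(t1) = x0/b; None if no solution in (0, pi]."""
    sols = {}
    for name, c in (("lower", -x0 / a), ("upper", x0 / b)):
        sols[name] = float(np.arccos(c)) if -1.0 <= c <= 1.0 else None
    return sols

# illustrative data (assumed, not from the paper)
a, b, x0 = 1.0, 2.0, 0.5
print(entry_times(x0, a, b))

# interior-arc extremal x(t) = alpha*sin(t) + beta*cos(t) for the upper-boundary candidate
t1 = entry_times(x0, a, b)["upper"]
alpha, beta = x0 * np.tan(t1), x0
t = np.linspace(0.0, t1, 5)
print(alpha * np.sin(t) + beta * np.cos(t))   # starts at x0 and ends at b
```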
We are going to use the weak form of SSC derived in Section 4 to
check the optimality of all these extremals. On the interior arc [0, t1 ), the
function S in the Riccati equation (49) becomes zero and the equation
takes the form

Q̇ − Q2 − 1 = 0.

This equation has the bounded solution

Q(t) = tan(t − t1/2),   for t1 < π.

However, there is no bounded solution on the whole interval [0, T] if the endtime satisfies T ≥ π. Hence, for t1 < π and T ≥ π, the test of SSC in its strong form (43) is not applicable. Here, we can take advantage of the freedom in choosing any S satisfying (48). In our case, S ∈ W1,∞(t1, T) can be arbitrary. Setting

S(t) ≡ Q(t1)² + 1,

we find that (49) reduces to Q̇ = 0. The boundary conditions for Q(0) and Q(T) are both vacuous, since the only admissible variations in (A3) are y^0 = y^1 = 0. Hence, the solution of (49) needed in Theorem 4.1 takes the form

Q(t) = tan(t − t1/2)   for t ∈ [0, t1],
Q(t) = tan(t1/2)       for t ∈ [t1, T],

and, by that theorem, every extremal with entry time t1 < π is a locally isolated strong local minimizer of problem (52)–(54).
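
The effect of the free choice of S can also be seen numerically. The sketch below (an illustration added here, not from the paper) integrates the strong-form equation Q̇ = 1 + Q², which blows up before the endtime when T ≥ π, and evaluates the bounded piecewise solution of the weak form; t1 and T are assumed sample values.

```python
import numpy as np
from scipy.integrate import solve_ivp

T, t1 = 4.0, 2.0                     # assumed sample values with t1 < pi <= T

def hit_blowup(t, q):                # stop once Q becomes huge
    return 1.0e6 - abs(q[0])
hit_blowup.terminal = True

# Strong form (43): Qdot = 1 + Q^2 escapes to infinity after a length pi,
# so it has no bounded solution on [0, T] when T >= pi.
strong = solve_ivp(lambda t, q: 1.0 + q**2, (0.0, T), [np.tan(-t1 / 2)],
                   events=hit_blowup, max_step=0.01)
print("strong form bounded only up to t =", round(strong.t[-1], 3))

# Weak form (49): on the boundary arc [t1, T] the choice S = Q(t1)^2 + 1 freezes Q,
# giving the bounded piecewise solution quoted in the text.
Q_weak = lambda t: np.tan(np.minimum(t, t1) - t1 / 2)
print(np.round(Q_weak(np.linspace(0.0, T, 9)), 4))
```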

Example 5.2. Optimal Control of the van der Pol Oscillator. The following optimal control model for the van der Pol oscillator has been discussed in Augustin and Maurer (Ref. 14) using the multiplier µ in the Lagrangian (2). It is instructive to discuss the same example on the basis of the multiplier ν in the Hamiltonian (15), which was considered throughout the paper. Moreover, we shall verify the SSC test in its weak form as given in Theorem 4.1. The state variables are the voltage x1(t) and the electric current x2(t); the control u(t) represents the voltage at the generator. The optimal control problem is

min F(x, u) = ∫₀⁵ [u²(t) + x1²(t) + x2²(t)] dt,   (58)
s.t. ẋ1 = x2,   ẋ2 = −x1 + x2(1 − x1²) + u,   t ∈ [0, 5],   (59)
     x1(0) = 1,   x2(0) = 0,   (60)
     −x2(t) − 0.4 ≤ 0,   t ∈ [0, 5].   (61)

Since Duf(x, u)*Dxϑ(x)* = −1 ≠ 0, the state constraint has order one and satisfies the regularity condition (6).
The Hamiltonian (15) takes the form

Ĥ(x1, x2, u, p1, p2, ν) = u² + x1² + x2² + p1x2 + p2[−x1 + x2(1 − x1²) + u] + ν(−x2 − 0.4).   (62)

Then, the adjoint equations (16) are

ṗ1 = −2x1 + p2(1 + 2x1x2),   p1(5) = 0,   (63a)
ṗ2 = −2x2 − p1 + p2(x1² − 1) + ν,   p2(5) = 0.   (63b)

The optimal control is determined by

DuĤ = 2u + p2 = 0   ⇒   u = −p2/2.   (64)

The Legendre–Clebsch Condition (A4) obviously holds. After studying the


unconstrained solution first, one may guess that the constrained solution
has one boundary arc with

x2 (t) ≡ −0.4, t ∈ [t1 , t2 ] and 0 < t1 < t2 < 5.

On the boundary arc, we have

0 = ẋ2 = −x1 + x2 (1 − x12 ) + u,

which gives the control in the feedback form

u = x1 + x2 (x12 − 1), x2 = −0.4. (65)

In view of the relation u = −p2/2, we obtain the following adjoint equation on the boundary arc:

ṗ2 = −2(x2 + 2x1x2²),   t ∈ [t1, t2],   (66)

whereas the adjoint equation for p1 in (63) remains unchanged. In view of


the second adjoint equation (63b), the last equation yields the multiplier ν
explicitly,

ν = p1 + p2(1 − x1²) + 2x2 − 2x2(1 + 2x1x2) = p1 + p2(1 − x1²) − 4x1x2².   (67)



Fig. 1. State variables x1 (t), x2 (t) (top row) and adjoint variables p1 (t), p2 (t) (bottom row).

Due to the continuity of the optimal control and the control law u = −p2/2, the junction conditions at the entry and exit points t1 and t2 are

x2(t1) = −0.4,   [x1²(ti) − 1] x2(ti) + x1(ti) + 0.5 p2(ti) = 0,   i = 1, 2.   (68)

The multipoint boundary-value problem (59)–(60) and (63)–(68) can be treated by the code BNDSCO of Oberle and Grimm (Ref. 17). We obtain the following results, which allow us to generate the solution shown in Figure 1:

t1 = 0.62752939,   p1(0) = 5.66576692,   x1(5) = −0.07420849,   p1(5) = 0.0,
t2 = 1.46908215,   p2(0) = 0.82392266,   x2(5) = 0.02929680,   p2(5) = 0.0.
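
As a rough cross-check of these values (added here for illustration; it is not the computation performed in the paper, which used BNDSCO), one can discretize (58)–(61) directly and solve the resulting nonlinear program. With the coarse Euler grid below, the optimal value should land near the figure reported next, though exact agreement is not to be expected.

```python
import numpy as np
from scipy.optimize import minimize

N, T = 50, 5.0
h = T / N

def unpack(z):
    x1, x2, u = z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]
    return x1, x2, u

def objective(z):
    x1, x2, u = unpack(z)
    return h * np.sum(u**2 + x1[:-1]**2 + x2[:-1]**2)

def dynamics(z):
    # Euler defects of (59) plus the initial conditions (60), all required to vanish.
    x1, x2, u = unpack(z)
    d1 = x1[1:] - x1[:-1] - h * x2[:-1]
    d2 = x2[1:] - x2[:-1] - h * (-x1[:-1] + x2[:-1] * (1 - x1[:-1]**2) + u)
    return np.concatenate([d1, d2, [x1[0] - 1.0, x2[0]]])

z0 = np.zeros(3 * N + 2)
z0[0] = 1.0
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": dynamics},
               bounds=[(None, None)] * (N + 1) + [(-0.4, None)] * (N + 1) + [(None, None)] * N,
               options={"maxiter": 500})
print(res.fun)   # roughly 2.9-3.0 with this coarse grid
```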

Using the above solution data, we get

F (x, u) = 2.95370134

and compute the multiplier ν(t) in (67),

ν(t) ≥ ν(t2 ) = 1.047301, ν̇(t) < 0, ∀ t ∈ [t1 , t2 ].

Thus, we conclude that there exists α > 0 such that the index sets Jα+ (t) =
J0 (t) in (30) coincide for all t ∈ [t1 , t2 ]. This allows us to verify the SSC
developed in Section 4 in its weak form. Consider the symmetric 2 × 2 matrix

Q(t) = [ Q11(t)   Q12(t) ]
       [ Q12(t)   Q22(t) ].
On the interior arcs [0, t1) and (t2, 5], setting S(t) ≡ 0 in the Riccati equation (49), we obtain

Q̇11 − 2Q12(1 + 2x1x2) + 2 − 2p2x2 − 0.5Q12² = 0,   (69a)
Q̇12 + Q11 + Q12(1 − x1²) − Q22(1 + 2x1x2) − 2p2x1 − 0.5Q12Q22 = 0,   (69b)
Q̇22 + 2Q12 + 2Q22(1 − x1²) + 2 − 0.5Q22² = 0.   (69c)

The boundary conditions (A3) together with (31) and (37) lead to
the terminal condition Q(5) < 0 (negative definite). Obviously, it suffices to
require the condition

Q(5) = 0,

i.e.,

Q11 (5) = 0, Q12 (5) = 0, Q22 (5) = 0.

In view of (48), on the boundary interval [t1, t2] we can choose the matrix function S in (49) in the form

S = [ 0        S12(t) ]
    [ S12(t)   S22(t) ],

where S12 and S22 are arbitrary essentially bounded functions. We choose these functions in such a way that Q̇12(t) ≡ 0 and Q̇22(t) ≡ 0 hold in (t1, t2), i.e.,

Q12(t) ≡ Q12(t2)   and   Q22(t) ≡ Q22(t2)   in [t1, t2].

To this end, we set

S12(t) = −Q11(t) − Q12(t2)(1 − x1(t)²) + Q22(t2)(1 + 2x1(t)x2(t)) + 2p2(t)x1(t) + 0.5Q12(t2)Q22(t2),   (70a)
S22(t) = −2Q12(t2) − 2Q22(t2)(1 − x1(t)²) − 2 + 0.5Q22(t2)².   (70b)

The first equation in (69) then becomes the following linear equation:

Q̇11(t) − 2Q12(t2)(1 + 2x1(t)x2(t)) + 2 − 2p2(t)x2(t) − 0.5Q12(t2)² = 0.   (71)

Fig. 2. Solutions Q11 , Q12 , Q22 to the Riccati equations (69) and (71).

To calculate S12 (t) and S22 (t), we integrate first the Riccati equation (69)
on [t2 , 5], backward from Q(5) = 0. We find numerically

Q11 (t2 ) = 5.8574256, Q12 (t2 ) = 2.5219157, Q22 (t2 ) = 11.787467.

Then, we obtain Q11 (·) by integrating (71) backward on [t1 , t2 ]. Substi-


tuting Q11 (·), as well as Q12 (t2 ) and Q22 (t2 ) into (70), we get S12 (·) and
S22 (·). We continue with backward integration of (69) on [0, t1 ], starting
with Q11 (t1 ), Q12 (t1 ) = Q12 (t2 ), Q22 (t1 ) = Q22 (t2 ). The following initial val-
ues were obtained:

Q11 (0) = 7.1082463, Q12 (0) = 1.8601589, Q22 (0) = 11.566050.

The results of the numerical calculations are shown in Figure 2. Thus, we


arrive at the conclusion that the solution shown in Figure 1 is a local min-
imum.
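
The backward-integration procedure just described is straightforward to script; the following Python sketch (illustrative only, not the authors' code) reproduces the three-stage integration of (69) and (71). The trajectory functions x1(t), x2(t), p2(t) are placeholders here; in an actual verification they would be interpolated from the boundary-value solution of Figure 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

t1, t2, T = 0.62752939, 1.46908215, 5.0
x1 = lambda t: np.cos(t)      # placeholder, NOT the true trajectory
x2 = lambda t: -0.4           # placeholder
p2 = lambda t: 0.0            # placeholder

def riccati(t, Q):            # full system (69), used on the interior arcs
    Q11, Q12, Q22 = Q
    a = 1.0 + 2.0 * x1(t) * x2(t)
    b = 1.0 - x1(t) ** 2
    dQ11 = 2.0 * Q12 * a - 2.0 + 2.0 * p2(t) * x2(t) + 0.5 * Q12 ** 2
    dQ12 = -Q11 - Q12 * b + Q22 * a + 2.0 * p2(t) * x1(t) + 0.5 * Q12 * Q22
    dQ22 = -2.0 * Q12 - 2.0 * Q22 * b - 2.0 + 0.5 * Q22 ** 2
    return [dQ11, dQ12, dQ22]

# stage 1: backward on [t2, 5] from Q(5) = 0
sol_a = solve_ivp(riccati, (T, t2), [0.0, 0.0, 0.0])
Q11_t2, Q12_t2, Q22_t2 = sol_a.y[:, -1]

# stage 2: backward on [t1, t2]; Q12, Q22 frozen, Q11 follows the linear equation (71)
lin = lambda t, Q: [2.0 * Q12_t2 * (1.0 + 2.0 * x1(t) * x2(t))
                    - 2.0 + 2.0 * p2(t) * x2(t) + 0.5 * Q12_t2 ** 2]
sol_b = solve_ivp(lin, (t2, t1), [Q11_t2])
Q11_t1 = sol_b.y[0, -1]

# stage 3: backward on [0, t1] with the full system again
sol_c = solve_ivp(riccati, (t1, 0.0), [Q11_t1, Q12_t2, Q22_t2])
print(sol_c.y[:, -1])         # Q(0); with the true data, bounded values complete the weak SSC test
```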

References

1. Pickenhain, S., and Tammer, K., Sufficient Conditions for Local Optimality
in Multidimensional Control Problems with State Restrictions, Zeitschrift für
Analysis und ihre Anwendungen, Vol. 10, pp. 397–405, 1991.
2. Maurer, H., and Pickenhain, S., Second-Order Sufficient Conditions for Con-
trol Problems with Mixed Control-State Constraints, Journal of Optimization
Theory and Applications, Vol. 86, pp. 649–667, 1995.
3. Hartl, R. F., Sethi, S. P., and Vickson, R. G., A Survey of the Maximum
Principle for Optimal Control Problems with State Constraints, SIAM Review,
Vol. 37, pp. 181–218, 1995.
4. Malanowski, K., On Normality of Lagrange Multipliers for State-Constrained
Optimal Control Problems, Optimization, Vol. 52, pp. 75–91, 2003.
5. Zeidan, V., The Riccati Equation for Optimal Control Problems with Mixed State-Control Constraints: Necessity and Sufficiency, SIAM Journal on Control and Optimization, Vol. 32, pp. 1297–1321, 1994.

6. Malanowski, K., and Maurer, H., Sensitivity Analysis for Parametric Prob-
lems with Control-State Constraints, Computational Optimization and Appli-
cations, Vol. 5, pp. 253–283, 1996.
7. Hager, W. W., Lipschitz Continuity for Constrained Processes, SIAM Journal
on Control and Optimization, Vol. 17, pp. 321–338, 1979.
8. Neustadt, L. W., Optimization: A Theory of Necessary Conditions, Princeton
University Press, Princeton, New Jersey, 1976.
9. Fiacco, A. V., and McCormick, G. P., Nonlinear Programming: Sequential
Unconstrained Minimization Techniques, John Wiley, New York, NY, 1968.
10. Maurer, H., and Zowe, J., First and Second-Order Necessary and Sufficient
Optimality Conditions for Infinite-Dimensional Programming Problems, Mathe-
matical Programming, Vol. 16, pp. 98–110, 1979.
11. Haynsworth, E. V., Determination of the Inertia of a Partitioned Hermitian Matrix, Linear Algebra and Its Applications, Vol. 1, pp. 73–81, 1968.
12. Maurer, H., First and Second-Order Sufficient Optimality Conditions in
Mathematical Programming and Optimal Control, Mathematical Programming
Study, Vol. 14, pp. 163–177, 1981.
13. Malanowski, K., and Maurer, H., Sensitivity Analysis for State–Constrained
Optimal Control Problems, Discrete and Continuous Dynamical Systems,
Vol. 4, pp. 241–272, 1998.
14. Augustin, D., and Maurer, H., Computational Sensitivity Analysis for State-
Constrained Control Problems, Annals of Operations Research, Vol. 101,
pp. 75–99, 2001.
15. Augustin, D., and Maurer, H., Second-Order Sufficient Conditions and Sensi-
tivity Analysis for the Optimal Control of a Container Crane under State Con-
straints, Optimization, Vol. 49, pp. 351–368, 2001.
16. Kawasaki, H., and Zeidan, V., Conjugate Points for Variational Problems
with Equality State Constraints, SIAM Journal on Control and Optimization,
Vol. 39, pp. 433–456, 2000.
17. Oberle, H. J., and Grimm, W., BNDSCO: A Program for the Numerical Solution of Optimal Control Problems, Report 515-89/22, Institute for Flight Systems Dynamics, DLR, Oberpfaffenhofen, Germany, 1989.
