To cite this article: Akbar H. Borzabadi, Omid S. Fard & Hamed H. Mehne (2012) A hybrid algorithm for approximate optimal control of nonlinear Fredholm integral equations, International Journal of Computer Mathematics, 89:16, 2259-2273, DOI: 10.1080/00207160.2012.705279
To link to this article: http://dx.doi.org/10.1080/00207160.2012.705279
(Received 14 February 2011; revised version received 11 October 2011; second revision received 25 April 2012; accepted 16 June 2012)
In this paper, a novel hybrid method for obtaining approximate solutions of optimal control problems governed by nonlinear Fredholm integral equations is presented, based on two approaches: evolutionary algorithms and an iterative scheme. By converting the problem to a discretized form, it is considered as a quasi-assignment problem, and an iterative method is then applied to find an approximate solution of the discretized form of the integral equation. A convergence analysis of the proposed iterative method and its implementation on numerical examples are also given.
Keywords: optimal control; Fredholm integral equation; evolutionary algorithm; iterative method;
discretization; approximation
2010 AMS Subject Classifications: 65R20; 93C65
1. Introduction
Evolutionary algorithms and optimization are two prominent fields of research in applied science and engineering. The major purpose of optimization is to determine how to optimally change or influence real systems so as to achieve a desired result. This requires realizing large-scale optimization strategies of increasing complexity, which in turn motivates the development of numerical techniques for optimization purposes [11].
On the other hand, in the mathematical formulation of physical phenomena, integral equations are frequently encountered and have attracted much attention. In fact, integral equations are as important as differential equations and appear in a variety of applications in many fields, including continuum mechanics, potential theory, geophysics, electricity and magnetism, kinetic theory of gases, hereditary phenomena in biology, quantum mechanics, mathematical economics, population genetics, medicine, fluid mechanics, steady-state heat conduction, and radiative heat transfer problems [4,6,8,9,12,14,15].
In particular, optimal control of systems governed by Fredholm integral equations is important in applications such as the optimal control problem related to the Ornstein-Uhlenbeck process, which arises in statistical communication theory [7].
In this paper, we focus on the formulation of a class of optimal control problems governed by nonlinear Fredholm integral equations as follows:

Minimize J(x, u) = ∫_0^T φ(t, x(t), u(t)) dt,   (1)

where the control function u(·) and the corresponding state x(·) are subject to

x(t) = y(t) + ∫_0^T ψ(t, s, x(s), u(s)) ds,  a.e. on [0, T].   (2)
2. Discretization
To find the optimal solution, we must examine the performance index over all possible control-state pairs. Let us first define the set of admissible pairs, consisting of all pairs (x, u) satisfying Equation (2), and denote it by P. In this section, we present a control discretization method by equidistant partitioning of [0, T] as Δ_n = {0 = t_0, t_1, ..., t_{n-1}, t_n = T} with discretization parameter h = t_{i+1} - t_i, i = 0, 1, ..., n-1. The time interval is divided into n sub-intervals [t_0 = 0, t_1], [t_1, t_2], ..., [t_{n-1}, t_n = T]. On the other hand, the set of control values is discretized into the constants u_0, u_1, ..., u_n. In this way, the time-control space is discretized if the control function is assumed to be constant on each time sub-interval. Using the characteristic function
χ_{[t_{k-1}, t_k)}(t) = 1 if t ∈ [t_{k-1}, t_k), and 0 otherwise,

the control function may be represented as

u(t) = Σ_{k=1}^{n} u_{i_k} χ_{[t_{k-1}, t_k)}(t).   (3)

Evaluating Equation (2) at the nodes of Δ_n gives

x(t_i) = y(t_i) + ∫_0^T ψ(t_i, s, x(s), u(s)) ds,  i = 0, 1, ..., n.   (4)
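As a minimal illustration (in Python; the function name and grid handling are ours, not from the paper), a piecewise-constant control of the form (3) can be represented as a callable built from node values:

```python
import numpy as np

def piecewise_constant_control(control_values, t_grid):
    """Return u(t) = sum_k u_{i_k} * chi_{[t_{k-1}, t_k)}(t) as a callable.

    control_values[k] is the constant value taken on [t_k, t_{k+1});
    t_grid is the equidistant partition {0 = t_0, t_1, ..., t_n = T}.
    """
    t_grid = np.asarray(t_grid)
    values = np.asarray(control_values, dtype=float)

    def u(t):
        # Locate the sub-interval [t_k, t_{k+1}) containing t; clip so that
        # u(T) takes the value of the last sub-interval.
        k = np.clip(np.searchsorted(t_grid, t, side="right") - 1,
                    0, len(values) - 1)
        return values[k]

    return u

# Five sub-intervals on [0, 1] with one constant control value per interval.
t_grid = np.linspace(0.0, 1.0, 6)
u = piecewise_constant_control([0.1, 0.3, 0.5, 0.7, 0.9], t_grid)
print(u(0.05), u(0.45), u(0.99))  # one value per sub-interval
```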
In Equation (4), the integral term can be estimated by a numerical method of integration, for example, one of the Newton-Cotes methods. Therefore, taking the equidistant partition Δ_n as above with h = t_{i+1} - t_i, i = 0, 1, ..., n-1, together with weights w_i, i = 0, 1, ..., n, equality (4) can be written as
x_i = y_i + Σ_{j=0}^{n} w_j ψ(t_i, s_j, x_j, u_j) + O(h^α),  i = 0, 1, ..., n,   (5)

where x_i = x(t_i), y_i = y(t_i), i = 0, 1, ..., n, and α depends upon the Newton-Cotes method used for the estimation of the integral in Equation (4). The same partition and weights can be used to convert the objective function (1) to the following form:

J(x, u) = Σ_{j=0}^{n} w_j φ(t_j, x_j, u_j) + O(h^α).   (6)
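For instance, with the composite trapezoidal rule (a Newton-Cotes method, for which α = 2), the discretized state equation (5) and objective (6) can be sketched as follows; `psi` and `phi` stand for the problem's kernel and cost integrand, and all helper names are ours:

```python
import numpy as np

def trapezoid_weights(n, T):
    """Composite trapezoidal weights w_0, ..., w_n on an equidistant grid."""
    h = T / n
    w = np.full(n + 1, h)
    w[0] = w[-1] = h / 2.0
    return w

def state_residual(xi, eta, y, t, w, psi):
    """Residual of Equation (5): xi_i - y_i - sum_j w_j psi(t_i, s_j, xi_j, eta_j)."""
    return np.array([xi[i] - y[i] - np.sum(w * psi(t[i], t, xi, eta))
                     for i in range(len(t))])

def objective(xi, eta, t, w, phi):
    """Discretized objective (6): sum_j w_j phi(t_j, xi_j, eta_j)."""
    return float(np.sum(w * phi(t, xi, eta)))
```

A zero residual means the node values (xi, eta) satisfy the discretized constraint (8) exactly.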
For the partition Δ_n, by neglecting the truncation errors in Equations (5) and (6), the following nonlinear optimization problem may be considered:

Minimize J_n = Σ_{j=0}^{n} w_j φ(t_j, ξ_j, η_j),   (7)

subject to: ξ_i = y_i + Σ_{j=0}^{n} w_j ψ(t_i, s_j, ξ_j, η_j),  i = 0, 1, ..., n.   (8)
3. Convergence
The solution of the nonlinear programming problem (7)-(8) approximates the original problem by minimizing J(x, u) over the subset P_n of P consisting of all piecewise linear functions x(·) and u(·) with nodes at ξ_0, ξ_1, ..., ξ_n and η_0, η_1, ..., η_n satisfying Equation (8). Our first aim is to show that P_1 ⊆ P_2 ⊆ P_3 ⊆ ... in an embedding fashion.
Lemma 1 There exists an embedding that maps P_n to a subset of P_{n+1} for all n = 1, 2, ....

Proof For simplicity of notation, we prove the statement only for n = 1; the proof for n ≥ 2 is obtained analogously. Let us consider an arbitrary pair (x, u) in P_1 represented by the nodes ξ_0, ξ_1 and η_0, η_1. We have to find a corresponding pair (x', u') in P_2 with nodes ξ'_0, ξ'_1, ξ'_2 and η'_0, η'_1, η'_2 that corresponds to (x, u). Since (x, u) belongs to P_1, if we use, for example, the composite trapezoidal rule for integration, then we have

x_j = y_j + (T/2)(ψ(t_j, s_0, ξ_0, η_0) + ψ(t_j, s_1, ξ_1, η_1)),  j = 0, 1.   (9)

Taking the refined nodes ξ'_0 = ξ_0, ξ'_2 = ξ_1, η'_0 = η_0, η'_2 = η_1, together with the interpolated midpoint values ξ'_1 and η'_1, the composite trapezoidal rule on the refined partition gives

x'_j = y_j + (T/4)(ψ(t_j, s_0, ξ'_0, η'_0) + 2ψ(t_j, s_1, ξ'_1, η'_1) + ψ(t_j, s_2, ξ'_2, η'_2)).   (10)

This shows that the constructed pair (x', u') corresponds to (x, u) and belongs to P_2.
The above lemma has an important consequence: the optimal value of the objective function is non-increasing as n grows, which leads to the following theorem.

Theorem 1 If J_n* = inf_{P_n} J_n for n = 1, 2, ..., and J* = inf_P J(x, u), then lim_{n→∞} J_n* = J*.
4. Combination approach
Undoubtedly, finding a solution of the nonlinear programming problem (7)-(8) is not easy. However, it seems that by combining two approaches, the successive iterative scheme for solving nonlinear Fredholm integral equations [1] and an evolutionary algorithm, for example, a genetic algorithm (GA) or particle swarm optimization (PSO), we can obtain acceptable results. Consider a partition Δ_n of the time interval [0, T] and a discretization of the control space on the basis of this partition. The following iterative formula

ξ_i^{(k+1)} = y_i + Σ_{j=0}^{n} w_j ψ(t_i, s_j, ξ_j^{(k)}, η_j),  i = 0, 1, ..., n,  k = 0, 1, ...,   (11)

with some conditions on the kernel, guarantees that the sequence of vectors {ξ^{(k)}} converges to the exact solution of Equation (8) corresponding to the piecewise constant control function

η(t) = Σ_{k=1}^{n} u_{i_k} χ_{[t_{k-1}, t_k)}(t),

where u_{i_j} ∈ {u_0, u_1, ..., u_n}, j = 1, 2, ..., n.
Theorem 2 Suppose

(i) ψ(t, s, x(s), u(s)) ∈ C([0, T] × [0, T] × R × R);
(ii) ∂ψ/∂x(t, s, x(s), u(s)) exists on [0, T] × [0, T] × R × R and M < 1/T, where

M = sup_{s,t ∈ [0,T]} |∂ψ/∂x(t, s, x(s), u(s))|.

Then, the sequence {ξ^{(k)}} produced by the iteration process (11) tends to the exact solution of Equation (8) for any arbitrary initial vector ξ^{(0)}.
Proof Under condition (ii), the map defined by the right-hand side of Equation (11) is a contraction with constant at most MT < 1, since the Newton-Cotes weights satisfy Σ_j w_j = T; hence, by the Banach fixed-point theorem, the iteration converges to the unique solution of Equation (8) from any initial vector.
Now an evolutionary algorithm, such as a GA, can be applied by considering the performance index (7) for an approximate admissible pair (ξ, η), consisting of the piecewise constant control function and the corresponding approximate discrete state function obtained by the iteration process (11).

Note that condition (ii) of Theorem 2 may be very restrictive. To overcome this difficulty, the original problem can be divided into subproblems on equidistant sub-intervals of time with admissible lengths.
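The outer evolutionary loop can be sketched schematically as follows; the paper does not specify its GA operators, so the truncation selection, one-point crossover and Gaussian mutation below, as well as all names, are illustrative assumptions. Here `solve_state` plays the role of the inner iteration (11) and `J` the discretized index (7):

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_ga(J, solve_state, n, u_lo, u_hi,
              pop_size=10, iters=100, mut_sigma=0.1):
    """Schematic GA over discretized controls eta (one gene per node)."""
    pop = rng.uniform(u_lo, u_hi, size=(pop_size, n + 1))
    for _ in range(iters):
        # Fitness = discretized performance index of each control candidate,
        # with the state obtained from the inner fixed-point iteration.
        fitness = np.array([J(solve_state(eta), eta) for eta in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n + 1)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0.0, mut_sigma, size=child.shape)  # mutation
            children.append(np.clip(child, u_lo, u_hi))
        pop = np.vstack([parents, *children])
    fitness = np.array([J(solve_state(eta), eta) for eta in pop])
    return pop[np.argmin(fitness)]
```

Keeping the selected parents unchanged makes the loop elitist, so the best candidate found so far is never lost.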
It is natural to wonder when the iterations in the given procedure (11) can be terminated. Assuming ξ^{(k)} is the vector obtained in the kth iteration, a stopping criterion might be considered as follows:

||ξ^{(k+1)} - ξ^{(k)}|| / ||ξ^{(k)}|| < ε,   (12)

for a prescribed small positive number ε that should be chosen according to the desired accuracy, where ||·|| is a norm on vectors.
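A minimal sketch of the successive approximation (11) equipped with the relative stopping rule (12) might look as follows (names are ours; `psi` is the kernel and `eta` the fixed discretized control):

```python
import numpy as np

def solve_state(eta, y, t, w, psi, eps=1e-8, max_iter=500):
    """Iterate (11) until the relative change (12) drops below eps."""
    xi = y.copy()  # initial vector xi^(0)
    for _ in range(max_iter):
        xi_new = np.array([y[i] + np.sum(w * psi(t[i], t, xi, eta))
                           for i in range(len(t))])
        if np.linalg.norm(xi_new - xi) < eps * np.linalg.norm(xi):
            return xi_new
        xi = xi_new
    return xi
```

For a linear test kernel ψ = x/2 on [0, 1], the sup of the partial derivative is 1/2 < 1/T, so Theorem 2 applies and the iteration converges geometrically.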
6. Numerical results
In this section, the algorithm proposed in the previous section is examined on some numerical examples. To show the precision of the approximate solution in some examples, we define the following trajectory error function

e(t) = |x*(t) - x(t)|,

where x*(t) and x(t) are the exact and approximate optimal trajectories obtained by the exact and approximate optimal control functions, respectively. We have applied two efficient evolutionary algorithms, namely GA and PSO.
2265
Exact trajectory
Approximate trajectory (GA)
Aproximate trajectory (PSO)
0.4
Trajectory
0.35
0.3
0.25
0.2
0.1
0.05
0
0
0.05
0.1
0.15
0.2
0.25
Time
0.3
0.35
0.4
0.45
0.5
0.35
0.4
0.45
0.5
Exact control
Approximate control (GA)
Approximate control (PSO)
0.35
0.3
Control
0.15
0.25
0.2
0.15
0.1
0.05
0
0
0.05
0.1
0.15
0.2
0.25
Time
0.3
Example 1 For the first example, let us consider the following optimal control problem:

Minimize J(x, u) = ∫_0^{0.5} φ(t, x(t), u(t)) dt,   (13)

subject to: x(t) = y(t) + ∫_0^{0.5} ψ(t, s, x(s), u(s)) ds,   (14)

where y(t) = t - e^t (t/64 + 1/24). The exact optimal trajectory and control of the problem are x*(t) = t and u*(t) = t, respectively. An equidistant partition of the interval [0, 0.5] with 10 nodes has been considered. The results of applying the proposed algorithm with the number of iterations = 100 and population size = 10 are illustrated in Figures 2 and 3, where the approximate optimal trajectories and controls are compared with the exact ones, respectively. The trajectory error functions are shown in Figure 4.
[Figure 4. The trajectory error functions in Example 1.]

Figure 5. The exact and the approximate optimal control functions after considering perturbation in the state, with parameters ε = -0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.

In order to check the numerical stability of the proposed algorithm, the problem given in this example is solved for perturbed data. For this purpose, we consider two kinds of perturbation, in the state and control functions of the integral equation, by substituting x(t) + ε and u(t) + ε for ε = ±0.01, and we apply the given algorithm to the resulting perturbed integral equations. By substituting the perturbed state function x(t) + ε, the Fredholm integral equation is converted to the
following form:

x(t) = y(t) + ∫_0^{0.5} ψ(t, s, x(s) + ε, u(s)) ds,

y(t) = t - (ε/4)e^t + (1/64)te^t + (ε/12)te^t + (ε²/8)te^t + (ε²/2)e^t + (1/24)e^t,
where the exact optimal trajectory and control with the above objective function are x*(t) = t and u*(t) = t, respectively. The results of considering the perturbations ε = ±0.01 and applying the algorithm are shown in Figures 5 and 6.
Also, considering the perturbed control function u(t) + ε, the integral equation gives rise to the following form:

x(t) = y(t) + ∫_0^{0.5} ψ(t, s, x(s), u(s) + ε) ds,

y(t) = t - (1/64)te^t + (ε/24)te^t + (1/24)e^t,
Figure 6. The exact and the approximate optimal trajectory functions after considering perturbation in the state, with parameters ε = -0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.
Figure 7. The exact and the approximate optimal control functions after considering perturbation in the control, with parameters ε = -0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.
Figure 8. The exact and the approximate optimal trajectory functions after considering perturbation in the control, with parameters ε = -0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.
where the exact optimal trajectory and control remain x*(t) = t and u*(t) = t, respectively. The results of considering the perturbations ε = ±0.01 and applying the algorithm are shown in Figures 7 and 8.
[Figure 9. The exact and the approximate optimal trajectories in Example 2.]
Figure 10. The exact and the approximate optimal controls in Example 2.
Example 2 As the second example, consider the following optimal control problem:

Minimize J(x, u) = ∫_0^1 φ(t, x(t), u(t)) dt,   (15)

subject to: x(t) = y(t) + ∫_0^1 ((1 - t² + s u(s)) / x(s)) ds,   (16)

where y(t) = e^t - 7 + t² - (1/e)(t² - 17). The exact optimal trajectory and control of the problem are x*(t) = e^t and u*(t) = t², respectively. All parameters of the evolutionary algorithms have been taken as in the previous example. The graphs of the exact and the approximate optimal trajectories and controls can be seen in Figures 9 and 10, respectively. Also, the graph of the trajectory error functions for this example can be observed in Figure 11.
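Since the kernel and data of Example 2 are fully specified, the inner iteration (11) can be checked numerically with the exact control u*(s) = s² held fixed; the objective integrand of (15) is not needed for this state check. A sketch (ours, not the paper's code):

```python
import numpy as np

# Equidistant grid on [0, 1] with composite trapezoidal weights.
n = 200
t = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)
w[0] = w[-1] = 0.5 / n

y = np.exp(t) - 7.0 + t**2 - (t**2 - 17.0) / np.e  # y(t) of Example 2
u = t**2                                           # exact optimal control

# Successive approximation (11) with the kernel (1 - t^2 + s*u(s)) / x(s).
x = np.ones_like(t)
for _ in range(100):
    x = np.array([y[i] + np.sum(w * (1.0 - t[i]**2 + t * u) / x)
                  for i in range(n + 1)])

print(np.max(np.abs(x - np.exp(t))))  # close to the exact trajectory e^t
```

Every iterate stays positive (the integrand is non-negative and y is positive), so the division by x(s) is safe here.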
Table 1 shows the computational time with respect to the parameters of GA and PSO, that is,
the number of iterations and the population size.
Besides, in Figures 12 and 13, the approximate optimal controls and the trajectory error functions resulting from applying the hybrid algorithm, implementing GA as the evolutionary algorithm, with a fixed iteration number = 100 and different population sizes = 4, 6, 8 and 10, are illustrated. It is seen from Table 1 and Figures 12 and 13 that the computational time increases with the population size, while the computational errors decrease.
[Figure 11. The trajectory error functions in Example 2.]

Table 1. Computational time (s) for GA and PSO with different population sizes in Example 2.

Population size   GA       PSO
4                 0.1719   0.0625
6                 0.1875   0.1250
8                 0.3125   0.2031
10                0.3750   0.3281
Figure 12. The exact and the approximate optimal controls by applying GA with a fixed iteration number and different
population sizes in Example 2.
Also, by considering a fixed population size = 10 with different iteration numbers = 20, 40, 60, 80 and 100, the results of applying the given algorithm are shown in Figures 14 and 15. Table 2 shows the computational time with respect to a fixed population size = 10 for GA and PSO with different iteration numbers = 20, 40, 60, 80 and 100.
Figure 13. The trajectory error functions by applying GA with a fixed iteration number and different population sizes in Example 2.

Figure 14. The exact and approximate optimal controls after applying GA with a fixed population size and different iteration numbers in Example 2.

Figure 15. The trajectory error functions after applying GA with a fixed population size and different iteration numbers in Example 2.

Example 3 (Hanging chain) Finally, in this example, we study a problem from classical mechanics with applications, for example, in power lines. If a string or flexible chain is suspended from its two ends, then it bends due to its own weight (Figure 16). Known as the catenary curve, the resulting shape of the chain depends on the mass distribution along the chain. The displacement
of the chain at x is computed from the following Fredholm integral equation [5]:

y(x) = g ∫_0^L G(x, s) ρ(s) ds,   (17)
Figure 16. The hanging chain.

Table 2. Computational time (s) for GA and PSO with a fixed population size = 10 and different iteration numbers in Example 2.

Iteration number   GA       PSO
20                 0.1394   0.1286
40                 0.1752   0.1572
60                 0.2807   0.2544
80                 0.3256   0.3034
100                0.3750   0.3281

[Figure 17. The exact and the approximate displacements (GA and PSO) in Example 3.]
where

G(x, s) = x(L - s)/(T_0 L) for 0 ≤ x ≤ s, and s(L - x)/(T_0 L) for s ≤ x ≤ L.   (18)
Here, ρ(x) is the mass density of the chain, T_0 is the constant tension and g is the gravitational acceleration. Solving the above integral equation with a known mass density, we can determine the resulting shape of the chain. In this example, we are interested in finding the mass density distribution leading to a prescribed shape y_d(x). Then, the density function ρ(·) is the control and the displacement y(·) is its corresponding state.

[Figure 18. The exact and the approximate optimal mass densities ρ(x) in Example 3.]
[Figure 19. The error functions in Example 3.]

Therefore, the problem is formulated as minimizing
J(y, ρ) = ∫_0^L φ(x, y(x), ρ(x)) dx,   (19)

subject to Equation (17). The optimal solution corresponding to y_d(x) = g(xL² - x³)/(6T_0) is ρ*(x) = x.
For applying the proposed algorithm, we consider the following parameters: L = 1, g = 9.8, T_0 = 1, the number of iterations = 300 and population size = 10. The graphs of the exact and approximate trajectories and controls and the trajectory error functions can be seen in Figures 17-19.
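The closed-form data of Example 3 can be verified numerically: with the optimal density ρ(s) = s, the displacement computed from Equations (17) and (18) should reproduce y_d(x) = g(xL² - x³)/(6T_0) up to quadrature error (a sketch with our own names):

```python
import numpy as np

L, g, T0 = 1.0, 9.8, 1.0

def G(x, s):
    """Green's function (18) of the hanging chain."""
    return np.where(x <= s, x * (L - s), s * (L - x)) / (T0 * L)

# Displacement (17) for the exact optimal density rho(s) = s,
# using the composite trapezoidal rule in s.
m = 1000
s = np.linspace(0.0, L, m + 1)
w = np.full(m + 1, L / m)
w[0] = w[-1] = 0.5 * L / m

x_pts = np.linspace(0.0, L, 11)
y = np.array([g * np.sum(w * G(x, s) * s) for x in x_pts])
y_d = g * (x_pts * L**2 - x_pts**3) / (6.0 * T0)

print(np.max(np.abs(y - y_d)))  # quadrature error only
```

The chain is pinned at both ends, so the computed displacement vanishes at x = 0 and x = L, in agreement with y_d.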
7. Conclusions
In this paper, a hybrid approach for finding an approximate solution of optimal control problems governed by nonlinear Fredholm integral equations has been presented. Numerical results show the effectiveness of the approach. Through the given scheme, the nonlinearity of the objective functional and of the kernel of the integral equation has no direct effect on the procedure of extracting an approximate solution.
Acknowledgements
The authors thank the anonymous referees for their valuable and constructive suggestions, which led to an improved presentation. The financial support of the Research Council of Damghan University under grant number 88/math/66/130 is also acknowledged.
References
[1] A.H. Borzabadi and O.S. Fard, A numerical scheme for a class of nonlinear Fredholm integral equations of the second kind, J. Comput. Appl. Math. 232 (2009), pp. 449-454.
[2] A.H. Borzabadi and H.H. Mehne, Ant colony optimization for optimal control problems, J. Inf. Comput. Sci. 4(4) (2009), pp. 259-264.
[3] O.S. Fard and A.H. Borzabadi, Optimal control problem, quasi-assignment problem and genetic algorithm, Enformatika, Trans. Eng. Comput. Tech. 19 (2007), pp. 422-424.
[4] S.C. Huang and R.P. Shaw, The Trefftz method as an integral equation, Adv. Eng. Softw. 24 (1995), pp. 57-63.
[5] A.J. Jerri, Introduction to Integral Equations with Applications, Wiley-Interscience, New York, 1999.
[6] S. Jiang and V. Rokhlin, Second kind integral equations for the classical potential theory on open surfaces II, J. Comput. Phys. 195 (2004), pp. 1-16.
[7] T. Kailath, Some integral equations with nonrational kernels, IEEE Trans. Inf. Theory IT-12(4) (1966), pp. 442-447.
[8] P.K. Kythe and P. Puri, Computational Methods for Linear Integral Equations, Birkhäuser, Boston, 2002.
[9] D. Liang and B. Zhang, Numerical analysis of graded mesh methods for a class of second kind integral equations on the real line, J. Math. Anal. Appl. 294 (2004), pp. 482-502.
[10] T. Roubíček, Optimal control of nonlinear Fredholm integral equations, J. Optim. Theory Appl. 19 (1998), pp. 707-729.
[11] W.H. Schmidt, Numerical methods for optimal control problems with ODE or integral equations, Lecture Notes in Computer Science, Vol. 3743, Springer, Berlin/Heidelberg, 2006.
[12] W. Wang, A new mechanical algorithm for solving the second kind of Fredholm integral equation, Appl. Math. Comput. 172 (2006), pp. 946-962.
[13] A. Wuerl, T. Crain, and E. Braden, Genetic algorithms and calculus of variations-based trajectory optimization technique, J. Spacecraft Rockets 40(6) (2003), pp. 882-888.
[14] S.C. Yang, An investigation into integral equation methods involving nearly singular kernels for acoustic scattering, J. Sound Vib. 234 (2000), pp. 225-239.
[15] S. Zhang, Integral Equations, Chongqing Press, Chongqing, 1987.