
This article was downloaded by: [Indian Institute of Technology - Kharagpur]

On: 29 October 2013, At: 08:39


Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

International Journal of Computer Mathematics
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/gcom20

A hybrid algorithm for approximate optimal control of nonlinear Fredholm integral equations

Akbar H. Borzabadi^a, Omid S. Fard^a & Hamed H. Mehne^b

^a Department of Applied Mathematics, Damghan University, Damghan, Iran
^b Aerospace Research Institute, Tehran, Iran

Published online: 09 Aug 2012.

To cite this article: Akbar H. Borzabadi, Omid S. Fard & Hamed H. Mehne (2012) A hybrid algorithm for approximate optimal control of nonlinear Fredholm integral equations, International Journal of Computer Mathematics, 89:16, 2259–2273, DOI: 10.1080/00207160.2012.705279
To link to this article: http://dx.doi.org/10.1080/00207160.2012.705279

PLEASE SCROLL DOWN FOR ARTICLE


Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor & Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions


International Journal of Computer Mathematics
Vol. 89, No. 16, November 2012, 2259–2273

A hybrid algorithm for approximate optimal control of nonlinear Fredholm integral equations

Akbar H. Borzabadi^a*, Omid S. Fard^a and Hamed H. Mehne^b

^a Department of Applied Mathematics, Damghan University, Damghan, Iran; ^b Aerospace Research Institute, Tehran, Iran

(Received 14 February 2011; revised version received 11 October 2011; second revision received 25 April 2012; accepted 16 June 2012)
In this paper, a novel hybrid method based on two approaches, evolutionary algorithms and an iterative scheme, is presented for obtaining the approximate solution of optimal control problems governed by nonlinear Fredholm integral equations. By converting the problem to a discretized form, it is considered as a quasi-assignment problem, and then an iterative method is applied to find an approximate solution of the discretized form of the integral equation. An analysis of the convergence of the proposed iterative method and its implementation for numerical examples are also given.

Keywords: optimal control; Fredholm integral equation; evolutionary algorithm; iterative method; discretization; approximation

2010 AMS Subject Classifications: 65R20; 93C65

1. Introduction

Evolutionary algorithms and optimization are two prominent fields of research in applied science and engineering. The major purpose of optimization is to determine how to optimally change or influence real systems in order to achieve a desired result. This requires realizing large-scale optimization strategies of increasing complexity, which in turn motivates the development of numerical techniques for optimization purposes [11].

On the other hand, in the mathematical formulation of physical phenomena, integral equations are frequently encountered and have attracted much attention. In fact, integral equations are as important as differential equations and appear in a variety of applications in many fields, including continuum mechanics, potential theory, geophysics, electricity and magnetism, kinetic theory of gases, hereditary phenomena in biology, quantum mechanics, mathematical economics, population genetics, medicine, fluid mechanics, steady-state heat conduction, and radiative heat transfer problems [4,6,8,9,12,14,15].

*Corresponding author. Email: akbar.h.borzabadi@gmail.com

ISSN 0020-7160 print/ISSN 1029-0265 online
© 2012 Taylor & Francis
http://dx.doi.org/10.1080/00207160.2012.705279
http://www.tandfonline.com


In particular, optimal control of systems governed by Fredholm integral equations is important in applications such as the optimal control problem related to the Ornstein–Uhlenbeck process, which arises in statistical communication theory [7].
In this paper, we focus on the formulation of a class of optimal control problems governed by nonlinear Fredholm integral equations as follows:

Minimize J(x, u) = ∫_0^T f(t, x(t), u(t)) dt,   (1)

where the control function u(·) and the corresponding state x(·) are subject to

x(t) = y(t) + ∫_0^T k(t, s, x(s), u(s)) ds,  a.e. on [0, T].   (2)

Here f ∈ C([0, T] × R × R) and k, k_x (= ∂k/∂x) ∈ C([0, T] × [0, T] × R × R). Moreover, it is assumed that this problem has a unique solution. One of the best references on the theoretical side of this class of problems has been presented by Roubíček [10].
Recently, evolutionary and heuristic algorithms have been introduced as powerful tools for solving optimal control problems [2,3,13], where a combination of these approaches with the usual numerical approaches for solving ordinary differential equations, based on discretization of the control space, may lead to efficient numerical schemes for detecting approximate optimal control and state functions in classical optimal control problems.

The main purpose of this study is to construct a numerical technique based on two approaches, evolutionary algorithms and an iterative scheme for solving nonlinear Fredholm integral equations, in order to obtain an approximate solution of the problem (1) and (2).
The remainder of the paper is organized as follows. In Section 2, using discretization of the control space, we obtain a nonlinear optimization problem corresponding to the problem (1) and (2). The convergence of this discretization will be proved in Section 3. Sections 4 and 5 present the combination method and an implemented algorithm. In Section 6, the applicability of the method is illustrated by some examples in which the exact solution and the computed results are compared with each other. Finally, Section 7 contains a brief summary and conclusions.

2. Discretization of control space

To find the optimal solution, we must examine the performance index of all possible control–state pairs. Let us first define the set of admissible pairs consisting of all pairs (x, u) satisfying Equation (2) and denote it by P. In this section, we present a control discretization method by equidistant partitioning of [0, T] as Δ_n = {0 = t_0, t_1, ..., t_{n−1}, t_n = T} with discretization parameter h = t_{i+1} − t_i, i = 0, 1, ..., n − 1. The time interval is thus divided into n sub-intervals [t_0 = 0, t_1], [t_1, t_2], ..., [t_{n−1}, t_n = T]. On the other hand, the set of control values is discretized into the levels u_1, u_2, ..., u_m. In this way, the time–control space is discretized if the control function is assumed to be constant on each time sub-interval. Using the characteristic function

χ_{[t_{k−1}, t_k)}(t) = 1 if t ∈ [t_{k−1}, t_k), and 0 otherwise,

the control function may be represented as

u(t) = Σ_{k=1}^{n} u_{i_k} χ_{[t_{k−1}, t_k)}(t),   (3)


Figure 1. A typical control function in the time–control space.

where u_{i_j} ∈ {u_1, u_2, ..., u_m}, j = 1, 2, ..., n. A typical discretization is given in Figure 1 with n = 7. The bold pattern in this figure shows the control function. Discretization proposes to consider the control function as a sequence of segments, one control level per time sub-interval. Now a trivial way to find the near-optimal solution is to evaluate all possible patterns and compare the corresponding performance indices. This trivial method of total enumeration needs m^n evaluations. To avoid such a huge number of computations, we use a method based on an evolutionary algorithm, such as the genetic algorithm (GA), for evaluating special patterns guiding us to the optimal one. For each pattern of control, we need its corresponding trajectory to evaluate the performance index. Trivially, the corresponding trajectory should be in a discretized form. Thus, a discretized form of the problem (1) and (2) should be considered such that its solution converges to the solution of the original problem.
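Since the representation (3) is just a lookup of which sub-interval contains t, it can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the node and level values below are made up):

```python
import numpy as np

def piecewise_constant_control(levels, t_nodes):
    """Return u(t) = sum_k levels[k-1] * chi_[t_{k-1}, t_k)(t),
    i.e. the control held at one chosen level on each sub-interval."""
    def u(t):
        # searchsorted finds the sub-interval [t_{k-1}, t_k) containing t
        k = np.searchsorted(t_nodes, t, side="right") - 1
        k = min(max(k, 0), len(levels) - 1)  # clamp t = T into the last piece
        return levels[k]
    return u

t_nodes = np.linspace(0.0, 1.0, 8)            # n = 7 sub-intervals, as in Figure 1
levels = [0.2, 0.5, 0.1, 0.9, 0.4, 0.7, 0.3]  # one chosen level per sub-interval
u = piecewise_constant_control(levels, t_nodes)
```

An individual of the evolutionary search is then fully described by the tuple `levels`.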
Now, if (x, u) is an admissible pair, then for the partition Δ_n on [0, T] we have

x(t_i) = y(t_i) + ∫_0^T k(t_i, s, x(s), u(s)) ds,   i = 0, 1, ..., n.   (4)

In Equation (4), the integral term can be estimated by a numerical method of integration, for example, one of the Newton–Cotes methods. Therefore, by taking the equidistant partition Δ_n as above, with h = t_{i+1} − t_i, i = 0, 1, ..., n − 1, and also the weights w_i, i = 0, 1, ..., n, equality (4) can be written as

x_i = y_i + Σ_{j=0}^{n} w_j k(t_i, s_j, x_j, u_j) + O(h^γ),   i = 0, 1, ..., n,   (5)

where x_i = x(t_i), y_i = y(t_i), i = 0, 1, ..., n, and γ depends upon the Newton–Cotes method used for the estimation of the integral in Equation (4). The same partition and weights can be used to convert the objective function (1) to the following form:

J(x, u) = Σ_{j=0}^{n} w_j f(t_j, x_j, u_j) + O(h^γ).   (6)


For the partition Δ_n, by neglecting the truncation errors of Equations (5) and (6), the following nonlinear optimization problem may be considered:

Minimize J_n = Σ_{j=0}^{n} w_j f(t_j, ξ_j, ν_j),   (7)

subject to: ξ_i = y_i + Σ_{j=0}^{n} w_j k(t_i, s_j, ξ_j, ν_j),   i = 0, 1, ..., n.   (8)
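To make the discretization concrete, the following sketch (not from the paper; `f`, `k` and `y` are illustrative stand-ins for the integrand, kernel and forcing term) assembles composite trapezoidal weights and evaluates the discretized objective (7) together with the constraint residuals of (8):

```python
import numpy as np

def trapezoid_weights(n, T):
    # Composite trapezoidal rule: w_0 = w_n = h/2, interior weights equal h
    h = T / n
    w = np.full(n + 1, h)
    w[0] = w[-1] = h / 2
    return w

def discretized_problem(f, k, y, xi, nu, T):
    """Objective (7) and constraint residuals of (8) for node vectors xi, nu."""
    n = len(xi) - 1
    t = np.linspace(0.0, T, n + 1)   # t_i = s_i on the equidistant partition
    w = trapezoid_weights(n, T)
    J = sum(w[j] * f(t[j], xi[j], nu[j]) for j in range(n + 1))
    resid = [xi[i] - y(t[i]) - sum(w[j] * k(t[i], t[j], xi[j], nu[j])
                                   for j in range(n + 1)) for i in range(n + 1)]
    return J, np.array(resid)
```

A candidate pair (ξ, ν) is feasible for (8) exactly when the residual vector vanishes (up to the neglected O(h^γ) truncation error).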

3. Convergence

The solution of the nonlinear programming problem (7)–(8) approximates the original problem by minimizing J(x, u) over the subset P_n of P consisting of all piecewise linear functions x(·) and u(·) with nodes ξ_0, ξ_1, ..., ξ_n and ν_0, ν_1, ..., ν_n satisfying Equation (8). Our first aim is to show that P_1 ⊆ P_2 ⊆ P_3 ⊆ ... in an embedding fashion.
Lemma 1 There exists an embedding that maps P_n to a subset of P_{n+1} for all n = 1, 2, ....

Proof For simplicity of notation, we prove the statement only for n = 1. The proof for n ≥ 2 is obtained analogously. Let us consider an arbitrary pair (x, u) in P_1 represented by the nodes ξ_0, ξ_1, ν_0, ν_1. We have to find a corresponding pair (x′, u′) in P_2, with nodes ξ′_0, ξ′_1, ξ′_2, ν′_0, ν′_1, ν′_2, that corresponds to (x, u). Since (x, u) belongs to P_1, if we use, for example, the composite trapezoidal rule for integration, then we have

x_j = y_j + (T/2) (k(t_j, s_0, ξ_0, ν_0) + k(t_j, s_1, ξ_1, ν_1)),   j = 0, 1.   (9)

On the other hand, a typical element (x′, u′) in P_2 satisfies, for j = 0, 1, 2,

x′_j = y_j + (T/4) (k(t′_j, s′_0, ξ′_0, ν′_0) + 2 k(t′_j, s′_1, ξ′_1, ν′_1) + k(t′_j, s′_2, ξ′_2, ν′_2)).   (10)

It is clear that here we have s′_0 = s_0, s′_2 = s_1, t′_0 = t_0 and t′_2 = t_1. Now, from the definition of k, we may choose ξ′_0, ξ′_1, ξ′_2, ν′_0, ν′_1, ν′_2 in such a way that k(t′_j, s′_1, ξ′_1, ν′_1) = 0, k(t′_j, s′_0, ξ′_0, ν′_0) = 2 k(t_j, s_0, ξ_0, ν_0) and k(t′_j, s′_2, ξ′_2, ν′_2) = 2 k(t_j, s_1, ξ_1, ν_1). Therefore, from Equations (9) and (10), we have

x′_j = y_j + (T/4) (k(t′_j, s′_0, ξ′_0, ν′_0) + 2 k(t′_j, s′_1, ξ′_1, ν′_1) + k(t′_j, s′_2, ξ′_2, ν′_2))
    = y_j + (T/2) (k(t_j, s_0, ξ_0, ν_0) + k(t_j, s_1, ξ_1, ν_1))
    = x_j,   j = 0, 1.

This shows that the constructed pair (x′, u′) corresponds to (x, u) and belongs to P_2. ∎

The above lemma has an important consequence for the decreasing behaviour of the optimal value of the objective function, which leads to the following theorem.

Theorem 1 If α_n = inf_{P_n} J_n for n = 1, 2, ..., and α = inf_P J(x, u), then lim_{n→∞} α_n = α.


Proof By Lemma 1, we have α_1 ≥ α_2 ≥ ... ≥ α. Therefore, this decreasing and bounded sequence converges to a limit α_0 ≥ α. Now, it is enough to show that α_0 = α. If α_0 > α, then ε = α_0 − α > 0 and, by continuity of J(x, u), we may find a piecewise linear pair (x_{n_0}, u_{n_0}) such that |J(x_{n_0}, u_{n_0}) − α| < ε; then J(x_{n_0}, u_{n_0}) < α_0 and hence α_{n_0} < α_0, which is a contradiction, and therefore α_0 = α. ∎



4. Combination approach

Undoubtedly, finding a solution of the nonlinear programming problem (7)–(8) is not easy. But it seems that by combining two approaches, the successive iterative scheme for solving nonlinear Fredholm integral equations [1] and an evolutionary algorithm, for example, GA or particle swarm optimization (PSO), we can obtain acceptable results. Consider a partition Δ_n of the time interval [0, T] and a discretization of the control space on the basis of this partition. The iterative formula

ξ_i^{(k+1)} = y_i + Σ_{j=0}^{n} w_j k(t_i, s_j, ξ_j^{(k)}, ν_j),   i = 0, 1, ..., n,  k = 0, 1, ...,   (11)

with some conditions on the kernel, guarantees that the sequence of vectors {ξ^{(k)}} converges to the exact solution of Equation (8) corresponding to the piecewise constant control function

ν(t) = Σ_{k=1}^{n} ν_{i_k} χ_{[t_{k−1}, t_k)}(t),

where ν_{i_j} ∈ {u_1, u_2, ..., u_m}, j = 1, 2, ..., n.
Theorem 2 Suppose that

(i) k(t, s, x(s), u(s)) ∈ C([0, T] × [0, T] × R × R);
(ii) k_x(t, s, x(s), u(s)) exists on [0, T] × [0, T] × R × R and M < 1/T, where M = sup_{s,t ∈ [0,T]} |k_x(t, s, x(s), u(s))|.

Then the sequence {ξ^{(k)}} produced by the iteration process (11) tends to the exact solution of Equation (8) for any arbitrary initial vector ξ^{(0)}.
Proof The proof is similar to the proof of Theorem 3.1 in [1]. ∎

Now an evolutionary algorithm, such as GA, can be applied by considering the performance index (7) for an approximate admissible pair (ξ, ν) consisting of the piecewise constant control function and the corresponding approximate discrete state function obtained by the iteration process (11).

Note that condition (ii) of Theorem 2 may be very restrictive. To overcome this difficulty, the original problem can be divided into subproblems on equidistant sub-intervals of time with admissible lengths.


It is natural to wonder when the iterations in the given procedure (11) can be terminated. Assuming ξ^{(k)} is the vector obtained in the kth iteration, a stopping criterion might be considered as follows:

‖ξ^{(k+1)} − ξ^{(k)}‖ / ‖ξ^{(k)}‖ < ε,   (12)

for a prescribed small positive number ε that should be chosen according to the desired accuracy, where ‖·‖ is a norm on vectors.
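The iteration (11) together with the stopping rule (12) can be sketched as a simple fixed-point loop. This is an illustrative Python sketch under the paper's assumptions (quadrature weights w and a contractive kernel), not the authors' code:

```python
import numpy as np

def solve_state(k, y_vals, t, w, nu, eps=1e-10, max_iter=500):
    """Fixed-point iteration (11) with the relative stopping rule (12).
    k is the kernel, y_vals the samples y(t_i), w the quadrature weights,
    nu the discretized control; all names are illustrative."""
    xi = np.zeros_like(y_vals)  # arbitrary initial vector xi^(0)
    for _ in range(max_iter):
        xi_new = np.array([
            y_vals[i] + sum(w[j] * k(t[i], t[j], xi[j], nu[j])
                            for j in range(len(t)))
            for i in range(len(t))
        ])
        # stopping criterion ||xi^(k+1) - xi^(k)|| / ||xi^(k)|| < eps
        if np.linalg.norm(xi) > 0 and \
           np.linalg.norm(xi_new - xi) / np.linalg.norm(xi) < eps:
            return xi_new
        xi = xi_new
    return xi
```

Under condition (ii) of Theorem 2 the map is a contraction, so the loop converges from any starting vector.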

5. Algorithm of the approach

In this section, we present an algorithm on the basis of the previous discussions. This algorithm consists of three stages: an initialization step, subordinate steps and main steps. In the subordinate steps, the process of constructing a fitness function for an n-tuple (ν_1, ..., ν_n) from the control–time space, where the ν_i, i = 1, ..., n, are taken from equidistant nodes on the set of control values, is specified. The main steps contain the main structure of the algorithm, incorporating the initialization and subordinate steps.
Initialization step:
Choose ε > 0 for the desired accuracy, an equidistant partition of the time interval [0, T] as Δ_n with h = t_{i+1} − t_i, i = 0, 1, ..., n − 1, equidistant nodes on the set of control values as {u_1, ..., u_m}, and an initial vector ξ^{(0)} = (ξ_0^{(0)}, ξ_1^{(0)}, ..., ξ_n^{(0)})^T.

Subordinate steps:
Step 1. Compute ξ^{(k+1)} by Equation (11).
Step 2. Compute ‖ξ^{(k+1)} − ξ^{(k)}‖ and ‖ξ^{(k)}‖.
Step 3. If the stopping criterion (12) holds, stop; otherwise, set k = k + 1 and go to Step 1.

Main steps:
Step 1. Choose a population of random individuals, that is, random n-tuples (ν_1, ..., ν_n) from the control–time space.
Step 2. Assign a fitness score to each individual by the subordinate steps.
Step 3. Apply the rules for generating the new population, for example, reproduction, crossover and mutation in GA.
Step 4. Let the new population become the current population.
Step 5. If the termination conditions are satisfied, stop; otherwise go to Step 3.
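The main steps can be sketched as a minimal GA over n-tuples of control-level indices. The operator choices below (tournament selection, one-point crossover, uniform mutation) are illustrative, since the paper does not prescribe specific GA operators; `fitness` stands for the performance index (7) computed via the subordinate steps:

```python
import random

def genetic_search(fitness, n, m, pop_size=10, iters=100, pc=0.8, pm=0.1, seed=0):
    """Minimal GA over n-tuples of control-level indices 0..m-1 (Main steps 1-5).
    `fitness` should return the performance index (7); smaller is better."""
    rng = random.Random(seed)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(iters):
        new_pop = []
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 2), key=fitness)
            p2 = min(rng.sample(pop, 2), key=fitness)
            c = list(p1)
            if rng.random() < pc:                      # one-point crossover
                cut = rng.randrange(1, n)
                c = p1[:cut] + p2[cut:]
            for i in range(n):                         # mutation
                if rng.random() < pm:
                    c[i] = rng.randrange(m)
            new_pop.append(c)
        pop = new_pop                                  # new population becomes current
        best = min(pop + [best], key=fitness)          # elitist bookkeeping
    return best
```

In the hybrid scheme, evaluating `fitness` for a candidate tuple means running the subordinate iteration (11) to obtain the corresponding discrete state and then summing the weighted objective (7).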

6. Numerical results

In this section, the algorithm proposed in the previous section is examined on some numerical examples. To show the precision of the approximate solution, in some examples we define the following trajectory error function:

e(t) = |x*(t) − x̄(t)|,

where x*(t) and x̄(t) are the exact and approximate optimal trajectories obtained by the exact and approximate optimal control functions, respectively. We have applied two efficient evolutionary algorithms, namely GA and PSO.

Figure 2. The exact and the approximate optimal trajectories in Example 1.

Figure 3. The exact and the approximate optimal controls in Example 1.

Example 1 For the first example, let us consider the following optimal control problem:

Minimize ∫_0^{0.5} (x(t) − u(t))^2 dt,   (13)

subject to:

x(t) = y(t) + ∫_0^{0.5} e^{−t} (t u(s) + 1) x^2(s) ds,   (14)

where y(t) = t − e^{−t} (t/64 + 1/24). The exact optimal trajectory and control of the problem are x*(t) = t and u*(t) = t, respectively. An equidistant partition of the interval [0, 0.5] with 10 nodes has been considered. The results of applying the proposed algorithm with the number of iterations = 100 and population size = 10 are illustrated in Figures 2 and 3, where the approximate optimal trajectories and controls are compared with the exact ones, respectively. The trajectory error functions are shown in Figure 4.
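As a sanity check on the data of Example 1 (not part of the paper's algorithm), one can verify numerically that x*(t) = t and u*(t) = t satisfy Equation (14) with the stated y(t):

```python
import math

def y_ex1(t):
    # y(t) = t - e^{-t} (t/64 + 1/24) from Example 1
    return t - math.exp(-t) * (t / 64 + 1 / 24)

def rhs_ex1(t, n=2000):
    # trapezoidal approximation of y(t) + int_0^{0.5} e^{-t}(t u(s)+1) x^2(s) ds
    # with the exact pair x(s) = s, u(s) = s substituted
    h = 0.5 / n
    s = [j * h for j in range(n + 1)]
    vals = [math.exp(-t) * (t * sj + 1) * sj ** 2 for sj in s]
    integral = h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
    return y_ex1(t) + integral
```

The residual |rhs_ex1(t) − t| vanishes up to quadrature error, confirming the stated y(t).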

In order to check the numerical stability of the proposed algorithm, the problem given in this example is solved for perturbed data. For this purpose, we consider two kinds of perturbation, in the state and in the control function of the integral equation, by substituting x(t) + ε and u(t) + ε for ε = ±0.01, and we apply the given algorithm to the perturbed integral equations. By substituting the perturbed state function x(t) + ε, the Fredholm integral equation is converted to the


Figure 4. The trajectory error functions in Example 1.


Figure 5. The exact and the approximate optimal control functions after considering perturbation in state with parameters ε = −0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.

following form:

x(t) = y(t) + ∫_0^{0.5} e^{−t} (t u(s) + 1) (x(s) + ε)^2 ds,

y(t) = t − e^{−t} ( t/64 + t ε/12 + t ε^2/8 + ε/4 + ε^2/2 + 1/24 ),

where the exact optimal trajectory and control with the above objective function are x*(t) = t and u*(t) = t, respectively. The results of considering the perturbations ε = ±0.01 and applying the algorithm are shown in Figures 5 and 6.
Also, considering the perturbed control function u(t) + ε, the integral equation gives rise to the following integral equation:

x(t) = y(t) + ∫_0^{0.5} e^{−t} (t (u(s) + ε) + 1) x^2(s) ds,

y(t) = t − e^{−t} ( t/64 + t ε/24 + 1/24 ),

Figure 6. The exact and the approximate optimal trajectory functions after considering perturbation in state with parameters ε = −0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.

Figure 7. The exact and the approximate optimal control functions after considering perturbation in control with parameters ε = −0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.

Figure 8. The exact and the approximate optimal trajectory functions after considering perturbation in control with parameters ε = −0.01 and ε = +0.01 in the left and right diagrams, respectively, in Example 1.

where the exact optimal trajectory and control are the same, x*(t) = t and u*(t) = t, respectively. The results of considering the perturbations ε = ±0.01 and applying the algorithm are shown in Figures 7 and 8.



Figure 9. The exact and the approximate optimal trajectories in Example 2.

Figure 10. The exact and the approximate optimal controls in Example 2.

Example 2 Consider the following optimal control problem:

Minimize ∫_0^1 ((x(t) − e^t)^2 + (u(t) − t^2)^2) dt,   (15)

subject to:

x(t) = y(t) + ∫_0^1 (1 − t^2 + s u(s)) / x(s) ds,   (16)

where y(t) = e^t − 7 + t^2 − (1/e)(t^2 − 17). The exact optimal trajectory and control of the problem are x*(t) = e^t and u*(t) = t^2, respectively. All parameters in the evolutionary algorithms have been taken as in the previous example. One can see in Figures 9 and 10 the graphs of the exact and the approximate optimal trajectories and controls, respectively. Also, the graph of the trajectory error functions for this example can be observed in Figure 11.
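A similar sanity check (again, not part of the paper's algorithm) confirms that x*(t) = e^t and u*(t) = t^2 satisfy Equation (16) with the stated y(t):

```python
import math

def y_ex2(t):
    # y(t) = e^t - 7 + t^2 - (1/e)(t^2 - 17) from Example 2
    return math.exp(t) - 7 + t ** 2 - (t ** 2 - 17) / math.e

def rhs_ex2(t, n=4000):
    # trapezoidal approximation of y(t) + int_0^1 (1 - t^2 + s u(s)) / x(s) ds
    # with the exact pair x(s) = e^s, u(s) = s^2 substituted
    h = 1.0 / n
    s = [j * h for j in range(n + 1)]
    vals = [(1 - t ** 2 + sj * sj ** 2) / math.exp(sj) for sj in s]
    integral = h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
    return y_ex2(t) + integral
```

The residual |rhs_ex2(t) − e^t| again vanishes up to quadrature error.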
Table 1 shows the computational time with respect to the parameters of GA and PSO, that is, the number of iterations and the population size.

Besides, in Figures 12 and 13, the approximate optimal control and the trajectory error functions resulting from applying the hybrid algorithm, implemented with GA as the evolutionary algorithm, with a fixed iteration number = 100 and different population sizes = 4, 6, 8 and 10, are illustrated. It is seen from Table 1 and Figures 12 and 13 that the computational time increases with the population size, although the computation errors decrease.

Figure 11. The trajectory error functions in Example 2.

Table 1. GA's and PSO's computational time for Example 2 with iteration = 100.

Population size    GA's time (s)    PSO's time (s)
4                  0.1719           0.0625
6                  0.1875           0.1250
8                  0.3125           0.2031
10                 0.3750           0.3281

Figure 12. The exact and the approximate optimal controls by applying GA with a fixed iteration number and different population sizes in Example 2.

Also, by considering a fixed population size = 10 with different iteration numbers = 20, 40, 60, 80 and 100, the results of applying the given algorithm are shown in Figures 14 and 15.

Table 2 shows the computational time with respect to a fixed population size = 10 for GA and PSO with different iteration numbers = 20, 40, 60, 80 and 100.
Example 3 (Hanging chain) Finally, in this example, we study a problem from classical mechanics with applications, for example, in power lines. If a string or flexible chain is suspended from its two ends, then it bends under its own weight (Figure 16). Known as the catenary curve, the resulting shape of the chain depends on the mass distribution along the chain. The displacement



Figure 13. The trajectory error functions by applying GA with a fixed iteration number and different population sizes in Example 2.

Figure 14. The exact and approximate optimal controls after applying GA with a fixed population size and different iteration numbers in Example 2.

Figure 15. The trajectory error functions after applying GA with a fixed population size and different iteration numbers in Example 2.

of the chain at x is computed from the following Fredholm integral equation [5]:

y(x) = −g ∫_0^L G(x, s) ρ(s) ds,   (17)

Table 2. GA's and PSO's computational time for Example 2 with population size = 10.

Iteration number    GA's time (s)    PSO's time (s)
20                  0.1394           0.1286
40                  0.1752           0.1572
60                  0.2807           0.2544
80                  0.3256           0.3034
100                 0.3750           0.3281

Figure 16. Hanging chain geometry.

Figure 17. The exact and approximate optimal trajectories in Example 3.

where

G(x, s) = x(L − s)/(T_0 L),  0 ≤ x ≤ s,
         = s(L − x)/(T_0 L),  s ≤ x ≤ L.   (18)

Here, ρ(x) is the mass density of the chain, T_0 is the constant tension and g is the acceleration due to gravity. Solving the above integral equation with a known mass density, we can determine the resulting shape of the chain. In this example, we are interested in finding the mass density distribution leading to a prescribed shape y_d(x). Then, the density function ρ(·) is the control and the



Figure 18. The exact and approximate optimal controls in Example 3.

Figure 19. The trajectory error functions in Example 3.

displacement y(·) is its corresponding state. Therefore, the problem is formulated as minimizing

J(y, ρ) = ∫_0^L ((y_d(x) − y(x))^2 + (ρ(x) − x)^2) dx,   (19)

subject to Equation (17). The optimal solution corresponding to y_d(x) = −g(xL^2 − x^3)/(6T_0) is ρ*(x) = x. For applying the proposed algorithm, we consider the following parameters: L = 1, g = 9.8, T_0 = 1, the number of iterations = 300 and population size = 10. The graphs of the exact and approximate trajectories and controls and the trajectory error functions can be seen in Figures 17–19.
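As a check on the catenary data (not part of the paper's algorithm), the quadrature of the Green's-function integral with ρ(s) = s should reproduce y_d(x) = −g(xL^2 − x^3)/(6T_0); the downward-negative sign convention here is an assumption consistent with Figure 17:

```python
def green(x, s, L=1.0, T0=1.0):
    # Green's function (18) for the hanging chain
    return x * (L - s) / (T0 * L) if x <= s else s * (L - x) / (T0 * L)

def displacement(x, rho, L=1.0, T0=1.0, g=9.8, n=2000):
    # y(x) = -g * int_0^L G(x, s) rho(s) ds, by trapezoidal quadrature
    h = L / n
    vals = [green(x, j * h, L, T0) * rho(j * h) for j in range(n + 1)]
    return -g * h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
```

With ρ(s) = s and L = T_0 = 1, the quadrature matches −g(x − x^3)/6 up to the trapezoidal error.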

7. Conclusions

In this paper, a hybrid approach for finding an approximate solution of optimal control problems governed by nonlinear Fredholm integral equations has been presented. Numerical results show the effectiveness of the approach. Through the given scheme, the nonlinearity of the objective functional and of the kernel of the integral equation has no direct effect on the procedure of extracting an approximate solution.


Acknowledgements
The authors thank the anonymous referees for their valuable and constructive suggestions that led to an improved presentation. The financial support of the Research Council of Damghan University under grant number 88/math/66/130 is also acknowledged.


References
[1] A.H. Borzabadi and O.S. Fard, A numerical scheme for a class of nonlinear Fredholm integral equations of the second kind, J. Comput. Appl. Math. 232 (2009), pp. 449–454.
[2] A.H. Borzabadi and H.H. Mehne, Ant colony optimization for optimal control problems, J. Inf. Comput. Sci. 4(4) (2009), pp. 259–264.
[3] O.S. Fard and A.H. Borzabadi, Optimal control problem, quasi-assignment problem and genetic algorithm, Enformatika, Trans. Eng. Comput. Tech. 19 (2007), pp. 422–424.
[4] S.C. Huang and R.P. Shaw, The Trefftz method as an integral equation, Adv. Eng. Softw. 24 (1995), pp. 57–63.
[5] A.J. Jerri, Introduction to Integral Equations with Applications, Wiley-Interscience, New York, 1999.
[6] S. Jiang and V. Rokhlin, Second kind integral equations for the classical potential theory on open surfaces II, J. Comput. Phys. 195 (2004), pp. 1–16.
[7] T. Kailath, Some integral equations with nonrational kernels, IEEE Trans. Inf. Theory IT-12(4) (1966), pp. 442–447.
[8] P.K. Kythe and P. Puri, Computational Methods for Linear Integral Equations, Birkhäuser, Boston, 2002.
[9] D. Liang and B. Zhang, Numerical analysis of graded mesh methods for a class of second kind integral equations on the real line, J. Math. Anal. Appl. 294 (2004), pp. 482–502.
[10] T. Roubíček, Optimal control of nonlinear Fredholm integral equations, J. Optim. Theory Appl. 97 (1998), pp. 707–729.
[11] W.H. Schmidt, Numerical Methods for Optimal Control Problems with ODE or Integral Equations, Lecture Notes in Computer Science, Vol. 3743, Springer, Berlin/Heidelberg, 2006.
[12] W. Wang, A new mechanical algorithm for solving the second kind of Fredholm integral equation, Appl. Math. Comput. 172 (2006), pp. 946–962.
[13] A. Wuerl, T. Crain, and E. Braden, Genetic algorithms and calculus of variations-based trajectory optimization technique, J. Spacecraft Rockets 40(6) (2003), pp. 882–888.
[14] S.C. Yang, An investigation into integral equation methods involving nearly singular kernels for acoustic scattering, J. Sound Vib. 234 (2000), pp. 225–239.
[15] S. Zhang, Integral Equations, Chongqing Press, Chongqing, 1987.
