
Effective Coverage Control using Dynamic Sensor Networks

with Flocking and Guaranteed Collision Avoidance


Islam I. Hussein
Mechanical Engineering
Worcester Polytechnic Institute
Worcester, MA 01581 USA
ihussein@wpi.edu
Dušan M. Stipanović
Department of Industrial
and Enterprise Systems Engineering
Coordinated Science Laboratory
University of Illinois
Urbana, IL 61801 USA
dusan@uiuc.edu
Abstract—This paper studies the problem of dynamically covering a given region D in R^2 using a set of N mobile sensor agents. The coverage goal is to sample each point in the mission domain to a desired preset level. It is crucial in many dynamic coverage missions that the vehicles flock to guarantee reliable wireless communication links between the agents, while avoiding the risk of collisions. Dynamic coverage control strategies developed by the authors in their previous publications are modified for both flocking and guaranteed collision avoidance. Several numerical examples are provided to illustrate the main ideas.
I. INTRODUCTION
Many applications have emerged in recent years that
rely on the use of a network of dynamic multi-agent
sensors to collect and process data. Applications include
emergency response, surveillance, and multiple-agent (in
particular, satellite) imaging systems for exo-solar planet
identification. These and other applications often involve
tasks in adversarial, highly dynamic environments that are
hazardous to human operators. Hence, there is a pressing
need to develop autonomous multi-agent systems that seek to
collect and process data under constrained resources such as
time restrictions to mission accomplishment, fuel limitations,
and dynamic and/or limited communication structures.
The literature on mobile sensor coverage networks is
relatively new due to novel sensor and wireless technologies
that only emerged recently. In [1], the authors present a
survey of the most recent activities in the control and
design of both static and dynamic sensor networks. The
authors in [2] discuss some challenges in modeling of robotic
networks, motion coordination algorithms, sensing and estimation tasks, and complexity of distributed algorithms. In
[3], the authors consider a probabilistic network model and a
density function to represent the frequency of random events
taking place over the mission space. The authors develop
an optimization problem that aims at maximizing coverage
using sensors with limited ranges, while minimizing com-
munication cost. Starting with initial sensor positions, the
authors develop a gradient algorithm to converge to a (local)
solution to the optimization problem. The sequence of sensor
distributions along the solution is seen as a discrete time
trajectory of the mobile sensor network until it converges to
the local minimum.
In [4], the authors address the same question, but instead
of converging to a local solution of some optimization
problem, the trajectory converges to the centroid of a cell
in a Voronoi partition of the search domain. The authors
propose stable control laws, in both continuous and discrete
time, that converge to the centroids. In [5], the authors use a Voronoi-based polygonal path approach and aim at minimizing the exposure of a UAV fleet to radar. Voronoi-based approaches, however, require exhaustive computational effort to compute the Voronoi cells continuously during a real-time implementation of the controllers. In the work presented herein, domain partitioning is not required and, hence, the computational complexity is reduced.
In this paper, we address the following question: Given
a sensor network and mission (or, search) domain D, how
should the motion of each sensor agent be controlled such
that the entire network surveys D by sensing each point in D
by an amount of effective coverage equal to C^*? The precise definition of effective coverage will be made in Section II. Hence, the aim is to actively sense the mission domain while the agents are moving in space, which is applicable to search and rescue, and surveillance problems. This question has been studied in [6] for (optimal and suboptimal) motion planning of multiple spacecraft interferometric imaging systems (MSIIS). In [7], the authors formulate a problem that allows the study of coverage-type problems. They then develop gradient-type control strategies that ensure coverage of a given domain, including the possibility of a partially connected, bidirectional communication structure. In [8], the authors adapt the result to guarantee collision avoidance. They rely on avoidance feedback control strategies originally developed by Leitmann and Skowronski [9], [10], which were later generalized for multi-agent systems in noncooperative [11] and cooperative settings [12].
In this paper, we first review the problem formulation in Section II. We then summarize results that appear in [7], [8] in Section III. In Section IV, we redefine the error metric to include a term that penalizes inter-agent distances that are larger than a desired relative distance. This maximum relative distance is usually dictated by inter-vehicle communication considerations. Analytical results are given that show that safe (i.e., collision-free) coverage is guaranteed with vehicle flocking behavior. Numerical results are given in Section V that show the performance of the proposed control laws. The paper is concluded in Section VI with current and future research.
II. PROBLEM FORMULATION
In this section we review the problem formulation introduced in [7], [8]. An agent is denoted by A. Let R_+ = {a ∈ R : a ≥ 0}, let Q = R^2 be the configuration space of each agent, and let D be a compact subset of R^2 which represents the region that the network is required to cover. Let N be the number of agents in the fleet and let q_i ∈ Q denote the position of agent A_i, i ∈ S = {1, 2, 3, . . . , N}. Each agent A_i, i ∈ S, satisfies the following simple kinematic equation of motion

  q̇_i = u_i,  i ∈ S,   (1)

where u_i ∈ R^2 is the control velocity of agent A_i.
Define the instantaneous coverage function 𝒜_i : D × Q → R_+ as a C^1-continuous map that describes how effectively an agent A_i senses a point q̄ ∈ D. Without loss of generality, we consider the following sensor model SM. We emphasize, however, that this model is not an assumption for the ensuing theoretical results to be valid; via straightforward modifications, similar results can easily be obtained for other sensor models. The most important property of any sensor model allowed by the theory is that the sensors may have a finite sensor range.
Sensor Model. In this paper we consider sensors with the following properties:
SM1 Each agent has a peak sensing capacity of M_i exactly at the position q_i of agent A_i. That is, we have 𝒜_i(q_i, q_i) = M_i > 𝒜_i(q̄, q_i) for all q̄ ≠ q_i.
SM2 Each agent's sensor has a circular sensing symmetry about the position q_i, i ∈ S, in the sense that all points in D that are on the same circle centered at q_i are sensed with the same intensity. That is, 𝒜_i(q̄, q_i) is constant for all q̄ ∈ D such that ‖q_i − q̄‖ = c, for all c, 0 ≤ c ≤ r_i, where r_i is the range of the sensor of agent A_i. Hence, we introduce the new function A_i : R_+ → R_+ such that 𝒜_i(q̄, q_i) = A_i(‖q_i − q̄‖²).
SM3 Each agent has a limited sensory domain W_i(t) with a sensory range r_i. The sensory domain of each agent is given by

  W_i(t) = { q̄ ∈ D : ‖q_i(t) − q̄‖ ≤ r_i }.   (2)

Mathematically, under assumption SM2, this requires that A_i(‖q_i(t) − q̄‖²) = 0 for all q̄ ∈ D \ W_i(t) = { q̄ : ‖q_i(t) − q̄‖ > r_i }. Let the union of all coverage regions be denoted by

  W(t) = ∪_{i∈S} W_i(t).

Fig. 1. Instantaneous coverage function A_i with q_i = 0, M_i = 1 and r_i = 2.
An example of such a sensor function is a second-order polynomial function of s = ‖q_i − q̄‖² within the sensor range and zero otherwise. In particular, consider the function

  A_i(s) = (M_i / r_i⁴)(s − r_i²)²  if s ≤ r_i²,
  A_i(s) = 0                        if s > r_i².   (3)

All simulations conducted in this paper employ the coverage function A_i(‖q_i − q̄‖²), with A_i as given in equation (3). One can check that this sensor coverage function satisfies the model properties SM1–SM3. An example of the instantaneous coverage function (3) is given in Figure 1.
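The sensor model (3) is easy to evaluate numerically. The following is a minimal sketch (the function and parameter names are ours, not the paper's); note that the argument s is the squared distance ‖q_i − q̄‖².

```python
import numpy as np

def coverage_fn(s, M=1.0, r=2.0):
    """Instantaneous coverage A_i(s) of equation (3), with s = ||q_i - q||^2.

    Peaks at M for s = 0 (property SM1) and falls smoothly to zero at the
    sensor range boundary s = r^2 (property SM3).
    """
    s = np.asarray(s, dtype=float)
    inside = s <= r**2                       # the condition is on the squared distance
    return np.where(inside, (M / r**4) * (s - r**2) ** 2, 0.0)
```

With M = 1 and r = 2 this reproduces the profile of Figure 1: value 1 at the agent position and 0 at distance 2 and beyond.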
Fixing a point q̄, the effective coverage achieved by an agent A_i surveying q̄ from the initial time t_0 = 0 to time t is defined to be

  T_i(q̄, t) := ∫_0^t A_i(‖q_i(τ) − q̄‖²) dτ

and the effective coverage by a subset of agents A_K = {A_j | j ∈ K ⊆ S} in surveying q̄ is then given by

  T_K(q̄, t) := Σ_{i∈K} T_i(q̄, t) = ∫_0^t Σ_{i∈K} A_i(‖q_i(τ) − q̄‖²) dτ.

It can easily be checked that T_K(q̄, t) is a non-decreasing function of time t. In fact, note that

  ∂T_K(q̄, t)/∂t = Σ_{i∈K} A_i(‖q_i − q̄‖²) ≥ 0.
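As an illustration of how T_i accumulates along a trajectory, the time integral can be approximated by a Riemann sum over sampled agent positions (a sketch with our own helper names, not the authors' code):

```python
import numpy as np

def coverage_A(s, M=1.0, r=2.0):
    # instantaneous coverage of model (3); s is the squared distance
    return np.where(s <= r**2, (M / r**4) * (s - r**2) ** 2, 0.0)

def effective_coverage(traj, q, dt, M=1.0, r=2.0):
    """Left Riemann sum of T_i(q, t) = integral of A_i(||q_i(tau) - q||^2).

    traj: (T, 2) array of sampled agent positions q_i(tau); q: (2,) fixed point.
    """
    s = np.sum((traj - np.asarray(q)) ** 2, axis=1)
    return float(np.sum(coverage_A(s, M, r)) * dt)
```

For a stationary agent the integrand is constant, so T_i grows linearly in t; points outside the sensor range accumulate nothing, illustrating why T_K is non-decreasing.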
Let C^* be the desired attained effective coverage at all points q̄ ∈ D. The goal is to attain a network coverage of T_S(q̄, t) = C^* for all q̄ ∈ D at some time t. The quantity C^* guarantees that, when T_S(q̄, t) = C^*, one can judge, with some level of confidence, whether an event happened at q̄ ∈ D or not. Consider the following error function

  e_cov(t) = ∫_D h(C^* − T_S(q̄, t)) φ(q̄) dq̄,   (4)

where h(x) is a penalty function that is positive definite, twice differentiable, strictly convex on (0, C^*], and satisfies h(x) = h′(x) = h″(x) = 0 for all x ≤ 0. Positivity and strict convexity in our case mean that h(x), h′(x), h″(x) > 0 for all x ∈ (0, C^*]. The penalty function penalizes lack of coverage of points in D: it incurs a penalty whenever T_S(q̄, t) < C^*. Once T_S(q̄, t) ≥ C^* at a point in D, the error at this point is zero no matter how much additional time agents spend surveying that point. The extra time spent there is beneficial, since it results in increasing the probability of event detection, and is hence not penalized. As will be seen, accumulated error will generate an attractive force on an agent, while excessive coverage has no effect on the motion. An example of the function h(x) is

  h(x) = (max(0, x))^n,   (5)

where n > 1, n ∈ R_+. The total error is an average over the entire domain D weighted by the density function φ(q̄). When e_cov(t) = 0, one says that the mission is accomplished. The map φ : D → R_+, called a distribution density function, may be used as a weighting function in the definition of the error.
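On a uniform grid, the error (4) with the penalty (5) reduces to a weighted sum over grid cells. A minimal sketch (our own discretization, with a uniform density φ):

```python
import numpy as np

def h(x, n=2):
    """Penalty of equation (5): zero for x <= 0, strictly convex on (0, C*]."""
    return np.maximum(0.0, x) ** n

def coverage_error(T_S, C_star, cell_area, phi=1.0, n=2):
    """Approximate e_cov of equation (4) on a uniform grid.

    T_S: array of attained coverage values at the grid points;
    phi: density weight(s); cell_area: area of one grid cell.
    """
    return float(np.sum(h(C_star - T_S, n) * phi) * cell_area)
```

Starting from zero coverage (assumption IC1), every cell contributes h(C^*) = (C^*)^n; once a cell reaches T_S ≥ C^*, its contribution vanishes, exactly as described above.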
III. A CONTROL LAW FOR DYNAMIC COVERAGE AND COLLISION AVOIDANCE
In this section, we review the control laws developed in [7], [8]. The error function in equation (4) is used to derive control laws that guarantee, under appropriate assumptions, coverage of the entire domain D with guaranteed collision avoidance. In this paper we restrict the discussion to dynamic sensor networks with fully connected, bidirectional communication structures; this assumption was relaxed in [7], [8]. We will make, without any loss of generality, the following natural assumption, whose utility will become obvious later.
IC1 The initial coverage is identically zero: T_S(q̄, 0) = 0 for all q̄ ∈ D.
Consider the following condition.
Condition C1. T_S(q̄, t) = C^* for all q̄ ∈ W_i(t), for all i ∈ S.
This condition expresses a coverage state where each point within the range of each agent A_i is satisfactorily covered.
Lemma III.1 ([7], [8]). With the control law

  u_i^cov(t) = k_i^cov ∫_D h′(C^* − T_S(q̄, t)) [∂A_i(s)/∂s]|_{s=‖q_i(t)−q̄‖²} (q_i(t) − q̄) φ(q̄) dq̄,   (6)

where k_i^cov > 0 are fixed feedback gains, the multi-agent sensor network will converge to the state described in Condition C1.

The proof can be found in [7], [8]. The control law (6) is guaranteed to always converge to the coverage state C1. Essentially, this control law is a gradient-type algorithm, where the coverage metric is given by equation (4). (Note that ∂A_i/∂s ≤ 0 within the sensor range, so insufficiently covered points attract the agent.) Hence, whenever the gradient of the metric is zero (within the sensor domains W), the control law for all agents is zero and no motion is possible. If this happens and the total coverage error is zero, then the task is complete. If C1 is met but the global error is not zero, then a perturbation control law is needed to move the system away from the state C1. Once this is done, the control law is switched back to the control (6), called the nominal control, which is guaranteed to converge to C1. This control strategy was shown to ensure full coverage of the domain [7], [8]. In this paper, we only study the control laws that guarantee convergence to C1 and refer the reader to [7], [8] for just one example of many possible perturbation control approaches.
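A discrete sketch of the gradient law (6), with the integral over D replaced by a sum over a uniform grid of sample points (the grid, gains, and helper names are our own illustrative choices, not the authors' implementation):

```python
import numpy as np

def u_cov(qi, grid, T_S, C_star, cell_area, k=1e-5, M=1.0, r=2.0, n=2):
    """Approximate the coverage control (6) on a uniform grid.

    qi: (2,) agent position; grid: (P, 2) sample points q_bar;
    T_S: (P,) attained coverage at the grid points.
    """
    d = qi - grid                                      # q_i - q_bar, shape (P, 2)
    s = np.sum(d ** 2, axis=1)
    # dA/ds of model (3): negative inside the range, so the law is attractive
    dA = np.where(s <= r**2, 2.0 * (M / r**4) * (s - r**2), 0.0)
    hp = n * np.maximum(0.0, C_star - T_S) ** (n - 1)  # h'(C* - T_S) for penalty (5)
    return k * np.sum((hp * dA)[:, None] * d, axis=0) * cell_area
```

An uncovered point inside the sensor range pulls the agent toward it, while fully covered points (T_S ≥ C^*) contribute nothing, matching the gradient interpretation above.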
Next, we review results from [8] on how the above control law can be modified such that collision avoidance as well as coverage is guaranteed. Let R > r > 0 be fixed real numbers. Two agents at a distance of R or more from each other are said to be in the collision-inactivity region. If the distance is less than R but greater than r, they are said to be within the safety region. Finally, if the distance between an agent pair is less than r, they are said to be in the avoidance region [9], [10]. Following the methodology proposed in [12], consider the following functions:

  P_ij(q_i, q_j) = ( min( 0, (‖q_i − q_j‖² − R²) / (‖q_i − q_j‖² − r²) ) )².   (7)
The derivative of these mutual avoidance functions is given by [12]

  ∂P_ij/∂q_i =
    0                                                                 if ‖q_i − q_j‖ ≥ R,
    4(R² − r²)(‖q_i − q_j‖² − R²)(q_i − q_j) / (‖q_i − q_j‖² − r²)³   if R > ‖q_i − q_j‖ > r,
    not defined                                                       if ‖q_i − q_j‖ = r,
    0                                                                 if ‖q_i − q_j‖ < r.   (8)

This will be used as the control force acting on each agent to guarantee collision avoidance. Note that the mutual repulsive force between the agents is nonzero only when r < ‖q_i − q_j‖ < R. Moreover, observe that

  P_ij = P_ji,   ∂P_ij/∂q_i = −∂P_ij/∂q_j.   (9)
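The avoidance function (7) and its gradient (8) can be sketched directly (function names and radii are our own example choices; the repulsive control contribution would then be a negative gain times this gradient):

```python
import numpy as np

def P(qi, qj, r=2.0, R=4.0):
    """Mutual avoidance function (7): zero outside distance R, grows without
    bound as the pair distance approaches the avoidance radius r from above."""
    s = np.sum((qi - qj) ** 2)
    return min(0.0, (s - R**2) / (s - r**2)) ** 2

def dP_dqi(qi, qj, r=2.0, R=4.0):
    """Gradient (8) with respect to q_i; zero outside the safety annulus.
    (Exactly at distance r the gradient is undefined; we return zero there.)"""
    d = qi - qj
    s = np.sum(d ** 2)
    if s >= R**2 or s <= r**2:
        return np.zeros(2)
    return 4.0 * (R**2 - r**2) * (s - R**2) / (s - r**2) ** 3 * d
```

Inside the safety region the gradient points from q_i toward q_j (P increases as the agents approach), so descending it pushes the agents apart.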
Let the nominal control law be given by

  u_i(t) = u_i^cov(t) + u_i^col(t),   (10)

where u_i^cov(t) is the coverage control given by equation (6) and

  u_i^col(t) = −k_i^col Σ_{j=1, j≠i}^N ∂P_ij/∂q_i,   (11)

where k_i^col > 0 are feedback gains.
To include collision avoidance into consideration, we need to introduce a modified version of Condition C1:
Condition C1′. T_S(q̄, t) = C^* for all q̄ ∈ W_i(t), for all i ∈ S, with all agents having relative distances ‖q_i(t) − q_j(t)‖ > R for all i, j ∈ S, i ≠ j.
Lemma III.2. If k_i^cov / k_i^col = w for all i ∈ S, and the initial configuration is such that ‖q_i(0) − q_j(0)‖ > r for all i ≠ j, then under the control law (10) a fully connected multi-agent sensor network will safely converge to the state described in Condition C1′.
The proof of this result can be found in [7], [8]. The proof can also be inferred from the proof of Theorem IV.1 with e_prox = 0 and w_2 = 0, which we define and discuss in the next section.
Remark. If Condition C1 alone is satisfied and some relative distances are such that r < ‖q_i(t) − q_j(t)‖ < R, then due to the repulsive nature of P_ij the agents repel each other until all agent pairs satisfy ‖q_i(t) − q_j(t)‖ > R, at which point P = 0. If C1 is not satisfied but P = 0, the agents move according to the nominal controller u_i^cov only.
Remark. Note here that we are not restricting the motion of the agents to remain inside D. In the case when, for example, R is larger than the domain size (defined, say, as the maximum Euclidean distance between any two points in D), the repulsive force will cause some agents to move outside the boundary of D, which we allow in this paper. At this point, along with the satisfaction of Condition C1′, the control will be switched to the perturbation control law u_i discussed in [7], [8].
IV. COVERAGE CONTROL WITH FLOCKING BEHAVIOR AND GUARANTEED COLLISION AVOIDANCE
In this section we introduce a new definition of the error such that it is a weighted sum of a coverage component e_cov and an inter-agent proximity component e_prox. The proximity metric e_prox penalizes the deviation of inter-vehicle distances beyond a maximum acceptable inter-agent distance d; inter-vehicle distances less than d are not penalized. One can use the function h again to implement this idea. For example, consider the proximity error function

  e_prox(t) = Σ_{i,j=1, i≠j}^N h(‖q_i(t) − q_j(t)‖² − d²)   (12)

and the overall error function is now

  e(t) = e_cov(t) + e_prox(t),   (13)

where e_cov(t) is given in equation (4). In this paper we assume that d > R; that is, the acceptable inter-agent communication range lies in the collision-inactivity region. Again, we need to modify Condition C1 to include a flocking element.
Condition C1″. T_S(q̄, t) = C^* for all q̄ ∈ W_i(t), for all i ∈ S, with all agents having relative distances d > ‖q_i(t) − q_j(t)‖ > R for all i, j ∈ S, i ≠ j.
Theorem IV.1. Let

  α_i = k_i^col / k_i^cov,   β_i = k_i^col / k_i^prox.

If

  2w_1 α_i − 2α_i² − 2w_1² + 4w_1 w_2 β_i + 4w_2 α_i β_i − 8w_2² β_i² ≥ 0,   (14)

for some positive parameters w_1 and w_2, and the initial configuration is such that ‖q_i(0) − q_j(0)‖ > r, then under the control law

  u_i(t) = u_i^cov(t) + u_i^col(t) + u_i^prox(t),   (15)

where u_i^cov is given in equation (6), u_i^col is given in equation (11) and

  u_i^prox(t) = −k_i^prox Σ_{j=1, j≠i}^N h′(‖q_i − q_j‖² − d²)(q_i − q_j),   (16)

with k_i^prox a positive constant, a fully connected multi-agent sensor network will safely converge to the state described in Condition C1″.
Before proceeding with the proof of this theorem, we first note that the control law (15) achieves the same goal as (10) but also attempts to minimize the inter-agent mutual distances. The final zero-motion state is one where the entire domain is fully covered, with all agent pairs at distances greater than R (i.e., safe) and less than d (i.e., within the acceptable inter-agent communication range).
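The proximity term (16) can be sketched as follows (a hedged illustration with our own function name, gains, and the penalty (5) with n = 2; the sign convention makes pairs farther apart than d attract and leaves closer pairs untouched):

```python
import numpy as np

def u_prox(i, q, d=8.0, k=1e-4, n=2):
    """Proximity (flocking) term of equation (16) for agent i.

    q: (N, 2) array of agent positions. Pairs with ||q_i - q_j|| > d are
    pulled together; pairs already within distance d contribute nothing.
    """
    u = np.zeros(2)
    for j in range(len(q)):
        if j == i:
            continue
        x = np.sum((q[i] - q[j]) ** 2) - d**2      # argument of h' in (16)
        hp = n * max(0.0, float(x)) ** (n - 1)     # h' of the penalty (5)
        u -= k * hp * (q[i] - q[j])
    return u
```

The term is equal and opposite for the two agents of a pair, consistent with the symmetry properties noted for P_ij.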
To prove this theorem, we consider the Lyapunov-like function

  V = ∂e_cov/∂t + w_1 P + w_2 e_prox.   (17)

Hence, V̇ is given by

  V̇ = ∂²e_cov/∂t² + w_1 Ṗ + w_2 ė_prox,

where

  ė_prox = 2 Σ_{i,j=1, i≠j}^N h′(‖q_i − q_j‖² − d²)(q_i − q_j)·(u_i − u_j)
         = 4 Σ_{i=1}^N [ Σ_{j=1, j≠i}^N h′(‖q_i − q_j‖² − d²)(q_i − q_j) ]·u_i,   (18)

where we note that e_prox satisfies properties similar to those of P: e_prox,ij = e_prox,ji and ∂e_prox,ij/∂q_i = −∂e_prox,ij/∂q_j, and the derivation of ė_prox is similar to that of Ṗ given in the previous section. Here, we define

  e_prox,ij = h(‖q_i − q_j‖² − d²).
Using the above definitions of u_i^cov, u_i^col and u_i^prox, we get

  V̇ = V̇_1 − 2 Σ_{i=1}^N (u_i^cov/k_i^cov)·(u_i^cov + u_i^col + u_i^prox)
         − 2w_1 Σ_{i=1}^N (u_i^col/k_i^col)·(u_i^cov + u_i^col + u_i^prox)
         − 4w_2 Σ_{i=1}^N (u_i^prox/k_i^prox)·(u_i^cov + u_i^col + u_i^prox)   (19)
     = V̇_1 − 2 Σ_{i=1}^N [u_i^cov; u_i^col; u_i^prox]ᵀ Q_i [u_i^cov; u_i^col; u_i^prox],
where

  Q_i = (1/2) [ 2/k_i^cov                   w_1/k_i^col + 1/k_i^cov     2w_2/k_i^prox + 1/k_i^cov
                w_1/k_i^col + 1/k_i^cov     2w_1/k_i^col                2w_2/k_i^prox + w_1/k_i^col
                2w_2/k_i^prox + 1/k_i^cov   2w_2/k_i^prox + w_1/k_i^col 4w_2/k_i^prox ] ⊗ I_{2×2},

and where ⊗ is the Kronecker product. The matrix Q_i has real eigenvalues, including two at 0. We need to ensure that Q_i is positive semi-definite. This is guaranteed if condition (14) is met. We are now ready to prove Theorem IV.1.
Proof 1 (Theorem IV.1). The function V is equal to zero if and only if Condition C1 is satisfied, P = 0 and e_prox = 0. P = 0 if and only if all agent pairs are separated by more than the safe distance R. e_prox = 0 if and only if all agents are located within a distance d of each other. That is, V = 0 if and only if Condition C1″ is satisfied. Checking the expression for V̇, we note that V̇_1 ≤ 0 with equality holding if and only if C1 is satisfied. From condition (14) we note that V̇ ≤ 0 with equality holding if and only if V̇_1, u_i^cov, u_i^col, and u_i^prox are all zero. The quantities V̇_1 and u_i^cov are zero if and only if C1 is satisfied, u_i^col is zero if and only if P = 0, and u_i^prox is zero if and only if e_prox = 0. (Note that u_i^cov, u_i^col and u_i^prox cannot lie in the null space of Q_i with V̇_1 zero unless u_i^cov = u_i^col = u_i^prox = 0, because if u_i^cov is nonzero, then so must be V̇_1.) This proves convergence to Condition C1″.
With initial conditions ‖q_i(0) − q_j(0)‖ > r, the system is safely initialized. By construction of the avoidance functions P_ij, if the unsafe region (the circle of radius r centered at each agent's center of mass) is to be violated, then P_ij → ∞ as ‖q_i − q_j‖ → r⁺. This implies that V → ∞. However, this is a contradiction, since the function V is finite and strictly decreasing.
Remark. When the perturbation control is included in a control-switching strategy as in [7], [8], one has to ask whether infinite switching between the nominal and the perturbation control laws is possible. In this work, infinite switching is not possible: after every single switch (from nominal to perturbation control) an area of measure larger than zero is covered, and since the domain D is compact, there will be no infinite switching. If D were open or unbounded, there would be no guarantee that the nonzero-measure areas covered after each switch would eventually cover the entire domain. This is the main reason for requiring that D be compact.
V. SIMULATION RESULTS
In this section we give two numerical simulations. In the first, we compare the performance of the coverage control strategy (including guaranteed collision avoidance) with and without flocking. For the purpose of this simulation, we do not incorporate a perturbation control law that ensures global coverage. In the second simulation, we run the coverage control law with guaranteed collision avoidance and flocking and switch to the perturbation control law proposed in [7], [8] to show convergence to the zero coverage error state.
In the following simulation results the domain D is a square region of side length l = 32 units. There are N = 4 agents with a randomly selected initial deployment. The desired effective coverage C^* is 40. We choose a uniform distribution for φ. For the sensor model, we set M_i = 2 and r_i = 4 for all i = 1, . . . , 4. The unsafe region has a radius of r = 2 units and the safety zone has a radius of R = 4 units. Below, we normalize the error e(t) by dividing it by (C^*)^n l², which is precisely the initial error (the volume under the graph of the constant function h(C^* − 0) = (C^*)^n over all q̄ ∈ D), where n = 2 is defined in equation (5). Finally, we used a simple trapezoidal method to compute integration over D and a simple first-order Euler scheme to integrate with respect to time.
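The discretization described above can be sketched for a single agent (a toy illustration with our own grid size, gains, and a left Riemann sum in place of the trapezoidal rule; it is not the authors' four-agent setup):

```python
import numpy as np

M, r, C_star, n, k = 1.0, 2.0, 5.0, 2, 1e-3    # illustrative values only

xs = np.linspace(0.0, 4.0, 9)                  # uniform sample grid over D
grid = np.array([[x, y] for x in xs for y in xs])
cell = (xs[1] - xs[0]) ** 2                    # area of one grid cell

def step(qi, T, dt):
    """One Euler step of (1) under the coverage control (6),
    with the integral over D replaced by a sum over the grid."""
    d = qi - grid
    s = np.sum(d ** 2, axis=1)
    A = np.where(s <= r**2, (M / r**4) * (s - r**2) ** 2, 0.0)   # model (3)
    dA = np.where(s <= r**2, 2.0 * (M / r**4) * (s - r**2), 0.0)
    hp = n * np.maximum(0.0, C_star - T) ** (n - 1)              # h'(C* - T_S)
    u = k * np.sum((hp * dA)[:, None] * d, axis=0) * cell        # control (6)
    return qi + dt * u, T + dt * A             # Euler updates of q_i and T_S

qi, T = np.array([2.0, 2.0]), np.zeros(len(grid))
err0 = np.sum(np.maximum(0.0, C_star - T) ** n) * cell           # e_cov at t = 0
for _ in range(200):
    qi, T = step(qi, T, dt=0.1)
err = np.sum(np.maximum(0.0, C_star - T) ** n) * cell            # e_cov at t = 20
```

Because T_S only accumulates, the discretized coverage error is non-increasing along the simulation, mirroring the behavior of the error plots in Figures 2(c) and 4(c).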
A. Control Performance
We first compare the control laws for two cases:
Case 1: without flocking, using the control law in equation (10), and
Case 2: with flocking, using the control law in equation (15).
For Case 1, we used control gains k_i^cov = 0.00001 and k_i^col = 0.05 for all agents. The results are shown in Figures 2 and 3. Since the control law from Lemma III.2 only guarantees convergence to Condition C1′, the total error e(t) (shown in Figure 2(c)) converges to a nonzero value. Figure 2(d) shows the time evolution of the two-dimensional function h(C^* − T_S(q̄, t)). Finally, Figure 3 shows the relative distances between all agents. Note that the inter-vehicle distances are never smaller than r = 2 and converge to values larger than R = 4, as predicted by Lemma III.2.

Fig. 2. Case 1: (a) Motion in the plane up to time t = 1000s (start at green dot and end at red dot), (b) control effort, (c) error function, and (d) attained coverage at t = 1000s.

Fig. 3. Case 1: Relative distances between all agent pairs in the fleet for the control law (10). The dashed red line represents the minimum distance r allowed between any agent pair and the dash-dotted green line represents the safe distance R.
For Case 2, we again used control gains k_i^cov = 0.00001 and k_i^col = 0.05 for all agents. For the proximity gain, we set k_i^prox = 0.0001 for all agents. The results are shown in Figures 4 and 5. Since the control law from Theorem IV.1 only guarantees convergence to Condition C1″, the total error e(t) (shown in Figure 4(c)) converges to a nonzero value. Figure 4(d) shows the time evolution of the two-dimensional function h(C^* − T_S(q̄, t)). Finally, Figure 5 shows the relative distances between all agents. Note that the inter-vehicle distances are never smaller than r = 2 and converge to values larger than R = 4 but smaller than d, as predicted by Theorem IV.1. Moreover, comparing Figure 3 to Figure 5, we note that the agents in Case 2 maintain a much closer range to each other (mostly within the required proximity distance d) than in Case 1.
Fig. 4. Case 2: (a) Motion in the plane up to time t = 1000s (start at green dot and end at red dot), (b) control effort, (c) error function, and (d) attained coverage at t = 1000s.
Fig. 5. Case 2: Relative distances between all agent pairs in the fleet for the control law (15). The dashed red line represents the minimum distance r allowed between any agent pair and the dash-dotted green line represents the safe distance R.
B. Convergence to Zero Coverage Error
If Condition C1″ is satisfied and the global coverage/proximity error is zero, then the coverage task is complete. However, Condition C1″ by itself does not guarantee that the global error is zero. Hence, we employ a linear perturbation control as suggested in [7], [8] to drive the total global coverage error to zero. The control is switched to the perturbation control law whenever C1″ is satisfied but the global coverage error is not zero. This perturbation control associates a unique uncovered point (i.e., a point q̄ such that T_S(q̄, t) < C^*) with each agent. Each agent is then driven in a linear fashion to that point. Once there, Condition C1″ is violated and the control is switched back to the nominal control law given in equation (15).
We used control gains k_i^cov = 0.00001 and k_i^col = 0.015 for all agents. For the proximity gain, we set k_i^prox = 0.00001 for all agents. Finally, for the linear symmetry-breaking gain (see [7] for details), we use a value of 0.2 for all agents. The results are shown in Figures 6 and 7. As can be seen, the global coverage error converges to zero with time. Moreover, the relative distances converge to the desired range between R and d. During the application of the perturbation control, we switch off the proximity control; hence the relative distances tend to be higher than in Case 2 above, but much lower than those for Case 1.
Fig. 6. (a) Motion in the plane up to time t = 2726s (start at green dot and end at red dot), (b) control effort, (c) error function, and (d) attained coverage at t = 2726s. Note that three of the four agents have converged to locations immediately outside the domain D.
Fig. 7. Relative distances between all agent pairs in the fleet for the control law (15) with linear perturbation control from [7], [8]. The dashed red line represents the minimum distance r allowed between any agent pair and the dash-dotted green line represents the safe distance R. The solid black line represents the distance d.
VI. CONCLUSION
In this paper we revisited the dynamic sensor network coverage control problem. We developed control laws that guarantee coverage of the domain with flocking and guaranteed collision avoidance. Analytical results that use Lyapunov-like functions prove convergence. Several numerical examples are provided to illustrate the main ideas. Future research will focus on addressing issues such as unreliable communication architectures and sensor degradation. Moreover, of particular interest is the issue of decentralization of the methods discussed in this paper.
ACKNOWLEDGEMENTS
This work has been supported by the Boeing Company via the Information Trust Institute. The first author gratefully acknowledges financial support from the WPI Faculty Development Grant.
REFERENCES
[1] C. G. Cassandras and W. Li, "Sensor networks and cooperative control," European Journal of Control, vol. 11, no. 4-5, pp. 436-463, 2005.
[2] A. Ganguli, S. Susca, S. Martínez, F. Bullo, and J. Cortés, "On collective motion in sensor networks: sample problems and distributed algorithms," IEEE Conference on Decision and Control, 2005.
[3] W. Li and C. G. Cassandras, "Distributive cooperative coverage control of mobile sensing networks," IEEE Conference on Decision and Control, 2005.
[4] J. Cortés, S. Martínez, T. Karatas, and F. Bullo, "Coverage control for mobile sensing networks," IEEE Transactions on Robotics and Automation, vol. 20, no. 2, pp. 243-255, 2004.
[5] P. R. Chandler, M. Pachter, and S. Rasmussen, "UAV cooperative control," The American Control Conference, pp. 50-55, 2001.
[6] I. I. Hussein, "Motion planning for multi-spacecraft interferometric imaging systems," Ph.D. dissertation, University of Michigan, 2005. [Online]. Available: http://decision.csl.uiuc.edu/ihussein/
[7] I. I. Hussein and D. M. Stipanović, "Effective coverage control for mobile sensor networks," 2006 IEEE Conference on Decision and Control, 2006, to appear. [Online]. Available: decision.csl.uiuc.edu/ihussein/Publications/HuStCDC06.pdf
[8] ——, "Effective coverage control for mobile sensor networks with guaranteed collision avoidance," IEEE Transactions on Control Systems Technology, 2006, to appear.
[9] G. Leitmann and J. Skowronski, "Avoidance control," Journal of Optimization Theory and Applications, vol. 23, no. 4, pp. 581-591, December 1977.
[10] G. Leitmann, "Guaranteed avoidance strategies," Journal of Optimization Theory and Applications, vol. 32, no. 4, pp. 569-576, 1980.
[11] D. M. Stipanović, Sriram, and C. J. Tomlin, "Multi-agent avoidance control using an M-matrix property," Electronic Journal of Linear Algebra, vol. 12, pp. 64-72, 2005.
[12] D. M. Stipanović, P. F. Hokayem, M. W. Spong, and D. D. Šiljak, "Cooperative avoidance control for multi-agent systems," ASME Journal of Dynamic Systems, Measurement and Control, 2006, under review.
