
Artificial neural networks for calculating the

association probabilities in multi-target tracking


I. Turkmen and K. Guney
Abstract: A simple method based on the multilayered perceptron neural network architecture for
calculating the association probabilities used in target tracking is presented. The multilayered
perceptron is trained with the Levenberg–Marquardt algorithm. The tracks estimated by using the
proposed method for multiple targets in cluttered and non-cluttered environments are in good
agreement with the original tracks. Better accuracy is obtained than when using the joint
probabilistic data association filter or the cheap joint probabilistic data association filter methods.

1 Introduction

The subject of multi-target tracking (MTT) has applications in both civilian and military areas. The aim of
MTT is to partition the sensor data into sets of
observations, or tracks produced by the same source.
Once tracks are formed and confirmed, the number of
targets can be estimated, and the positions and velocities
of the targets can be computed from each track. A number
of methods [1–3] have been presented and used to
estimate the states of multiple targets. These methods
have different levels of complexity and require vastly
different computational effort. The joint probabilistic data
association filter (JPDAF) [1] is a powerful and reliable
algorithm for MTT. It works without any prior information about the targets and clutter. In the JPDAF
algorithm, the association probabilities are computed from
the joint likelihood functions corresponding to the joint
hypotheses associating all the returns to different permutations of the targets and clutter points. The computational
complexity of the joint probabilities increases exponentially as the number of targets increases. To reduce this
computational complexity significantly, Fitzgerald [4]
proposed a simplified version of the JPDAF, called the
cheap JPDAF algorithm (CJPDAF). The association
probabilities were calculated in [4] using an ad hoc
formula. The CJPDAF method is very fast and easy to
implement; however, in either a dense target or a highly
cluttered environment the tracking performance of the
CJPDAF decreases significantly.
In this article, a method based on artificial neural
networks (ANNs) for computing the association probabilities is presented. These computed association probabilities
are then used to track the multiple targets in cluttered and
noncluttered environments. ANNs are developed from
neurophysiology by morphologically and computationally

© IEE, 2004

mimicking human brains. Although the precise details of the


operation of ANNs are quite different from those of human
brains, they are similar in three aspects: they consist of
a very large number of processing elements (the neurons),
each neuron connects to a large number of other neurons,
and the functionality of networks is determined by
modifying the strengths of connections during a learning
phase. Their ability to learn and adapt, their generalisation capability, small information requirements, fast real-time operation, and ease of implementation have made ANNs popular in recent years [5, 6].
To calculate the association probabilities, different structures and architectures of ANNs, i.e. the standard Hopfield network, the modified Hopfield network, the Boltzmann machine, and the mean-field Hopfield network, were proposed in [7–10], respectively. In these works [7–10], the task of finding the association probabilities is viewed as a constrained optimisation problem. The constraints were obtained by careful evaluation of the properties of the JPDA rule. Some of these constraints are analogous to those of the classical travelling salesman problem. Usually, there are five constants to be decided arbitrarily [7–9]. In practice, it is very difficult to choose the five constants to ensure optimisation. On the other hand, the Boltzmann machine's [9] convergence speed is very slow, even though it can achieve an optimal solution. To cope with these problems, the mean-field Hopfield network, which is an alternative to the Hopfield network and the Boltzmann machine, was proposed by Wang et al. [10]. The mean-field Hopfield network has the advantages of both the Hopfield network and the Boltzmann machine; however, its higher performance is achieved at the expense of increased structural complexity.
In this paper, the multilayered perceptron (MLP) neural network [5] is used to calculate the association probabilities accurately. MLPs are the simplest and therefore most commonly used neural network architectures. In this paper, MLPs are trained using the Levenberg–Marquardt algorithm [11–13].

IEE Proceedings online no. 20040739


doi: 10.1049/ip-rsn:20040739
Paper received 28th January 2004
I. Turkmen is with the Civil Aviation School, Department of Aircraft
Electrical and Electronics Engineering, Erciyes University, 38039, Kayseri,
Turkey
K. Guney is with the Faculty of Engineering, Department of Electronics
Engineering, Erciyes University, 38039, Kayseri, Turkey
IEE Proc.-Radar Sonar Navig., Vol. 151, No. 4, August 2004

2 JPDAF and CJPDAF

The JPDAF is a moderately complex algorithm
measurements being associated with the targets, and uses
them to form a weighted average innovation for updating
each target state.

The update equation of the Kalman filter is

\hat{x}_i(t|t) = \hat{x}_i(t|t-1) + K_i(t)\, y_i(t)    (1)

where \hat{x}_i(t|t-1) is the predicted state vector, K_i(t) is the Kalman gain, and y_i(t) is the combined innovation given by

y_i(t) = \sum_{j=1}^{m_i(t)} \beta_{ij}\, y_{ij}(t)    (2)

where m_i(t) is the number of validated measurements for track i, \beta_{ij} is the probability of associating track i with measurement j, and y_{ij}(t) is the innovation of track i and measurement j. The measurement innovation term y_{ij}(t) is given by

y_{ij}(t) = z_j(t) - H_i(t)\, \hat{x}_i(t|t-1)    (3)

where z_j(t) is the j-th of the validated measurements received at time t and H_i(t) is the measurement matrix for target i.
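The combined-innovation update in (1)–(3) can be sketched in a few lines. Below is a minimal NumPy sketch under assumed shapes (a four-element state vector, position-only measurements); the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def combined_innovation_update(x_pred, K, H, z_validated, beta):
    """Update one track's state with the weighted average innovation.

    x_pred      : predicted state x_i(t|t-1), shape (4,)
    K           : Kalman gain K_i(t), shape (4, 2)
    H           : measurement matrix H_i(t), shape (2, 4)
    z_validated : validated measurements z_j(t), shape (m_i, 2)
    beta        : association probabilities beta_ij, shape (m_i,)
    """
    # Innovation of track i and measurement j: y_ij = z_j - H x_pred (eq. 3)
    y_ij = z_validated - x_pred @ H.T
    # Combined innovation: probability-weighted sum over measurements (eq. 2)
    y_i = beta @ y_ij
    # State update (eq. 1)
    return x_pred + K @ y_i
```

With a single validated measurement and beta = 1, this reduces to the ordinary Kalman measurement update, which is a useful sanity check.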
A method for calculating the association probabilities that is as simple as possible is needed, but the tracks estimated using these association probabilities must be in good agreement with the true tracks. In this work, a method based on the ANN for efficiently solving this problem is presented. First, the parameters related to the association probabilities are determined; then the association probabilities depending on these parameters are calculated using the neural model.
In the standard JPDAF, the association probabilities \beta_{ij} are calculated by considering every possible hypothesis concerning the association of the new measurements with the existing tracks. To reduce the complexity of the JPDAF, the CJPDAF algorithm was proposed in [4]. It avoids the formation of hypotheses and approximates the association probabilities by
\beta_{ij}(t) = \frac{G_{ij}(t)}{S_{ti}(t) + S_{mj}(t) - G_{ij}(t) + B}    (4a)

with

S_{ti}(t) = \sum_{j=1}^{m_i(t)} G_{ij}(t)    (4b)

and

S_{mj}(t) = \sum_{i=1}^{M} G_{ij}(t)    (4c)

where G_{ij}(t) is the distribution of the innovation y_{ij}(t), usually assumed to be Gaussian, and B is a bias term introduced to account for the nonunity probability of detection and clutter. The elements of the y_{ij} vector are defined by \tilde{x}_{ij} and \tilde{y}_{ij} for a Cartesian sensor. It is clear that only two parameters, \tilde{x}_{ij} and \tilde{y}_{ij}, are needed to describe the association probabilities.

3 Artificial neural networks (ANNs)

ANNs are biologically inspired computer programs designed to simulate the way in which the human brain processes information [5]. ANNs gather their knowledge by detecting the patterns and relationships in data and learn (or are trained) through experience, not by programming. An ANN is formed from hundreds of single units, artificial neurons or processing elements connected with weights, which constitute the neural structure and are organised in layers. The power of neural computation comes from the weighted connections in a network. Each neuron has weighted inputs, a summation function, a transfer function, and an output. The behaviour of a neural network is determined by the transfer functions of its neurons, by the learning rule, and by the architecture itself. The weights are the adjustable parameters and, in that sense, a neural network is a parameterised system. The weighted sum of the inputs constitutes the activation of the neuron. The activation signal is passed through a transfer function to produce the output of a neuron. The transfer function introduces nonlinearity to the network. During training, inter-unit connections are optimised until the error in predictions is minimised and the network reaches the specified level of accuracy. Once the network is trained, new unseen input information is entered into the network to calculate the test output. The ANN represents a promising modelling technique, especially for data sets having nonlinear relationships of the kind frequently encountered in engineering. In terms of model specification, ANNs require no knowledge of the data source but, since they often contain many weights that must be estimated, they require large training sets. In addition, ANNs can combine and incorporate both literature-based and experimental data to solve problems. ANNs have many structures and architectures [5, 6]. In this paper, the multilayered perceptron (MLP) neural network architecture [5] is used to compute the association probabilities.

3.1 Multilayered perceptrons (MLPs)

MLPs are the simplest and therefore most commonly used neural network architectures. MLPs can be trained using many different learning algorithms [5, 6]. In this paper, MLPs are trained using the Levenberg–Marquardt algorithm [11–13], because this algorithm is capable of fast learning and good convergence. It is a least-squares estimation method based on the maximum neighbourhood idea, which combines the best features of the Gauss–Newton and steepest-descent methods while avoiding many of their limitations. As shown in Fig. 1, an MLP consists of three layers: an input layer, an output layer, and one or more hidden layers. Each layer is composed of a predefined number of neurons.

Fig. 1 General form of multilayered perceptrons

The neurons (indicated in Fig. 1 by a circle) in the input layer only act as buffers for distributing the input signals x_i to neurons in the hidden layer. Each neuron j in the hidden layer sums its input signals x_i after weighting them with the strengths of the respective connections w_{ji} from the input layer, and computes its output y_j as a function f of the sum, namely

y_j = f\left(\sum_i w_{ji}\, x_i\right)    (5)

where f can be a simple threshold function, or a sigmoidal or hyperbolic tangent function. The output of neurons in the output layer is computed similarly.

Training a network consists of adjusting the network weights using a learning algorithm. A learning algorithm gives the change \Delta w_{ji}(t) in the weight of a connection between neurons i and j at time t. For the Levenberg–Marquardt learning algorithm, the weights are updated according to the following formula

w_{ji}(t+1) = w_{ji}(t) - \Delta w_{ji}(t)    (6)

with

\Delta w_{ji} = \left[J^T(w)\, J(w) + \mu I\right]^{-1} J^T(w)\, E(w)    (7)

where J is the Jacobian matrix, \mu is a constant, I is an identity matrix, and E(w) is an error function. The Jacobian matrix contains the first derivatives of the errors with respect to the weights and biases. It can be calculated using a standard backpropagation algorithm. The value of \mu is decreased after each successful step and increased only when a tentative step would increase the sum of squared errors. A detailed discussion of the Levenberg–Marquardt learning algorithm can be found in [11].
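As a concrete illustration, one damped update step of the form in (6)–(7) can be sketched with NumPy. The Jacobian `J`, error vector `E`, and damping value `mu` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lm_step(w, J, E, mu):
    """One Levenberg-Marquardt weight update, eqs. (6)-(7):
    delta_w = [J^T J + mu I]^{-1} J^T E,   w_new = w - delta_w."""
    A = J.T @ J + mu * np.eye(w.size)        # damped Gauss-Newton matrix
    delta_w = np.linalg.solve(A, J.T @ E)    # solve instead of inverting
    return w - delta_w
```

With mu close to zero this approaches a Gauss–Newton step, while a large mu shrinks the step toward a small steepest-descent-like move, which matches the adaptive behaviour of mu described above.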

3.2 Application of the ANN to the calculation of the association probabilities

The ANN has been adapted for the computation of the association probabilities \beta_{ij}. For the neural model, the inputs are the absolute values of the elements of the measurement innovation vector y_{ij}, namely |\tilde{x}_{ij}| and |\tilde{y}_{ij}|, and the output is the association probability \beta_{ij}. The neural model used in calculating \beta_{ij} is shown in Fig. 2.
The accuracy of the ANN model depends strongly on the data sets used for training. If the training data sets are insufficient, or do not cover all the necessary representative features of the problem, large errors can occur on test data sets. If the training data sets are too large, overfitting may occur and training may take a long time. When a suitable network configuration has been found and the training data sets have been selected, network training can be started. Good training of the ANN is indicated when the tracks estimated using the ANN are close to the true tracks in different test scenarios.
Training an ANN with a learning algorithm to compute the association probabilities involves presenting it sequentially with different input sets (|\tilde{x}_{ij}|, |\tilde{y}_{ij}|) and the corresponding desired \beta_{ij} values. Differences between the desired output \beta_{ij} and the actual output of the ANN are evaluated by the learning algorithm. Adaptation is carried out after the presentation of each set (|\tilde{x}_{ij}|, |\tilde{y}_{ij}|) until the calculation accuracy of the network is deemed satisfactory according to some criterion (for example, when the error between the desired \beta_{ij} and the actual output over the whole training set falls below a given threshold) or until the maximum allowable number of epochs is reached.
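The stopping logic described above can be sketched as a schematic loop. Everything here is a placeholder: the training pairs and targets are synthetic, and plain batch gradient descent on a tiny linear model stands in for the Levenberg–Marquardt training of the actual MLP:

```python
import numpy as np

def train(threshold=1e-3, max_epochs=1000, lr=0.1):
    """Schematic training loop with the two stopping criteria from the text:
    error below a threshold, or the maximum number of epochs reached.
    Returns the mean-squared-error history."""
    rng = np.random.default_rng(0)
    # Hypothetical training pairs: inputs (|x~_ij|, |y~_ij|) in 0..1.2 km,
    # with stand-in desired beta_ij targets (not the paper's 630 data sets).
    X = rng.uniform(0.0, 1.2, size=(64, 2))
    beta_desired = np.exp(-X.sum(axis=1))
    # Tiny linear stand-in model; the paper trains a 2-hidden-layer MLP.
    w, b = np.zeros(2), 0.5
    history = []
    for _ in range(max_epochs):
        err = X @ w + b - beta_desired
        mse = float(np.mean(err ** 2))
        history.append(mse)
        if mse < threshold:                  # accuracy criterion satisfied
            break
        w -= lr * (X.T @ err) / len(X)       # gradient-descent adaptation
        b -= lr * float(err.mean())
    return history
```

The returned error history makes the stopping behaviour easy to inspect: the loop ends either when the error criterion fires or when the epoch budget is exhausted.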

Fig. 2 Neural model for association probabilities computation



The values of the input variables |\tilde{x}_{ij}| and |\tilde{y}_{ij}| used in this paper lie between 0 and 1.2 km. The \beta_{ij} values, which depend on the absolute values of the input variables, must lie between 0 and 1. As the values of the input variables approach zero, the value of \beta_{ij} approaches 1. After many trials, the desired \beta_{ij} values, which lead to excellent agreement between the true tracks and the estimated tracks, were determined. In this paper, 630 data sets most suitable for correct modelling of the association probabilities were used to train the networks.
In the MLP, the input and output layers have linear transfer functions and the hidden layers have hyperbolic tangent functions. After several trials, it was found that the most suitable network configuration was two hidden layers with twelve neurons. The input and output data tuples were scaled between 0.0 and 1.0 before training. The number of training epochs was 1000. The seed number was fixed at 16 755; this is the value given to the random number generator when initialising the network weights. The value of \mu in (7) was chosen as 0.2.
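A forward pass through a network of this shape (two inputs, two tanh hidden layers of twelve neurons, one linear output) can be sketched as follows. The weights below are randomly initialised placeholders, not the trained values, and biases are included as a common assumption:

```python
import numpy as np

def mlp_forward(x, params):
    """Forward pass: two tanh hidden layers, linear output layer.
    Each layer computes f(W x + b), generalising eq. (5)."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = np.tanh(W1 @ x + b1)
    h2 = np.tanh(W2 @ h1 + b2)
    return W3 @ h2 + b3          # linear output layer

def init_params(rng, sizes=(2, 12, 12, 1)):
    """Random placeholder weights for a 2-12-12-1 network."""
    return [(rng.standard_normal((m, n)) * 0.5, np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(16755)   # fixed seed, as in the paper
params = init_params(rng)
# One hypothetical scaled input (|x~_ij|, |y~_ij|):
beta = mlp_forward(np.array([0.3, 0.4]), params)
```

In a trained network the scalar output would be the association probability estimate; here it is just a finite number, since the weights are untrained.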
After training, the association probabilities \beta_{ij} are computed rapidly using the neural model under different test scenarios. These computed association probabilities are used in (2) to determine the combined innovation, and then the estimated states of the targets are found using the Kalman filter equations. The approach proposed in this paper can be termed an ANN data association filter (ANNDAF).
4 Simulation results

The aim in this section is to test the performance of the present ANNDAF model against the JPDAF and the CJPDAF algorithms. For this purpose, five different
tracking scenarios are considered. The trajectories of two
crossing targets in scenario 1, four crossing targets in
scenario 2, six crossing targets in scenario 3, two parallel
targets in scenario 4, and four parallel targets in scenario 5,
are shown in Figs. 3–7, respectively. These trajectories are
similar to those widely used in the literature. The initial
positions and velocities of crossing targets are listed in the
first two rows, in the first four rows, and in all rows of
Table 1 for scenarios 1, 2 and 3, respectively. The initial
positions and velocities of parallel targets are given in the
first two rows and in all rows of Table 2 for scenarios 4 and
5, respectively. The targets were assumed to have constant
velocities in a two-dimensional plane. For scenarios in a
cluttered environment, a uniform clutter density of 0.6 km⁻²
was selected, which produced on average two clutter points
per validation gate. In the simulation the sampling interval
was assumed to be 1 s. The covariance matrix Q_i(t) of the process noise w_i(t) is given by

Q_i(t) = \begin{bmatrix} (\sigma_x^i(t))^2 & 0 \\ 0 & (\sigma_y^i(t))^2 \end{bmatrix}

The associated variances were chosen as

(\sigma_x^i(t))^2 = (\sigma_y^i(t))^2 = 0.005 \text{ km}^2\,\text{s}^{-4}

It was assumed that only position measurements were available, so that

H(t) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \quad \text{for all } t
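Under these assumptions, each target follows a standard constant-velocity Kalman model with state [x, y, ẋ, ẏ]. A sketch of one prediction step is below; the transition matrix F is the usual constant-velocity form, which is an assumption consistent with (though not printed in) the text:

```python
import numpy as np

T = 1.0  # sampling interval, s

# Constant-velocity transition for state [x, y, vx, vy] (assumed standard form)
F = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurements

Q = np.diag([0.005, 0.005])   # process noise variances, km^2 s^-4

# One prediction step for a target starting at (1.55, 3.55) km moving at
# (0.42, 0.56) km/s (scenario 1, target 1 in Table 1):
x = np.array([1.55, 3.55, 0.42, 0.56])
x_pred = F @ x                # position advances by velocity * T
```

After one second the predicted position is simply the initial position plus the velocity, (1.97, 4.11) km, with the velocity unchanged.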

Fig. 3 Tracking two crossing targets in scenario 1 using CJPDAF and ANNDAF: a non-cluttered environment; b cluttered environment

Fig. 4 Tracking four crossing targets in scenario 2 using CJPDAF and ANNDAF: a non-cluttered environment; b cluttered environment

Fig. 5 Tracking six crossing targets in scenario 3 using CJPDAF and ANNDAF: a non-cluttered environment; b cluttered environment

Fig. 6 Tracking two parallel targets in scenario 4 using CJPDAF and ANNDAF: a non-cluttered environment; b cluttered environment

Fig. 7 Tracking four parallel targets in scenario 5 using CJPDAF and ANNDAF: a non-cluttered environment; b cluttered environment

Table 1: Initial positions and velocities for crossing targets in scenarios 1, 2, and 3

Crossing target   x, km    y, km    ẋ, km/s   ẏ, km/s
1                 1.55      3.55    0.42       0.56
2                 1.01      4.02    0.52       0.44
3                 0.05      8.02    0.56       0.08
4                 1.65     17.02    0.42      −0.09
5                 2.51      2.22    0.46       0.32
6                 0.01     18.52    0.44      −0.11
Table 2: Initial positions and velocities for parallel targets in scenarios 4 and 5

Parallel target   x, km    y, km    ẋ, km/s   ẏ, km/s
1                 1.01     1.41     0.41      0.001
2                 1.02     2.82     0.39      0.002
3                 1.01     4.81     0.43      0.001
4                 1.39     6.72     0.40      0.003

The measurement noise covariance matrix was R(t) = diag[0.1, 0.1], assuming all measurement noise to be uncorrelated. The probability of detection was selected as 0.9. The threshold of the validation gate was set to 10.
The tracking performances of the CJPDAF and the ANNDAF are compared in Figs. 3–7 for the five test scenarios in both cluttered and non-cluttered environments.

It can be seen from Figs. 3–7 that the ANNDAF tracks are closer to the true tracks than the tracks predicted by the CJPDAF for all scenarios. The results of the JPDAF are not shown in Figs. 3–7 for clarity, but the RMS tracking errors of the JPDAF algorithm are given in Table 3, which compares the JPDAF, CJPDAF, and ANNDAF methods in terms of RMS tracking error. The percentage improvement obtained using ANNDAF is calculated as the ratio of the difference between the RMS error of the competing method (JPDAF or CJPDAF) and that of the ANNDAF method to the RMS error of the competing method. It is clear from Table 3 that in all cases the results of the ANNDAF method are better than those of the JPDAF and the CJPDAF methods. The RMS tracking error values show that a significant improvement is obtained over the results of the JPDAF and CJPDAF methods. When ANNDAF is used, the average percentage improvement with respect to the JPDAF and the CJPDAF is 35% and 33%, respectively. Accurate computation of the association probabilities using ANNs leads to good accuracy in tracking multiple targets.
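As a check on this convention, the percentage improvement can be computed directly from the Table 3 entries; the values below are the first data row of Table 3 (two crossing targets, non-cluttered environment, target 1):

```python
def pct_improvement(rms_competing, rms_anndaf):
    """Percentage improvement of ANNDAF over a competing method:
    100 * (RMS_competing - RMS_ANNDAF) / RMS_competing."""
    return 100.0 * (rms_competing - rms_anndaf) / rms_competing

# First data row of Table 3: JPDAF 0.7981, CJPDAF 0.7049, ANNDAF 0.4516 km
impr_jpdaf = pct_improvement(0.7981, 0.4516)    # rounds to 43%
impr_cjpdaf = pct_improvement(0.7049, 0.4516)   # rounds to 36%
```

Both values round to the percentages printed in Table 3, confirming the direction of the ratio.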
Accurate, fast, and reliable ANN models can be developed from measured/simulated data. Once developed, these ANN models can be used in place of computationally intensive models to speed up target tracking. The ANN, using simple addition, multiplication, division, and threshold operations in its basic processing element, can be readily implemented in analogue VLSI or optical hardware, or can be implemented on special-purpose massively parallel hardware. If the ANN can be implemented in simple analogue integrated circuits, then the ANNDAF method presented in this paper could provide

Table 3: Performance comparison of JPDAF, CJPDAF and ANNDAF methods

                                          RMS tracking errors, km          Percentage improvement
Scenario                         Target   JPDAF    CJPDAF   ANNDAF        vs JPDAF   vs CJPDAF
Two crossing targets in            1      0.7981   0.7049   0.4516        43         36
non-cluttered environment          2      0.8732   0.8851   0.5546        36         37
Two crossing targets in            1      0.8473   0.8256   0.5359        37         35
cluttered environment              2      0.9905   0.9987   0.6917        30         31
Four crossing targets in           1      0.8468   0.7182   0.6851        19          5
non-cluttered environment          2      1.0693   0.8585   0.7156        33         17
                                   3      1.4175   1.5054   0.9828        31         35
                                   4      1.6524   1.6646   0.8237        50         51
Four crossing targets in           1      1.1323   1.2320   0.5891        48         52
cluttered environment              2      1.1378   1.0363   0.5198        54         50
                                   3      1.4707   1.4563   1.3950         5          4
                                   4      1.9010   1.9404   1.0595        44         45
Six crossing targets in            1      0.8094   0.7426   0.6656        18         10
non-cluttered environment          2      1.0401   0.8468   0.4644        55         45
                                   3      1.3653   1.3542   0.6656        51         51
                                   4      1.7306   1.6933   1.2781        26         25
                                   5      1.4138   1.2496   1.0498        26         16
                                   6      1.0726   1.1868   0.5151        52         57
Six crossing targets in            1      1.0786   0.8762   0.5476        49         38
cluttered environment              2      1.2461   1.3838   1.2250         2         11
                                   3      1.4631   1.5054   1.2041        18         20
                                   4      2.0205   2.2326   0.8732        57         61
                                   5      1.6201   1.5016   0.8614        47         43
                                   6      1.0401   1.1022   0.9641         7         13
Two parallel targets in            1      1.1129   1.0857   0.6996        37         36
non-cluttered environment          2      0.7981   0.8851   0.4648        42         47
Two parallel targets in            1      1.1937   1.2816   0.6734        44         47
cluttered environment              2      0.9672   1.0176   0.5808        40         43
Four parallel targets in           1      1.0498   0.9000   0.5929        44         34
non-cluttered environment          2      0.9151   0.8791   0.6786        26         23
                                   3      1.1731   1.3141   1.1022         6         16
                                   4      1.8361   1.7472   0.9548        48         45
Four parallel targets in           1      1.2602   1.1424   0.9703        23         15
cluttered environment              2      0.9060   0.8821   0.4601        49         48
                                   3      1.1560   1.2816   1.0117        12         21
                                   4      1.8105   1.6687   1.1492        37         31

a practical and efficient solution to the data association problem. A distinct advantage of neural computation is that, after proper training, a neural network completely bypasses the repeated use of complex iterative processes for new cases presented to it. Thus, neural computation is very fast after the training phase, and ANNs can therefore be used effectively in real-time applications.

5 Conclusions

A neural network approach has been presented for tracking multiple targets in both cluttered and non-cluttered environments. In this approach, the association probabilities are computed using ANNs. These computed association probabilities are used to determine the combined innovation. The estimated states of the targets are then found using the Kalman filter equations. It was shown that the ANNDAF tracks are in good agreement with the true tracks. This good agreement supports the validity of the approach proposed in this paper. Better accuracy is obtained using ANNDAF than with the well-known JPDAF and CJPDAF algorithms.

6 References

1 Bar-Shalom, Y., and Fortmann, T.E.: 'Tracking and data association' (Academic Press, San Diego, CA, USA, 1988)
2 Blackman, S.S.: 'Multiple target tracking with radar applications' (Artech House, Boston, USA, 1986)
3 Bar-Shalom, Y.: 'Multitarget-multisensor tracking: principles and techniques' (YBS Publishing, 1995)
4 Fitzgerald, R.J.: 'Development of practical PDA logic for multitarget tracking by microprocessor'. Proc. American Control Conf., Seattle, WA, USA, 1986, pp. 889–897
5 Haykin, S.: 'Neural networks: a comprehensive foundation' (Macmillan College Publishing Company, New York, USA, 1994)
6 Haykin, S.: 'Kalman filtering and neural networks' (John Wiley & Sons, 2001)
7 Sengupta, D., and Iltis, R.A.: 'Neural solution to the multitarget tracking data association problem', IEEE Trans. Aerosp. Electron. Syst., 1989, 25, (1), pp. 96–108
8 Leung, H.: 'Neural-network data association with application to multiple-target tracking', Opt. Eng., 1996, 35, (3), pp. 693–700
9 Iltis, R.A., and Ting, P.Y.: 'Computing association probabilities using parallel Boltzmann machines', IEEE Trans. Neural Netw., 1993, 4, (2), pp. 221–233
10 Wang, F., Litva, J., Lo, T., and Bosse, E.: 'Performance of neural data associator', IEE Proc., Radar Sonar Navig., 1996, 143, (2), pp. 71–78
11 Hagan, M.T., and Menhaj, M.: 'Training feedforward networks with the Marquardt algorithm', IEEE Trans. Neural Netw., 1994, 5, (6), pp. 989–993
12 Levenberg, K.: 'A method for the solution of certain nonlinear problems in least squares', Q. Appl. Math., 1944, 2, pp. 164–168
13 Marquardt, D.W.: 'An algorithm for least-squares estimation of nonlinear parameters', J. Soc. Ind. Appl. Math., 1963, 11, pp. 431–441
