
Pattern Recognition 43 (2010) 14 -- 25


Polygonal approximation of digital planar curves through break point suppression


A. Carmona-Poyato ∗ , F.J. Madrid-Cuevas, R. Medina-Carnicer, R. Muñoz-Salinas
Department of Computing and Numerical Analysis, Córdoba University, 14071 Córdoba, Spain

ARTICLE INFO

Article history:
Received 5 September 2008
Received in revised form 8 May 2009
Accepted 10 June 2009

Keywords:
Digital planar curves
Polygonal approximation
Dominant points

ABSTRACT

This paper presents a new algorithm that detects a set of dominant points on the boundary of an eight-connected shape to obtain a polygonal approximation of the shape itself. The set of dominant points is obtained from the original break points of the initial boundary, where the integral square error is zero. For this goal, most of the original break points are deleted by suppressing those whose perpendicular distance to an approximating straight line is lower than a variable threshold value. The proposed algorithm iteratively deletes redundant break points until the required approximation, which relies on a decrease in the length of the contour and the highest error, is achieved. A comparative experiment with another commonly used algorithm showed that the proposed method produced efficient and effective polygonal approximations for digital planar curves with features of several sizes.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Image processing techniques have been widely used in object recognition, industrial dimensional inspection, monitoring tasks, and many other diverse fields of science and engineering. Extracting meaningful features from digital planar curves is an important target in many machine vision applications [56].

Since Attneave's famous observation that information regarding the shape of a curve is concentrated at dominant points [4], the detection of such points has been an important research area for the contour methods of shape analysis. Dominant points are important features in image matching, shape description, and pattern recognition because they provide significant data reduction while preserving crucial information about the object [23].

The advantages of using dominant points in computer vision are clear: in an aerial image, dominant points indicate man-made objects; in a time sequence, dominant points can be used to compute the displacement between each pair of consecutive images [23].

Shape representation by polygonal approximation has been extensively used for constructing a characteristic description of a boundary in the form of a series of straight lines. This representation is very popular due to its simplicity, locality, generality, and compactness [34].

Polygonal approximation is used in important applications to recognise planar objects, for example in:

• Goyal et al. [17], for the recognition of numerals on car number plates and aircraft.
• Grumbach et al. [18], for the representation of geographic information.
• Semyonov [50], for electroculographic biosignal processing.
• Shape understanding [4].
• Image analysis by providing a set of feature points [37].
• Image matching algorithms [24,51,57].

Many efficient algorithms for the polygonal approximation of digital curves have been developed. The two main approaches to the problem are [30]:

• Fitting the curve with a sequence of line segments that reduces a certain error criterion or optimality constraint.
• Finding a subset of dominant points as vertices of the approximating polygon.

In the first approach, the goal is to capture the essence of the boundary shape with the least number of straight-line segments. Some methods in this approach obtain an optimal polygonal approximation with regard to the number of segments and the maximal error [38,40,47,48]. In most cases, the number and position of the detected vertices depend on a user-selected threshold and differ with variation in the size and orientation of the shape [30]. The second approach obtains the vertices of the approximating polygon directly by detecting the corners of digital boundaries (dominant points). Generally, dominant points are points of high curvature. For this reason, these methods can be classified into three categories:

• Those that search for dominant points by estimating the curvature directly in the original picture space [44,45,11,1,52,2,41,42,58,8,20,26].

∗ Corresponding author. Tel.: +34 957212189; fax: +34 957218630.
E-mail address: ma1capoa@uco.es (A. Carmona-Poyato).
doi:10.1016/j.patcog.2009.06.010

• Those that evaluate the curvature by transforming the contour to the Gaussian scale space [14,35,36,43].
• Those that search for dominant points using some significant measure other than curvature [5,9,13,25,19,53,29,55,56,30,7,3,31,15,32,33].

Some of the direct curvature estimation methods use angle detection algorithms and need a previous region of support to estimate the curvature. Other algorithms require a user-defined threshold to obtain the dominant points. Finally, other algorithms use non-parametric procedures relying on the geometry of the contour to obtain the support region dynamically for each boundary point [52].

In this paper, an algorithm for the polygonal approximation of digital planar curves is presented. This algorithm relies on break point suppression to detect dominant points and obtain an efficient polygonal approximation. To estimate the break points (i.e., where the integral square error is zero), all collinear points are deleted using the proposed algorithm. The algorithm then iteratively obtains dominant points by deleting quasi-collinear break points using a variable distance as a threshold value. The algorithm finishes when a required approximation, which relies on the contour length decrease and the highest error, is achieved.

In Sections 2 and 3, the comparison methods and the proposed method are described. The experimental results are shown in Section 4, and the main conclusions are summarised in Section 5.

2. Dominant point detection

Since information on a curve is concentrated at the corners, corner detection is an important research area in the contour methods of shape analysis. Such points are commonly known as dominant points. Dominant points are usually identified as points with extreme local curvature. In the continuous case, the curvature of a point is defined as the rate of change between the tangent angle and the arc length:

κ = dθ/ds   (1)

In the discrete space, many algorithms have been suggested to calculate the curvature of a point using information from neighbouring points. Those neighbouring points are designated as the region of support for a given point.

Although dominant points can constitute an approximating polygon, polygonal approximation is conceptually different from dominant point detection [22]. Polygonal approximation seeks to find a polygon that best fits the given digital curve, and such a polygon may or may not be based on connecting the detected dominant points.

2.1. Related work

Teh and Chin [52] used the ratio of the distance between Pi and the chord Pi−k Pi+k (di,k), as well as the length of the chord (li,k), to determine the support region:

r_{i,k} = d_{i,k} / l_{i,k}   (2)

This ratio can be considered a measure equivalent to curvature.

Ansari et al. [2] proposed a method that assigns, for each boundary point, a support region to the point based on its local properties. Each point is then smoothed by a Gaussian filter with a width proportional to its support region. A significance measure for each point is then computed. Dominant points are finally obtained through non-maximal suppression.

Ray and Ray [41] proposed a k-cosine-based method to estimate the support region. Rosenfeld and Johnston [44] defined the k-cosine, which can be considered a measurement equivalent to curvature.

Sarkar [49] used a method for determining dominant points that is based purely on chain code manipulation; the coordinates of the digitised points are not required to be known or found.

Zhu et al. [58] used a method, which is reliable and robust with regard to noise, to obtain previous pseudodominant points. A critical level is assigned to these points, based on a triangular area defined by the two neighbouring pseudodominant points. Points with lower critical levels are deleted until the required level of approximation is obtained.

Cornic [9] pointed out that both Teh and Chin's algorithm and Ray and Ray's algorithm are not robust for noisy contours, because the local maximum curvature may be caused by noisy variations on the curve. He proposed an adaptive algorithm that does not rely on curvature to detect dominant points. The main idea is to measure the significance of each point for describing the contour and to evaluate this significance not only by using the point itself, but also through the other points on the curve. This author noticed that a few points are themselves a limit of the support region of many points; most of the time, those points are dominant. The measure of the significance of a point Pi is given by a function of the number of times it has been the limit of a support region.

Latecki et al. [27] proposed a method based on the suppression of a vertex with a minimal relevance measure that uses discrete curve evolution (DCE). The relevance measure was based on differences in the length of neighbouring vertices and their turn angles.

Wu [55] proposed a simple measurement to detect corners. He used an adaptive bending value to determine the region of support for each point in the contour.

Marji and Siy [30] proposed a new algorithm that detects a set of dominant points of a contour, which constitute the vertices of a polygonal approximation of the contour itself. The algorithm first calculates the independent left and right support region for each point using a non-parametric, least-squares error criterion. The end points of the support region are called nodes, and their strength is measured by the frequency of their selection (similar to Cornic's method). Only nodes with a strength greater than zero are considered. The nodes are sorted in descending order (strongest node first), and the length of the support region is used to sort the equal-strength nodes (largest support region first). The approximating polygon is obtained by connecting the estimated dominant points. However, some of these points can be eliminated because (1) their contribution to the global contour is minimal and (2) they introduce non-significant errors into the representation. For this purpose, Marji and Siy [30] used an algorithm for the suppression of collinear points and another algorithm for the suppression of adjacent dominant points.

Carmona et al. [7] proposed a new algorithm to search for dominant points using a significant measure other than curvature. First, the support region for each point was calculated; the left and right support regions for each point were independent. An adaptive bending value was used to calculate the estimated curvature at a point Pi. Once the dominant points are obtained, some of these points affect the global contour only minimally (since the distance from these dominant points to the line that joins the immediate left Pl and right Pr dominant points is 0 or close to 0). A method to eliminate the collinear points based on an optimisation procedure was proposed.

Gao et al. [15] proposed a new corner detection method for boundaries that is based on a dyadic wavelet transform (WT) at local natural scales. The points corresponding to wavelet transform modulus maxima (WTMM) at different scales are taken as corner candidates. For each candidate, the scale at which the maximum value of the normalised WTMM exists is defined as the "local natural scale"; the corresponding modulus is taken as its significance measure. This approach achieves a more accurate estimation of the natural scale of each candidate than the existing global natural scale-based methods.
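To make the chord-based curvature measures above concrete, here is a minimal sketch of the Teh–Chin ratio of Eq. (2). The function name and the wrap-around indexing convention for a closed contour are our own:

```python
import math

def chord_ratio(points, i, k):
    """Teh-Chin ratio r_ik = d_ik / l_ik for point i with support k.

    d_ik is the perpendicular distance from P_i to the chord joining
    P_{i-k} and P_{i+k}; l_ik is the length of that chord.  Indices
    wrap around the closed contour.
    """
    n = len(points)
    x1, y1 = points[(i - k) % n]          # chord start P_{i-k}
    x2, y2 = points[(i + k) % n]          # chord end   P_{i+k}
    x0, y0 = points[i]                    # the point under test
    l_ik = math.hypot(x2 - x1, y2 - y1)
    # Perpendicular distance via the cross-product formula.
    d_ik = abs((x0 - x1) * (y2 - y1) - (y0 - y1) * (x2 - x1)) / l_ik
    return d_ik / l_ik
```

On a right-angle corner the ratio is large (0.5 for the corner of an L-shaped run of points), while on a straight run it is exactly zero, which is what makes it usable as a curvature surrogate.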

Masood [31,32] proposed a new method based on dominant point deletion. An initial set of dominant points was used. All of the break points were selected as the initial set, because they produce a deviation of zero from the original contour; this was the basic requirement of the algorithm. Masood obtained these break points using Freeman's chain code. Deletion of any dominant point would increase the sum of the squared error, ISE (i.e., the distance of all curve points from the polygonal approximation). The perpendicular distance of any point Pk(xk, yk) from the straight line connecting the points Pi(xi, yi) and Pj(xj, yj) is calculated as

d = sqrt[ ((xk − xi)(yj − yi) − (yk − yi)(xj − xi))² / ((xi − xj)² + (yi − yj)²) ]   (3)

He used an algorithm in which, at each step, the dominant point whose deletion produced the minimum increase in the ISE was deleted.

Masood [33] improved this method using a local optimisation algorithm to reduce the error (ISE) after each deletion.

2.2. Efficiency of dominant point detectors

The focus of research in polygonal approximation is to obtain a reasonable/required compression level with minimum error/distortion [33]. The efficiency of dominant point detectors, or the quality of a polygonal approximation, can be measured by the amount of data reduction and the closeness to the original boundary. These are the two most important measures used for assessing the results of polygonal approximation algorithms. Many other measures, based on these, have been used to evaluate and compare the efficiency of dominant point detectors. These measures are:

• Compression ratio (CR):

CR = n / nd   (4)

where n is the number of points in the contour and nd is the number of points of the polygonal approximation. A small number of dominant points implies a large compression ratio.

• The sum of squared error (ISE), defined as

ISE = Σ_{i=1}^{n} e_i²   (5)

where ei is the distance from Pi to the approximating line segment, obtained using Eq. (3).

• Sarkar [49] combined these two measures as a ratio, producing a normalised figure of merit (FOM), defined as

FOM = CR / ISE   (6)

• The maximum error (E∞) is the maximum value of ei. The ISE can hide a large error at a particular point due to the closeness of the polygonal approximation to other parts of the curve. This is not desirable and may result in the hiding of important features of a given boundary. Therefore, the maximum error is also an important error measurement.

Rosin [46] showed that the two terms in the FOM are not balanced, causing the measure to be biased toward approximations with lower ISE (which can be easily attained by increasing the number of detected dominant points). Hence, it is not the best measure for comparing contours with different numbers of dominant points. Rosin used two components: fidelity and efficiency. Fidelity measures how well the polygon obtained by the algorithm to be tested fits the curve relative to the optimal polygon, in terms of the approximation error. Efficiency measures how compact the polygon obtained by the algorithm to be tested is, relative to the optimal polygon that incurs the same error. These components are defined as

Fidelity = (Eopt / Eapprox) × 100   (7)

Efficiency = (Mopt / Mapprox) × 100   (8)

where Eapprox is the error incurred by the algorithm to be tested and Eopt is the error incurred by the optimal algorithm, with both algorithms set to produce the same number of lines. Mapprox is the number of lines in the approximating polygon produced by the algorithm to be tested, and Mopt is the number of lines that the optimal algorithm would require to produce the same error as the tested algorithm. To obtain the optimal polygon in terms of the approximation error, Perez and Vidal [38] proposed a method using dynamic programming.

The advantage of Rosin's [46] evaluation criteria over Sarkar's [49] method is that they can be used to compare the results of polygonal approximations with different numbers of dominant points.

Depending on the shape of the curve, the two measures may vary considerably. Rosin therefore used a combined measure, the geometric mean of fidelity and efficiency:

Merit = sqrt(Fidelity × Efficiency)   (9)

To avoid the same problem, Marji and Siy [30] used a modified version of the FOM (in this case, the inverse of the FOM). The new measure is defined as

WE_x = ISE / CR^x   (10)

where x is used to control the contribution of the denominator to the overall result in order to reduce the imbalance between the two terms. These authors used x = 1, 2 and 3.

3. Proposed method

Our method first obtains all of the break points by suppression of the collinear points. For this purpose, the algorithm proceeds as follows:

• An initial break point Pini1 is selected. This point can be any break point of the original contour.
• A distance dt near zero (e.g., 0.1) is set as a threshold value.
• The points whose distance (d) from the straight line that joins the next and previous points is lower than the threshold value (dt) are deleted, because they are collinear points. This elimination process can be summarised as follows:
  ◦ select the initial point Pini1 as Pi, and the second and third points as Pj and Pk
  ◦ repeat
  ◦ the perpendicular distance d of Pj from the straight line connecting the points Pi and Pk is calculated using Eq. (3)
  ◦ if d <= dt then
      Pj is eliminated (Fig. 1)
      Pj ← Pk and Pk ← Pk+1
  ◦ else
      Pi ← Pj, Pj ← Pk and Pk ← Pk+1
  ◦ end-if
  ◦ until Pj = Pini1
  ◦ end-repeat

Fig. 1. Elimination of redundant break points (I): perpendicular distance d of Pj from the straight line connecting Pi and Pk.

Fig. 2. Elimination of redundant break points (II).

Fig. 3. Dominant points of the chromosome contour: (a) rt = 0.7, n = 12, (b) rt = 0.5, n = 12, (c) rt = 0.4, n = 12, and (d) rt = 0.3, n = 31.

All of the collinear points are thus eliminated, and all of the break points are obtained.

This idea is shown in Figs. 1 and 2. The first figure shows the perpendicular distance, d, of the central point from the straight line connecting the previous and the next points. In Fig. 2, 1.0 is used as a threshold value to eliminate the first redundant break points. To simplify the explanation, all of the points of the contour are used, including the collinear points. In all cases except for case (h), the central point is eliminated.

Using an initial point Pini for each iteration, an iterative procedure similar to the method for the suppression of collinear points described above is used. In this case, Pi, Pj and Pk are consecutive break points:

• an initial distance dt is set as the threshold value
• repeat
  ◦ the redundant quasi-collinear break points with a distance (d) lower than the threshold value (dt) are deleted, using the algorithm used for the suppression of collinear points
  ◦ dt is increased
• until the final condition is satisfied
• end-repeat

An initial value of 0.5 is used for the distance threshold, because the minimum possible error is 1/√5 ≈ 0.447 once the collinear points are removed. This case happens with three consecutive break points (without intermediate collinear points) when the chain codes of the first and second differ by one. For this reason, it is possible to remove such break points in the first iteration using an initial value of 0.5. A 0.5 increment (a low value) is used to avoid having many points satisfy the final condition in the last iteration; if a high value is used as the increment, many unnecessary points that satisfy the final condition can be obtained in the last iteration. In most cases, a maximum of only three or four iterations is necessary; higher values do not greatly reduce the efficiency of the method.

Horng [21] noted the significance of the initial point for achieving the minimum-error fitting. The initial point must be within the final set of dominant points.

To obtain the initial point, any break point is used as the initial point in the first iteration (dt = 0.5); redundant break points in this iteration are eliminated. After the first iteration, the initial point for the next iteration is obtained. For this purpose, the break point with the maximum distance to the straight line that joins the next and previous break points is selected. This initial point will remain and will finally be a dominant point. In Figs. 3–6, the initial point is shown as the most prominently marked point. These figures show that the initial point remains in all iterations.

Obtaining the initial point after the first iteration allows us to avoid the problem in which all points of the initial contour are break points. The initial point obtained is also less sensitive to noise.

The elimination of redundant break points can be continued in this way until the required level of approximation is obtained. The termination condition can depend upon the requirements of the end user. Masood [31] proposed a termination condition that relies on the maximum error (E∞); he used 0.9 as a threshold value to establish the final condition. We believe that this value cannot be fixed, because it does not take into account the size, shape, and noise of the original boundary. In any case, it would have to be a value that varies depending on the size, shape, and noise of the original contour.
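The iterative suppression described above can be sketched as follows. As an illustration, the paper's final condition is replaced here by a fixed iteration count (the ratio-based condition is introduced later in this section), and the Eq. (3) distance helper is reimplemented so the sketch is self-contained; all names are ours:

```python
import math

def perp_distance(p, a, b):
    """Perpendicular distance of p from the line through a and b (Eq. (3))."""
    num = (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])
    return abs(num) / math.hypot(b[0] - a[0], b[1] - a[1])

def reduce_break_points(break_pts, iterations=3, dt0=0.5, step=0.5):
    """Iteratively delete quasi-collinear break points with a growing
    threshold dt, starting at 0.5 and increasing by 0.5 per iteration."""
    pts = list(break_pts)
    dt = dt0
    for _ in range(iterations):
        i = 0
        while i < len(pts) and len(pts) > 3:
            a, b, c = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
            if perp_distance(b, a, c) <= dt:
                del pts[i]                # quasi-collinear: suppress
            else:
                i += 1
        dt += step                        # raise the threshold for the next pass
    return pts
```

For example, a square contour carrying one exactly collinear point and one point 0.4 away from its edge collapses to the four corners in the first pass, since both deviations fall below the initial threshold of 0.5.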

Fig. 4. Dominant points of the leaf contour: (a) rt = 0.7, n = 21, (b) rt = 0.5, n = 21, (c) rt = 0.4, n = 21, and (d) rt = 0.3, n = 49.

Fig. 5. Dominant points of the infinity contour: (a) rt = 0.7, n = 10, (b) rt = 0.5, n = 10, (c) rt = 0.4, n = 11, and (d) rt = 0.3, n = 27.

Fig. 6. Dominant points of the semicircle contour: (a) rt = 0.7, n = 6, (b) rt = 0.5, n = 7, (c) rt = 0.4, n = 21, and (d) rt = 0.3, n = 44.

Fig. 7. Trivial cases: (a) rectangle 1, (b) rectangle 2, (c) final contour in rectangle 1, and (d) final contour in rectangle 2.

We propose a termination condition based on the decrease in length associated with a deleted break point Pi, Δli,j, and the maximum error, E∞,j, obtained in the j-th iteration. The length evolution and the maximum error show the deformation of the boundary when break points are eliminated. Therefore, we use the ratio ri, defined as

r_i = Δl_{i,j} / E_{∞,j}   (11)

In the first iteration, a great number of noisy break points are suppressed, the maximum error is small, and the length of the boundary is significantly reduced; however, the decrease in length associated with any single point, and therefore the ri values, are small.

There is a special and trivial case in which all points of the contour are break points (a zigzag contour). In this case, a high value of ri (close to 0.9) is obtained for the second and subsequent break points. If a break point satisfies the final condition, we avoid this drawback by checking whether the break point is a zigzag point (i.e., whether the next and previous points are break points too). In this case, the second point in the zigzag is removed. Therefore, the next points in the zigzag do not satisfy the final condition, because the ri value of the next break point is reduced. The next break point in the zigzag is suppressed in the elimination process using the threshold distance. This case is shown in Fig. 7.

In the early iterations, a high number of redundant break points are eliminated when the threshold is increased. The maximum error is thus considerably greater than the decrease in length associated with a point, and the ri values are small. The obtained boundary is consequently very similar to the original boundary, but the number of dominant points can be high, depending on the shape of the original boundary.

In the following iterations, the number of redundant points eliminated decreases. If important break points are deleted, the ri values are much higher than the previous ri values, and the obtained boundary is very different from the original boundary.

The termination condition is satisfied when any ri value exceeds a threshold value, rt, that is experimentally determined. In the last iteration, all selected break points are suppressed, except for those that meet the condition.

A threshold value rt between 0.5 and 0.7 was obtained experimentally. If a value close to 0.7 was used, a boundary with few dominant points and a high maximum error was obtained; in this case, the polygonal approximations were not very good, but the original contour was recognisable in the obtained approximations.
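As an illustration of Eq. (11), here is a minimal sketch of the per-point ratio test; the function names are ours, and Δl is computed as the drop in contour length caused by deleting the point:

```python
import math

def length_decrease(prev_pt, pt, next_pt):
    """Delta-l of Eq. (11): drop in contour length if pt is deleted,
    i.e. |prev-pt| + |pt-next| - |prev-next|."""
    dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
    return dist(prev_pt, pt) + dist(pt, next_pt) - dist(prev_pt, next_pt)

def satisfies_final_condition(prev_pt, pt, next_pt, e_max, rt):
    """True when r_i = delta_l / E_max exceeds the threshold rt."""
    return length_decrease(prev_pt, pt, next_pt) / e_max > rt
```

A point that sticks far out of the line joining its neighbours shortens the contour a lot when removed, so its ratio is large and it triggers the final condition; a nearly collinear point does not.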

Fig. 8. Polygonal approximations for other real contours (rt = 0.5).

If a value lower than 0.5 was used, a boundary with many dominant points and a small maximum error was obtained. If a value equal to 0.5 was used, a good polygonal approximation, with a reduced number of dominant points and a greater maximum error, was obtained.

4. Experimental results

The proposed method was applied to four contours, the chromosome, infinity, leaf, and semicircle curves, which have been commonly used as planar curves in many previous and recent studies [32,33,31,15,7,30,55,56]. An initial value of 0.5 for the distance threshold, with a 0.5 increment, was used. The resultant contours, using rt values of 0.7, 0.5, 0.4, and 0.3, are shown in Figs. 3–6.

For the chromosome and leaf shapes, the results of the proposed algorithm were similar when rt took values of 0.7, 0.5, and 0.4. For the infinity and semicircle shapes, the results of the proposed algorithm were similar when rt took values of 0.7 and 0.5.

For the chromosome, leaf, and infinity shapes, the obtained boundary was very similar to the original boundary when rt was between 0.5 and 0.7. For the semicircle shape, however, the obtained boundary differed from the original boundary.

When a value of 0.4 was used for rt, all of the obtained boundaries were very similar to the original boundary, and the number of dominant points was small. When a value of 0.3 was used for rt, however, the number of dominant points was very high.

The proposed method was applied to other real boundaries, using 0.5 and 0.7 as rt values. The polygonal approximations obtained are shown in Figs. 8 and 9. The contours shown in Figs. 8 and 9 were made from the segmentation of real shapes by scanning the real images (using an Epson Perfection Photo V100 scanner set to 300 dpi resolution) and using manual segmentation. Then, we used the classic algorithm [16] to obtain the chain code from each segmented silhouette.

Table 1 shows a summary of the results of Figs. 8 and 9.

If rt was equal to 0.7, the figures show that a boundary with few dominant points and a high maximum error was obtained; the obtained contour was very similar to the original contour in almost all cases. Moreover, if a value equal to 0.5 was used, a boundary with many dominant points and a small maximum error was obtained; the obtained contour was very similar to the original contour in almost all cases.

The value of rt can be selected based on the requirements of the end user. To obtain more precise contours, a value of 0.5 can be used. To obtain contours that are less precise, with few dominant points, a value of 0.7 can be used.

Since the proposed method can obtain polygonal approximations with different numbers of dominant points, the results of intermediate iterations were used to obtain dominant point numbers equal to those obtained by the algorithms under comparison. Comparative results for the popular boundaries of the chromosome, leaf, infinity, and semicircle curves are listed in Tables 2 and 3.

For the chromosome shape, the results of the proposed algorithm are listed in Table 2 with 18, 16, 15, and 12 dominant points for comparison with other algorithms. This table shows that the maximum error (E∞) of the proposed method was less than or equal to that of all of the other algorithms, except for that of Masood [32,33] in the case of 12 dominant points. The best results were obtained by the Masood algorithm [33], followed by our proposed method, which was better than the rest of the algorithms with regard to both the ISE and the FOM.

For the leaf shape, the results of the proposed algorithm are listed in Table 2 with 28, 23, and 22 dominant points. This table shows that the maximum error (E∞) of the proposed method was less than or equal to that of all of the other algorithms, except for that of Masood [33] in the case of 28 dominant points. The Masood algorithm [32] performed similarly, and the other Masood algorithm [33] performed the best. The proposed method was better than the rest of the algorithms with regard to the ISE and FOM.

Fig. 9. Polygonal approximations for other real contours (rt = 0.7).

Table 1
Summary of the results for other real contours.

Contour       N      rt = 0.5         rt = 0.7
                     ndp    CR        ndp    CR
Tinopeners    580    25     23.2      20     29.0
Plane1        1015   20     50.8      17     59.7
Plane2        787    10     78.7      7      112.4
Plane3        1073   24     44.7      11     97.5
Plane4        1126   19     59.3      17     66.2
Plane5        1098   28     39.2      7      156.9
Rabbit        745    28     26.6      10     74.5
Screwdriver   1677   8      209.6     4      419.2
Dinosaur1     795    38     20.9      23     34.6
Dinosaur2     625    29     21.6      19     32.9
Dinosaur3     674    29     23.2      26     25.9
Hand          1045   20     52.3      15     69.7
Hammer        2701   10     270.1     10     270.1
Fish-sword    2743   33     83.2      31     88.5
Turtle        553    29     19.1      23     24.0
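The CR column of Table 1 is simply Eq. (4) applied to each contour; a one-line sketch makes the check explicit:

```python
def compression_ratio(n, ndp):
    """Eq. (4): contour points per dominant point, CR = n / n_d."""
    return n / ndp
```

For the Tinopeners contour at rt = 0.5, for example, 580 contour points reduce to 25 dominant points, giving the CR of 23.2 shown in the table.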

For the semicircle shape, the results of the proposed algorithm are listed in Table 3 with 30 and 26 dominant points for comparison with other algorithms. This table shows that the maximum error (E∞) of the proposed method was less than or equal to that of all of the other algorithms, except for that of Masood [33] in the case of 26 dominant points. The Masood algorithm [32] performed similarly, and the other Masood algorithm [33] performed the best. The proposed method was better than the rest of the algorithms with regard to the ISE and FOM.

Finally, for the infinity shape, the results of the proposed algorithm are listed in Table 3 with 13 and 10 dominant points. The maximum error (E∞) of the proposed method was less than that of all of the other algorithms. The proposed method was better than the rest of the algorithms with regard to the ISE and FOM.

Rosin [46] compared 31 algorithms using the semicircle shape presented by Teh and Chin [52]. He computed each algorithm's fidelity, efficiency, and merit based on the optimal solution by Perez and Vidal [38]. Table 4 shows the fidelity, efficiency, and merit for the semicircle shape using the proposed method and other top-rated existing methods for different polygonal approximations. The proposed algorithm produced very good results, similar to those from the best algorithms.

Table 5 shows the complete results when our method and Masood's [31] were applied to real contours. The latter was selected because (1) its computational complexity is similar to ours, (2) it is very recent, and (3) it has obtained good results. In Table 5 the parameter FOM2, defined as

FOM2 = CR² / ISE   (12)

is included.
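The merit figures of Eqs. (6) and (12) can be sketched directly; as a check, the values below use the Ray–Ray [41] chromosome row of Table 2:

```python
def fom(cr, ise):
    """Sarkar's normalised figure of merit, Eq. (6)."""
    return cr / ise

def fom2(cr, ise):
    """Eq. (12): squaring CR weights compression more heavily than
    the plain FOM does, reducing the bias toward low-ISE polygons."""
    return cr ** 2 / ise
```

With CR = 3.33 and ISE = 5.57 (Ray–Ray [41], chromosome, 18 points), fom gives approximately 0.60, matching the FOM column of Table 2.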

Table 2
Comparison with other methods.

Contour      Method                       ndp   CR     E∞     ISE     FOM
Chromosome   Ray–Ray [41]                 18    3.33   0.71   5.57    0.60
             Ray–Ray [42]                              0.65   4.81    0.69
             Zhu and Chirlian [58]                     0.71   3.44    0.97
             Latecki and Lakamper [27]                 1.99   27.06   0.12
             Masood [32]                               0.52   2.88    1.16
             Masood [33]                               0.51   2.83    1.18
             Proposed                                  0.51   3.01    1.11
             Ansari and Huang [2]         16    3.75   2.00   20.25   0.19
             Zhu and Chirlian [58]                     0.71   4.68    0.80
             Latecki and Lakamper [27]                 1.98   32.02   0.12
             Wu [56]                                   0.69   4.70    0.61
             Masood [32]                               0.52   3.84    0.98
             Masood [33]                               0.63   3.49    1.07
             Proposed                                  0.51   3.97    0.95
             Teh and Chin [52]            15    4.00   0.74   7.20    0.56
             Zhu and Chirlian [58]                     0.74   5.56    0.72
             Latecki and Lakamper [27]                 1.98   38.58   0.10
             Masood [32]                               0.63   4.14    0.97
             Masood [33]                               0.76   3.88    1.03
             Proposed                                  0.63   4.27    0.94
             Marji and Siy [30]           12    5.00   0.90   8.03    0.62
             Zhu and Chirlian [58]                     0.89   8.92    0.56
             Latecki and Lakamper [27]                 2.16   45.61   0.11
             Masood [32]                               0.88   7.76    0.65
             Masood [33]                               0.79   5.82    0.86
             Proposed                                  0.89   7.92    0.63
Leaf         Cronin [10]                  28    4.29   0.74   7.30    0.59
             Latecki and Lakamper [27]                 2.83   54.05   0.08
             Masood [32]                               0.66   6.91    0.62
             Masood [33]                               0.92   6.19    0.69
             Proposed                                  0.74   6.83    0.63
             Sarkar [49]                  23    5.22   0.78   13.17   0.40
             Wu [56]                                   1.00   20.34   0.26
             Zhu and Chirlian [58]                     0.89   11.56   0.44
             Latecki and Lakamper [27]                 2.83   60.40   0.09
             Carmona et al. [7]                        *      15.63   0.33
             Masood [32]                               0.74   10.61   0.49
             Masood [33]                               0.92   9.46    0.55
             Proposed                                  0.74   10.68   0.49
             Marji and Siy [30]           22    5.45   0.78   13.21   0.41
             Zhu and Chirlian [58]                     0.89   13.71   0.39
             Latecki and Lakamper [27]                 2.83   60.55   0.09
             Masood [32]                               0.74   11.16   0.49
             Masood [33]                               0.95   10.66   0.51
             Proposed                                  0.74   11.16   0.49

Rosin [46] showed that the two terms in the FOM are not balanced, causing the measure to be biased toward approximations with lower ISEs. This drawback becomes more evident for real contours, which usually contain a large number of points. Therefore, Marji and Siy [30] proposed a measurement similar to the FOM2, namely 1/FOM2 (see Section 2.2), to compare the efficiency of different methods. Additionally, Carmona et al. [7] proved that the FOM2 demonstrates better performance than the FOM.
The results shown in Table 5 can be summarised as follows:

• The Emax values obtained from the proposed method were greater than those obtained by Masood's method, because the latter used 0.9 as a threshold but did not take the scale into account.
• The FOM values obtained from the proposed method, which takes into account the ISE and CR, were lower than those obtained from Masood's method. The reason for this is that the FOM was very biased because of the low ISE values in Masood's method. Therefore, Masood's method obtained approximations with a large number of dominant points.
• The FOM2 values obtained with our method for rt = 0.5 were slightly better than those obtained with Masood's. However, our method obtained polygonal approximations that required 90% fewer dominant points (on average). The FOM2 values obtained with our method for rt = 0.7 were worse than those obtained with Masood's method in most cases. For this value of rt, we would like to point out that our method approximated the polygon using the minimum possible number of dominant points while preserving the original shape. Thus, the errors obtained for these approximations were very high.

In conclusion, our results for the best approximation (rt = 0.5), taking into account the best comparison measurement (FOM2), were slightly better than those obtained with Masood's method. Nevertheless, we obtained polygonal approximations requiring 90% fewer dominant points.
Finally, the proposed method was compared with Masood's methods, taking into account the time complexity and ISE, using real

Table 3
Comparison with other methods (continued).

Contour Method ndp CR E∞ ISE FOM

Semicircles Cornic [9] 30 3.40 * 9.19 0.37


Cronin [10] 0.49 2.91 1.17
Zhu and Chirlian [58] 0.63 4.30 0.79
Latecki and Lakamper [27] 1.00 4.54 0.75
Masood [32] 0.49 2.91 1.17
Masood [33] 0.49 2.64 1.29
Proposed 0.49 2.91 1.17

Wu [56] 26 3.92 0.88 9.04 0.43


Marji and Siy [30] 0.74 9.01 0.44
Zhu and Chirlian [58] 0.63 4.91 0.80
Latecki and Lakamper [27] 1.21 13.04 0.30
Masood [32] 0.63 4.91 0.80
Masood [33] 0.49 4.05 0.97
Proposed 0.63 4.91 0.80

Infinity Teh-Chin [52] 13 3.46 * 5.93 0.58


Wu [56] 1.11 5.78 0.60
Proposed 0.63 2.65 1.30

Cornic [9] 10 4.50 * 4.30 1.05


Carmona et al. [7] * 5.56 0.81
Proposed 0.90 5.29 0.90

Table 4
ISE, fidelity, efficiency and merit when proposed method and other methods are applied to Teh and Chin curve.

Method ndp ISE Fidelity Efficiency Merit

Masood [33] Any Optimum 100.0 100.0 100.0


Lowe [28] 13 21.66 95.7 98.6 97.1
Banerjee et al. [6] 6 150.53 93.3 98.7 96.0
Proposed method 30 2.91 90.7 97.4 94.0
Proposed method 24 6.18 89.2 96.3 92.7
Proposed method 26 4.91 82.5 95.4 88.7
Proposed method 21 9.82 81.7 95.8 88.5
Masood [32] 21 9.82 81.7 95.8 88.5
Proposed method 34 2.30 83.9 93.1 88.4
Sarkar [49] 20 13.65 66.0 78.9 72.2
Gao et al. [15] 13 * 40.1 54.1 73.4
Proposed method 11 48.25 67.7 88.0 77.2
Proposed method 10 65.29 60.8 90.5 74.2

Table 5
Results when the proposed method and Masood's method [31] are applied to other real contours.

Contour N rt = 0.5 rt = 0.7 Masood [31]

ndp Emax CR ISE FOM FOM2 ndp Emax CR ISE FOM FOM2 ndp Emax CR ISE FOM FOM2

Tin-openers 580 25 3.4 23.2 620.1 0.037 0.868 20 5.0 29.0 1857.5 0.016 0.453 119 1.0 4.9 22.2 0.220 1.072
Plane1 1015 20 6.3 50.8 6401.5 0.008 0.402 17 6.4 59.7 6773.8 0.009 0.526 258 1.0 3.9 47.0 0.084 0.330
Plane2 787 10 11.6 78.7 5655.4 0.014 1.095 7 21.0 112.4 62520.6 0.002 0.202 203 1.0 3.9 37.7 0.103 0.399
Plane3 1073 24 7.1 44.7 4508.7 0.010 0.443 11 26.5 97.5 135999.7 0.001 0.070 296 1.0 3.6 49.9 0.073 0.263
Plane4 1126 19 11.0 59.3 14966.3 0.004 0.235 17 11.0 66.2 14966.3 0.004 0.293 255 1.0 4.4 39.6 0.112 0.493
Plane5 1098 28 5.4 39.2 4103.3 0.010 0.375 7 32.2 156.9 147552.9 0.001 0.167 265 1.0 4.1 57.2 0.072 0.300
Rabbit 745 28 3.6 26.6 1094.5 0.024 0.647 10 13.3 74.5 18252.5 0.004 0.304 102 1.0 7.3 56.7 0.129 0.941
Screwdriver 1677 8 9.3 209.6 25570.9 0.008 1.718 4 30.6 419.3 415271.4 0.001 0.423 702 1.0 2.4 141.5 0.017 0.040
Dinosaur1 795 38 1.7 20.9 287.3 0.073 1.523 23 5.9 34.6 3800.8 0.009 0.314 155 1.0 5.1 27.0 0.190 0.975
Dinosaur2 955 38 4.4 25.1 2170.6 0.012 0.291 38 4.4 25.1 2170.6 0.012 0.291 194 1.0 4.9 40.3 0.122 0.601
Dinosaur3 674 29 3.4 23.2 975.6 0.024 0.554 26 4.6 25.9 1314.9 0.020 0.511 141 1.0 4.8 39.4 0.121 0.580
Hand 1041 20 6.3 52.1 5804.1 0.009 0.467 15 16.6 69.4 20707.2 0.003 0.233 175 1.0 5.9 56.8 0.105 0.623
Hammer 2701 10 13.2 270.1 17346.5 0.016 4.206 10 13.2 270.1 17346.5 0.016 4.206 635 1.0 4.3 176.9 0.024 0.102
Fish-sword 1912 46 13.6 41.6 18123.8 0.002 0.095 22 13.6 86.9 47090.3 0.002 0.160 989 1.0 1.9 121.1 0.016 0.031
Turtle 553 29 2.5 19.1 235.2 0.081 1.546 23 3.9 24.0 707.3 0.034 0.817 59 0.9 9.4 53.9 0.174 1.631

Averages 24.8 6.9 65.6 7190.9 0.022 0.964 16.7 13.9 103.4 59755.5 0.009 0.598 303.2 1.0 4.7 64.5 0.104 0.559


contours. The comparison was run on a standard PC with an Intel(R) Pentium(R) 4 CPU at 3.00 GHz and 1 GB of RAM, using Ubuntu(R) 8.1. For this purpose, the methods were applied using all of the initial break points in the first iteration until only three dominant points were obtained. Fig. 10 shows the results using tinopeners, plane2, rabbit, and dinosaur1 as real contours.
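The core operation these experiments repeat is the quasi-collinearity test: a break point is suppressed when its perpendicular distance to the straight line through its neighbouring break points falls below the current threshold. The following is a minimal sketch of one such suppression pass over a closed contour; the names and the single-pass simplification are illustrative, not the paper's exact procedure:

```python
import math

def perp_dist(p, a, b):
    # perpendicular distance from break point p to the line through a and b
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def suppress_pass(points, threshold):
    # one pass over a closed contour: drop break points that are
    # quasi-collinear with their previous and next neighbours
    # (simplified: every decision is taken against the original list)
    n = len(points)
    return [p for i, p in enumerate(points)
            if perp_dist(p, points[i - 1], points[(i + 1) % n]) >= threshold]

# a square whose bottom edge carries two nearly collinear break points
contour = [(0, 0), (2, 0), (4, 0.1), (6, 0), (6, 6), (0, 6)]
print(suppress_pass(contour, 0.5))  # [(0, 0), (6, 0), (6, 6), (0, 6)]
```

In this toy example a single pass already removes both near-collinear points, which illustrates why a method that deletes many break points per iteration finishes in far fewer passes than one that deletes a single point at a time.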

Fig. 10. Graphs showing the time complexity and ISE using tinopeners, plane2, rabbit, and dinosaur1 as real contours. The left column shows time values in milliseconds,
and the right column shows the ISE. A logarithmic scale has been used on the y-axis.

In the case of the time complexity, our method produced the best results since Masood's method eliminated only one break point in each iteration. However, our method eliminated a high number of break points in the first iteration. The worst case complexity of the proposed algorithm can be given as O(mn), where n is the number of points of the contour, and m is the number of initial break points. The worst case occurs when only one break point is eliminated in each iteration. This complexity is similar to that of Masood's method [31,32]. Moreover, the worst case complexity of the optimised Masood method [33] can be given as O(mn²). For this reason, the time complexity results of the proposed method were much better than those of the optimised Masood method.
In the case of the ISE, the optimised Masood method obtained the best results. Our method obtained better results than the non-optimised Masood method when the number of dominant points obtained was low, and the results were worse when the number of dominant points obtained was high. In our opinion, this is an advantage of our method: in this range of low values, a reasonably high compression level with minimum error and distortion required was obtained using our method (Fig. 8).

5. Conclusions

An efficient and simple approach to polygonal approximation is presented in this paper. Initially, our method calculates all of the break points at which the boundary makes a turn. Redundant break points are deleted when they are quasi-collinear points with the previous and next break points. For this purpose, a threshold distance

is used. This threshold is increased until a termination condition is satisfied.
We propose a termination condition based on the ratio li,j/E∞,j. The threshold value of this ratio can be selected depending upon the requirements of the end user. To obtain more precise contours with more dominant points, a threshold value of 0.5 was experimentally obtained. To obtain less precise contours with few dominant points, a threshold value of 0.7 was experimentally obtained.
The algorithm produced efficient polygonal approximations with different numbers of dominant points. The proposed method was compared to recent and commonly used algorithms. The results were similar to those from the Masood algorithm [31,32] and better than those from the other algorithms under comparison. The results were only worse than those of the optimised Masood algorithm [33], but this algorithm has greater computational complexity than the proposed algorithm since it uses an optimisation algorithm.

References

[1] I.M. Anderson, J.C. Bezdek, Curvature and tangential deflection of discrete arcs: a theory based on commutator of scatter matrix pairs and its application to vertex detection in planar shape data, IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1984) 27–40.
[2] N. Ansari, K.W. Huang, Non-parametric dominant points detection, Pattern Recognition 24 (1991) 849–862.
[3] F. Arrebola, F. Sandoval, Corner detection and curve segmentation by multiresolution chain code linking, Pattern Recognition 38 (2005) 1596–1614.
[4] F. Attneave, Some informational aspects of visual perception, Psychological Review 61 (1954) 189–193.
[5] A. Bandera, C. Urdiales, F. Arrebola, F. Sandoval, 2D object recognition based on curvature functions obtained from local histograms of the contour chaincode, Pattern Recognition Letters 20 (1999) 49–55.
[6] S. Banerjee, W. Niblack, M. Flickner, A minimum description length polygonal approximation method, Technical Report no. RJ 10007, IBM Research Division, 1996.
[7] A. Carmona-Poyato, N.L. Fernandez-Garcia, R. Medina-Carnicer, F.J. Madrid-Cuevas, Dominant point detection: a new proposal, Image and Vision Computing 23 (2005) 1226–1236.
[8] D. Chetverikov, Z. Szabo, A simple and efficient algorithm for detection of high curvature points in planar curves, in: Proceedings of the 23rd Workshop of Austrian Pattern Recognition Group, 1999, pp. 175–184.
[9] P. Cornic, Another look at dominant point detection of digital curves, Pattern Recognition Letters 18 (1997) 13–25.
[10] T.M. Cronin, A boundary concavity code to support dominant points detection, Pattern Recognition Letters 20 (1999) 617–634.
[11] H. Freeman, L.S. Davis, A corner finding algorithm for chain-coded curves, IEEE Transactions on Computers 26 (1977) 297–303.
[13] A.M.N. Fu, H. Yan, A contour bent function based method to characterize contour shapes, Pattern Recognition 30 (1997) 1661–1671.
[14] A. Garrido, N. Perez, M. Garcia-Silvente, Boundary simplification using a multiscale dominant-point detection algorithm, Pattern Recognition 31 (1998) 791–804.
[15] X. Gao, F. Sattar, A. Quddus, R. Venkateswarlu, Multiscale contour corner detection based on local natural scale and wavelet transform, Image and Vision Computing 25 (2007) 890–898.
[16] R.C. Gonzalez, R.E. Woods, Tratamiento Digital de Imágenes, Addison-Wesley, Reading, MA, 1996.
[17] S. Goyal, M.P. Kumar, C.V. Jawahar, P.J. Narayanam, Polygon approximation of closed curves across multiple views, in: Indian Conference on Vision, Graphics and Image Processing, 2002.
[18] S. Grumbach, P. Rigaux, L. Segoufin, The DEDALE system for complex spatial queries, in: Proceedings of ACM SIGMOD Symposium on the Management of Data, 1998, pp. 213–224.
[19] Y.H. Gu, T. Tjahjadi, Coarse-to-fine planar object identification using invariant curve features and B-spline modeling, Pattern Recognition 33 (2000) 1411–1422.
[20] J.H. Han, T. Poston, Chord-to-point distance accumulation and planar curvature: a new approach to discrete curvature, Pattern Recognition Letters 22 (2001) 1133–1144.
[21] J.H. Horng, J.T. Li, An automatic and efficient dynamic programming algorithm for polygonal approximation of digital curves, Pattern Recognition Letters 23 (2002) 171–182.
[22] J.H. Horng, Improving fitting quality of polygonal approximation by using dynamic programming technique, Pattern Recognition Letters 23 (2002) 1657–1673.
[23] S. Hsin-Teng, H. Wu-Chih, A rotationally invariant two-phase scheme for corner detection, Pattern Recognition 29 (1996) 819–828.
[24] X. Hu, N. Ahuja, Matching point features with ordered geometric, rigidity, and disparity constraints, IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (1994) 1041–1049.
[25] P.W. Huang, S.K. Dai, P.L. Lin, Planar shape recognition by directional flow-change method, Pattern Recognition Letters 20 (1999) 163–170.
[26] K.-L. Chung, P.-H. Liao, J.-M. Chang, Novel efficient two-pass algorithm for closed polygonal approximation based on LISE and curvature constraint criteria, Journal of Visual Communication and Image Representation 19 (2008) 219–230.
[27] L.J. Latecki, R. Lakamper, Convexity rule for shape decomposition based on discrete contour evolution, Computer Vision and Image Understanding 73 (1999) 441–454.
[28] D.G. Lowe, Three-dimensional object recognition from single two-dimensional images, Artificial Intelligence 31 (1987) 355–395.
[29] M. Marji, P. Siy, A new algorithm for dominant point detection and polygonization of digital curves, Pattern Recognition 36 (2003) 2239–2251.
[30] M. Marji, P. Siy, Polygonal representation of digital planar curves through dominant point detection—a nonparametric algorithm, Pattern Recognition 37 (2004) 2113–2130.
[31] A. Masood, S.A. Haq, A novel approach to polygonal approximation of digital curves, Journal of Visual Communication and Image Representation 18 (2007) 264–274.
[32] A. Masood, Dominant point deletion by reverse polygonization of digital curves, Image and Vision Computing 26 (2008) 702–715.
[33] A. Masood, Optimized polygonal approximation by dominant point deletion, Pattern Recognition 41 (2008) 227–239.
[34] A. Melkman, J. O'Rourke, On polygonal chain approximation, in: G.T. Toussaint (Ed.), Computational Morphology, North-Holland, Amsterdam, 1998, pp. 87–95.
[35] F. Mokhtarian, A.K. Mackworth, A theory of multiscale-based shape representation for planar curves, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992) 789–805.
[36] F. Mokhtarian, Silhouette-based isolated object recognition through curvature scale space, IEEE Transactions on Pattern Analysis and Machine Intelligence 17 (1995) 539–544.
[37] R. Neumann, G. Teisseron, Extraction of dominant points by estimation of the contour fluctuations, Pattern Recognition 35 (2002) 1447–1462.
[38] J.C. Perez, E. Vidal, Optimum polygonal approximation of digitized curves, Pattern Recognition Letters 15 (1994) 743–750.
[40] A. Pikaz, I. Dinstein, Optimal polygonal approximation of digital curves, Pattern Recognition 28 (1995) 373–379.
[41] B.K. Ray, K.S. Ray, Detection of significant points and polygonal approximation of digitized curves, Pattern Recognition Letters 22 (1992) 443–452.
[42] B.K. Ray, K.S. Ray, An algorithm for detecting dominant points and polygonal approximation of digitized curves, Pattern Recognition Letters 13 (1992) 849–856.
[43] B.K. Ray, R. Pandyan, ACORD—an adaptive corner detector for planar curves, Pattern Recognition 36 (2003) 703–708.
[44] A. Rosenfeld, E. Johnston, Angle detection on digital curves, IEEE Transactions on Computers C-22 (1973) 875–878.
[45] A. Rosenfeld, J. Weszka, An improved method of angle detection on digital curves, IEEE Transactions on Computers (1975) 940–941.
[46] P.L. Rosin, Techniques for assessing polygonal approximation of curves, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (6) (1997) 659–666.
[47] M. Salotti, An efficient algorithm for the optimal polygonal approximation of digitized curves, Pattern Recognition Letters 22 (2001) 215–221.
[48] M. Salotti, Optimal polygonal approximation of digitized curves using the sum of square deviations criterion, Pattern Recognition 35 (2002) 435–443.
[49] D. Sarkar, A simple algorithm for detection of significant vertices for polygonal approximation of chain-coded curves, Pattern Recognition Letters 14 (1993) 959–964.
[50] P.A. Semyonov, Optimized unjoined linear approximation and its application for EOG-biosignal processing, in: 12th IEEE International Conference on Engineering in Medicine and Biology Society, 1990, pp. 779–780.
[51] I.K. Sethi, R. Jain, Finding trajectories of feature points in a monocular image sequence, IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (1987) 56–73.
[52] C.H. Teh, R.T. Chin, On the detection of dominant points on digital curves, IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (1989) 859–872.
[53] C. Urdiales, A. Bandera, F. Sandoval, Non-parametric planar shape representation based on adaptive curvature functions, Pattern Recognition 35 (2002) 43–53.
[55] W.Y. Wu, Dominant point detection using adaptive bending value, Image and Vision Computing 21 (2003) 517–525.
[56] W.Y. Wu, An adaptive method for detecting dominant points, Pattern Recognition 36 (2003) 2231–2237.
[57] P.C. Yuen, Dominant points matching algorithm, Electronics Letters 29 (1993) 2023–2024.
[58] P. Zhu, P.M. Chirlian, On critical point detection of digital shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence 17 (8) (1995) 737–748.

About the Author—A. CARMONA-POYATO received his degree in Agronomic Engineering and his Ph.D. from the University of Cordoba (Spain) in 1986 and 1989, respectively. Since 1990 he has been working with the Department of Computing and Numerical Analysis of the University of Cordoba as a lecturer. His research is focused on image processing and 2-D object recognition.

About the Author—F.J. MADRID-CUEVAS received the Bachelor degree in Computer Science from Malaga University (Spain) and the Ph.D. degree from the Polytechnic University of Madrid (Spain), in 1995 and 2003, respectively. Since 1996 he has been working with the Department of Computing and Numerical Analysis of Cordoba University, where he is currently an assistant professor. His research is focused mainly on image segmentation, 2-D object recognition and the evaluation of computer vision algorithms.

About the Author—R. MEDINA-CARNICER received the Bachelor degree in Mathematics from the University of Sevilla (Spain) and the Ph.D. in Computer Science from the Polytechnic University of Madrid (Spain) in 1992. Since 1993 he has been a lecturer in Computer Vision at Cordoba University (Spain). His research is focused on edge detection, evaluation of computer vision algorithms and pattern recognition.

About the Author—R. MUÑOZ-SALINAS received the Bachelor degree in Computer Science and, in 2006, the Ph.D. degree from Granada University (Spain). Since 2006 he has been working with the Department of Computing and Numerical Analysis of Cordoba University, where he is currently an assistant professor. His research is focused mainly on mobile robotics, human–robot interaction, artificial vision and soft computing techniques applied to robotics.
