Fig. 2. Test data.

3.2. Mark point detection

In this section we describe the calculation of the prediction point in detail, as well as its application to detecting mark points. We use an example to illustrate it. Fig. 2 shows the test data, which is extracted from one scan data frame. For convenience it is a relatively simple piece of data.

There are 20 points (blue dots) in the test data. Obviously they form two line segments. The sequence runs from right to left. We select two test points and calculate their corresponding prediction points to illustrate the proposed algorithm. One is the edge point P12 and the other is P10, which lies inside a segment.

As described before, we use two lines, denoted Line 1 and Line 2, to calculate the prediction point. In Fig. 3, Line 1 is formed by the three points preceding the test point, and Line 2 is the ray from the laser scanner (the origin). The red dot denotes the test point and the red circle denotes the prediction point. As we have already obtained the scan data, there is no need to calculate the emitting angle of the test point mathematically: we can obtain Line 2 by simply joining the origin and the test point, which is very easy to implement.

In Fig. 3(a), the calculated prediction point is far from the test point P12, so the test point is considered a mark point; it corresponds to the boundary of the rupture. In Fig. 3(b), the two points are almost at the same position, since the test point P10 does lie inside the segment. The two cases are easy to tell apart. From this scheme we can see that the discontinuity in the test data can be successfully detected by the proposed method.

Several features may coexist in the same segment: lines adjoining lines, or lines adjoining circles, with no discontinuity existing between them. Fig. 4(a) and (b) show the results of detecting the boundary points between different features, i.e. the turning of the surface. In Fig. 4(a), the corner of the surface has been detected; in Fig. 4(b) an arc lies between two line segments, and the two boundary points are also found by the proposed method. Fig. 4(c) shows that scan points inside the arc can be dealt with using prediction without causing any trouble: the prediction point is not far from the test point. The test data is also shown in the figure. From these figures we can see that the prediction-based method can effectively detect the discontinuity and turning of the obstacle surface.

The proposed algorithm exploits the characteristic that the scan data is sequential, so we can use the preceding scan points to predict the position of the next point, under the assumption that they lie in the same segment.

As we use Np points to approximate the surface, from the above discussion we know that if the test point is in the same segment as its previous Np points, it will not be considered a mark point. However, at the beginning of a segment (such as the second or third point), some points belonging to the previous segment participate in the calculation of its prediction point. The result is that these points are wrongly marked. More precisely, at the beginning of each segment there may be Np − 1 points that are wrongly marked, following the boundary that has been correctly detected. Fig. 5(a) shows this situation: the red asterisks denote points considered mark points. As we set Np = 3, two points are wrongly marked.

However, the mistake is easy to amend, since the situation occurs in a regular pattern. After calculating and marking the whole scan data frame, wherever Np mark points appear in sequence, a check is needed. Assuming the first mark point is Pj, the point cluster is Pj, ..., Pj+Np−1. The check simply uses these points to calculate the prediction point of the next point Pj+Np. If the next point is in the same segment as them, the latter Np − 1 points Pj+1, ..., Pj+Np−1 are re-marked as ordinary points; otherwise no change is made. Boundary points come in pairs, so once we detect a mark point, the preceding point is also considered a mark point. Fig. 5(b) shows the correct result after the check step. As we set Np = 3 in the experiments, the check step does not need much computation. Besides, the method used to check the mark points is again prediction, so in programming the only essential piece of code is the calculation of the prediction point, which decreases the complexity of the algorithm.
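As a concrete illustration, the prediction-point test described above can be sketched as below. This is a hedged reconstruction rather than the authors' code: for simplicity, Line 1 is taken through the first and last of the Np preceding points instead of a full fit, the scanner is assumed to sit at the origin, and the function names are ours.

```python
import math

def prediction_point(prev_pts, test_pt):
    """Prediction point: intersection of Line 1 (through the Np points
    preceding the test point) with Line 2 (the ray from the scanner
    origin through the test point)."""
    (ax, ay), (bx, by) = prev_pts[0], prev_pts[-1]
    dx, dy = bx - ax, by - ay          # direction of Line 1
    tx, ty = test_pt                   # Line 2: (x, y) = t * (tx, ty)
    det = dx * ty - dy * tx
    if abs(det) < 1e-12:               # beam parallel to the surface
        return None
    t = (dx * ay - dy * ax) / det
    return (t * tx, t * ty)

def is_mark_point(prev_pts, test_pt, d_m):
    """A test point is marked when its prediction point lies farther
    away than the threshold d_m (or when no intersection exists)."""
    pred = prediction_point(prev_pts, test_pt)
    return pred is None or math.dist(pred, test_pt) > d_m
```

For an in-segment point the prediction nearly coincides with the measurement, as in Fig. 3(b); across a rupture the two are far apart, as in Fig. 3(a), and the point is marked.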
Fig. 3. Calculating prediction points: (a) an edge point as the test point, (b) an in-segment point as the test point.
Y. Zhao, X. Chen / Robotics and Autonomous Systems 59 (2011) 402–409 405
Fig. 4. (a) Detecting mark point between line segments. (b) Detecting mark point between line and arc. (c) Dealing with points that are inside the arc.
Fig. 5. (a) The result after calculating each point. (b) Correct result after check step.
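The check step illustrated in Fig. 5 can be sketched as follows, under the same assumptions as before (scanner at the origin, Line 1 through the first and last cluster points); `check_marks` and its helper are our illustrative names, not the authors' implementation.

```python
import math

def _predict(prev_pts, test_pt):
    # Intersection of the line through prev_pts (first and last point
    # used) with the ray from the scanner origin through test_pt.
    (ax, ay), (bx, by) = prev_pts[0], prev_pts[-1]
    dx, dy = bx - ax, by - ay
    tx, ty = test_pt
    det = dx * ty - dy * tx
    if abs(det) < 1e-12:
        return None
    t = (dx * ay - dy * ax) / det
    return (t * tx, t * ty)

def check_marks(points, marks, n_p, d_m):
    """Amend the Np - 1 wrongly marked points at the start of a segment:
    when Np mark points appear in sequence starting at j, predict
    P[j+Np] from the cluster P[j..j+Np-1]; if it agrees with the
    measurement, re-mark the latter Np - 1 cluster points as ordinary."""
    marks = list(marks)
    for j in range(len(points) - n_p):
        if not all(marks[j:j + n_p]):
            continue
        if j > 0 and marks[j - 1]:
            continue                   # part of a longer run, skip
        pred = _predict(points[j:j + n_p], points[j + n_p])
        if pred is not None and math.dist(pred, points[j + n_p]) <= d_m:
            for k in range(j + 1, j + n_p):
                marks[k] = False       # same segment: undo the marks
    return marks
```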
3.4. Algorithm process and analysis

The main contribution of this paper is a novel algorithm that detects the boundaries of the features and splits the features directly. After the above procedure, each detected mark point is the start or end point of one segment, so the features have already been separated. Compared to the traditional methods, the proposed algorithm has several advantages.

First, the prediction method detects features by finding the change of surface between adjacent points. To this method, breakpoints and boundary points are no different, so the traditional two-step feature separation procedure can be merged into a single stage, which is much easier to implement; the computational complexity is much reduced and easy to estimate. If one frame of data contains N points and M features, the total computation of this frame is N + M prediction-point calculations. As we use three points to form the surface in the experiments, the calculation is very fast.

Second, the algorithm is based on the laser scan data only. That is to say, no prior knowledge of the environment is required, whereas such knowledge is indispensable to the adaptive breakpoint detector. This characteristic is another advantage of the proposed algorithm, and it makes the method very easy to migrate between various environments and laser scanners with no modification.

On the other hand, as we use three points to predict the position of the next point, the minimum number of points in a segment for reliable detection is four. Compared to other methods, this is a relatively loose restriction.
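The one-stage separation analysed above can be sketched as a single pass in which each point costs one prediction-point calculation. This is an illustrative sketch under our earlier assumptions (scanner at the origin, Line 1 through the first and last preceding points, check step of Section 3.2 omitted), not the authors' code.

```python
import math

def _predict(prev_pts, test_pt):
    # Line 1 through the first and last preceding points, intersected
    # with the beam from the origin through test_pt.
    (ax, ay), (bx, by) = prev_pts[0], prev_pts[-1]
    dx, dy = bx - ax, by - ay
    tx, ty = test_pt
    det = dx * ty - dy * tx
    if abs(det) < 1e-12:
        return None
    t = (dx * ay - dy * ax) / det
    return (t * tx, t * ty)

def split_frame(points, n_p=3, d_m=0.5):
    """One-pass feature separation: each point is tested once against
    the prediction from its predecessors, so a frame of N points costs
    about N prediction-point calculations."""
    segments, current = [], list(points[:n_p])
    for pt in points[n_p:]:
        if len(current) < 2:
            current.append(pt)         # too few points to predict from
            continue
        pred = _predict(current[-n_p:], pt)
        if pred is None or math.dist(pred, pt) > d_m:
            segments.append(current)   # mark point: close the segment
            current = [pt]
        else:
            current.append(pt)
    segments.append(current)
    return segments
```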
3.5. Feature type identification and parameter acquisition

The scan data are segmented once the mark points are found, and each segment represents a separate feature, which can be a line or an arc/circle. We need to identify the feature type of each segment.

Here we use a simple but efficient way to do it. First, for each segment a line is fitted to all the points belonging to the segment using the LS method. In polar coordinates, a line can be represented by

    x cos α + y sin α = ρ

where α is the angle between the x-axis and the normal of the line, and ρ is the perpendicular distance from the origin to the line. According to [18], fitting a line to the point set {(ri, φi) | i = 1, ..., n} with the LS method gives the line parameters

    tan(2α) = p/q,
    ρ = (1/n) Σi ri cos(φi − α)

where

    p = Σi Σj>i ri rj sin(φi + φj) + (1 − n) Σi ri² sin 2φi,
    q = Σi Σj>i ri rj cos(φi + φj) + (1 − n) Σi ri² cos 2φi.

With di the distance of each point to the fitted line, the average distance over all points in the segment is

    d̄ = (1/n) Σi=1..n di.

If the segment is a line segment, the points should not be far from the fitted line and the average distance d̄ will be relatively small. But if the segment forms an arc/circle, the average distance will be sufficiently large to distinguish it from the previous case. Thus we can set a threshold df for comparison: if d̄ ≤ df, we take the segment as a line feature; otherwise the segment is treated as a circle feature.

For line features, the line parameters (ρ, α) have already been calculated during the identification procedure. If we consider each data point to have the same Cartesian uncertainty, then, converting the points from polar coordinates (ri, φi) to Cartesian coordinates (xi, yi), according to [18] the uncertainty of the line parameters can be calculated as

    σα² = A,
    σρ² = A d² + (σyy² cos² χ + σxx² sin² χ − 2σxy² sin χ cos χ)/N

where

    x̄ = (1/N) Σi xi,   ȳ = (1/N) Σi yi,
    a = Σi (xi − x̄)²,   b = 2 Σi (xi − x̄)(yi − ȳ),   c = Σi (yi − ȳ)²,
    d = ȳ sin χ + x̄ cos χ,   χ = α + π/2,
    A = (a σyy² − b σxy² + c σxx²) / ((a − c)² + b²).

For circle features we need to calculate their center points and radii. Fitting a circle to noisy data points has been widely discussed; Al-Sharadqah and Chernov give an extensive review and error analysis of existing circle fitting algorithms in [19]. The methods are divided into geometric fits, which minimize the geometric distances from the circle to the data points, and algebraic fits, which minimize various approximate (or 'algebraic') distances. Compared to geometric fits, algebraic fits usually run faster. It was shown in [20,19] that all the circle fits have the same covariance matrix, to the leading order, in the small-noise limit.

The parameters of the circle are (a, b, R), where (a, b) denotes the center and R the radius of the circle. One can fit a circle by minimizing the function

    F = Σi (ri² − R²)² = Σi (xi² + yi² − 2axi − 2byi + a² + b² − R²)².

Changing parameters to B = −2a, C = −2b, D = a² + b² − R², the function F is transformed into

    F = Σi (zi + Bxi + Cyi + D)²

where zi = xi² + yi². The problem reduces to a system of linear equations (the normal equations) in B, C, D that can be easily solved, after which one recovers the natural circle parameters (a, b, R).

We take the data points (xi, yi) as noisy observations of true points (x̃i, ỹi) with noise variance σ², which in our system is the statistical error σr² of the laser scanner. According to [19], the covariance matrix of the circle parameters Θ is given by

    V[Θ] = σ² (Wᵀ W)⁻¹

where W is the n × 3 matrix whose i-th row is (ui, vi, 1), with ui = (xi − a)/R and vi = (yi − b)/R.

3.6. Setting the thresholds

Two thresholds need to be set in the proposed method: dm, the threshold to detect mark points, and df, the threshold to identify feature types.

Investigating the distribution characteristics of laser scan data, we can see that it is affected by two factors: (1) the distance from the laser to the obstacle, and (2) the angle between the laser beam and the obstacle surface. When the obstacle is near the laser, the returned data points are dense. As the distance grows, the data points become sparse, i.e. the distance between adjacent points gets longer. When the acquired data is far from the laser range finder, the measurement uncertainty also gets higher. On the other hand, the smaller the angle between the laser beam and the obstacle surface, the longer the distance between adjacent points. This phenomenon is caused by the fan-shaped measuring mechanism of the laser range finder. Thus, when setting the threshold dm, we take the above distance and angle factors into account. It can be calculated by the following equation:

    dm = (d0/λ) (r/rmax + (π − ϕ)/π)

where r denotes the measured distance of the current point, with r ∈ (0, rmax], and ϕ denotes the acute angle between Line 1 and Line 2 in radians, with ϕ ∈ (0, π/2]. rmax is the maximum measurement range of the laser scanner. d0 is the initial threshold, with d0 = 50 in the experiments, and λ is the normalization coefficient, with λ = 2. The distance unit is the millimeter, identical to the setting of the laser scanner.

When determining the feature types, it is relatively easy to identify whether the feature is a line or not, and we use a fixed threshold df = 20, again in millimeters. Experimental results show that it correctly identifies the feature types.
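A sketch of the type-identification step, using the closed-form polar-coordinate line fit quoted from [18] and the average point-to-line distance d̄ compared against df. The function names and the sign normalisation of ρ are ours; this is an illustration, not the authors' implementation.

```python
import math

def fit_line_polar(points):
    """LS line fit x*cos(a) + y*sin(a) = rho from polar points
    (r_i, phi_i), using the closed-form solution of Section 3.5."""
    n = len(points)
    p = sum(ri * rj * math.sin(fi + fj)
            for i, (ri, fi) in enumerate(points)
            for (rj, fj) in points[i + 1:])
    p += (1 - n) * sum(r * r * math.sin(2 * f) for r, f in points)
    q = sum(ri * rj * math.cos(fi + fj)
            for i, (ri, fi) in enumerate(points)
            for (rj, fj) in points[i + 1:])
    q += (1 - n) * sum(r * r * math.cos(2 * f) for r, f in points)
    alpha = 0.5 * math.atan2(p, q)
    rho = sum(r * math.cos(f - alpha) for r, f in points) / n
    if rho < 0:                        # keep rho non-negative
        rho, alpha = -rho, alpha + math.pi
    return rho, alpha

def classify_segment(points, d_f):
    """Line if the mean point-to-line distance is within d_f,
    otherwise circle (Section 3.5)."""
    rho, alpha = fit_line_polar(points)
    d_bar = sum(abs(r * math.cos(f - alpha) - rho)
                for r, f in points) / len(points)
    return ('line' if d_bar <= d_f else 'circle'), rho, alpha
```

The threshold d_f is in the same distance unit as the range readings (millimeters for the LMS200 data described later).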
Fig. 6. (a) Pioneer3AT robot equipped with SICK LMS200 laser scanner. (b) Office-like test environment at Fudan University.
Fig. 7. (a) Raw scan data. (b) Result of mark point detection. (c) Feature extraction result. (d) Zoomed view of the environment surface.
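For segments identified as circles, the algebraic fit of Section 3.5 reduces to a 3 × 3 linear system. The sketch below, including the small Cramer's-rule solver, is our illustration of that Kåsa-style fit rather than the paper's code.

```python
import math

def fit_circle_algebraic(points):
    """Minimise F = sum((z_i + B*x_i + C*y_i + D)^2), z_i = x_i^2 + y_i^2,
    then recover a = -B/2, b = -C/2, R = sqrt(a^2 + b^2 - D)."""
    n = len(points)
    sxx = sum(x * x for x, y in points)
    sxy = sum(x * y for x, y in points)
    syy = sum(y * y for x, y in points)
    sx = sum(x for x, y in points)
    sy = sum(y for x, y in points)
    z = [x * x + y * y for x, y in points]
    szx = sum(zi * x for zi, (x, y) in zip(z, points))
    szy = sum(zi * y for zi, (x, y) in zip(z, points))
    # Normal equations M [B C D]^T = rhs.
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [-szx, -szy, -sum(z)]
    B, C, D = _solve3(M, rhs)
    a, b = -B / 2.0, -C / 2.0
    return a, b, math.sqrt(a * a + b * b - D)

def _solve3(M, v):
    # Cramer's rule for a 3x3 linear system.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    out = []
    for k in range(3):
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = v[i]
        out.append(det(Mk) / d)
    return out
```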
4. Experiments

4.1. Feature detection result

The feature extraction system is implemented on a P3AT robot equipped with a SICK LMS200 sensor. The laser is configured with a 180° field of view at 0.5° angular resolution, so each scan frame consists of 360 data points. The data unit is the millimeter. Fig. 6(a) shows the robot used in the experiment.

The experiments were carried out in the Physics Building of Fudan University. The test area is a typical indoor office-like environment composed of regular planar walls and circular pillars. Fig. 6(b) shows the test environment.

In our experiments the robot remains stationary while taking laser readings. If range scans are taken with the sensor in motion, motion correction can be applied as introduced in Section 2. The experimental results are shown in Figs. 7 and 8.

Typical raw scan data is shown in Fig. 7(a). The robot is at coordinate (0, 0), denoted by a red triangle, and the scan direction is the positive direction of the x-axis.

Fig. 7(b) shows the result of mark point detection. Red asterisks denote mark points. The feature types are also identified: arcs are represented by green dots and line segments in blue. From the figure we can see that the lines and arcs have been correctly detected, including one arc that is embedded in lines. There are several isolated mark points in the figure; they do not belong to any segment and are ignored in the subsequent processing, which does not affect the final result. Fig. 7(c) shows the feature extraction result, including two circles and several line segments. The squares denote the boundaries of the line segments. Fig. 7(d) is a zoomed view of one of the arcs. We can see that the wall is uneven; the algorithm successfully suppressed this noise, and its robustness is visually demonstrated.
Fig. 8. (a), (b) One frame of raw scan data and its feature extraction result. (c), (d) Another frame of raw scan data and its feature extraction result. (e), (f) Another test scan
data and its feature extraction result.
Fig. 8 shows another two groups of experiments. In Fig. 8(a) there are four circles, two separate and two embedded in the wall. Fig. 8(b) is the feature extraction result of this data frame; the features have been correctly extracted. Fig. 8(c) shows another frame of raw scan data. To demonstrate the method's ability to operate in environments containing many short features, we added some small boxes to the test environment, so there are many small clusters of points in the raw data. Fig. 8(d) shows the feature extraction result of this data frame; the shortest line segment in this frame contains four points. From the figure we can see that the features are correctly extracted, including two circle features.

…data from these maps. The algorithms are programmed in C++ and run on a Pentium 4 2.4 GHz PC.

To compare the speed and correctness of the algorithms, we evaluated three aspects: (1) execution time, (2) correct marking of each point, and (3) correct type identification of each feature. Correct marking of each point is verified by

    FalseMarkRatio = NumberFalseMarked / TotalPointNumber,

where NumberFalseMarked denotes the number of points that are wrongly marked, including normal points that are marked as mark points and mark points that are omitted.

Correct type identification of each feature is measured by the analogous ratio of wrongly identified features to the total number of features.

Table 1
Experiment results of the comparison.

                                Proposed method   ACF
    Average execution time (ms) 2.3               6.1
    False mark ratio            0.03              0.08
    False identification ratio  0.03              0.09

Methods based on the curvature function need the segments to contain enough data points to form an identifiable characteristic. Thus if the environment contains many short features composed of relatively small clusters of data points, the false identification ratio will significantly increase; such environments are quite common. In [15] the minimum number of points per segment has been fixed at ten. This is a disadvantage of methods based on the curvature function. However, it has much less influence on PFE: as we use three points to predict the position in the experiments, the minimum number of points in a segment for reliable detection is four. This renders the algorithm more adaptive to real environments.

5. Conclusion

In this paper a new algorithm for feature extraction from laser scan data is proposed. The main contribution is a new feature separation structure, which utilizes prediction to detect the mark points. Compared to the conventional methods, the time cost and computational complexity are much reduced. The method is based on laser data only, and no prior knowledge of the environment is needed. It also requires only a minimal number of points per segment, so it is much more adaptive to various environments than other methods. The speed and accuracy of the proposed method are demonstrated in experiments.

The proposed method can be applied to any application that needs line and/or circle feature extraction using a laser scanner. Furthermore, the proposed method can also be used for laser data segmentation by simply omitting the feature type identification and parameter acquisition part. Such applications are often encountered by robots in unknown environments. For example, the proposed method can be applied to small-sized robots such as cleaning robots and security robots, since the method is fast and compact. Our future work will focus on integrating the proposed system with real-time SLAM and target tracking.

[5] G.A. Borges, M.-J. Aldon, Line extraction in 2D range images for mobile robotics, Journal of Intelligent and Robotic Systems 40 (2004) 267–297.
[6] V. Nguyen, S. Gächter, et al., A comparison of line extraction algorithms using 2D range data for indoor mobile robotics, Autonomous Robots 23 (2007) 97–111.
[7] T. Pavlidis, S.L. Horowitz, Segmentation of plane curves, IEEE Transactions on Computers C-23 (8) (1974) 860–870.
[8] G.A. Borges, M.-J. Aldon, A split-and-merge segmentation algorithm for line extraction in 2D range images, in: Proceedings of the International Conference on Pattern Recognition, vol. 1, 2000, pp. 441–444.
[9] J. Xavier, M. Pacheco, et al., Fast line, arc/circle and leg detection from laser scan data in a player driver, in: Proceedings of the 2005 IEEE International Conference on Robotics and Automation, April 2005, pp. 3930–3935.
[10] J. Vandorpe, H. Van Brussel, et al., Exact dynamic map building for a mobile robot using geometrical primitives produced by a 2D range finder, in: Proceedings of the 1996 IEEE International Conference on Robotics and Automation, vol. 1, April 1996, pp. 901–908.
[11] S. Zhang, L. Xie, et al., Feature extraction for outdoor mobile robot navigation based on a modified Gauss–Newton optimization approach, Robotics and Autonomous Systems 54 (2006) 277–287.
[12] R.O. Duda, P.E. Hart, Use of the Hough transform to detect lines and curves in pictures, Communications of the ACM 15 (1) (1972) 11–15.
[13] J. Ryde, H. Hu, Fast circular landmark detection for cooperative localisation and mapping, in: Proceedings of the 2005 IEEE International Conference on Robotics and Automation, April 2005, pp. 2745–2750.
[14] Z. Song, Y. Chen, et al., Some sensing and perception techniques for an omnidirectional ground vehicle with a laser scanner, in: Proceedings of the 2002 IEEE International Symposium on Intelligent Control, 2002, pp. 690–695.
[15] P. Núñez, R. Vázquez-Martín, et al., Natural landmark extraction for mobile robot navigation based on an adaptive curvature estimation, Robotics and Autonomous Systems 56 (2008) 247–264.
[16] R. Madhavan, H.F. Durrant-Whyte, Natural landmark-based autonomous vehicle navigation, Robotics and Autonomous Systems 46 (2004) 79–95.
[17] K.O. Arras, N. Tomatis, et al., Multisensor on-the-fly localization: precision and reliability for applications, Robotics and Autonomous Systems 34 (2001) 131–143.
[18] P. Jensfelt, H.I. Christensen, Pose tracking using laser scanning and minimalistic environmental models, IEEE Transactions on Robotics and Automation 17 (2) (2001) 138–147.
[19] A. Al-Sharadqah, N. Chernov, Error analysis for circle fitting algorithms, Electronic Journal of Statistics 3 (2009) 886–911.
[20] P. Rangarajan, K. Kanatani, Improved algebraic methods for circle fitting, Electronic Journal of Statistics 3 (2009) 1075–1082.

Yilu Zhao was born in China in 1986. He received the B.S. and M.S. degrees in electronic engineering from Fudan University, Shanghai, China, in 2007 and 2010, respectively. He is now working toward the Ph.D. degree at Fudan University, expected in 2013. His research interests include feature extraction, SLAM, and robot exploration.