X(k+1|k) at time k+1, the covariance matrix P(k+1|k), and the predicted system observation Z(k+1|k).
In the observation stage, the sensors scan the environment and obtain the observation data Z(k+1).
In the update stage, we update the system states with the observation information Z(k+1) at time k+1.
In the EKF, we define the robot's position X_r and the features' locations X_m as the system states, and assume that the system states obey a Gaussian distribution with mean X and covariance P.
    X = E[X] = [X_r^T  X_m^T]^T,    X_m = [X_m1  X_m2  ...  X_mn]^T    (7)
    P = E[(X - E[X])(X - E[X])^T]
      = [ P_rr     P_rm1    ...  P_rmn
          P_m1r    P_m1m1   ...  P_m1mn
          ...      ...      ...  ...
          P_mnr    P_mnm1   ...  P_mnmn ]    (8)
Here X_mi is the i-th feature's location, P_mir describes the correlation between the robot position and the i-th feature's location, and P_mimj = P_mjmi means that the correlation between the i-th and j-th features is the same as the correlation between the j-th and i-th features.
Under the static-environment assumption, the features' locations do not change, i.e. X_m(k+1) = X_m(k). The sensor observation noise w(k) obeys a Gaussian distribution with zero mean and covariance R(k). The robot uses the actual observation information Z(k+1) to update the system state. EKF-based SLAM obtains increasingly accurate locations of the robot and the features by continuously iterating the prediction, observation and update stages.
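The prediction/observation/update cycle above can be sketched as a minimal EKF skeleton; the function names, interfaces and the linearized models f, F, h, H below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Prediction stage: propagate the state mean through the motion model f
    and the covariance through its Jacobian F, adding process noise Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Update stage: correct the prediction with the observation z(k+1),
    using observation model h, its Jacobian H and measurement noise R."""
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```

Iterating these two calls with the incoming observations realizes the continuous prediction-observation-update loop described above.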
Since EKF-SLAM assumes that the system states obey a Gaussian distribution, we use the size of the uncertainty ellipsoid of the covariance matrix of the robot location and the features' (or landmarks') positions as a quantitative assessment of the accuracy of SLAM, that is
    C(k|k) = C(P_rr(k|k)) + (1/n) sum_{i=1..n} C(P_mimi(k|k))
           = pi * prod_i lambda_i(P_rr) + (1/n) sum_{i=1..n} pi * prod_j lambda_j(P_mimi)
           = pi * det(P_rr) + (1/n) sum_{i=1..n} pi * det(P_mimi)    (9)
Among them, lambda_i(.) is a characteristic root (eigenvalue), C(P_rr(k|k)) = pi * det(P_rr) indicates the uncertainty of the robot location, and C(P_mimi(k|k)) indicates the uncertainty of the i-th feature. The smaller C(k|k) is, the higher the accuracy of the whole system. In addition, the error is the difference between the estimated value and the true value of the system states. Whether the error lies within the 95% hypothesis-testing range determined by the covariance matrix is one of the standards for judging the consistency of the SLAM process.
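As a sketch of Eq. (9), the indicator can be evaluated from the diagonal blocks of the joint covariance; the 2-D block sizes and the robot-first layout assumed below are illustrative:

```python
import numpy as np

def slam_accuracy_indicator(P, n_features, robot_dim=2, feat_dim=2):
    """Accuracy indicator of Eq. (9): pi*det(P_rr) + (1/n) sum_i pi*det(P_mimi).
    P is the full SLAM covariance; robot_dim/feat_dim block sizes are assumptions."""
    C = np.pi * np.linalg.det(P[:robot_dim, :robot_dim])
    for i in range(n_features):
        s = robot_dim + i * feat_dim
        Pi = P[s:s + feat_dim, s:s + feat_dim]   # i-th feature's diagonal block
        C += np.pi * np.linalg.det(Pi) / n_features
    return C
```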
III. FUSION OF VISION AND LASER DATA FOR ACTIVE SLAM
A. Fusing vision and laser range data
In the robot coordinate system R, the corner feature extracted from the laser data is described as follows:

    X_R^W = [x_R^W  y_R^W  theta_R^W]^T    (10)
In the polar coordinate system it is described as:

    c_i^R = [rho_i^R  theta_i^R  phi_i^R]^T    (11)
According to the vertical-line extraction methods [7],[8],[9], the direction of the vertical line feature vector can be written as:

    l_i^C = ((u_i - u_0) d_u / f   0   1)^T    (12)
The vertical line feature in the robot coordinate system R is expressed as follows:

    l_i^R = [x_i^R  y_i^R  z_i^R]^T = R^RC l_i^C    (13)

where R^RC is the rotation matrix from the camera coordinate system C to the robot coordinate system R.
As a result, the vertical line feature's azimuth in the robot coordinate system is:

    theta_i^R = arctan(y_i^R / x_i^R)    (14)
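Eqs. (12)-(14) can be chained into a short bearing computation; the camera intrinsics and the rotation R^RC used below are hypothetical values (arctan2 is used instead of arctan for quadrant safety):

```python
import numpy as np

def vertical_line_bearing(u_i, u0, f, d_u, R_rc):
    """Bearing of a vertical line in the robot frame, following Eqs. (12)-(14).
    u_i: pixel column of the line, u0: principal point, f: focal length,
    d_u: pixel size, R_rc: 3x3 rotation from camera frame C to robot frame R.
    All parameter values in the test below are illustrative assumptions."""
    l_c = np.array([(u_i - u0) * d_u / f, 0.0, 1.0])  # Eq. (12): direction in C
    l_r = R_rc @ l_c                                  # Eq. (13): rotate into R
    return np.arctan2(l_r[1], l_r[0])                 # Eq. (14): azimuth in R
```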
For the corner feature c_i^R extracted from the laser data and the vertical line theta_i^R extracted from monocular vision, we fuse them by EKF, where theta^R in the corner feature c_i^R extracted from the laser data and theta_i^R of the vertical line both correspond to the angle between the X-axis of the robot coordinate system R and the line from the robot to the corner feature in the world coordinate system. After sensor fusion, we can obtain a more accurate location of the corner feature than from laser or monocular vision alone. The Extended Kalman Filter used in the sensor fusion is expressed as follows:
    h(c_i^R) = theta_i^R,   H = dh/dc_i^R = [0 1 0]
    z = theta_i^R (the bearing observed from vision)
    K = Q_{c_i^R} H^T (H Q_{c_i^R} H^T + sigma_{theta_i^R}^2)^(-1)
    c_i^R' = c_i^R + K (z - h(c_i^R))
    Q_{c_i^R'} = (I - K H) Q_{c_i^R}    (15)
In which, sigma_{theta_i^R}^2 is the covariance of theta_i^R and Q_{c_i^R} is the covariance of c_i^R. c_i^R' is the estimate of the corner feature after sensor fusion, and its covariance matrix is Q_{c_i^R'}. For data association and map building, we also need to maintain the lengths of the two sides and the convexity (bump) type of the corner feature, as well as the image block where the vertical line lies in the image.
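A minimal sketch of the fusion step of Eq. (15), assuming the corner state is a 3-vector with the bearing as its second component (hence H = [0 1 0]):

```python
import numpy as np

def fuse_corner_with_bearing(c, Q_c, theta_vis, var_theta):
    """Refine the laser corner estimate c with the vision bearing theta_vis,
    per Eq. (15). Q_c: covariance of c; var_theta: variance of the bearing."""
    H = np.array([[0.0, 1.0, 0.0]])             # observation picks the bearing
    S = H @ Q_c @ H.T + var_theta               # innovation covariance (1x1)
    K = Q_c @ H.T @ np.linalg.inv(S)            # Kalman gain
    c_new = c + K @ (np.array([theta_vis]) - H @ c)
    Q_new = (np.eye(3) - K @ H) @ Q_c
    return c_new, Q_new
```

The bearing variance after fusion is necessarily no larger than before, which is what Fig. 1's shrinking uncertainty ellipse illustrates.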
In Fig. 1, the area of the ellipse represents the feature's uncertainty; the smaller the area, the more accurate the estimate. The upper corner feature is the one that used sensor fusion; we can see that its ellipse area is smaller than the others. The lower one, whose ellipse area is larger, is the one that did not use sensor fusion (because the monocular vision's field of view is smaller than the laser's 180° scan, not every corner feature has a matching vertical line).
For the corner feature bearing theta_i^R from the laser, we build the candidate set B = {theta_j^R} from the vertical lines.
    d_j^2 = (theta_j^R - theta_i^R)^T (sigma_{theta_i^R}^2)^(-1) (theta_j^R - theta_i^R)    (16)
We select the theta_k^R that has the smallest Mahalanobis distance to theta_i^R and meets the 95% degree of confidence in a chi-square(1) distribution hypothesis test:

    d_k^2 <= chi^2_{0.95}(1)    (17)

If the vertical line theta_k^R from vision and the corner feature theta_i^R from laser match, they correspond to the same feature, as in Fig 3.
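For a scalar bearing, the gating of Eqs. (16)-(17) reduces to a squared-difference test against the 95% chi-square quantile with one degree of freedom; a sketch:

```python
import numpy as np

CHI2_1DOF_95 = 3.841  # 95% quantile of the chi-square distribution, 1 dof

def match_bearing(theta_laser, var_theta, candidates):
    """Mahalanobis gating of Eq. (16): pick the vision bearing closest to the
    laser corner bearing; accept only if it lies inside the 95% gate.
    Returns (index, bearing) on success, None if no candidate passes."""
    d2 = [(t - theta_laser) ** 2 / var_theta for t in candidates]
    k = int(np.argmin(d2))
    return (k, candidates[k]) if d2[k] <= CHI2_1DOF_95 else None
```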
Fig. 1 Feature using and not-using sensor fusion.
Fig. 2 Corner feature after sensor fusion.
Fig. 3 Feature matching
To associate a feature obtained by sensor fusion with an existing one in the map, we use a method that mixes observation matching of points and line segments. After feature association, we compute the NSSD [10] of the two image blocks by an image-based correlation matching method. If the NSSD is smaller than the given threshold, it is a wrong match; otherwise the matching succeeds and the system is updated.
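One common way to compute the NSSD of two image blocks is shown below; the zero-mean, unit-variance normalization is an assumption here, as the paper's exact convention is not specified:

```python
import numpy as np

def nssd(block_a, block_b):
    """Normalized sum of squared differences between two equally sized image
    blocks. Each block is normalized to zero mean and unit variance first,
    which makes the score invariant to affine brightness changes."""
    a = (block_a - block_a.mean()) / (block_a.std() + 1e-12)
    b = (block_b - block_b.mean()) / (block_b.std() + 1e-12)
    return float(((a - b) ** 2).sum() / a.size)
```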
C. Active SLAM strategy
This approach, based on an optimal control strategy [11],[12], selects the control input vector that yields the smallest system uncertainty. Meanwhile, once the system states are highly accurate, we can carry out navigation and exploration tasks. X_R^W(k|k) is the robot's location at time k and X_Fi^W(k|k) is the i-th feature's position in the corner feature's coordinate system.
[Figure: camera image, local map with extracted lines, and global map at step 1; map axes x, y in meters]
    X(k|k) = [ X_R^W(k|k)
               X_F1^W(k|k)
               X_F2^W(k|k)
               ...
               X_Fn^W(k|k) ]    (18)
The EKF-SLAM system assumes that the system states obey a Gaussian distribution; its covariance matrix can be written as:

    P(k|k) = [ P_RR    P_RF1   P_RF2   ...  P_RFn
               P_F1R   P_F1F1  P_F1F2  ...  P_F1Fn
               P_F2R   P_F2F1  P_F2F2  ...  P_F2Fn
               ...     ...     ...     ...  ...
               P_FnR   P_FnF1  P_FnF2  ...  P_FnFn ]    (19)
The accuracy evaluation indicator of SLAM [5] is calculated from the current system covariance P(k|k):

    C(k|k) = f(P(k|k)) = pi * det(P_RR) + (1/n) sum_{i=1..n} pi * det(P_FiFi)    (20)
If C(k|k) < c (c being a given threshold), the current system can be considered relatively accurate and path-planning tasks can be executed. Otherwise, the uncertainty of the current system is large and the optimal control strategy should be used to choose the control input vector that yields the smallest system uncertainty. The optimal control strategy follows these steps:
1) For a control input vector u_i = (v, omega), where v is the linear speed and omega the angular velocity, use the two-wheel differential-drive model to predict the robot's position after a time interval Delta_t.
2) Compute the predicted observation and the Jacobian matrix at time k+1 from the predicted position X_R^W(k+1|k) and the observation model:

    Z(k+1|k) = h(X(k+1|k))    (21)

    grad_h_X = dh/dX evaluated at X(k+1|k)    (22)
3) Compute the predicted covariance by the update stage of the EKF:

    P(k+1|k+1) = (I - K(k+1) grad_h_X) P(k+1|k)    (23)

    S(k+1) = grad_h_X P(k+1|k) grad_h_X^T + R(k+1)    (24)

    K(k+1) = P(k+1|k) grad_h_X^T S(k+1)^(-1)    (25)
4) Compute the accuracy evaluation indicator of SLAM, C_i(k+1|k+1), from the predicted covariance P(k+1|k+1) after implementation of the control input u_i = (v, omega).
5) Select the optimal control input vector:

    u_c = arg min_i C_i(k+1|k+1)    (26)
6) Apply the selected input to drive the robot's active movement, reducing the uncertainty of the system and improving the accuracy of SLAM.
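Steps 1)-5) amount to a one-step lookahead over candidate inputs; in this sketch, predict_cov stands in for the EKF prediction of Eqs. (23)-(25) and indicator for Eq. (20), both caller-supplied stand-ins rather than the paper's implementation:

```python
import numpy as np

def choose_control(candidates, predict_cov, indicator):
    """Optimal control selection of Eq. (26): for each candidate input
    (v, omega), predict the posterior covariance after applying it, score it
    with the accuracy indicator, and return the input with the smallest score."""
    costs = [indicator(predict_cov(u)) for u in candidates]
    return candidates[int(np.argmin(costs))]
```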
IV. EXPERIMENTAL RESULTS
We use a Pioneer 3-DX mobile robot (with odometer, laser and a Canon VC-C50i camera) in an indoor environment (the first floor of the Boling building at Nankai University), experimenting with odometer, laser and vision data.
To compare sensor fusion with laser-only SLAM, we ran both methods on the same data with the same parameters; the resulting uncertainty curves, shown in Figure 5.4, show that sensor fusion outperforms laser-only SLAM in accuracy.
In Fig. 4, the upper-left sub-picture shows the current camera image, giving the step-56 feature matching between the corner feature from the laser and the vertical line of the current image; the upper-right sub-picture shows the current local map, in which the blue rectangular coordinate systems represent corner features from the laser and the red rays express the directions of the vertical line features; the lower-left sub-picture shows the robot trajectory and the global map; and the lower-right sub-graph gives the SLAM accuracy evaluation indicator curve.
The experimental results show that the laser and monocular vision sensor fusion method can improve the accuracy of SLAM. The SLAM accuracy evaluation indicator curve, shown in the lower-right sub-graph of Fig. 5, shows that with sensor fusion and active SLAM the evaluation indicator is less than 0.001 most of the time, indicating that the active SLAM method based on sensor fusion is able to obtain an accurate map-building result.
[Figure: camera image; local map with extracted lines; global map at step 56; accuracy analysis curve (indicator 0-0.01 over 60 steps)]
Fig. 4 Step 56, current camera image
[Figure: camera image; local map with extracted lines; global map at step 195; accuracy analysis curve (indicator 0-0.01 over 200 steps)]
Fig. 5 Accuracy of SLAM
V. CONCLUSION
Our experiments used sensor fusion of laser range data and monocular vision in a three-dimensional structured environment during active SLAM. Through the integration of laser and monocular vision data in the three-dimensional structured environment, we obtain a more accurate location of the corner feature, and the use of active SLAM based on an optimal control strategy improves the accuracy of the SLAM process. The experiments prove that the proposed active SLAM method with laser and visual sensor fusion is effective, can be applied to mobile robot SLAM in structured environments, and is able to obtain a relatively accurate position and map.
REFERENCES
[1] Williams S.B., et al. Autonomous underwater simultaneous localisation
and map building. in IEEE International Conference on Robotics and
Automation. San Francisco, CA, USA, 2000.
[2] Thrun S, Hähnel D, Ferguson D, et al. A system for volumetric robotic
mapping of abandoned mines. In Proceedings of IEEE International
Conference on Robotics and Automation, Taipei, Taiwan, 2003,
4270~4275.
[3] Mayol W.W., Davison A. J., Tordoff B. J., et al.. Applying Active Vision
and SLAM to Wearables. In: Proc International Symposium on Robotics
Research, Siena Italy, October 19-21, 2003.
[4] Kim J., Sukkarieh S. Autonomous Airborne Navigation in Unknown
Terrain Environments. In IEEE Transactions on Aerospace and Electronic
Systems, 2004, 40(3):1031~1045
[5] Feder H, Leonard J and Smith C. Adaptive mobile robot navigation and
mapping. International Journal of robotics research, 1999, 18(7):
650~668.
[6] Arras K O. Feature-based Robot Navigation in Known and Unknown
Environments [D]. Lausanne, Switzerland: EPFL, 2003.
[7] P. Smith. I. Reid. A.J. Davison, Real-Time Monocular SLAM with
Straight Lines, in Proc. British Machine Vision Association, September
2006, Vol.2: 17~26.
[8] T. Lemaire, S. Lacroix, Monocular-vision based SLAM using line
segments, in Proc. IEEE International Conference on Robotics and
Automation, April 2007, 2791~2796.
[9] P. Gee and W. Mayol, Real-Time Model-Based SLAM Using Line
Segments, in 2nd International Symposium on Visual Computing,
November 2006, Vol.2: 354-363.
[10] Crowley J, Martin J. Comparison of Correlation Techniques. International
Conference on Intelligent Autonomous Systems, Karlsruhe, 1995.
[11] Bourgault F, Makarenko A. and Williams S B. Information based adaptive
robotic exploration. In Proceedings of IEEE/RSJ International Conference
on Intelligent Robots and Systems, Lausanne, Switzerland, 2002.
540~545.
[12] Huang S, Wang Z and Dissanayake G. Time optimal robot motion control
in simultaneous localization and map building (SLAM) problem. In
Proceedings of IEEE/RSJ International Conference on Intelligent Robots
and Systems, Sendai, Japan, 2004. 3110~3115.
[13] Chao Li, SLAM with Laser and Monocular Vision in Structured
Environment [D]. Tianjin: Nankai University College of Software, 2009.
[14] Simultaneous localization and mapping [EB/OL].
http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping.