(IJECET)
Volume 6, Issue 10, Oct 2015, pp. 20-29, Article ID: IJECET_06_10_003
Available online at
http://www.iaeme.com/IJECETissues.asp?JType=IJECET&VType=6&IType=10
ISSN Print: 0976-6464 and ISSN Online: 0976-6472
IAEME Publication
1. INTRODUCTION
Many techniques are used to navigate mobile robots in industry. For navigation, a mobile robot must first estimate its own position. Many methods are being applied to make vehicles smarter, both in the quest for full autonomy and for driver-assistance services such as adaptive cruise control, lane departure warning and lane change assistance. All of these applications benefit greatly from accurate localization. For example, for an autonomous robot to follow a road, it needs to know where it is; localization is a prerequisite for navigation. To stay in a specific lane, the vehicle must know its current position to centimetre-level accuracy. GPS alone is not sufficient to meet such precise localization requirements, so many other techniques are used alongside GPS. The various methods used for localization are discussed in the following sections of this paper.
2. METHODS OF LOCALIZATION
2.1. GPS
There are three segments which make up the GPS system. The space segment consists of 24 satellites (or space vehicles, SVs) orbiting the planet at a height of approximately 20,200 km, arranged so that generally at least four SVs are visible from any point on the surface of the earth at any time. This allows the instantaneous user position to be determined by measuring the time delay in a radio signal broadcast from each satellite and using this delay and the speed of propagation to calculate the distance to the satellite (the pseudo-range). As a rule of thumb, one satellite must be received for each dimension of the user's position that is to be calculated. This suggests three satellites are necessary for a position fix (the x, y and z dimensions of the receiver's position); however, the user rarely knows the exact time at which they are receiving, hence four satellite pseudo-ranges are required to solve for these four unknowns. The satellite data is monitored and controlled by the GPS ground segment: stations positioned globally to ensure the correct operation of the system. The user segment is the GPS user and the GPS reception equipment, which have advanced considerably in recent years to allow faster and more accurate processing of received data. Receivers typically contain pre-amplification, an analogue-to-digital converter and DSP processors.
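The four-unknown solution (x, y, z and receiver clock bias) can be sketched as an iterative least-squares fit. In this sketch the satellite positions, receiver position and clock bias are illustrative values, not real ephemeris data:

```python
import numpy as np

# Hypothetical satellite positions (m) and true receiver state, for illustration.
sats = np.array([
    [15600e3, 7540e3, 20140e3],
    [18760e3, 2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3, 610e3, 18390e3],
])
true_pos = np.array([1111e3, 2222e3, 3333e3])
true_clock_bias = 3000.0  # receiver clock error expressed in metres

# Simulated pseudoranges: geometric range plus the common clock-bias term.
rho = np.linalg.norm(sats - true_pos, axis=1) + true_clock_bias

# Gauss-Newton iteration for the four unknowns (x, y, z, bias).
est = np.zeros(4)  # start at the Earth's centre with zero bias
for _ in range(10):
    diff = sats - est[:3]
    ranges = np.linalg.norm(diff, axis=1)
    residual = rho - (ranges + est[3])
    # Jacobian: negated unit line-of-sight vectors, and 1 for the bias term.
    J = np.hstack([-diff / ranges[:, None], np.ones((len(sats), 1))])
    est += np.linalg.lstsq(J, residual, rcond=None)[0]

print(est[:3], est[3])  # converges to the true position and clock bias
```

With noise-free pseudoranges the iteration converges to the exact receiver position; real receivers additionally weight the residuals by signal quality.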
2.2. Vision-Based Localization
The main steps in vision-based localization are:
Acquire sensory information: for vision-based navigation, this means acquiring and digitizing camera images.
Detect landmarks: usually this means extracting edges, smoothing, filtering, and segmenting regions on the basis of differences in grey level, colour, depth or motion.
Match landmarks: in this step, the system tries to identify the observed landmarks by searching the database for possible matches according to some measurement criterion.
Calculate position: once a match (or a set of matches) is obtained, the system calculates its position as a function of the observed landmarks and their positions in the database.
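These four steps can be sketched as a toy pipeline. All landmark labels, offsets and database entries below are invented for illustration; a real system would extract and match image features rather than use pre-labelled detections:

```python
import numpy as np

# Toy landmark database: label -> known world position (assumed values).
DATABASE = {
    "door": np.array([2.0, 0.0]),
    "window": np.array([0.0, 3.0]),
}

def acquire_image():
    # Stand-in for grabbing and digitizing a camera frame; returns
    # landmark observations as relative offsets from the robot.
    return {"door": np.array([1.0, 0.5]), "window": np.array([-1.0, 2.5])}

def detect_landmarks(image):
    # Real systems segment edges/regions; here detections arrive pre-labelled.
    return image

def match_landmarks(detections):
    # Keep only detections whose label has a match in the database.
    return {k: v for k, v in detections.items() if k in DATABASE}

def calculate_position(matches):
    # Each match gives robot position = landmark position - relative offset;
    # average the estimates from all matched landmarks.
    estimates = [DATABASE[k] - offset for k, offset in matches.items()]
    return np.mean(estimates, axis=0)

pose = calculate_position(match_landmarks(detect_landmarks(acquire_image())))
print(pose)  # estimated robot position
```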
robot can be determined by triangulation. The method was tested by placing the camera setup at a number of different random positions in an 11.0 m x 8.5 m room. The average localization error was 0.45 m, and no mismatch of features between the reference and incoming images was found.
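Bearing-based triangulation of this kind can be sketched as intersecting the bearing lines through two known landmarks. The landmark positions, robot position and bearings below are illustrative values:

```python
import numpy as np

def triangulate(landmarks, bearings):
    # The robot lies on the line through each landmark (lx, ly) with
    # direction (cos(theta), sin(theta)):  (px - lx)*dy - (py - ly)*dx = 0.
    # Two non-parallel lines give a unique intersection.
    A, b = [], []
    for (lx, ly), theta in zip(landmarks, bearings):
        dx, dy = np.cos(theta), np.sin(theta)
        A.append([dy, -dx])
        b.append(lx * dy - ly * dx)
    return np.linalg.solve(np.array(A), np.array(b))

# Illustrative setup: a robot at (3, 2) measures bearings to two landmarks.
robot = np.array([3.0, 2.0])
landmarks = [(0.0, 0.0), (8.0, 0.0)]
bearings = [np.arctan2(ly - robot[1], lx - robot[0]) for lx, ly in landmarks]
print(triangulate(landmarks, bearings))  # recovers the robot position
```

The solve degenerates when the robot and both landmarks are collinear, which is why practical systems use three or more landmarks.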
In [9], the authors make use of an omnidirectional camera for map building and localization of a robot. The image sequences of the omnidirectional camera are transformed into virtual top-view images and merged into a global dynamic map. After learning the environment from training images, the current image is compared to the training set by appearance-based matching. Appropriate classification strategies yield an estimate of the robot's current position.
2.3 2-D Map-Based Localization
In many urban navigation applications, high-accuracy localization of moving vehicles is achieved using maps of urban environments. One such technique has been proposed by Levinson et al. [1]. This approach integrates GPS, IMU, wheel odometry and LIDAR data acquired by an instrumented vehicle to generate high-resolution environment maps. The idea of their work is to augment inertial navigation by learning a detailed map of the environment, and then to use the vehicle's LIDAR sensor to localize itself relative to this map. The maps are 2-D overhead views of the road surface, taken in the infrared spectrum. Such maps capture a multitude of textures in the environment that may be useful for localization, such as lane markings, tire marks, pavement and vegetation near the road (e.g. grass). The maps are acquired by a vehicle equipped with a state-of-the-art inertial navigation system (with GPS) and multiple SICK laser range finders.
2.3.1 Road Mapping
The vehicle transitions through a sequence of poses. In urban mapping, poses are five-dimensional vectors comprising the x, y coordinates of the vehicle along with its heading (yaw), roll and pitch angles (the elevation z is irrelevant for this problem). Let x_t denote the pose at time t. Poses are linked together through relative odometry data acquired from the vehicle's inertial guidance system. These constraints can be thought of as edges in a sparse Markov graph. For any pose x_t and any (fixed and known) laser angle i relative to the vehicle coordinate frame, the expected infrared reflectivity h_i(m, x_t) can easily be calculated, giving the measurement model z_t^i = h_i(m, x_t) + e_t, where e_t is a Gaussian noise variable with mean zero and noise covariance Q_t. In log-likelihood form, this provides a new set of constraints of the form (z_t^i - h_i(m, x_t))^T Q_t^-1 (z_t^i - h_i(m, x_t)). The unknowns in this function are the poses x_t and the map m.
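The role of these Gaussian constraints in localization can be sketched in one dimension: candidate poses are scored by the summed log-likelihood of the observed reflectivity against the map, and the best-scoring pose wins. The map, offset and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D reflectivity map m, and a 20-beam measurement taken at offset 7.
m = rng.random(100)
true_offset = 7
z = m[true_offset:true_offset + 20] + rng.normal(0.0, 0.05, 20)  # h(m,x) + noise

Q = 0.05 ** 2  # measurement noise variance
def log_likelihood(offset):
    # Sum of per-beam Gaussian constraints: -(z - h)^2 / (2 Q)
    h = m[offset:offset + 20]
    return -np.sum((z - h) ** 2) / (2 * Q)

best = max(range(80), key=log_likelihood)
print(best)  # the offset with the highest likelihood (here the true offset, 7)
```

The full method scores 2-D poses against the infrared map in the same way, with the pose graph tying successive estimates together.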
2.3.2 Feature matching using a global feature map
In [7], a method using widely available aerial imagery to support a vehicle's environmental perception and path planning has been described. The authors introduced an iterative method for matching aerial imagery to vehicle camera images and validated the algorithm on real data. The work shows that the idea of using aerial imagery to support environmental perception is promising: with its help, the position of lane markings can be determined even at long distances or outside the field of view of the vehicle.
2.4 3-D Model Based Localization
The location estimation of a vehicle with respect to a 3D world model finds
applications which include automatic navigation, automatic integration of new
For each SIFT keypoint descriptor from the camera image I_t, find the closest k model keypoints, under the restriction that no two model keypoints are taken from the same scan.
For each of these k matches, find the model keypoint from the same scan that is next closest to the test image keypoint descriptor and compute the ratio of descriptor distances.
Rank-order all matches for all image keypoints by increasing value of this ratio and retain the top 30.
2. For each keypoint match in rank order:
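The descriptor-distance ratio test described above can be sketched as follows. This is a generic nearest/second-nearest ratio filter over descriptor vectors; the per-scan restriction and the value of k from the steps are omitted, and the threshold is an assumed value:

```python
import numpy as np

def ratio_test_matches(image_desc, model_desc, keep=30, max_ratio=0.8):
    """Keep matches whose nearest model descriptor is much closer than the
    second nearest, rank-ordered by that distance ratio."""
    matches = []
    for i, d in enumerate(image_desc):
        dists = np.linalg.norm(model_desc - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]  # two smallest distances
        ratio = nearest / second
        if ratio < max_ratio:
            matches.append((ratio, i, int(np.argmin(dists))))
    matches.sort()            # rank-order by increasing ratio
    return matches[:keep]     # retain the top matches

# Illustrative 2-D descriptors: the first image keypoint has a clear match,
# the second is equidistant from two model keypoints and is rejected.
image_desc = np.array([[0.1, 0.0], [5.0, 5.0]])
model_desc = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]])
print(ratio_test_matches(image_desc, model_desc))  # only the unambiguous match survives
```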
autocorrelated images. They conducted experiments with real images and examined the performance of their method.
2.5.1 Generating Autocorrelation Images
An M x N omnidirectional panoramic image is created by mapping the Cartesian coordinates of each pixel of an omnidirectional image to the corresponding polar coordinates, so that rotation of the omnidirectional image around the optical axis of the camera corresponds to a horizontal shift of the panoramic image.
The autocorrelation image is represented as the concatenation of the autocorrelation vectors r_n = (r_n1, . . . , r_nM), which are calculated for each horizontal line. In practice, because of the symmetry of autocorrelation, only half of the image is used. The autocorrelation image generated in this way is invariant to rotation of the sensor and unique to the position of the sensor.
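The rotation invariance can be demonstrated with a short sketch: a per-row circular autocorrelation computed via the FFT, applied to a random test image and to a horizontally shifted (i.e. rotated) copy:

```python
import numpy as np

def autocorrelation_image(panorama):
    """Per-row circular autocorrelation of an M x N panoramic image.
    A cyclic horizontal shift of the panorama - a rotation of the
    omnidirectional sensor about its optical axis - leaves it unchanged."""
    M, N = panorama.shape
    out = np.empty((M, N))
    for n, row in enumerate(panorama):
        # r[m] = sum_k row[k] * row[(k+m) mod N], via the correlation theorem
        spectrum = np.fft.fft(row)
        out[n] = np.fft.ifft(spectrum * np.conj(spectrum)).real
    # Autocorrelation is symmetric, so half the columns suffice in practice.
    return out[:, : N // 2 + 1]

rng = np.random.default_rng(1)
pano = rng.random((4, 32))
rotated = np.roll(pano, 5, axis=1)  # camera rotated about its optical axis
print(np.allclose(autocorrelation_image(pano), autocorrelation_image(rotated)))
# True: the representation is rotation-invariant
```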
2.5.2 Self-Localization with Eigenspaces
As mentioned above, omnidirectional images are transformed to autocorrelation
images.
Compared to the landmark-based approach, the appearance-based approach does not require geometrical object positions. However, these methods cannot estimate a vehicle's lateral position, since they assume that the trajectory of the self-positions is the same as the trajectory along which the database was constructed. In [4], the authors use local feature descriptors, with an experimental evaluation in a large, dynamic, populated environment where the time interval between the collected sets is up to two months. An overview of the proposed method is shown in the following diagram. The input is the current omni-image and the current odometry reading. The database consists of the poses (x, y, θ) of the database images together with the extracted features. The output is the current estimate of the robot position, based on the weight and distribution of the particles.
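The weight-and-resample idea can be sketched with a minimal one-dimensional particle filter. The scalar "appearance feature" and all numbers below are stand-ins for the omni-image descriptors and real poses, chosen only to make the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(2)

# Database of reference poses along a 1-D route, each with a scalar
# "appearance feature" (a stand-in for the image descriptors).
db_poses = np.linspace(0.0, 10.0, 21)
db_features = np.cos(db_poses / 4.0)

def observe(pose):
    # Feature the robot would extract from its omni-image at this pose.
    return np.cos(pose / 4.0)

n = 500
particles = rng.uniform(0.0, 10.0, n)  # initial pose hypotheses
true_pose = 4.0

for _ in range(5):
    true_pose += 0.5                             # robot drives forward
    particles += 0.5 + rng.normal(0.0, 0.05, n)  # prediction from odometry
    # Weight each particle by how well the database feature at its nearest
    # reference pose explains the current observation.
    z = observe(true_pose)
    nearest = np.argmin(np.abs(db_poses[None, :] - particles[:, None]), axis=1)
    weights = np.exp(-((z - db_features[nearest]) ** 2) / (2 * 0.1 ** 2))
    weights /= weights.sum()
    particles = rng.choice(particles, n, p=weights)  # resample

print(particles.mean())  # concentrates near the true pose (6.5)
```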
In [6], the authors addressed the issue of outdoor appearance-based topological localization for a mobile robot under different lighting conditions using omnidirectional vision. Their databases, each consisting of a large number of omnidirectional images, were acquired at different times of day in dynamic outdoor environments. Two different feature extractor algorithms, SIFT and the more recent SURF [19, 20], were used to compare the images, and two different approaches, WTA and MCL [21], were used to evaluate performance. Given the challenges of highly dynamic and large environments, the overall performance of the localization system is satisfactory.
3. SLAM
The simultaneous localization and mapping (SLAM) problem asks whether it is possible for a robotic vehicle to be placed in an unknown environment and to incrementally build a consistent map of this environment while simultaneously determining its location within that map. The solution of the SLAM problem has been one of the notable successes of the robotics community. A two-part tutorial provides a broad introduction to SLAM [15, 16].
The main steps in SLAM are:
1. Initialization: define the robot's initial position as the root of the world coordinate space, or start with some pre-existing features in the map and high uncertainty in the robot position.
2. Prediction: when the robot moves, the motion model provides a new estimate of its position along with the uncertainty of its location; positional uncertainty always increases.
3. Measurement: (a) add new features to the map; (b) re-measure previously added features.
4. Repeat steps 2 and 3 as appropriate.
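The prediction/measurement loop above can be sketched with a minimal one-dimensional Kalman-filter SLAM example. The state, noise values and motion are illustrative assumptions, not taken from the cited tutorial:

```python
import numpy as np

# State x = [robot position, landmark position] in a 1-D world.
x = np.array([0.0, 0.0])
P = np.diag([0.0, 1e6])   # robot anchored at the origin, landmark unknown (step 1)
Q, R = 0.01, 0.04         # motion / measurement noise variances (assumed)

true_robot, true_landmark = 0.0, 5.0
rng = np.random.default_rng(3)

for _ in range(20):
    # Step 2 (prediction): the robot moves +0.5 per step; its positional
    # uncertainty always increases.
    true_robot += 0.5
    x[0] += 0.5
    P[0, 0] += Q
    # Step 3 (measurement): relative range to the landmark, z = landmark - robot.
    z = true_landmark - true_robot + rng.normal(0.0, np.sqrt(R))
    H = np.array([[-1.0, 1.0]])
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (z - (x[1] - x[0]))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x)  # robot estimate near 10.0, landmark estimate near 5.0
```

Re-measuring the landmark each step keeps the joint covariance P consistent, which is the essence of step 3(b).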
In [18], a system for monocular simultaneous localization and mapping (MonoSLAM) relying solely on video input has been presented. The method makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated, while the previous camera positions are never modified. In particular, no simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment) is performed. The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint.
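The epipolar constraint that such systems exploit can be illustrated with a short sketch: for normalized image points x1, x2 of the same 3-D point in two views related by rotation R and translation t, the essential matrix E = [t]_x R satisfies x2^T E x1 = 0. The poses and the point below are arbitrary illustrative values:

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Illustrative relative pose between two views: X2 = R @ X1 + t.
t = np.array([1.0, 0.2, 0.0])
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
E = skew(t) @ R  # essential matrix

# A 3-D point seen in both cameras, in normalized image coordinates.
X1 = np.array([0.3, -0.2, 4.0])
X2 = R @ X1 + t
x1, x2 = X1 / X1[2], X2 / X2[2]
print(x2 @ E @ x1)  # ~0: the epipolar constraint holds
```

A candidate pose can therefore be scored by how close x2^T E x1 is to zero over many point correspondences, independently of the reconstructed 3-D map.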
4. SUMMARY
Much progress has been made in endowing vehicles with sensory information for autonomous navigation. Localization can be achieved under various modelling environments, and these models can be geometrical, topological, appearance-based, and so on. Different representations vary in the degree of metrical knowledge they contain and exhibit different degrees of robustness to illumination variations and other changes. This paper has surveyed these aspects of the progress made so far in vision-based localization of autonomous vehicles.
ACKNOWLEDGMENTS
The author is thankful to the Director, Sant Longowal Institute of Engineering and Technology, Longowal, for providing computational facilities for the research. The author is also grateful to all those who helped in proofreading this manuscript.
REFERENCES
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[25]