
EE492 Senior Project Final Report

Multi Robot Exploration Using Minik Robots

Mahmut Demir

Project Advisor: H. Işıl Bozma. Evaluation Committee Members: Ayşın Ertüzün, Yağmur Denizhan

Department of Electrical and Electronic Engineering

Boğaziçi University, Bebek, Istanbul 34342

08.04.2012

Contents
1 Introduction
2 Objectives
3 Related Literature
  3.1 Map Building
  3.2 Multi-Robot Cooperation
  3.3 Map Types and Performance
4 Approach and Methodology
5 Work done in EE491 Senior Design Project
  5.1 Work on Minik II System
    5.1.1 Comprehensive Testing & Control
    5.1.2 Re-wiring of Minik II Robots
    5.1.3 Design of Electronic Cards
    5.1.4 Adding Range Sensor & Analog/Digital Converter Module
    5.1.5 Minik Development and Control Software (MinikDCS)
    5.1.6 Mid Level Motion Control of Minik Robots
    5.1.7 Integrating Teleoperation into our MinikDCS Software
    5.1.8 Integrating CMU Cam3
  5.2 Robotic Mapping
    5.2.1 Feature Extraction
    5.2.2 Object matching
    5.2.3 Depth Calculation
    5.2.4 Map construction
  5.3 Experimental Results
    5.3.1 Connected Components
    5.3.2 Depth calculation
6 Work done in EE492 Senior Design Project
  6.1 Work on Minik II System
    6.1.1 Integrating Surveyor Cam
    6.1.2 Designing All-in-One Control Panel
  6.2 Robotic Mapping
    6.2.1 Connected Components in HSV Space
    6.2.2 Line detection for depth calculation
    6.2.3 SURF features and depth calculation from SURF feature points
  6.3 Experimental Results
    6.3.1 Connected Components
7 Economic, Environmental and Social Impacts
8 Cost Analysis
A Minik II Wiring Diagram


List of Figures
1  Minik II robots
2  An example of connected components
3  Ratio comparison for matching objects
4  Ratio of intersected areas in two images
5  Area of the objects gets bigger proportional to distance travelled into scene
6  Parameters used in calculation of distance from two consecutive images
7  Images that are used in connected component analysis
8  Images transformed into binary with a certain threshold
9  Contours of the objects
10 Connected components detected in images
11 Eliminated objects in the thresholding process
12 Connected objects that are above the threshold and touching each other
13 Distance calculation with travelled distance: 30 cm
14 Distance calculation with travelled distance: 30 cm
15 Distance calculation with travelled distance: 40 cm
16 Distance calculation with travelled distance: 30 cm
17 Distance calculation with travelled distance: 30 cm
18 Distance calculation with travelled distance: 30 cm
19 Comparison of calculated distances and groundtruth values
20 PCB design of control panel
21 Hue, Saturation and Value components of an image
22 Connected components extracted for different hue values (from left: 10, 20, 60, 110)
23 Set of test images to compare
24 Comparison of grayscale and HSV space extraction (HSV space is below)
25 Hough transform applied to two consecutive images
26 SURF features found in given image
27 SURF features are matched in two consecutive images
28 Diagram showing the calculation of depth from two consecutive images
29 Depth calculation for case 1
30 Depth calculation for case 2
31 Depth calculation for case 2
32 Depth calculation for case 3
33 Depth calculation for case 3
34 Depth calculation for case 3
35 Process of finding connected components

List of Tables
1 List of robots and hardware embodied in robots
2 Class structure of MinikDCS software
3 Midlevel Motion Commands
4 The components of Minik II robot, physical properties and cost of each one

Acknowledgements

I would like to thank my advisor Prof. Dr. Işıl Bozma for her encouragement, understanding and guidance throughout my project. I also thank my evaluation committee members Prof. Yağmur Denizhan and Prof. Dr. Ayşın Ertüzün. I am also grateful to Özgür Erkent, Haluk Bayram, Hakan Karaoğuz and Ahmet Unutulmaz, who always helped us with their experience and wisdom.

1 Introduction

Autonomous exploration of unknown areas is an important robotics task that is still being studied extensively. Many different approaches have been offered and numerous algorithms have been written in order to perform the exploration of an environment effectively. In recent years, the use of multi-robot systems has been advocated for this task, since with a team of robots the overall performance can be much faster and more robust [1]. Each robot explores only part of the environment, and a comprehensive map of the area can be created by the robots communicating with each other and exchanging the maps they have created. The goal of this project is to start investigating the use of multiple robots in map building using the Minik II robots shown in Fig. 1.

Figure 1: Minik II robots.

2 Objectives

This project addresses the problem of exploration and mapping of an unknown area by multiple robots. As our robots are very small, we have limitations on the number of sensors and positioning devices. Therefore, we use the image sequences taken by the onboard camera, and motor encoder information is used to calculate the distance travelled and to position the robots. In the first phase of our project, the EE491 Senior Design Project, we worked on building a map of the environment with a single robot; each robot aims to build a hybrid map of its environment. In this second term, we are going to improve our single-robot map building algorithm and then move on to the multi-robot cooperation task for efficient exploration. Cooperation between robots will be through wireless communication. Each robot will explore part of the unknown environment and then share it with the other robots to create a bigger perspective of the environment.

3 Related Literature

3.1 Map Building

There is extensive work on mobile robot exploration and mapping. There are a number of ways of performing efficient mapping and exploration of the environment, depending on what kind of sensors are used. Such sensors may be one-dimensional (single beam) or 2D (sweeping) laser rangefinders, 3D flash LIDAR, 2D or 3D sonar sensors, and one or more 2D cameras. Recently,

there has been intense research into visual mapping and localization using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices [2]. Using visual cues to map the environment is a challenging task, but it has many advantages. First of all, it provides a real image of the environment. Secondly, the use of cameras has been increasing due to their decreasing price and wide availability. On the other hand, cameras do not provide depth information directly, they are easily affected by changing lighting conditions, and image processing algorithms may be computationally costly.

The first step in the map-making process is recognizing or extracting the features of the objects in the environment. Candidate objects should include features that enable them to be matched in subsequent frames. If they have such properties, then they can be matched and a more comprehensive image of the environment can be constructed. Various methods are used in feature extraction; some of them include edge detection, corner detection, and SIFT or SURF features [3]. SIFT features are local, based on the appearance of the object at particular interest points, and invariant to image scale and rotation. They are also robust to changes in illumination, noise, and minor changes in viewpoint. In addition, they are highly distinctive, relatively easy to extract, allow for correct object identification with low probability of mismatch, and are easy to match against a (large) database of local features. Object description by a set of SIFT features is also robust to partial occlusion; as few as three SIFT features from an object are enough to compute its location and pose. The disadvantage of the SIFT algorithm is that it is computationally costly. SURF (Speeded-Up Robust Features) is another robust image detector and descriptor, partly inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT and is claimed by its authors to be more robust against different image transformations than SIFT [4]. It can be used in computer vision tasks just as SIFT is used. Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image; it is frequently used in motion detection, image registration, video tracking, image mosaicing, panorama stitching, 3D modelling and object recognition, and it overlaps with the topic of interest point detection. Edge detection is also a fundamental tool in image processing and computer vision, particularly in feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.

Candidate objects have features that distinguish them from other objects. In two subsequent images, these objects need to be matched. A reliable matching algorithm requires incorporating more than one feature match per object, because features tend to change as the image changes. From this perspective, SIFT or SURF features are very reliable and invariant to image transforms. A real-time algorithm that can recover the 3D trajectory of a monocular camera and map the environment simultaneously is presented in [3]. Their work is important because it is the first successful application of the SLAM methodology from mobile robotics to the pure vision domain of a single uncontrolled camera.
The key concept of their approach is a probabilistic feature-based map, representing at any instant a snapshot of the current estimates of the state of the camera and all features of interest and, crucially, also the uncertainty in these estimates. They use SIFT features to match the images. In [5], a simple method to track objects is presented. Here, the objects are the connected components, or blobs, in the image data. Blobs are matched against hypothesised objects in a five-dimensional space, parametrizing position and shape.

3.2 Multi-Robot Cooperation

After completing mapping with a single robot, we are going to employ multiple robots in this task. It is argued that using multiple robots instead of a single one would increase accuracy and offer many advantages, as in Robust Exploration [6], where inaccuracies that accumulate over time from dead-reckoning errors are reduced simultaneously.

3.3 Map Types and Performance

Another taxonomy of map making is based on the type of map constructed, namely whether it is a metric, topological, or hybrid map. The approaches used in building each type also vary depending on the reasoning used in constructing the associated maps. One traditional approach using metric maps is based on probabilistic methods such as Extended Kalman filtering, as in [7]. Topological maps, on the other hand, use graph-based approaches. In order to estimate overall performance, several scoring techniques exist; scoring the map quality in terms of metric accuracy, or scoring the skeleton accuracy rather than the metric accuracy, are frequently used.

4 Approach and Methodology

The project consists of two parts:
- Map building with a single Minik II robot
- Multi-robot exploration and map building
In the EE491 Senior Design Project we focused on the first issue, whereas in EE492 we focus simultaneously on the second issue and on improving the algorithms for single-robot map building. Work done in the EE491 Senior Design Project is explained in Section 5. Work done so far in the EE492 project is explained in Section 6.

5 Work done in EE491 Senior Design Project

5.1 Work on Minik II System

Map building with a single robot required the following:
- Redoing all the wiring in a Minik II robot for increased robustness
- Comprehensive testing and debugging of the robot and its navigation capabilities

5.1.1 Comprehensive Testing & Control

As stated earlier, my project is a continuation of previous work done on the Minik II robots. Therefore, examining and understanding the previous work was the first step. Testing of the robots was done both on hardware and on software. The current states of all the robots are as shown in Table 1.

Hardware                | Robot 1              | Robot 2              | Robot 3              | Robot 4              | Robot 5
Camera                  | SRV                  | CMU Cam              | CMU Cam              | SRV                  | Stereo SRV
Motor Board             | yes                  | yes                  | yes                  | yes                  | yes
Computer                | EPIA P700 / Pico PC  | EPIA P700 / Pico PC  | EPIA P700 / Pico PC  | EPIA P700 / Pico PC  | EPIA P700 / Pico PC
Distance sensor         | Sharp IR Ranger      | Sharp IR Ranger      | Sharp IR Ranger      | Sharp IR Ranger      | Sharp IR Ranger
Voltage regulator       | yes                  | yes                  | yes                  | yes                  | yes
Wireless Adapter        | yes                  | yes                  | yes                  | yes                  | yes
Battery/Voltage Display | yes                  | yes                  | yes                  | yes                  | yes

Table 1: List of robots and hardware embodied in robots

5.1.2 Re-wiring of Minik II Robots

The complex wiring configuration inside the robots makes working on them difficult and causes unstable operation. Moreover, in order to add new modules and sensors we needed clearer wiring inside the robot. First, the wiring diagram of the robot was generated, as given in Appendix A. The wiring of the electrical system was then completely redone in order to allow easier dismantling of the electronic components when required. We are now redesigning the voltage regulator circuit and the NOR-gate package to further increase the space inside the robot. We have also provided connection drawings for the new design of the robots, and we have slightly altered the placement of the parts of the robot as we need more space to place new sensors.

5.1.3 Design of Electronic Cards

I am also supervising an undergraduate student in the design of new cards for Minik II. The new card is required for routing the RS232 serial port to internal/external sockets and also for reading encoder data from the motors. Moreover, we are going to redesign the voltage regulator circuit to decrease its dimensions.

5.1.4 Adding Range Sensor & Analog/Digital Converter Module

Robots interact with the environment by using sensors embodied on their body, and they usually have more than one sensor as the properties of the environment vary (e.g. color, distance). As our project is mainly focused on extracting a map of the environment, we basically need both range information and visual clues from the environment. The purpose of

the previous work was distance measurement by using only a camera. Although we are going to utilize the previous algorithm to some extent, we also plan to use range information, which will be obtained by an infrared range sensor embodied on the robots. Reading measurements from the range sensors by using the analog-to-digital converter module of the motor cards has been completed.

5.1.5 Minik Development and Control Software (MinikDCS)

We created software which includes the modules related to the Minik robots. These modules include the following:
- Motor control
- Teleoperation of robots
- Camera and image processing module
Using this software we can control these three modules at the same time. Each runs in a separate thread and does not block the other modules. The code is written in C++ and a class structure is used extensively, which makes the program more modular and its methods easier to reuse. Below, we give brief information about the classes and methods in our program.
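As an illustration only (not the actual MinikDCS source), the following C++ sketch shows how three such module loops can run concurrently without blocking each other; the class names follow Table 2, but the bodies and the use of std::thread are assumptions made for this example.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical stand-ins for the MinikDCS module classes listed in Table 2.
struct MotionControl { void spin() { /* poll encoders, send motor commands */ } };
struct TeleControl   { void spin() { /* accept clients, dispatch motion commands */ } };
struct CameraVision  { void spin() { /* grab frames, run image processing */ } };

int main() {
    std::atomic<bool> running{true};
    MotionControl motion;
    TeleControl   tele;
    CameraVision  vision;

    // Each module runs in its own thread so that no module blocks the others.
    std::thread tMotion([&]{ while (running) { motion.spin(); std::this_thread::sleep_for(std::chrono::milliseconds(10)); } });
    std::thread tTele  ([&]{ while (running) { tele.spin();   std::this_thread::sleep_for(std::chrono::milliseconds(10)); } });
    std::thread tVision([&]{ while (running) { vision.spin(); std::this_thread::sleep_for(std::chrono::milliseconds(10)); } });

    std::this_thread::sleep_for(std::chrono::seconds(5));  // run the demo for a while
    running = false;
    tMotion.join(); tTele.join(); tVision.join();
    return 0;
}
```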

5.1.6 Mid Level Motion Control of Minik Robots

It is possible to control the motion of the Minik robots using mid-level motion commands. These commands make it easier to control the robot and eliminate the need to deal with many parameters each time. A brief overview of the motion commands is provided in Table 3.
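A hypothetical usage sketch of this mid-level interface is given below. It assumes that the commands in Table 3 are exposed as methods of the MotionControl class, that distances are in centimetres and angles in degrees, and that a MotionControl.h header exists; none of these details are confirmed by the report.

```cpp
#include "MotionControl.h"  // assumed header for the MotionControl class of MinikDCS

int main() {
    MotionControl motion;
    motion.setupConn();          // open the serial connection to the motor card
    motion.setDefaultRobot();    // use the default wheel radii and axis length
    motion.setMotorSpeed(20);    // speed used by the following commands

    motion.travel(30);           // drive 30 units (assumed cm) forward
    motion.rotateTo(90.0f);      // rotate to 90 degrees
    motion.goTo(100, 50, 0);     // go to (100, 50) and end up facing direction 0

    while (motion.isMoving()) {  // poll until the robot reports that it has stopped
    }
    motion.stop();
    return 0;
}
```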

5.1.7 Integrating Teleoperation into our MinikDCS Software

We currently have the ability to control the robots remotely. This was previously done by connecting to the Minik II operating system remotely and controlling it from another computer. This process, however, is costly and burdens the network between the robots and the remote computer; moreover, it causes lag between the computers and even loss of connection. In order to make control of the robots more efficient and faster, I integrated the teleoperation module which I wrote earlier into the Minik II robots. Currently, we can control the robots remotely.

5.1.8 Integrating CMU Cam3

Different from the previous version, we no longer need to run the CMUCam grabber to grab images from the CMU Cam. We wrote code which sets up a serial connection to the CMU Cam and grabs a frame whenever we need one. In the CameraVision class, the grabFrame() method is used for grabbing an image.

Class: CameraVision
  setupConn(): Sets up the connection with the camera
  grabFrame(): Grabs a single frame from the camera using the serial connection at a baud rate of 115200
  cComp(): Calculates the connected components of the given image
  calcDist(): Calculates the distances to the objects that exist in the image
  showMatchesAndDistance(): Draws the matched objects in two subsequent images and shows the distances to them

Class: MotionControl
  setupConn(): Sets up the serial connection with the motor card
  midLevelMotion(): Includes the mid-level motion commands (see Table 3)
  basicMotion(): Includes the basic motion commands (get counter, set counter, set speed, etc.)

Class: TeleControl
  setConnection(): Sets up a TCP/IP connection and waits for clients to connect
  acceptClient(): After permission is given, accepts the client and waits for commands
  motionCommands: These commands include all the mid-level motion commands implemented in the MotionControl class

Table 2: Class structure of MinikDCS software

Method                            | Parameters                          | Definition
setDefaultRobot()                 | none                                | Sets up the default parameters for the robot
setRobotParameters()              | rightRadius, leftRadius, axisLength | Sets up the given parameters
setMotorSpeed(int speed)          | speed                               | Sets the motor speed for future commands
isMoving()                        | none                                | Reports whether the robot is moving or not
stop()                            | none                                | Stops the robot
resetRobot()                      | none                                | Resets the robot parameters
forward()                         | none                                | Go forward
backward()                        | none                                | Go backward
travel(int distance)              | distance                            | Travel the given distance
goTo(int x, int y, int direction) | x, y, direction                     | Go to the given point and direction
rotateTo(float angle)             | angle                               | Rotate to the given angle
rotateLeft()                      | none                                | Rotate left
rotateRight()                     | none                                | Rotate right
goArc(int angle, int radius)      | angle, radius                       | Go on an arc with the given parameters

Table 3: Midlevel Motion Commands

When a new image is grabbed, it is written to the updatedImage variable, and the former image is written to the previousImage variable. This enables us to compare the two images in the depth calculation.
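A minimal sketch of this two-frame bookkeeping is given below, assuming OpenCV cv::Mat frames; the member names follow the report, while the serial grab itself is only a placeholder.

```cpp
#include <opencv2/opencv.hpp>

// Sketch of the two-frame bookkeeping described above (not the actual project code).
class CameraVision {
public:
    cv::Mat updatedImage;   // most recently grabbed frame
    cv::Mat previousImage;  // frame grabbed before it

    void grabFrame() {
        cv::Mat frame = grabFromSerial();       // assumed helper reading the camera
        previousImage = updatedImage.clone();   // keep the former frame
        updatedImage  = frame;                  // store the new frame
    }

private:
    cv::Mat grabFromSerial() { return cv::Mat(); }  // placeholder for the serial grab
};
```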

5.2 Robotic Mapping

Robotic mapping aims to build a map of the local environment of the robot. A wide variety of sensors with different characteristics can be used for mapping. Because we have only a monocular camera as a sensor, recognition of the scene is performed by processing the images obtained from the camera mounted on top of the robot. In the first phase of our project, we deal with map building with a single robot.

5.2.1 Feature Extraction

Extracting and labeling the various disjoint and connected components in an image is central to many automated image analysis applications. Assuming that objects can, most of the time, be distinguished by colors different from the rest of the scene (or background), we can detect such objects by thresholding the given image. Basic thresholding is performed by first converting the colored image to grayscale and then thresholding it with a pre-assigned threshold value.

Connected components labeling scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a graylevel or a color (color labeling) according to the component it was assigned to. Connected component labeling works by scanning an image pixel by pixel (from top to bottom and left to right) in order to identify connected pixel regions, i.e. regions of adjacent pixels which share the same set of intensity values I. In a gray-level image the intensity values take on a range of values, and the method needs to be adapted accordingly using different measures of connectivity. We assume binary input images and 8-connectivity.

The connected components labeling operator scans the image by moving along a row until it comes to a pixel x (where x denotes the pixel to be labeled at any stage in the scanning process) for which I(x) = 1. When this is true, it examines the four neighbors of x which have already been encountered in the scan (i.e. the neighbors (i) to the left of x, (ii) above it, and (iii and iv) the two upper diagonal terms). Based on this information, the labeling of x occurs as follows:
- If all four neighbors have intensity values equal to 0, assign a new label to x; else
- If only one neighbor has intensity equal to 1, assign its label to x; else
- If more than one of the neighbors have values equal to 1, assign one of their labels to x and make a note of the equivalences.
After completing the scan, the equivalent label pairs are sorted into equivalence classes and a unique label is assigned to each class. As a final step, a second scan is made through the image, during which each label is replaced by the label assigned to its equivalence class. For display, the labels might be different graylevels or colors. As a result of connected components processing, a set of blobs B = {B_1, ..., B_n} is obtained. An example of connected component analysis is shown in Fig. 2, where the left portion shows the end of the first scan and the right portion shows the end of the second scan.

Figure 2: An example of connected components
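For illustration, a simplified two-pass labeling routine in the spirit of the description above is sketched below, using OpenCV matrices and a small union-find table for the equivalences; it is not the code used on the robot.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <functional>
#include <vector>

// Simplified two-pass connected-components labeling (8-connectivity reduced to
// the four already-visited neighbors, as described in the text).
cv::Mat labelComponents(const cv::Mat& binary)   // binary: CV_8U, values 0 or 1
{
    cv::Mat labels(binary.size(), CV_32S, cv::Scalar(0));
    std::vector<int> parent(1, 0);               // union-find table for label equivalences
    std::function<int(int)> find = [&](int a) { while (parent[a] != a) a = parent[a]; return a; };

    int next = 0;
    for (int y = 0; y < binary.rows; ++y)
        for (int x = 0; x < binary.cols; ++x) {
            if (binary.at<uchar>(y, x) == 0) continue;
            std::vector<int> nb;                 // labels of the already-scanned neighbors
            if (x > 0                      && labels.at<int>(y, x - 1))     nb.push_back(labels.at<int>(y, x - 1));
            if (y > 0                      && labels.at<int>(y - 1, x))     nb.push_back(labels.at<int>(y - 1, x));
            if (y > 0 && x > 0             && labels.at<int>(y - 1, x - 1)) nb.push_back(labels.at<int>(y - 1, x - 1));
            if (y > 0 && x + 1 < binary.cols && labels.at<int>(y - 1, x + 1)) nb.push_back(labels.at<int>(y - 1, x + 1));
            if (nb.empty()) {                    // rule 1: assign a brand-new label
                parent.push_back(++next);
                labels.at<int>(y, x) = next;
            } else {                             // rules 2 and 3: reuse a label, record equivalences
                int m = *std::min_element(nb.begin(), nb.end());
                labels.at<int>(y, x) = m;
                for (int l : nb) parent[find(l)] = find(m);
            }
        }
    // Second pass: replace every label by its equivalence-class representative.
    for (int y = 0; y < binary.rows; ++y)
        for (int x = 0; x < binary.cols; ++x)
            if (labels.at<int>(y, x)) labels.at<int>(y, x) = find(labels.at<int>(y, x));
    return labels;
}
```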

5.2.2 Object matching

Similarity measures are used in deciding whether two blobs are similar or dissimilar. In order to create a similarity measure for each object, we have chosen several features to be compared, each requiring a different measure. We use four such measures:
1. Spatial proximity of the blob centers
2. The width-to-height ratios of the objects; the closer the ratios, the more similar the objects, as seen in Fig. 3
3. Area of overlap
4. Color similarity

The relative position measure uses the merits of the connected components algorithm: each component is connected to its surrounding components, and as the relative positions of the objects do not change much across the images, it provides a reliable matching. The center μ_i ∈ R² of a blob B_i is simply computed as the mean position, namely

    \mu_i = \frac{1}{|B_i|} \sum_{x \in B_i} x    (1)

The relative position of two blobs B_i and B_j is simply measured as the difference between their centers:

    |\mu_i - \mu_j|    (2)

The cross ratio is determined by simply taking the ratio of the length l_i and width w_i of a blob B_i. Two cross ratios are compared by taking their difference:

    \left| \frac{l_i}{w_i} - \frac{l_j}{w_j} \right|    (3)

Figure 3: Ratio comparison for matching objects

The association measure compares the intersection of the areas of the objects across subsequent images. Similar objects are expected to have a common area (or intersection), because the position of an object does not change much between two subsequent images. If the areas of two objects intersect, then it is highly probable that the objects will be matched, as seen in Fig. 4. The intersection of two blobs B_i and B_j can simply be computed as

    \iota(i, j) = \sum_{x \in B_i \cap B_j} 1 = |B_i \cap B_j|    (4)

Figure 4: Ratio of intersected areas in two images

The color measure is the last similarity index we use in our algorithm. It compares the colors of the objects across the images and matches objects with similar colors. The color of a blob is computed as the average color, where c(x) denotes the color associated with pixel x:

    c_i = \frac{1}{|B_i|} \sum_{x \in B_i} c(x)    (5)

The similarity of colors is again measured by taking the difference

    |c_i - c_j|    (6)

Of course, none of these similarity measures alone can guarantee a reliable matching. We therefore form an integrated measure of similarity s : B × B → R≥0 by incorporating all of these measures at the same time; the measure s provides a match with higher probability. Each measure is weighted with a coefficient as shown in Eq. (7), so that unreliable indices do not affect the result much:

    s(B_i, B_j) = \alpha_1 |\mu_i - \mu_j| + \alpha_2 \left| \frac{l_i}{w_i} - \frac{l_j}{w_j} \right| + \alpha_3 \, \iota(i, j) + \alpha_4 |c_i - c_j|    (7)
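A sketch of how such a combined score could be computed is given below; the Blob structure, the weight values, and the sign convention for the overlap term are assumptions for this example, not values from the report.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Hypothetical per-blob summary used only for this example.
struct Blob {
    cv::Point2f center;   // mu_i, Eq. (1)
    float       length;   // l_i (blob height in pixels)
    float       width;    // w_i (blob width in pixels)
    cv::Vec3f   color;    // c_i, Eq. (5)
    cv::Mat     mask;     // binary mask of the blob in the full image
};

double similarity(const Blob& a, const Blob& b)
{
    const double a1 = 1.0, a2 = 50.0, a3 = -0.01, a4 = 0.5;   // example weights alpha_1..alpha_4

    double centerDist = std::hypot(a.center.x - b.center.x, a.center.y - b.center.y); // Eq. (2)
    double ratioDiff  = std::abs(a.length / a.width - b.length / b.width);            // Eq. (3)
    cv::Mat inter     = a.mask & b.mask;                                              // overlap region
    double overlap    = cv::countNonZero(inter);                                      // Eq. (4)
    double colorDiff  = cv::norm(a.color - b.color);                                  // Eq. (6)

    // Smaller score means "more similar"; the overlap term gets a negative weight
    // here because a larger intersection indicates a better match.
    return a1 * centerDist + a2 * ratioDiff + a3 * overlap + a4 * colorDiff;          // Eq. (7)
}
```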

5.2.3 Depth Calculation

The second step is placing the detected objects at their proper places in the constructed map. The main problem is that images taken with a single camera do not carry depth information about the environment: we cannot infer the depth of an object from the camera even though we can distinguish it in the scene. There are various ways of extracting depth information from the environment. The most reliable and straightforward way is using sonar sensors or laser range finders; however, their high prices and relatively large sizes do not allow us to use them. Another proposed solution is using stereo cameras. Stereo cameras look at the scene from two different viewpoints at the same time, so that they can infer the depth of the objects in an instant. Processing and interpolating two such images is a time-consuming task and requires expensive hardware, and at least two cameras need to be used on the robot. Still, one of our Minik robots has a Surveyor Stereo Vision system as its camera sensor, which can be used for future applications.

Using a simple, single pinhole camera for the mapping task is a relatively new notion in robotic mapping. Recently, the ubiquity of cameras in mobile platforms has made them a strong competitor against sonar sensors and laser range finders. Extracting depth information by using a single camera can be a challenging task. In our project this term, we mainly concentrated on depth calculation using a single camera. Two images that look at the same scene but are taken from different viewpoints contain depth information that can be extracted using simple geometrical equations. When a robot approaches the objects, the area of the objects gets bigger in proportion to the distance the robot has travelled into the scene, as shown in Fig. 5. The depth is computed from two consecutive images as given in Eq. (8):

    \frac{h}{p_1} = \frac{d_1}{f}, \qquad \frac{h}{p_2} = \frac{d_2}{f}    (8)

Figure 5: Area of the objects gets bigger proportional to distance travelled into scene

where
- f: the distance between the camera lens and the CCD image sensor (focal length)
- h: the height of the object in the 3D world
- p1: the height of the object in the image when the object is closer (in pixels)
- p2: the height of the object in the image when the object is far (in pixels)
- d1: the distance of the object to the lens when it is closer
- d2: the distance of the object to the lens when it is far

Figure 6: Parameters used in calculation of distance from two consecutive images

In Fig. 6 the object gets closer to the camera, but this is equivalent to the camera getting closer to the object. As we can see from Fig. 6, as the camera (robot) gets closer to an object, the size of its image on the CCD sensor increases. From this increase and the distance travelled, which is known from the motor encoders, we can obtain the distances to the objects.


Eliminating h (the actual object height) and f (the focal length) in Eq. (8), we obtain the ratio

    \rho = \frac{p_2}{p_1} = \frac{d_1}{d_2}    (9)

If we denote the distance travelled by Δx,

    \Delta x = d_2 - d_1    (10)

we obtain the distance to the object in terms of ρ and Δx as in Eq. (11):

    d_2 = \frac{\Delta x}{1 - \rho}    (11)
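A small numerical illustration of Eq. (9) and Eq. (11), with made-up values:

```cpp
#include <cstdio>

// Depth from the apparent-size change of a matched object (Eq. 11).
// p_far  : object height in pixels in the first (farther) image
// p_near : object height in pixels in the second (closer) image
// dx     : distance travelled between the two images (from the motor encoders)
// Returns the distance to the object at the time of the first image.
double depthFromSizeChange(double p_far, double p_near, double dx)
{
    double rho = p_far / p_near;   // rho = p2 / p1 = d1 / d2, Eq. (9)
    return dx / (1.0 - rho);       // d2 = dx / (1 - rho),      Eq. (11)
}

int main() {
    // Example with invented numbers: the object grows from 80 px to 100 px
    // while the robot drives 30 cm forward, giving d2 = 30 / (1 - 0.8) = 150 cm.
    std::printf("d2 = %.1f cm\n", depthFromSizeChange(80.0, 100.0, 30.0));
    return 0;
}
```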

5.2.4 Map construction

In the map construction process, the general approach we follow is:
1. First, distinguish the objects in the image and label them.
2. Next, match the objects in two subsequent images using the similarity indices identified before.
3. Then, extract the depth information of the matched objects.
4. Finally, stitch the images and objects together according to their relative positions.

5.3 Experimental Results

5.3.1 Connected Components

We used the Open Source Computer Vision Library (OpenCV) for connected component extraction from the given images. The OpenCV library provides a number of handy functions for this purpose. Before beginning the connected component analysis, we first checked the brightness level of the image; images brighter than a certain threshold are inverted to get better performance in the connected component analysis. First, we used the findThreshold() function with a certain threshold value to convert the image to black and white. In the next step, we found the contours of the objects in the image by using the findContours() function. In the final step, we colorized the found connected components. We applied our algorithm to several images, and the outputs are shown in the figures below. The first image sequence, shown in Fig. 7, contains the candidate images used to find connected components.

Figure 7: Images that are used in connected component analysis

Images are then transformed into binary images with a certain threshold applied, as shown in Fig. 8.

Figure 8: Images transformed into binary with a certain threshold

The next sequence, in Fig. 9, shows the contours of the objects in the images.

Figure 9: Contours of the objects

In the last step, Fig. 10 shows the colorized connected components.

Figure 10: Connected components detected in images
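For reference, a sketch of this threshold-and-contour pipeline using standard OpenCV calls is shown below; the threshold value, minimum component area, and file names are example choices rather than the values used in the experiments.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the thresholding / contour pipeline described above, using
// cv::threshold and cv::findContours.
int main(int argc, char** argv)
{
    cv::Mat color = cv::imread(argc > 1 ? argv[1] : "scene.png");
    cv::Mat gray, binary;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 100, 255, cv::THRESH_BINARY);      // fixed example threshold

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Colorize each sufficiently large component with a pseudo-random color.
    cv::Mat components = cv::Mat::zeros(color.size(), CV_8UC3);
    cv::RNG rng(12345);
    for (size_t i = 0; i < contours.size(); ++i) {
        if (cv::contourArea(contours[i]) < 200) continue;           // drop tiny blobs
        cv::Scalar c(rng.uniform(0, 256), rng.uniform(0, 256), rng.uniform(0, 256));
        cv::drawContours(components, contours, static_cast<int>(i), c, cv::FILLED);
    }
    cv::imwrite("components.png", components);
    return 0;
}
```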

As we can see from the figures, some of the objects could not be recognized and categorized as connected components. Most of the error in our algorithm arises in the thresholding process: because we use a threshold to separate objects from the background, objects whose brightness values are under the threshold also get eliminated. This can be seen in Fig. 11, where two of the three objects in the scene are recognized whereas the last one is eliminated in the thresholding process. This error can be fixed by using more complex thresholding algorithms. One simple solution would be to compare the hue values of the image along with the brightness values during thresholding; by performing a logical AND operation on the outputs of the hue and brightness thresholding functions, we can get better recognition performance.


Figure 11: Eliminated objects in the thresholding process

Another possible source of error is that our connected component algorithm relates and merges objects which exceed the threshold value and touch each other. This can be seen in Fig. 12. In some cases, cables or unwanted particles cause objects to be connected to each other and appear as a single object. We tried to reduce this error by performing erosion and dilation on the thresholded images, but we still encounter such errors in some cases. Applying better thresholding algorithms can reduce this type of error.

Figure 12: Connected objects that are above the threshold and touching each other

Finding connected components and matching these components across subsequent images is an important process, since wrong matching causes errors in the distance calculation. Therefore we used three indices in order to have a reliable matching, namely area intersection, ratio match and color match.

5.3.2 Depth calculation

After matching the connected components, we implemented our distance calculation algorithm as described in the methodology section. The distance to each detected object in two consecutive images is calculated separately. Results of the distance calculation algorithm with accompanying scene images are shown in Figs. 13-18. Calculated distances are written next to each obstacle in the figures. In Figs. 13-18 the blue circle represents the robot, which moves forward by a specified amount, and the red circles represent the objects in the scene. After the distance calculation algorithm is performed, the calculated distance for each object is written inside its circle.


Figure 13: Distance calculation with travelled distance: 30cm

Figure 14: Distance calculation with travelled distance: 30cm

Figure 15: Distance calculation with travelled distance: 40cm


Figure 16: Distance calculation with travelled distance: 30cm

Figure 17: Distance calculation with travelled distance: 30cm

Figure 18: Distance calculation with travelled distance: 30cm


Using the distance calculation algorithm, we obtained satisfactory results with a success rate of 68.5%. In some of the scenarios above, some of the objects could not be recognized (e.g. in Fig. 14 and Fig. 17) because of the threshold value we specified; using adaptive thresholding could yield better results in recognizing objects. In some scenarios we obtained very high errors in the distance calculation for some of the objects. For example, in Fig. 17 the distance to the black object is calculated as 115 cm, but its real distance is 240 cm. This error is mainly due to a large change in the ratio of the object, which occurs because the boundaries of that object could not be determined exactly in this scenario. If the boundaries of an object in two consecutive images cannot be found exactly, the distance calculation algorithm will not yield accurate results. The same error applies to Fig. 13: the distance to the leftmost object is calculated as 132 cm, whereas its actual distance is 200 cm. Because this object appears at the edge of the image, it is clipped out of the image as the robot moves; clipping the object, in turn, causes its ratio to change more than expected. In order to eliminate similar cases, we do not use objects that are very close to the edges of the image. Distance calculations for various scenarios and objects were performed and the results are summarized in Fig. 19, where the vertical axis shows the calculated distances and the horizontal axis shows the ground-truth data. In the experiments, we obtained a total success rate of 68.5%.

Figure 19: Comparison of calculated distances and groundtruth values

6 Work done in EE492 Senior Design Project

6.1 Work on Minik II System

6.1.1 Integrating Surveyor Cam

Three out of five Minik robots use Surveyor cameras as their vision hardware. In the last term we were only using the CMU Cam to grab images, because we only needed a single robot. However,


working with multiple robots will also require the Surveyor cameras to operate. We integrated the Surveyor cameras into our Minik software, and now they can be used by changing just one parameter when the CameraVision class is initialized. Usage of the class is the same as for the CMU Cam. Since the Surveyor cameras provide better resolution images than the CMU Cam, we will be using these cameras in the second half of the project.

6.1.2 Designing All-in-One Control Panel

With the purpose of clearing out the mess of cables inside the robot, we designed a new control panel on which all of the switches and input/output ports are placed. We added switches for changing between the external and internal RS232 ports and for changing between internal and external power. The PCB of the control panel is shown in Fig. 20.

Figure 20: PCB design of control panel

6.2 Robotic Mapping

In the EE491 part we employed a connected components approach for extracting features from the environment. Although we obtained results with about a 70% success rate in the experiments, we wanted further improvement in our feature extraction and depth calculation. Our first attempt was to use images represented in HSV space instead of grayscale. We then looked for other clues that can be used in the depth calculation, such as straight lines or SURF features.

6.2.1 Connected Components in HSV Space

Since different colors can take the same grayscale value, the previous approach of grayscale thresholding caused some inaccuracies in the experiments we performed in the EE491 project. As a solution, the improved approach of using HSV space instead of grayscale is used in the second half of the project. HSV stands for hue, saturation, and value, and is also often called HSB. HSV (together with HSL) is one of the most common cylindrical-coordinate representations of points in an RGB color model; it rearranges the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation. The three components of HSV space are shown in Fig. 21. Because the colors in the image are among the most important clues that define an object, we wanted to make use of color information in connected component extraction. The hue value essentially represents the original color and does not get affected by changing illumination or brightness. The other two parameters are also important: as the brightness of the object decreases, its perceived color gets darker, which affects our extraction algorithm. A small saturation value implies that the perceived color of the object does not have enough so-called color pigment to represent its original color. Therefore we cannot make calculations on colors which have very low (or high) intensity or saturation values.

Figure 21: Hue, Saturation and Value components of an image

Taking into account the saturation and intensity values, we applied thresholding to the image according to a set of hue values. The set contains the most probable colors that we encountered throughout our experiments; it usually contains 6-7 different colors, but the number can be increased for more accurate results. For each color in the set, we scanned the image once for connected components, as shown in Fig. 22; if any component was found, we kept it in memory, and we merged (logical-and) all connected components in memory after the scanning was fully completed.

Figure 22: Connected components extracted for different hue values (from left: 10, 20, 60, 110)

In order to improve the performance of the scanning algorithm, the K-means algorithm can be used to decide which hue values the image should be scanned for, instead of assigning the values manually.
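A sketch of this per-hue extraction using OpenCV is shown below; the hue set, tolerances, and saturation/value limits are example values, and the components found for each hue are simply accumulated into one output image here rather than reproducing the report's exact merging step.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Sketch of per-hue connected-component extraction in HSV space.
int main()
{
    cv::Mat bgr = cv::imread("scene.png");
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    const std::vector<int> hues = {10, 20, 60, 110};   // hue set (OpenCV hue range is 0-179)
    cv::Mat merged = cv::Mat::zeros(bgr.size(), CV_8U);

    for (int h : hues) {
        // Keep pixels near this hue, skipping low-saturation and extreme-value pixels.
        cv::Mat mask;
        cv::inRange(hsv,
                    cv::Scalar(std::max(h - 5, 0),   60,  40),
                    cv::Scalar(std::min(h + 5, 179), 255, 230),
                    mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); ++i)
            if (cv::contourArea(contours[i]) > 200)     // keep only sizable components
                cv::drawContours(merged, contours, static_cast<int>(i), cv::Scalar(255), cv::FILLED);
    }
    cv::imwrite("hsv_components.png", merged);
    return 0;
}
```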

Figure 23: Set of test images to compare

Figure 24: Comparison of grayscale and HSV space extraction (HSV space is below)


A comparison of the algorithms using grayscale and HSV space can be seen in Fig. 24. Although in some cases the previous algorithm yields better results, because we now dismiss image regions which have low saturation and intensity values, the latter approach performs better overall.

6.2.2 Line detection for depth calculation

The connected components approach performs well and yields good results in an indoor environment in which distinctly colored objects are placed. However, when we perform the same experiment in corridors, the performance of our algorithm decreases, because the clues that can be extracted from this environment are highly limited. In order to make our algorithm perform well in any environment, we tried other approaches too. In a human environment, vertical and horizontal lines exist in various places. Making use of the lines that exist in the environment, we can calculate depth from the difference in the lengths of the lines between two consecutive images. The Hough transform is a feature extraction technique used in image analysis and is mostly concerned with the identification of lines in the image [8]. By applying the Hough transform to the images we easily obtain the straight lines that exist in them. In Fig. 25, the result of the Hough transform applied to two consecutive images is shown.

Figure 25: Hough transform applied to two consecutive images
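A sketch of such line extraction with OpenCV's probabilistic Hough transform is given below; the Canny and Hough parameters are example values only.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of straight-line extraction with the probabilistic Hough transform.
int main()
{
    cv::Mat gray = cv::imread("corridor.png", cv::IMREAD_GRAYSCALE);
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    std::vector<cv::Vec4i> lines;                        // each line: x1, y1, x2, y2
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80,    // 1 px, 1 degree, 80 votes
                    30 /*min length*/, 10 /*max gap*/);

    cv::Mat vis;
    cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
    for (const cv::Vec4i& l : lines)
        cv::line(vis, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0, 0, 255), 2);
    cv::imwrite("lines.png", vis);
    return 0;
}
```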

However, there is a problem with using the Hough transform: it does not guarantee detection of all lines in the image, and it can even produce different results for consecutive, similar images. Such unstable detection cannot be used in the depth calculation because we need the exact lengths of the lines.

6.2.3 SURF features and depth calculation from SURF feature points

As we stated in the related literature section, the SURF and SIFT algorithms produce very robust feature points that are not affected by changes in illumination or scale. Employing these features in

our depth calculation can yield very accurate results. Furthermore, these features can be used together with connected components in the detection of objects. We performed several experiments using SURF features, as shown in Fig. 26 and Fig. 27.

Figure 26: SURF features found in given image


Figure 27: SURF features are matched in two consecutive images
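A sketch of SURF detection and matching between two consecutive frames is given below; it assumes the opencv_contrib xfeatures2d module is available, and the Hessian threshold and ratio-test value are example choices.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF lives in the opencv_contrib xfeatures2d module
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("frame_far.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame_near.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    surf->detectAndCompute(img1, cv::noArray(), kp1, des1);
    surf->detectAndCompute(img2, cv::noArray(), kp2, des2);

    // Match descriptors and keep only unambiguous matches (Lowe's ratio test).
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(des1, des2, knn, 2);
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
            good.push_back(m[0]);

    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, good, vis);
    cv::imwrite("surf_matches.png", vis);
    return 0;
}
```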

The calculation of depth from SURF features can be performed as shown in Eq. (12); we omit the derivation since it is analogous to the one leading to Eq. (11). For each matched feature point,

    d = \frac{\Delta x}{1 - \rho}    (12)

where Δx is the distance travelled between the two frames and ρ is the ratio computed from the matched SURF feature points in the two images (cf. Fig. 28).

Figure 28: Diagram showing the calculation of depth from two consecutive images

After we find the depth of the SURF points across consecutive images, we can merge this calculation with the one obtained from the connected components approach. By doing so, we obtain not only depth information for the scene but also the depth of the objects in it.

6.3 Experimental Results

6.3.1 Connected Components

We used the Open Source Computer Vision Library (OpenCV) for connected component extraction from the given images. In the depth calculation examples, we used HSV values for thresholding. Before beginning the connected component analysis, we first checked the histogram of the image and applied histogram equalization. We performed the experiments in three different cases:
- Calculation in an in-room (inside a room) environment where distinctively colored block objects exist (Fig. 29). (Note that the numbers next to the objects are the calculated depth values for each object.)

Figure 29: Depth calculation for case 1


- Calculation in an in-room (inside a room) environment where natural objects exist (Figs. 30, 31).

Figure 30: Depth calculation for case 2

Figure 31: Depth calculation for case 2


- Calculation in a corridor-like environment (Figs. 32, 33, 34).

Figure 32: Depth calculation for case 3

Figure 33: Depth calculation for case 3


Figure 34: Depth calculation for case 3

In each of these environments our algorithm was able to perform the depth calculation. For cases two and three, the number of connected components depends on the number of interesting objects that exist in the environment; therefore, our algorithm was highly dependent on the type of environment. In order to find a solution to this environment dependence we tried different approaches, including line extraction and SURF feature extraction. Merging such algorithms is likely to produce better and more accurate results.

7 Economic, Environmental and Social Impacts

Working with multiple robots is still a relatively new area of investigation. Although there are many hard problems yet to be solved, multi-agent approaches have already demonstrated a number of important impacts, both environmental and social:
- Multiple agents can improve efficiency in many tasks as they specialize, and some tasks simply cannot be done by a single robot. Moreover, real-time response can be achieved by spreading the computational burden of control and information processing across a population.
- Multi-agent strategies not only increase utility but also allow us to develop an important aspect of intelligence: social behavior. Some scientists, such as sociologists and psychologists, use simulated groups of robots to model social behavior in humans. Many aspects of human interaction can be studied in this way, including the spreading of diseases or how traffic jams form.


8 Cost Analysis

The Minik II robots have already been designed and manufactured in a previous ISL project. The cost analysis, as taken from the associated EE491-EE492 project report, is shown in Table 4.

No | Component                                 | Weight (g) | Length (mm) | Width (mm) | Height (mm) | Cost (TL)
1  | Plexiglass                                | 400        | 70          | 40         | 3           | 12
2  | Cables                                    | 75         | -           | -          | -           | 6
3  | Li-po battery                             | 150        | 104         | 34         | 33          | 100
4  | GHM-01 30:1 Gear Motors x 2               | 66         | 48          | 37         | 37          | 35
5  | Motor Control Card Materials              | 29         | 90          | 70         | 10          | 150
6  | Encoders x 2                              | 30         | 30          | 20         | 7           | 40
7  | Solarbotics Wheels                        | 24         | 67          | 67         | 6           | 1
8  | CF to IDE 44 Pin Adapter                  | 29         | 52          | 44         | 11          | 35
9  | Compact Flash                             | 20         | 36          | 42         | 3.3         | 100
10 | Plastic Ball Caster                       | 4          | 12          | 12         | 11          | 5
11 | VIA Epia P700-10L                         | 400        | 100         | 72         | 18          | 350
12 | Surveyor Stereo Vision System (optional)  | 140        | 60          | 150        | 60          | 825
13 | CMU Cam3 (optional)                       | 108        | 74          | 115        | 53          | 225
14 | Sharp Distance Sensors (optional) x 2     | 75         | 13.5        | 44.5       | 13.5        | 40
15 | CNY70 Line Following Sensors (optional)   | 75         | 20          | 40         | 3           | 13
   | Total (standard)                          | 1208       |             |            |             | 834
   | Total (all optionals included)            | 1606       |             |            |             | 1567

Table 4: The components of Minik II robot, physical properties and cost of each one

A Minik II Wiring Diagram


Figure 35: Process of finding connected components


References
[1] B. Yim, Y. Lee, J. Song, and W. Chung. Mobile robot localization using fusion of object recognition and range information. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 3533-3536, 2007.
[2] N. Karlsson, E. Di Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. Munich. The vSLAM algorithm for robust localization and mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2005.
[3] A. J. Davison and I. D. Reid. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), June 2007.
[4] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded Up Robust Features. In Computer Vision - ECCV 2006, LNCS volume 3951, pages 404-417, 2006.
[5] J. Orwell, P. Remagnino, and G. Jones. From connected components to object sequences. In 1st IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), 2000.
[6] I. M. Rekleitis. Ph.D. thesis, School of Computer Science, McGill University, Montreal, Quebec, Canada, 2003.
[7] W. Burgard, M. Moors, D. Fox, R. Simmons, and S. Thrun. Collaborative multi-robot exploration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), volume 1, pages 476-481, 2000.
[8] O. Chutatape and L. Guo. A modified Hough transform for line detection and its performance. Pattern Recognition, 32(2):181-192, February 1999.

