
Wrist Mounted Camera for Colour and Shape Recognition

Mihaela Tilneac 1,a, Sanda Grigorescu 1,b, Victor Paléologue 2,c, Valer Dolga 1,d

1 University POLITEHNICA of Timisoara, Mechanical Engineering Faculty, Blv. Mihai Viteazu 1, RO-300222, Timisoara, Romania
2 École des Mines de Nantes, La Chantrerie 4, rue Alfred Kastler, F-44307 Nantes, France

a mihaela.tilneac@mec.upt.ro, b sanda.grigorescu@mec.upt.ro, c victor@paleologue.eu, d valer.dolga@mec.upt.ro

Keywords: colour and shape recognition, robot visual servoing

Abstract. The aim of this paper is to enable a robot equipped with a vision system to interact with its environment. The task considered is recognizing and locating objects extracted from the image space. The work presents a Matlab program for colour and shape recognition of different objects. The image taken by a low-cost Web camera is processed, and the extracted information is transferred to a robot controller, which moves the robot above the identified object.

Introduction
In recent years, there has been a growing interest in guiding robot manipulators through vision systems that contain video cameras to measure, without contact, the position and orientation of any visible object within their workspace. A vision system is composed of one or more video cameras together with the necessary program to acquire and display the image of a scene on a computer screen, as well as to extract the most relevant information from the image. The vision system provides a two-dimensional (2D) image of the world system. The mapping between the coordinates associated with the 2D image and the corresponding coordinates in the world system is given in terms of perspective and rigid-body transformations that involve the intrinsic and extrinsic camera parameters. The extrinsic parameters describe the position and orientation of the camera with respect to the world reference frame. The intrinsic parameters are associated with optical and geometric aspects of the camera, such as focal length, distortion coefficients of the lens, and scale factors. Camera calibration is an important issue in artificial vision, computation and visual servoing. Camera calibration is the process of determining explicitly the internal camera geometric and optical characteristics (intrinsic parameters) and the position and orientation of the camera relative to the world coordinate system (extrinsic parameters). This is achieved from a set of known points in the world reference frame and their corresponding positions in the computer image frame [2].
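The perspective mapping described above can be sketched as follows. This is an illustrative Python example of the standard pinhole model (the paper's program itself is in MATLAB); the numeric intrinsic and extrinsic values below are assumptions chosen for the example, not the calibrated parameters of the paper's camera.

```python
import numpy as np

def project_point(p_world, K, R, t):
    """Project a 3-D world point into pixel coordinates.

    K    -- 3x3 intrinsic matrix (focal lengths, principal point)
    R, t -- extrinsic rotation (3x3) and translation (3,)
    """
    p_cam = R @ p_world + t          # world frame -> camera frame
    u, v, w = K @ p_cam              # perspective projection
    return np.array([u / w, v / w])  # homogeneous -> pixel coordinates

# Example values: camera 1 m above the world origin with identity rotation,
# 800-pixel focal length, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

print(project_point(np.array([0.1, 0.0, 0.0]), K, R, t))  # -> [400. 240.]
```

Calibration is the inverse problem: given enough such world/pixel correspondences, K, R and t are estimated rather than assumed.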
Visual servoing is a technique for robot motion control by feeding back information acquired by vision sensors. Visual information is versatile because environmental data can be measured without contact; hence, visual servoing plays an important part in robot operation in unknown or changing environments [4]. Vision-based robotic systems are also being used extensively in agricultural applications. Recently, increased research activity has been directed at developing small autonomous field robots for future applications in agriculture [7]. Most new sensor technologies in agricultural engineering are based on optoelectronic systems [3]. Plant recognition is one of the most difficult problems confronting agricultural robots, because no available system has sufficiently high computational power for distinguishing between crops and weeds [5]. Generally, the problem is to find a set of features that may be used in a classification scheme [1].

Colour and shape recognition
This paper presents preliminary results from research aiming to build an agricultural mobile robot with vision-based perception for mechanical weed removal. This implies the need to recognize individual plants in order to distinguish between crop and weed. Fig. 1 presents a possible configuration of an agricultural mobile robot for weed-removing applications.

Fig. 1 Weed control robot

We propose to realize an individual plant recognition and localization system. To reach this aim, we developed an image processing application using the MATLAB environment [6]. At present, the system is able to detect and localize colours and simple shapes (squares, triangles and circles). This application could later be extended with plant leaf colour and shape recognition. Fig. 2 presents the MATLAB application structure. The application was developed in a modular way to separate different functions into independent parts; thus, each part may change its operation without disturbing the rest of the program. The diagram describes the structure of the program, illustrating the dependencies between modules by arrows. OBJECT denotes red square, red circle, red triangle, green square, green circle, green triangle, blue square, blue circle or blue triangle. COLOUR denotes red, green or blue colour. SHAPE denotes square, circle or triangle shape.

[Fig. 2: block diagram of the MATLAB application. IMAGE ACQUISITION feeds OBJECT SELECTION, which dispatches to OBJECT 1 DETECTION through OBJECT 9 DETECTION; each detection module calls COLOUR DETECTION and SHAPE DETECTION, and the result is passed to MARK IDENTIFIED OBJECT.]

Fig. 2 MATLAB application structure
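The colour-then-shape pipeline of Fig. 2 can be sketched in a few lines. This is a hypothetical Python re-implementation for illustration (the actual program is in MATLAB); the colour rule used here, brightest and dominant channel, and the centroid-only "shape detection" are simplifying assumptions of this sketch.

```python
import numpy as np

CHANNEL = {"red": 0, "green": 1, "blue": 2}

def colour_detection(image, colour, threshold=128):
    """COLOUR DETECTION: binarize by the requested colour channel, keeping
    pixels where that channel is bright and dominates the other two."""
    c = CHANNEL[colour]
    others = [i for i in range(3) if i != c]
    return (image[:, :, c] > threshold) & \
           (image[:, :, c] > image[:, :, others].max(axis=2))

def shape_detection(mask):
    """Simplified SHAPE DETECTION: return the centroid (x, y) of the
    binarized blob, or None if nothing matched (shape discrimination
    proper is sketched separately)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def object_detection(image, colour):
    """OBJECT n DETECTION: colour detection followed by shape detection."""
    return shape_detection(colour_detection(image, colour))

# Synthetic 100x100 image with a red square centred at (x=30, y=40).
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[30:51, 20:41, 0] = 255          # rows 30..50, columns 20..40, red channel
print(object_detection(img, "red"))  # -> (30.0, 40.0)
```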

MATLAB FUNCTION / EXPLANATION OF FUNCTION

function [image] = image_acquisition()
  Launches the device camera application for taking image(s).

function [ x,y ] = colour_shape_detection( image )
  Isolates the colour in the image and returns a binarized image; analyzes the binary image for shape detection; returns the centre coordinates (x, y) of the detected shape.

function imageModif = marking_object( image, x, y, t )
  Draws a cross centred at the (x, y) coordinates of the detected object.

Tab. 1 Explanations of program functions
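A minimal sketch of what marking_object does, re-created in Python for illustration (the original is a MATLAB function; interpreting the parameter t as the half-length of the cross arms is an assumption of this sketch):

```python
import numpy as np

def marking_object(image, x, y, t):
    """Draw a cross of half-length t pixels centred at pixel (x, y).
    Hypothetical re-creation of the MATLAB marking_object function;
    works on a single-channel image, returns a modified copy."""
    out = image.copy()
    h, w = out.shape[:2]
    x, y = int(round(x)), int(round(y))
    out[y, max(0, x - t):min(w, x + t + 1)] = 255   # horizontal bar
    out[max(0, y - t):min(h, y + t + 1), x] = 255   # vertical bar
    return out

img = np.zeros((50, 50), dtype=np.uint8)
marked = marking_object(img, 25, 25, 5)   # 11+11 pixels, sharing the centre
```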

[Fig. 3: the function colour_shape_detection( image ) is instantiated for each colour (red, green or blue) combined with each shape (square, circle or triangle).]

Fig. 3 Explanation of the colour_shape_detection functions

[Fig. 4: the original image is binarized separately for red, green and blue; each binarized image then undergoes square, triangle or circle shape detection.]

Fig. 4 Colour image processing and shape detection
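One simple way to discriminate the three shapes on a binarized image, sketched here in Python, is the ratio of blob area to bounding-box area (roughly 1.0 for a square, pi/4 for a circle, 0.5 for a triangle). This is an illustrative heuristic under the assumption of unrotated, axis-aligned shapes, not the paper's actual detection algorithm.

```python
import numpy as np

def classify_shape(mask):
    """Classify a binary blob as square, circle or triangle by the ratio
    of its pixel area to its bounding-box area (assumes one unrotated,
    axis-aligned blob; thresholds are illustrative)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    bbox = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    fill = area / bbox
    if fill > 0.9:
        return "square"
    if fill > 0.65:
        return "circle"
    return "triangle"

# Synthetic test shapes on 60x60 grids.
sq = np.zeros((60, 60), bool)
sq[10:40, 10:40] = True                              # 30x30 square

yy, xx = np.mgrid[:60, :60]
circ = (xx - 30) ** 2 + (yy - 30) ** 2 <= 15 ** 2    # radius-15 disc

tri = np.zeros((60, 60), bool)
for r in range(30):                                  # isoceles triangle
    tri[10 + r, 30 - r:30 + r + 1] = True

for m in (sq, circ, tri):
    print(classify_shape(m))
```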

Robot visual servoing
The (x, y) coordinates of the detected object in camera space, returned by the function colour_shape_detection, are used to move the robot to the corresponding Cartesian coordinates (Fig. 5). Prior to launching the application, an off-line calibration of the camera position and orientation with respect to the world coordinates was performed. The Matlab program modules for robot control are: initialisationRobot (activates port COM1) and mouvementRobot (moves the robot by changing the X, Y Cartesian coordinates according to the transformation calculus).
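For a camera looking at a planar workspace, the off-line calibration step can be sketched as fitting an affine map from pixel coordinates to robot-plane coordinates by least squares. This Python sketch is a generic illustration, not the paper's actual transformation calculus; the calibration point values are invented for the example.

```python
import numpy as np

def fit_pixel_to_world(pixels, world):
    """Off-line calibration: fit an affine map from image (x, y) pixels to
    robot-plane (X, Y) coordinates from >= 3 known correspondences."""
    px = np.asarray(pixels, float)
    A = np.hstack([px, np.ones((len(px), 1))])        # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, np.asarray(world, float), rcond=None)
    return M                                          # 3x2 affine matrix

def pixel_to_world(M, x, y):
    """Apply the calibrated map to one detected pixel position."""
    return np.array([x, y, 1.0]) @ M

# Invented calibration data: pixel origin maps to (100, 50) mm and
# one pixel corresponds to 0.5 mm in both directions.
pixels = [(0, 0), (100, 0), (0, 100)]
world  = [(100.0, 50.0), (150.0, 50.0), (100.0, 100.0)]
M = fit_pixel_to_world(pixels, world)

print(pixel_to_world(M, 40, 20))   # -> [120.  60.]
```

With more than three correspondences the least-squares fit also averages out measurement noise in the calibration points.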

[Fig. 5: a low-cost Web camera mounted on the wrist of the ER V+ robot, with the x, y, z axes marked; the robot tip reaches the selected object.]
Fig. 5 The moving Web camera

Conclusion
Experiments with this application show that more servoing techniques must be implemented for better image acquisition based on camera orientation. Further research will study the colours and shapes of a large number of plant species; more plant recognition features must be found in order to reach this goal. Leaf identification requires several complex algorithms to complete the Matlab program. Some steps in this direction have already been taken, with good results.

References
[1] H. J. Andersen: Outdoor Computer Vision and Weed Control, Ph.D. Dissertation, Aalborg University, Denmark, Jun. 2002. [Online]. Available: http://www.cvmt.dk/~hja/publications/all.pdf
[2] R. Kelly, F. Reyes: On Vision Systems Identification with Application to Fixed-Camera Robotic Systems, International Journal of Imaging Systems and Technology, Vol. 11, Issue 3 (2000), pp. 170-180.
[3] R. Klose, A. Ruckelshausen, M. Thiel, J. Marquering: Weedy - a sensor fusion based autonomous field robot for selective weed control, Proc. 66th International Conference Agricultural Engineering/AgEng (2008), pp. 167-172.
[4] J. Lin, H. Chen, P. Jiang, Y. Wang, P. Woo: Curve Tracking and Reproduction by a Robot with a Vision System, Journal of Robotic Systems, Vol. 16, Issue 10 (1999), pp. 547-556.
[5] T. Oberndorfer: Embedded Vision System for Intra-row Weeding, Master's Thesis, School of Information Science, Computer and Electrical Engineering, Halmstad University, Jan. 2006. [Online]. Available: http://dspace.hh.se/dspace/bitstream/2082/533/1/0623%20TO.pdf
[6] V. Paléologue: Reconnaissance d'image et manipulation robotique (Image recognition and robotic manipulation), project report, University POLITEHNICA of Timisoara, 2009.
[7] A. Ruckelshausen, P. Biber, M. Dorna, M. Gremmes, R. Klose, A. Linz, F. Rahe, R. Resch, M. Thiel, D. Trautz, U. Weiss: BoniRob - an autonomous field robot platform for individual plant phenotyping, Proceedings European Conference on Precision Agriculture (ECPA), Wageningen, The Netherlands, 2009.
