
International Master's Thesis

Vision Based Grasp Planning for Robot Assembly

Naresh Marturi

Studies from the Department of Technology at Örebro University, Örebro 2010

Vision Based Grasp Planning for Robot Assembly

Studies from the Department of Technology at Örebro University

Naresh Marturi

Vision Based Grasp Planning for Robot Assembly

Supervisor:

Prof. Ivan Kalaykov

Naresh Marturi, 2010


Title: Vision Based Grasp Planning for Robot Assembly
ISSN 1404-7225

Abstract
This thesis demonstrates an industrial assembly task of assembling several parts into a product by using vision information. Due to the lack of sensory capabilities, most assembly cells cannot act intelligently in recognizing the workpieces and perceiving the task space. These types of systems lack the flexibility and the capability to automatically modify their trajectories to accommodate changes in the task. Such flexibility is achieved for assembly cells through the integration of vision sensing. For this work, we prototype an assembly cell that has one ABB IRB140 robot equipped with a flexible gripper, a flexible fixture and a camera fixed at the midpoint of the gripper. The flexibility of the assembly cell is provided by its main components: the gripper and the fixture, which were already designed and prototyped at the AASS IC Laboratory, and the vision system, developed during this project. The image information from the camera is used to perceive the robot's task space and to recognize the workpieces; in turn, this information is used to compute the spatial position and orientation of the workpieces. Based on this information, an automatic assembly grasp planner was designed and developed to compute the possible stable grasps and to select and execute the posture of the entire robot arm plus gripper. In order to recognize the workpieces, different low-level object recognition algorithms were developed based on their geometrical models. Once the workpieces are identified and grasped by the robot, the vision system is no longer in use and the robot executes the predefined sequence of assembly operations. In this system, the assembly process of every product is described as an assembly tree, e.g. a precedence graph, for all parts in the product. The entire work was assessed by evaluating the individual modules of the project against a set of goal-based criteria and using those results to determine the overall significance of the project. The tests conducted on the developed system showed that it is capable of grasping and assembling workpieces regardless of their initial position and orientation. Apart from this, a simple and reliable communication scheme was developed in order to connect the components of the assembly cell and to provide flexible process execution.

Keywords: Flexible assembly cell, Grasp planner, Object identification

Acknowledgements
First of all I would like to gratefully acknowledge my supervisor Prof. Ivan Kalaykov for his abundant help and prolific suggestions. I specially thank him for his infinite patience; the discussions I had with him were invaluable. I would like to say a big thanks to Assoc. Prof. Anani Ananiev for his support in fixing the hardware problems with the gripper. I am grateful to all my friends for being my surrogate family during the two years I stayed in Örebro. My final words go to my family: I want to thank my mom, dad, sister and bujji, whose love and guidance is with me in whatever I pursue.

Contents

1 Introduction
  1.1 Background
      1.1.1 Eye-in-hand for robot manipulators
  1.2 Project goal
      1.2.1 Project evaluation
  1.3 Contributions
  1.4 Thesis structure

2 Previous study
  2.1 Related study
      2.1.1 Visual servoing
      2.1.2 Grasping with visual servoing
      2.1.3 Vision system
      2.1.4 Grasping module
  2.2 Resource description
      2.2.1 Robot
      2.2.2 Grasping system
      2.2.3 Camera
  2.3 Functions in FMS
  2.4 Summary

3 Developed system
  3.1 Experimental setup
  3.2 System architecture
      3.2.1 Work flow
  3.3 Object identification
      3.3.1 Shaft recognition
      3.3.2 Gear recognition
  3.4 Automatic planner for arm posture control
  3.5 Synchronizing arm and gripper motions
      3.5.1 Arm motion control
      3.5.2 Gripper motion control
      3.5.3 Interface to synchronize motions
  3.6 Limitations

4 Experiments with the system
  4.1 Test environment
  4.2 Assembly representation
      4.2.1 Assembly sequence selection
  4.3 Test scenario
      4.3.1 Special cases
      4.3.2 Test results
  4.4 Analysis

5 Conclusions
  5.1 Summary
  5.2 Future work

A Source code for shaft identification
B Source code for gear identification
C Camera calibration procedure
  C.1 Camera intrinsic parameters
  C.2 Camera extrinsic parameters
D Source code for client and server
E Source code for Galil controller

List of Figures

1.1 FMS schematic diagram
1.2 System architecture of robotic vision based control
1.3 Typical Eye-in-hand system
2.1 Visual servoing control schema
2.2 Grasping control architecture
2.3 Pinhole camera model
2.4 Contour image
2.5 Contour image
2.6 ABB IRB 140B robotic manipulator
2.7 Robot geometry
2.8 Robot mechanical structure
2.9 Robot base and tool coordinate systems
2.10 Robot wrist coordinate system
2.11 Flexible gripper
2.12 Finger configuration 1
2.13 Finger configuration 2
2.14 Finger configuration 3
2.15 Flexible fixture
2.16 Galil motion controller
2.17 Camera
2.18 Simulated view of robot-centered FMS
3.1 Test assembly
3.2 Test assembly parts sequence
3.3 System architecture
3.4 Work flow diagram
3.5 (A) Background image (B) Current frame
3.6 Subtracted image
3.7 Curve pixels of the shaft
3.8 (A) Original image (B) Edge image
3.9 Recognized gears
3.10 Sample code for robot motion control
3.11 Robot zone illustration diagram
3.12 Robot controller communication architecture
3.13 Sequential diagram for client server model
4.1 Test environment
4.2 Graph structure of test assembly
4.3 Precedence diagram of test assembly
4.4 (A) Initialized gripper (B) Arm at home position
4.5 (A) Arm at position 1 (B) Arm at position 2
4.6 Arm at position 3
4.7 Screen-shot of execution window
4.8 Workpieces in robot task space
4.9 Recognized shaft
4.10 Grasping shaft
4.11 (A) Robot fixing shaft (B) Fixed shaft
4.12 (A) Searching at POS2 (B) Searching at POS3
4.13 Recognized small gear
4.14 Grasping small gear
4.15 (A) Robot fixing gear (B) Assembled gear
4.16 (A) Searching for the gear (B) Robot grasping the gear
4.17 (A) Fixing the gear (B) Assembled big gear
4.18 (A) Robot grasping the pin (B) Fixing the pin
4.19 Assembled pin
4.20 Final assembled product
C.1 Pinhole camera geometric projection

List of Tables

2.1 Robot axis specifications
2.2 Flexible gripper technical configuration
2.3 Flexible gripper finger configuration
2.4 Flexible fixture technical configuration
2.5 Flexible fixture grasp dimensions range
2.6 Camera specifications
3.1 Pseudo code for shaft recognition
3.2 Pseudo code for gear recognition
3.3 Robot positioning instructions
4.1 Test assembly task description

Chapter 1

Introduction
1.1 Background

This thesis demonstrates a simple industrial task of assembling different workpieces into a single product by using the image information from a vision system that is integrated with a high-precision articulated robotic manipulator (a robot with rotary joints and a fixed base is called an articulated robot). The main goal of this work is to develop a successful grasp planning methodology using the vision information for a flexible assembly cell.

Figure 1.1: FMS schematic diagram

In the past two decades, a number of technological advancements have been made in the development of flexible manufacturing systems (FMS). An FMS can be described as a system consisting of one or more handling devices, such as robotic manipulators, together with robot controllers and machine tools, arranged so that it can handle the different families of parts for which it has been designed and developed [Rezaie et al., 2009].


Figure 1.1 shows an example of an FMS. Nowadays, these systems play a vital role in industrial applications such as welding and assembly operations. A simple FMS used for assembly operations, also termed a flexible assembly cell, is an arrangement of one or more Computer Numerically Controlled (CNC) machines together with a robot manipulator and a cell computer. The main tasks of the CNC machines include workload balancing and task scheduling; they are also responsible for handling machine breakdowns and tool breakage. The cell computer is responsible for the supervision and coordination of the various operations in the manufacturing cell. The end effector of the robot manipulator is fitted with specific tools (e.g. two- or three-fingered grippers) depending on the task to be performed. The functions of these manufacturing cells and the technical specifications of the tools used are described in more detail in Chapter 2.

Due to the lack of sensory capabilities, most assembly cells cannot act intelligently in recognizing the workpieces and perceiving the task space. For example, a typical robotic assembly cell requires that the workpieces presented to the robot be placed at predefined, precise locations and with a known orientation, which is fixed in order to complete the overall task. These types of systems lack the flexibility and the capability to automatically modify their trajectories to accommodate changes in the task. Such flexibility is achieved for assembly cells through the integration of vision sensing, as visual sensors provide richer and more complete information about the task space than any other sensing device. With the help of the information received from the integrated vision, robots can act intelligently, deal with imprecisely positioned workpieces, and handle uncertainties and variations in the work environment. The vision information is also useful for enhancing the capability of the robot by continuously updating its view of the world. This type of architecture is termed an Eye-in-hand or Eye-to-hand configuration. The basic building blocks of the whole system are shown

Figure 1.2: System architecture of robotic vision based control

in Figure 1.2 and can be described as follows:

- Work space includes fixtures, workpieces and tools.


- Sensory system allows the robot to perceive the work environment and to recognize the workpieces.
- Control system contains the cell computer and a robot controller, which organize the tasks and control the robot, respectively.
- Robotic manipulator executes the appropriate actions under the control of the robot controller.

1.1.1 Eye-in-hand for robot manipulators

For the last couple of decades, eye-in-hand visual servoing has been studied extensively because of its importance in industrial assembly operations. An eye-in-hand system can be described as a robot end effector equipped with a close-range camera, as shown in Figure 1.3. The camera is selected according to the task complexity. An illumination source is attached to the gripper along with the camera in order to capture good images under dim lighting conditions and to counteract lighting changes in some areas of the viewed scene. The camera has a lens that can be adjusted for proper focus to minimize the positioning error [Eye]. These types of systems are mainly employed to guide robot end effectors and grippers in performing a specific task. The images acquired by the camera are processed using specific algorithms on a computer system in order to recognize the object of interest and to find its spatial information. This information can then be used to guide the robot movement in a specific workspace.

Figure 1.3: Typical Eye-in-hand system


1.2 Project goal

The main objective of this project is to develop and demonstrate a pilot robotic system implementing new basic functionalities in an assembly cell. These functionalities include:

- Investigating and implementing an eye-in-hand vision system for perceiving the working environment of the robot and recognizing the workpieces to be manipulated.
- Developing and implementing an automatic grasp planner, which computes possible stable grasps and sets the gripper posture accordingly.
- Developing and implementing an automatic grasping planner, which selects and executes the posture of the entire robot arm plus the flexible gripper.

All of the above functionalities have to be supported by a corresponding visual perception system.

1.2.1 Project evaluation

The general approach for evaluating this thesis work is to evaluate the individual modules of the project against a set of goal-based criteria and to use those results to assess the overall significance of the project. These individual modules include eye-in-hand visual servoing, the grasp planner, motion control and an interface to integrate all of these modules. The overall process time is not considered a research topic in this thesis.

1.3 Contributions

With the completion of this thesis, the overall goal has been achieved and a grasp planner for robot assembly operations has been successfully developed. The main contributions of this thesis are:

- Developing a methodology for autonomous planning and replanning of assembly operations.
- Intelligent use of the Flexible Gripper.
- Integration and demonstration of the above two functions in a heterogeneous environment involving many particular limitations and problems.


1.4 Thesis structure

This section describes the contents of the following chapters.

Chapter 2 provides an overview of the relevant background concepts along with a survey of existing techniques. It also provides the technical information regarding the various components used in this project, such as the robot, gripper, fixture and camera, and introduces the reader to the problem.

Chapter 3 proposes a solution to the problem introduced in the previous chapter and provides a detailed description of the implementation procedure.

Chapter 4 describes the test scenario used to demonstrate the capability of the developed system.

Chapter 5 presents conclusions along with suggestions for future work.

Chapter 2

Previous study
This chapter provides an overview of the related background concepts along with a survey of existing techniques. The main goal of this chapter is to give the reader all the technical information regarding the various components used in this project, such as the robot, gripper, fixture and camera, along with the basic functionalities of an FMS.

2.1 Related study

2.1.1 Visual servoing

Visual servoing mainly refers to the use of vision data to control the motion of a robot. It can be described as a closed-loop control algorithm in which the error is defined in terms of visual measurements. The main goal of this control scheme is to reduce the error, driving the robot joint angles as a function of the pose error [Taylor and Kleeman, 2006]. This type of control scheme is mainly applied in object manipulation tasks, which require object detection, servoing, alignment and grasping. Figure 2.1 shows the basic building blocks of a visual servoing control scheme. Visual servoing systems are generally classified into two types: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). In the former, the error is calculated

Figure 2.1: Visual servoing control schema


after reconstructing the pose of the gripper from visual measurements, whereas in the latter the error is formulated directly as the difference between the observed and desired locations of the image features. In both cases, image information is used to compute the error.
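As background (this control law is standard in the visual servoing literature, e.g. the tutorial by Hutchinson et al. [1996], and is not derived in the thesis), the image-based error and the usual proportional velocity command can be written as

\[
e(t) \;=\; s(t) - s^{*}, \qquad v_c \;=\; -k \, \widehat{L}_s^{+} \, e(t)
\]

where $s$ is the vector of measured image features, $s^{*}$ the desired feature values, $\widehat{L}_s^{+}$ the pseudo-inverse of an estimate of the image feature Jacobian, $k$ a positive gain, and $v_c$ the commanded camera velocity.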

2.1.2 Grasping with visual servoing

Grasping with multi-fingered grippers has been an important research topic in the field of manipulation for many years. Reaching a particular position and grasping an object is a complex task that requires a lot of sensing activity. These activities need to be performed in the right sequence and at the right time in order to achieve a smooth and stable grasp. Most studies state that, in order to provide a stable grasp, the grasping system requires complete information about the robot kinematics, the gripper capabilities, the sensor capabilities and the workspace in which the objects are placed [Diankov et al., 2009]. Little work, however, has been done on integrating vision sensors for grasping and manipulation tasks. For grasping stationary objects, which is the focus of this thesis, the object's pose can be computed from the available image frames and the motion of the arm can be planned accordingly. One such approach, in which a robotic arm picks up a stationary object and places it at a specified location, is described by Chiu et al. [1986]. Figure 2.2 shows a theoretical control program by Arbib [1981] for grasping a stationary object using vision data. According to this control program, to move the arm towards a stationary target object, the spatial location of the target needs to be known in advance. The required spatial information, i.e. the location, size and orientation of the object, is provided by the vision system. As the arm approaches the target, it needs to correct its orientation towards the target. At the point of grasping, the arm should be aligned with the target in such a way that it grasps around the longest axis in order to obtain a stable hold of the object. Generally, for any grasping model using the common manipulation framework there are three phases, namely the specification, planning and execution phases. The specification phase is responsible for supplying enough information to the robot to perform a specific task. The planning phase is responsible for producing a global plan based on the information received from the specification phase; a collision-free path for the robot end effector towards the final goal is produced in this phase. Finally, the execution phase is responsible for continuously updating the information from the planning phase and executing the right grasping action. The performance of the overall model mainly depends on these three phases. For planning a grasp based on visual servoing, the final goal of the robot is defined with respect to the task frame, i.e. the information acquired by the camera. The overall plan of the robot is updated every time the task frame updates [Prats et al., 2007]. One fundamental observation for robotic systems using vision for grasping is that a robot cannot determine the accurate spatial location of an object if the camera


Figure 2.2: Grasping control architecture

is located far away; therefore, the robot should move closer to the object for better accuracy.

Eye-hand coordination

Eye-hand coordination is a typical scenario that links perception with action [Xie, 1997]. It is the integration of a visual perception system with an arm manipulation system; the overall performance of the model depends on how these two systems are integrated. The simplest approach to eye-hand coordination is to use the vision system to compute the pose of the target object in the robot workspace and to pass this information to the robotic manipulator, which plans and executes the specific actions required for the task. In order to make this coordination model more reliable, an incremental mapping principle that maps the visual sensory data to hand motions should be established [Xie, 2002]. Most recent studies on eye-hand coordination focus on the estimation of the feature Jacobian matrix¹, which maps the motion of the robot arm to changes in the image frame. To model a reliable incremental mapping principle, the camera should be calibrated using a high-precision calibration rig.

¹ The image feature Jacobian matrix is a linear transformation matrix that transforms the task space to the image space [Hutchinson et al., 1996].
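For completeness (this is the standard point-feature interaction matrix from the visual servoing literature, e.g. Hutchinson et al. [1996], not a result of this thesis), the feature Jacobian for a single point feature with normalized image coordinates $(x, y)$ and depth $Z$ relates feature motion to camera velocity as

\[
\dot{s} \;=\; L_s \, v_c, \qquad
L_s \;=\;
\begin{bmatrix}
-\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^{2}) & y \\
0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^{2} & -xy & -x
\end{bmatrix}
\]

where $v_c = (v_x, v_y, v_z, \omega_x, \omega_y, \omega_z)$ stacks the linear and angular velocities of the camera.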


2.1.3 Vision system

The vision system mainly comprises a normal pinhole camera along with an illumination source. In order to integrate this vision system with a manipulation framework, it should be capable of tracking the multiple objects that need to be manipulated. Functionally, the vision system can be decomposed into two subsystems:

1. Object tracking
2. Pose² estimation

Object tracking

Tracking multiple objects and representing their shapes is a vital task when a vision system assists a robotic manipulator in assembly operations. Extensive research has been carried out in this field over many years and various algorithms have been developed. The common approach used in most object tracking applications is to train the vision system with an image dataset containing the object, and to match the object in the real scene against the trained object [Baştanlar et al., 2010]. However, this type of tracking is impractical for vision-based manipulation, as it consumes too much time in performing the overall task. Another approach is to find different characterizing features of the objects in the image frames; the most preferred feature of this kind is the object centroid. Koivo and Houshangi [1991] used this type of feature in their work for tracking different objects. Hunt and Sanderson [1982] proposed various algorithms for object tracking based on mathematical predictions of the centroid locations of an object in the image frames. Huang et al. [2002] proposed an object tracking method based on image warping and Kalman filtering. Yoshimi and Allen [1997] used fiducial marks³ and Allen et al. [1993] used the snakes⁴ approach to trace various objects in a given workspace. In order to reduce the computational time for object tracking, many algorithms have been developed based on image background subtraction; one such approach was used by Wiklund and Granlund [1987] for tracking multiple objects. A variety of techniques based on blob detection, contour fitting, image segmentation and object feature extraction are in use for low-level object identification and geometrical shape description. One such approach is used in this thesis.

Pose estimation

After detecting the object to be manipulated, it is necessary to compute its pose in order to find a set of valid grasping points.

² Pose can be described as a combination of the position and orientation of an object.
³ A fiducial mark is a black dot on a white background.
⁴ A snake is an energy-minimizing spline which can detect objects in an image and track non-occluded objects in a sequence of images [Tabb et al., 2000].


Figure 2.3: Pinhole camera model

The pose of an object can be computed either from a single image or from a stereo pair of images. Most vision-based manipulation applications use the POSIT algorithm [Dementhon and Davis, 1995] to estimate the pose of an object using a single pinhole camera. In general, vision-based pose estimation algorithms rely mainly on a set of image features such as corner points, edge lines and curves. In order to estimate the pose of an object from an image, prior knowledge about the 3D locations of the object features is necessary. The pose of an object is the combination of its rotation R (3×3) and its translation T (3×1) with respect to the camera, so it can be written mathematically as [R|T], which is a 3×4 matrix. For a given 3D point on an object, its corresponding 2D projection in the image (a perspective projection, i.e. the mapping from three dimensions onto two), the camera's focal length and the principal point are used to compute the pose. The camera focal length and principal point can be obtained by following a standard calibration technique such as the one proposed by Chen and He [2007]. Figure 2.3 shows the normal pinhole camera model and the related coordinate systems. The projection of a 3D point M = [X, Y, Z]ᵀ onto the image plane at a point m = [x, y]ᵀ can be represented in homogeneous matrix form as

\[
\lambda \tilde{m} \;=\;
\lambda \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\;=\;
\begin{bmatrix} f & 0 & x_0 & 0 \\ 0 & f & y_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
\;=\; K \,[\, I_3 \mid 0_3 \,]\, \tilde{M}
\tag{2.1}
\]

where K is the intrinsic parameter matrix of the camera, f is the camera focal length, λ = Z is the homogeneous scaling factor, x₀ and y₀ represent the principal point of the camera, and m̃ and M̃ denote the homogeneous forms of m and M. These parameters are used to find the point projection. The complete pixel position of the point M can now be written as


\[
\lambda \tilde{m} \;=\; K \,[\, I_3 \mid 0_3 \,]
\begin{bmatrix} R & T \\ 0_3^{T} & 1 \end{bmatrix} \tilde{M}
\tag{2.2}
\]

Alternatively, after combining the matrices, equation 2.2 can be written as

\[
\lambda \tilde{m} \;=\; K \,[\, R \mid T \,]\, \tilde{M}
\tag{2.3}
\]

From the corresponding mapping of 3D feature points to 2D image points, pose is estimated.
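As an illustration only (the thesis itself uses the POSIT algorithm, and the code for its vision system is given in the appendices; this sketch instead uses OpenCV's general solvePnP routine, and the 3D model points, image points and intrinsic parameters below are made-up placeholders), recovering [R|T] from 2D-3D correspondences can look like this:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical 3D model points of an object (object frame, millimetres).
    std::vector<cv::Point3f> modelPoints = {
        {0, 0, 0}, {40, 0, 0}, {40, 40, 0}, {0, 40, 0}};
    // Their detected 2D projections in the image (pixels) -- placeholders.
    std::vector<cv::Point2f> imagePoints = {
        {320, 240}, {420, 238}, {422, 338}, {318, 342}};

    // Intrinsic matrix K built from an assumed focal length and principal point.
    double f = 800.0, x0 = 320.0, y0 = 240.0;
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, x0, 0, f, y0, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);  // assume no lens distortion

    // Estimate the pose [R|T] of the object relative to the camera.
    cv::Mat rvec, tvec;
    cv::solvePnP(modelPoints, imagePoints, K, dist, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);  // rotation vector -> 3x3 rotation matrix
    std::cout << "R = " << R << "\nT = " << tvec.t() << std::endl;
    return 0;
}
```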

2.1.4 Grasping module

The grasping module used for robotic manipulation tasks mainly comprises the following two sub-modules:

1. Grasp planning
2. Grasp execution

Grasp planning has been an important research topic in the field of dexterous manipulation for many years. Okamura et al. [2000] presented a survey of existing techniques in dexterous manipulation along with a comparison between robotic and human manipulation. Rao et al. [1989] proposed eight possible grasps using a three-fingered gripper. Most of this research assumes that the geometry of the object is known before the grasping process starts. A very important concept in grasp planning is how to use this information to filter out unstable grasp points, such as the corners of the object. The main task is to select a particular type of grasp, i.e. to take the final decision of using two or three fingers for a particular object based on its geometry. In general, grasping an object means building a relationship between the manipulator and the object model. Sometimes the complete information about the object model is hardly known; in that case, grasp planning is imprecise and unreliable. So, instead of using the information about the object model directly, the object features obtained by the vision system and a binary image containing the contours of the objects can be used. The contour images serve as input to the grasp computing block inside the planning module, which produces a corresponding output from a database of grasps; this output in turn serves as input to the planning module. The database of grasps contains the following categories of grasps:

1. Valid grasps: those that satisfy a particular grasping criterion and can be used for grasping, but a stable grasp is not ensured and needs verification.
2. Best grasps: these are a subset of the valid grasps and ensure stable grasping.


3. Invalid grasps: those that do not satisfy a particular grasping criterion and cannot be used for grasping.

Finally, the planning module selects an appropriate grasp from this database and passes this information to the execution module. The grasp execution module is a control module responsible for executing the selected grasp; it is also responsible for initializing both the gripper and the manipulator arm before grasping, and for moving the arm towards the object location [Morales et al., 2006].

Grasp region determination

The contiguous regions on the contour that comply with the finger adaptation criterion are called grasp regions [Morales et al., 2006]. The preliminary step in grasp point determination is the extraction of grasping regions from a binary image containing the object contours. An example contour image is shown in Figure 2.4. The contour-based approach is one of the most commonly used techniques for shape description and grasping region selection. The main idea behind this approach is to extract meaningful information from curves, for example by finding the vertices and line segments in the image which are highly significant for grasping point selection. Some other research works prefer polygonal approximation using linking and merging algorithms [Rosin, 1997].

Figure 2.4: Contour image

A curvature function along with a fixed threshold (threshold 1) is used to process all the contour points [Morales et al., 2006]. Threshold 1 helps in finding the continuous edge points for which a line segment can be approximated. The outcome of this step is a set of line segments. These line segments are processed further using a second threshold (threshold 2, selected based on the finger properties), such that all segments below threshold 2 are rejected. The remaining edge line segments are the valid grasping regions. This type of approach fails for objects such as the one shown in Figure 2.5: for those objects, most of the edge points lie below threshold 1,


and therefore an approximation by edge line segments is not possible. For such objects, the longer regions are broken down into small pieces, and these pieces can be approximated as straight lines.

Figure 2.5: Contour image

Once the grasp regions are selected, the next step is to compute the grasp points where the robot can take hold of the object. In order to find good grasp points, we need to find compatible regions among all the available valid regions. For a two-finger grasp, finding the compatible grasp regions can be performed iteratively by selecting two regions at a time and validating them. The validation is performed by finding the normal vectors of the selected regions and projecting the regions in the direction of their normal vectors; if there is any intersection between the projected regions, the regions are said to be compatible for grasping. Once the compatible regions are selected, the midpoints of these regions serve as the grasping points for the gripper. For a three-finger grasp, a similar approach is used for finding the compatible regions and grasp points, as explained by Morales et al. [2006].
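As an illustration of the two-finger compatibility test just described (the Region representation, the coordinates and the tolerance are made-up simplifications, not the thesis implementation), a minimal check might look like this:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <utility>

struct Vec2 { double x, y; };

static Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static Vec2 normalize(Vec2 v) { double n = std::sqrt(dot(v, v)); return {v.x / n, v.y / n}; }

// A grasp region approximated as a straight segment with an outward normal.
struct Region {
    Vec2 a, b;    // endpoints of the segment
    Vec2 normal;  // outward normal of the contour at this region
    Vec2 midpoint() const { return {(a.x + b.x) / 2.0, (a.y + b.y) / 2.0}; }
};

// Simplified compatibility test: the normals must be roughly antiparallel, and the
// projections of both segments onto the tangent of the first region must overlap
// (i.e. one region "sees" the other when projected along its normal direction).
bool compatible(const Region& r1, const Region& r2, double cosTol = 0.9) {
    if (dot(normalize(r1.normal), normalize(r2.normal)) > -cosTol) return false;

    Vec2 t = normalize(sub(r1.b, r1.a));  // tangent of region 1
    auto interval = [&](const Region& r) {
        double p = dot(sub(r.a, r1.a), t), q = dot(sub(r.b, r1.a), t);
        return std::make_pair(std::min(p, q), std::max(p, q));
    };
    auto i1 = interval(r1), i2 = interval(r2);
    return i1.first <= i2.second && i2.first <= i1.second;  // interval overlap
}

int main() {
    // Two opposite, parallel regions of a rectangular contour (made-up coordinates).
    Region left  {{0, 0},  {0, 50},  {-1, 0}};
    Region right {{30, 5}, {30, 45}, { 1, 0}};

    if (compatible(left, right)) {
        Vec2 g1 = left.midpoint(), g2 = right.midpoint();
        std::cout << "grasp points: (" << g1.x << "," << g1.y << ") and ("
                  << g2.x << "," << g2.y << ")\n";
    } else {
        std::cout << "regions not compatible\n";
    }
    return 0;
}
```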

2.2 Resource description

2.2.1 Robot

The robot used in this thesis is the ABB IRB 140B, shown in Figure 2.6. It is a six-axis articulated robotic manipulator which allows arbitrary positioning and orientation within the robot's workspace. The geometrical and mechanical structures of the robot are shown in Figures 2.7 and 2.8, respectively. The accuracy of this robot is very high, with a position repeatability of +/- 0.03 mm. The maximum payload handling capacity of the robot is 6 kg. The axis specifications and joint velocities of the robot are shown in Table 2.1.


Figure 2.6: ABB IRB 140B robotic manipulator [ABB, 2004]

Figure 2.7: Robot geometry [ABB, 2004]

Figure 2.8: Robot mechanical structure [ABB, 2004]


Axis No.   Axis          Range                       Velocity
1          C, Rotation   360°                        200°/s
2          B, Arm        200°                        200°/s
3          A, Arm        280°                        260°/s
4          D, Wrist      Unlimited (400° default)    360°/s
5          E, Bend       240°                        360°/s
6          P, Turn       Unlimited (800° default)    450°/s

Table 2.1: Robot axis specifications

Robot coordinate systems

The position and motion of the robot are always related to the robot Tool Center Point (TCP), which is located in the middle of the defined tool. For any application, several tools can be defined, but only one tool or TCP is active at any particular point in time or point of a movement. The TCP coordinates are defined with respect to the robot base coordinate system, or the tool can use its own coordinates. Figure 2.9 shows the robot base and tool coordinate systems. For many applications, TCP coordinates are recorded with respect to the robot's base coordinate system. The base coordinate system of the robot can be described

Figure 2.9: Robot base and tool coordinate systems

as follows:

- It is located on the base of the robot.
- The origin is located at the intersection of axis 1 and the robot's base mounting surface.
- The xy-plane coincides with the base mounting surface, such that the x-axis points forward from the base and the y-axis points to the left (from the robot's perspective).


- The z-axis points upward and coincides with axis 1 of the robot.

The tool is always mounted on the mounting flange of the robot and requires its own coordinate system in order to define its TCP. The orientation of the tool at any particular programmed position is determined by the orientation of the tool coordinate system. This coordinate system is also used to obtain appropriate motion directions when jogging the robot. The tool coordinate system is often referenced to the wrist coordinate system of the robot (see Figure 2.10). The coordinate system of the wrist can be described as follows:

Figure 2.10: Robot wrist coordinate system [ABB, 2004]

- The wrist coordinate system always remains the same as the mounting flange of the robot.
- The origin (TCP) is located at the center of the mounting flange.
- The z-axis points outwards from the mounting flange.
- At the point of calibration, the x-axis points in the opposite direction, towards the origin of the base coordinate system.
- The y-axis points to the left and can be seen as parallel to the y-axis of the base coordinate system.

Robot motion controller

The motion of the ABB IRB 140 is controlled by a special-purpose fifth-generation robot controller, the IRC5, designed by ABB Robotics. The IRC5 controller is embedded with all the functions and controls needed to move and control the robot. It combines motion control, flexibility and safety with PC tool support, and optimizes the robot performance for short cycle times and precise movements. Because of its MultiMove function, it is capable of synchronizing the control of up to four robots. The standard IRC5 controller supports


the high-level robot programming language RAPID and also features a well-designed hand-held interface unit called the FlexPendant (teach pendant), which is connected to the controller by an integrated cable and connector.

Robot software and programming

RobotStudio is a computer application for the offline creation, programming and simulation of robot cells. This application is used to simulate the robot in offline mode and to work on the controller directly in online mode, as a complement to the FlexPendant. Both the FlexPendant and RobotStudio are used for programming: the FlexPendant is best suited for modifying positions and path sequences in a program, whereas RobotStudio is used for more complex programming (e.g. socket programming).

2.2.2 Grasping system

The grasping system used in this thesis consists of two components, namely:

1. A flexible gripper for grasping objects.
2. A flexible fixture for fixing the grasped objects.

Both components were designed and prototyped at the AASS research laboratory.

Flexible gripper

The gripper prototype shown in Figure 2.11 has three identical single-joint fingers, providing a balance between functionality and an increased level of dexterity compared to standard industrial grippers. The base (palm) increases the functionality of the gripper by providing the possibility of different relative orientations of the fingers. One of the fingers is fixed to the base, while the other two can symmetrically rotate up to 90 deg each. Driven by four motors, the gripper is capable of:

- Grasping workpieces of various shapes and sizes (4 to 300 mm).
- Holding the part rigidly after grasping.

Table 2.2 provides the technical configuration of the flexible gripper.


Figure 2.11: Flexible gripper

Flexible Gripper FG 2.0
Type: 3-fingered, servo driven
Gripping method: Outside
Payload: 2.5 kg
Gripping force for every finger: max. 30 N
Course for every finger: 145 mm
Time for total finger course: 10 s
Rotation speed of the finger: max. 14.5 mm/s
Rotation degree of moved fingers: max. 90°
Dimensions: 397 × 546 × 580

Table 2.2: Flexible gripper technical configuration [Ananiev, 2009]

Sensory system The sensory system equipped with the gripper prototype provides a vital feedback information to implement a closed loop control of the gripper (nger) movement. Mainly two types of sensors are used in this system. They are: 1. Tactile sensors for touch information. 2. Limit switches6 for position information.
6 Limit switches are the switching devices designed to cut off power automatically at the limit of travel of a moving object


In the latest prototype of the gripper, the contact surfaces of the fingers are covered with tactile sensing pads to enable tactile feedback during grasping movements. These sensors are based on force-sensing resistors, whose resistance changes whenever a finger comes into contact with an object. Limit switches, which are commonly used to control the movement of mechanical parts, are the second type of sensor used in this system. The limit switches fitted to the three fingers of the gripper are used to determine coarse finger positions, while the attached optical shaft encoders estimate the precise positions of the fingers.

Finger configurations

The three available configurations for the finger movement are shown in Figures 2.12, 2.13 and 2.14. Table 2.3 shows the number of fingers used and the suitable type of grasping for each finger configuration.

Figure 2.12: Finger conguration 1

Figure 2.13: Finger conguration 2


Figure 2.14: Finger conguration 3

Configuration   No. of fingers used   Type
1               3                     Grasp long objects
2               3                     Grasp circular objects
3               2                     Grasp small objects

Table 2.3: Flexible gripper finger configuration

Flexible fixture

Figure 2.15: Flexible fixture

The fixture prototype shown in Figure 2.15 features 4 DOF and consists of one driving module and two pairs of connecting and grasping modules. Table 2.4 provides the complete technical configuration of the fixture. The driving module consists of two identical parts connected by an orienting couple. The two types of movement provided by the driving module are linear and rotary. Linear movement is accomplished by using a gear motor and a ball-


screw pair that converts the rotary motion of the nut into reciprocal motion of a sliding block connected to it, whereas rotary movement is accomplished by a gear motor passing rotary motion via a toothed-belt connection to the ball-linear pairs. The connecting modules are responsible for holding the grasping modules containing the finger pairs. The fixture is designed in such a way that it can open and close the finger pairs independently of each other. The main benefits of this fixture architecture are:

- Grasping a wide range of objects with different shapes and sizes (see Table 2.5 for the allowed dimensions).
- Holding the objects firmly.
- Self-centering of the objects.
- Precise control over the horizontal movement of the holding modules and the vertical movement of the fingers.

Type: 4-fingered flexible fixture
Type of grasp: Inside and outside
Driving: Electromechanical, 24 V
Run of every holder: 50 mm
Force of grasping: 2500 N
Time for full travel of the holders: 2.5 s
Operational time for grasping: 0.5 s
Maximum rotational speed: 360°/s
Maximum torque: 7.5 N.m
Maximum angle of rotation: 210°
Positioning accuracy: 0.05
Max. weight of the grasped detail: 10 kg
Dimensions: 550 mm × 343 mm × 150 mm
Weight: 22 kg

Table 2.4: Flexible fixture technical configuration

Motion control

The motions of both the gripper and the fixture are controlled by two separate Galil DMC-21x3 Ethernet motion controllers. Figure 2.16 shows the Galil DMC-21x3 motion controller. With a 32-bit microcomputer, the Galil DMC-21x3 provides advanced features such as PID compensation with velocity and acceleration, program memory with multitasking, and uncommitted I/O for synchronizing motion with external events. The encoder


Shape                Min. size (mm)   Max. size (mm)
Cylindrical          30               165
Square / rectangle   30               160
Hexagonal            25               85
Inside grasping      90               220

Table 2.5: Flexible fixture grasp dimensions range

Figure 2.16: Galil motion controller

and limit switch information can be accessed through Galil's special-purpose software tools for motion controllers (GalilTools). This tool is also used for sending and receiving Galil commands. The integrated Watch Tool is used to monitor the controller status, such as I/O and motion, throughout the operation. The GalilTools C++ communication library (the Galil class is compatible with the g++ compiler in Linux) provides various methods for communicating with a Galil motion controller over Ethernet [gal].
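As a rough illustration only (the controller address and command strings below are placeholders, and the header name, constructor and command() usage are assumptions about the GalilTools C++ library rather than code taken from the thesis), sending commands to the controller over Ethernet might look like this:

```cpp
#include <Galil.h>   // GalilTools C++ communication library (assumed header name)
#include <iostream>
#include <string>

int main() {
    try {
        // Address of the DMC-21x3 controller on the local network (placeholder).
        Galil g("192.168.1.10");

        // Standard two-letter Galil mnemonics: servo-enable axis A, set speed,
        // command a relative move and begin motion on that axis.
        g.command("SHA");
        g.command("SPA=2000");
        g.command("PRA=1000");
        g.command("BGA");

        std::cout << "Commands sent to controller" << std::endl;
    } catch (std::string& e) {   // GalilTools reports errors as string exceptions (assumed)
        std::cerr << e << std::endl;
        return 1;
    }
    return 0;
}
```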

2.2.3 Camera

Figure 2.17: Camera

The camera used in this thesis is a Logitech Webcam Pro 9000 (see Figure 2.17). This camera is fixed at the midpoint of the flexible gripper. An illumination source is integrated with the camera in order to provide a uniform light distribution over the viewed scene. Table 2.6 gives the technical specifications of the camera.


Focal length: 3.7 mm
Lens iris: F/2.0
Megapixels: 2 (enhanced up to 8)
Focus adjustment: Automatic
Video resolution: 1600 × 1200
Image resolution: 1600 × 1200
Optical sensor: CMOS
Frame rate: Up to 30 FPS
Communication: USB 2.0

Table 2.6: Camera specifications

2.3 Functions in FMS

As explained in Section 1.1, a flexible manufacturing system is a "highly automated group technology (GT) machine cell, consisting of a group of workstations, interconnected by an automated material handling and storage system and controlled by a distributed computer system" [Groover, 2000]. These types of systems are mainly designed to produce various parts defined within a range of different sizes and shapes. They are a group of computer-guided machines used to produce various products based on the controller (CNC machine) instructions [Mitchell]. The important characteristic of these systems is flexibility, which comes from their capability to handle different families of parts and their adaptability to task changes. Generally, FMSs can be distinguished into various types, such as the single machine cell, the flexible manufacturing cell and the flexible manufacturing system, based on the number of CNC machines used for operation [Leicester, 2009]. A single machine cell consists of only one CNC machine, while a flexible manufacturing cell consists of two or three CNC-controlled workstations along with automated handling tools. The flexible robot assembly cell used for this thesis consists of:

- A CNC machine tool responsible for controlling the overall assembly operation.
- An automated robot whose movements can be controlled by programming.
- A cell computer, which is used to control the program flow and to coordinate the activities of the workstation by communicating with the CNC. It is also used to monitor the work status.


Figure 2.18: Simulated view of robot-centered FMS [Festo solution center]

These types of systems are also called robot-centered FMS. A simulated view of one such system is shown in Figure 2.18. All the required instructions for the flexible assembly cell are programmed and loaded into the controllers before the assembly operation starts. All the components of the flexible assembly cell are connected to a common network, so that communication between the different machines and the cell computer is performed over a high-speed intranet. One main advantage of this model is that all the instructions required for the task can be programmed offline, tested in a simulation environment, and then executed online directly on the robot. The basic functionalities provided by this flexible assembly cell are:

- Sensorial perception and status identification.
- Interactive planning in order to assure stable performance under process changes.
- On-line decision-making capabilities such as process monitoring, error recovery and handling of tool breakage.

2.4 Summary

This chapter has provided an overview of the previous work done in the field of vision-based grasping, along with a survey of existing techniques relevant to grasp planning methodologies, eye-in-hand control and the extraction of workpiece geometrical information from images. Technical details of the various resources used in this thesis, such as the robot, CNC controller, camera, flexible gripper and flexible fixture, together with a short description of the flexible assembly cell and its functionalities, were also provided. The concepts mentioned in this chapter are relevant to the main contributions of this thesis.

Chapter 3

Developed system
This chapter proposes a solution to the problem of grasping various workpieces from the robot workspace and assembling them by making use of the eye-in-hand vision system. It also discusses the developed system architecture along with the techniques used to solve the problem.

3.1 Experimental setup

In order to demonstrate the capabilities of the developed system, a test assembly containing four different workpieces has been designed, as shown in Figure 3.1. These four objects differ from each other in size and shape and are initially placed at different locations in the robot workspace. The main goal of the system is to identify the workpieces separately using the information received from the camera and to develop a planning methodology for controlling the entire posture of the arm and gripper in order to assemble them. The parts identification sequence is shown in Figure 3.2.

Figure 3.1: Test assembly


Figure 3.2: Test assembly parts sequence

3.2 System architecture

A conceptual diagram of the proposed system is shown in Figure 3.3. The

Figure 3.3: System architecture

complete architecture is divided into four different subsystems, described as follows:


- Camera subsystem is responsible for controlling the camera operations and updating the field of view. The Open Computer Vision Library (OpenCV) is used on the Linux platform to capture images from the camera. The images acquired from the camera serve as input to the image processing block inside the controller subsystem.
- Controller subsystem is a combination of various blocks and is mainly responsible for synchronizing the different actions inside the system. As previously stated, the image processing and object identification block inside this subsystem receives the image information from the camera subsystem and produces an output containing the object feature information. This output serves as the input for the shape and position identification block, which in turn produces an output containing the object's spatial information. This information is fed to the grasp planning block, and an overall plan for grasping is developed. The two interface blocks for the arm and Galil motion controllers are used to communicate with the ABB IRB 140 robot arm and with the flexible gripper and fixture, respectively. These two interface blocks serve as a bridge between the software and the hardware.
- Robot arm subsystem is responsible for controlling the arm actions based on the information received from the controller subsystem. The task of this subsystem is to send/receive the position information of the robot end effector to/from the controller. This subsystem is developed using ABB's RAPID programming language.
- Fixture and gripper subsystems are responsible for controlling the actions of the flexible fixture and gripper, respectively, based on the information received from the controller subsystem. These subsystems were developed using the special-purpose Galil programming language for motion controllers.

3.2.1 Work flow

The flow chart in Figure 3.4 provides a step-by-step solution to the problem of the vision-based assembly operation. As the system operates in a similar manner for all the parts, a single cycle is displayed in the diagram. The system runs this cycle every time the variable PN (part number) increments. The work flow starts by initializing the robot, gripper and fixture; in this step, all three devices are moved to a predefined position and orientation. In the next step, the camera device and the MATLAB engine are initialized; if any problem occurs during this step, the system execution is stopped. Even though the fixture and the gripper operate in a similar way, their execution steps are shown separately because the fixture is responsible only for holding the shaft (PN = 1), while the remaining parts are mounted on this shaft. As the camera is fixed inside


the gripper, it cannot cover the whole workspace in a single frame. Therefore the total region is divided into several subregions, for which the predefined positions POS1, POS2 and POS3 are specified. All workpieces should be found in these three subregions. If the initialization process is successful, the system starts executing the main cycle by setting the variable PN to 1. As the primary step of this cycle, the robot arm is moved to the predefined position POS1 in order to search for a particular workpiece in its field of view. If the workpiece is identified at this position, the system starts executing the next steps; otherwise the arm moves to the next predefined position POS2 to find the part. Once the part is identified, its position and shape details are computed and the arm is commanded to move into a pre-grasping pose. At this point, the gripper's fingers are adjusted depending on the received shape information. In the next step, the arm is commanded to move close to the object and the gripper grasps the workpiece. Once the grasping operation is successful, the camera is no longer in use and the arm is moved to a predefined intermediate position in order to avoid collision with the fixture while fixing the part. At this point of the operation, the fixture jaws are arranged so as to hold the object. As the final step of this cycle, the arm moves over the fixture and delivers the grasped part. The whole cycle is executed repeatedly for all the parts, and system execution stops once the assembly operation is finished.
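A minimal sketch of the control cycle just described, using hypothetical stub functions (initializeDevices, moveArmTo, searchForPart, graspPart, fixPart) that stand in for the RAPID, Galil and vision interfaces; it only mirrors the flow chart and is not the thesis implementation:

```cpp
#include <array>
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical stand-ins for the arm, gripper, fixture and vision interfaces;
// real implementations would talk to the robot controller, GalilTools and OpenCV.
bool initializeDevices() { std::cout << "init robot/gripper/fixture/camera\n"; return true; }
void moveArmTo(const std::string& pos) { std::cout << "move arm to " << pos << "\n"; }
bool searchForPart(int pn, const std::string& pos) {
    std::cout << "search part " << pn << " at " << pos << "\n"; return true;
}
void graspPart(int pn) { std::cout << "grasp part " << pn << "\n"; }
void fixPart(int pn) { std::cout << "fix part " << pn << "\n"; }

int main() {
    const int numParts = 4;
    const std::array<std::string, 3> searchPositions = {"POS1", "POS2", "POS3"};

    if (!initializeDevices())
        return 1;  // stop execution if camera / MATLAB engine initialization fails

    for (int pn = 1; pn <= numParts; ++pn) {        // PN (part number) increments each cycle
        bool found = false;
        for (const auto& pos : searchPositions) {   // visit the predefined subregions
            moveArmTo(pos);
            if (searchForPart(pn, pos)) { found = true; break; }
        }
        if (!found)
            throw std::runtime_error("part not found: assembling error");

        graspPart(pn);               // pre-grasp pose, finger adjustment, grasp
        moveArmTo("INTERMEDIATE");   // avoid collision with the fixture
        fixPart(pn);                 // fixture holds the shaft (PN = 1); later parts mount on it
    }
    return 0;
}
```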


Figure 3.4: Work flow diagram


3.3 Object identification

This section describes the methods developed to identify the workpieces from images. The workpieces in this assembly operation are a cylindrical shaft and two circular gears, which were already shown in the test assembly setup. As stated earlier in Section 3.1, these workpieces have to be identified in a specific order during the assembly execution. The implementation procedures for part identification are explained below.

3.3.1 Shaft recognition

The procedure used to recognize the shaft was implemented in MATLAB and is based on image background subtraction and boundary extraction algorithms. The pseudo code in Table 3.1 presents the basic steps of the procedure. Initially, a background image of the workspace is captured without any workpieces. For every new frame from the camera, the proposed shaft recognition algorithm subtracts the new frame from the background image. The result of this step is a binary image containing the regions of interest (ROI) of the different objects. Figures 3.5 and 3.6 show the background image, the original image and the resulting binary image after image subtraction. The area of an ROI has to be greater than a certain threshold in order to avoid the recognition of small objects; this threshold is chosen manually by trial and error. Next, in order to determine the type of the workpiece, the curves that compose the boundaries are found in a systematic way. A threshold is obtained to discriminate between the shaft and the remaining work objects: the boundary region containing a total number of curves below this threshold is taken as the region containing the shaft. Figure 3.7 shows the extracted curve pixels of a boundary region. As a cross-check, another threshold is applied based on the difference between the major axis and the minor axis of the found region; this step is performed in order to eliminate regions misjudged in the previous step. The final region after this step is taken to be the shaft region. The object orientation is determined from the angle between the major axis of the detected region and the X-axis. The source code for shaft identification is given in Appendix A.


Figure 3.5: (A) Background image (B) Current frame

Figure 3.6: Subtracted image

Figure 3.7: Curve pixels of the shaft


Table 3.1: Pseudo code for shaft recognition

1:  Capture the background image BG
2:  Subtract each new frame from BG
3:  for each resulting region
4:      if (area > areathreshold)
5:          add this region to ROI
6:      end if
7:  end for
8:  for each resulting ROI
9:      extract the boundaries
10:     for each resulting boundary
11:         extract the curves
12:     end for
13:     for each extracted curve
14:         find the number of curve pixels NCP
15:     end for
16:     if (NCP < curvethreshold)
17:         find FV^T = [M1, M2, O, A] where
18:             FV is a vector containing object features
19:             M1 is the major axis of the region
20:             M2 is the minor axis of the region
21:             O is the orientation of the major axis
22:             A is the area of the region
23:         find difference D = M1 - M2
24:         set the value of found to one
25:     end if
26: end for
27: if (found = 1) and (D > shaftthreshold)
28:     compute the centroid
29:     return shaft found
30: else
31:     return shaft not found
32:     go to step 2
33: end if
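A minimal OpenCV/C++ sketch of the same idea (the thesis implementation is in MATLAB, given in Appendix A; the file names and threshold values here are placeholders): background subtraction, selection of a sufficiently large region, and an elongation/orientation test via a fitted rotated rectangle and image moments.

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Placeholder file names; in the real system frames come from the gripper camera.
    cv::Mat bg = cv::imread("background.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (bg.empty() || frame.empty()) return 1;

    // Background subtraction followed by thresholding gives a binary ROI image.
    cv::Mat diff, bin;
    cv::absdiff(frame, bg, diff);
    cv::threshold(diff, bin, 40, 255, cv::THRESH_BINARY);   // value chosen by trial and error

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    const double areaThreshold = 500.0;       // reject small regions (placeholder value)
    const double elongationThreshold = 2.0;   // the shaft is much longer than it is wide

    for (const auto& c : contours) {
        if (cv::contourArea(c) < areaThreshold) continue;

        // Fit a rotated rectangle: its side lengths play the role of major/minor axes.
        cv::RotatedRect rr = cv::minAreaRect(c);
        double major = std::max(rr.size.width, rr.size.height);
        double minor = std::min(rr.size.width, rr.size.height);
        if (minor <= 0 || major / minor < elongationThreshold) continue;

        // Centroid from image moments; rr.angle gives the orientation of the region.
        cv::Moments m = cv::moments(c);
        double cx = m.m10 / m.m00, cy = m.m01 / m.m00;
        std::cout << "shaft found at (" << cx << ", " << cy
                  << "), orientation " << rr.angle << " deg\n";
        return 0;
    }
    std::cout << "shaft not found\n";
    return 0;
}
```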

3.3.2 Gear recognition

The procedure used to recognize the circular gears was developed using OpenCV with C++ under Ubuntu. The pseudo code in Table 3.2 presents the basic steps of the implementation. Once the shaft is recognized and fixed in the fixture, the system starts searching for the circular gears. The approach for recognizing both circular gears is the same and is based on the Hough transformation for circle detection [Yi, 1998; CV, 2000]. The pseudo code for gear recognition can be explained as follows. Images from the camera are captured continuously. Before starting the actual search for circles, the captured frames require preprocessing: each frame is converted into a gray scale image and undergoes histogram equalization in order to normalize the intensity levels. A median filter of size 11 × 11 is then applied to the equalized image to reduce the noise level. Once the preprocessing stage is completed, edge features are extracted from the image using the Canny edge detection algorithm. The Canny edge detector applies two thresholds, which are used for edge linking and for finding the initial segments of strong edges, respectively. These two thresholds are obtained by trial and error. The result of this step is a binary image containing the edges. Figure 3.8 shows the original image along with its computed edges.

Figure 3.8: (A) Original image (B) Edge image

The proposed algorithm uses this edge image to search for the presence of Hough circle candidates with a specific radius. If one or more circle candidates are found, the captured image is considered for the rest of the recognition process; otherwise it is discarded and a new frame is captured. In the first case, if more than one circle candidate is found, a filter is applied to the found circle candidates in order to limit their total count to two (assuming that both gears are placed in the same subregion). This filter is designed based on the manually measured radii of the two gears. As a next step, the radii of these circles are computed; the circle with the smallest radius that satisfies the small-gear radius condition is taken to be the smaller gear, and the circle with the largest radius that satisfies the big-gear radius condition is taken to be the bigger gear. Figure 3.9 shows the recognized gears. On the other hand, if the gears are placed in different subregions, i.e. if only one circle candidate is found, two different thresholds are obtained (based on their radii) to recognize the gears. At a particular position, the time to search for and recognize the gears is fixed, and if the gears are not recognized within this time the robot is commanded to move to one of the other predefined positions POS1, POS2 or POS3, as mentioned earlier. If either of the two gears is missing from the robot workspace, the system produces an assembly error message and the program execution is stopped. The source code for gear identification is given in Appendix B; a compact sketch using the modern OpenCV interface is shown below.
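The sketch below shows the same radius-based gear classification using the current OpenCV C++ interface (cv::HoughCircles) rather than the legacy C API of Appendix B. The radius windows (50-70 pixels for the small gear, 70-90 pixels for the big gear) are the illustrative values from the appendix and would need to be re-measured for a different camera setup.

// Gear detection sketch with the modern OpenCV C++ API. Radius limits are
// taken from the legacy code in Appendix B and are only illustrative.
#include <opencv2/opencv.hpp>
#include <vector>

bool findGear(const cv::Mat& frame, bool smallGear, cv::Point2f& center)
{
    cv::Mat gray, eq, blurred, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, eq);                       // normalize intensity levels
    cv::medianBlur(eq, blurred, 11);                  // 11 x 11 median filter
    cv::Canny(blurred, edges, 80, 120);               // edge image (cf. Figure 3.8);
                                                      // HoughCircles applies its own Canny internally
    std::vector<cv::Vec3f> circles;                   // each entry is (x, y, radius)
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1, blurred.rows / 50.0, 120, 35);

    const float rMin = smallGear ? 50.f : 70.f;       // radius window for this gear
    const float rMax = smallGear ? 70.f : 90.f;
    for (const cv::Vec3f& c : circles) {
        if (c[2] > rMin && c[2] < rMax) {
            center = cv::Point2f(c[0], c[1]);         // centroid pixel coordinates
            return true;
        }
    }
    return false;                                     // gear not visible in this frame
}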

Figure 3.9: Recognized gears


Table 3.2: Pseudo code for gear recognition

 1: while (found = false)
 2:     capture a frame from the camera
 3:     convert the image to gray scale
 4:     equalize the image
 5:     filter the image with a median filter of size 11 x 11
 6:     find edges in the image using the Canny edge detector
 7:     search for Hough circles in the binary edge image
 8:     if (circles_found > 1)
 9:         for (circle_candidate = 1 to 2)
10:             compute the radii R1 and R2
11:             if (SMALL GEAR)
12:                 if (R1 < R2) and (small_min < R1 < small_max)
13:                     compute the centroid of the circle
14:                     return the centroid pixel coordinates
15:                     found = true
16:                     break the for loop
17:                 else if (R2 < R1) and (small_min < R2 < small_max)
18:                     do steps 13 to 16
19:                 end if
20:             else if (BIG GEAR)
21:                 if (R1 > R2) and (big_min < R1 < big_max)
22:                     do steps 13 to 16
23:                 else if (R2 > R1) and (big_min < R2 < big_max)
24:                     do steps 13 to 16
25:                 end if
26:         end for
27:         break the while loop
28:     end if
29:     else if (circles_found == 1)
30:         compute the radius R1
31:         if (SMALL GEAR)
32:             if (small_min < R1 < small_max)
33:                 do steps 13 to 16
34:         else if (BIG GEAR)
35:             if (big_min < R1 < big_max)
36:                 do steps 13 to 16
37:             end if
38:     else
39:         go to step 2
40: end while

3.4 Automatic planner for arm posture control

Once a workpiece is identified in the robot workspace, the next step is to compute its spatial location with respect to the robot and its possible stable grasping state depending on its orientation. Based on this information the arm's posture can be controlled automatically. This is performed in several steps.

1. Initially the robot is moved to a predefined position in the robot's workspace. As the robot is calibrated, its TCP position coordinates are available for further computation.

2. The next step is to find the camera location in robot world coordinates. This is performed by calibrating the camera with respect to the robot. The camera calibration procedure used for this thesis is described in Appendix C.

3. From the above step, a final transformation matrix containing the camera rotation R (3 × 3) and translation T (3 × 1) with respect to the robot coordinate system is computed. This transformation matrix, together with the camera intrinsic parameter matrix K, is used to compute the object's location in the robot frame. This can be explained as follows. Let us consider a 2D point m(u, v) in the image which corresponds to the centroid of the recognized object. For a 2D point m in an image, there exists a collection of 3D points that are mapped onto the same point m. This collection of 3D points constitutes a ray P(λ) connecting the camera center O_c = (x, y, z)^T and the point m̃ = (x, y, 1)^T, where λ is a positive scaling factor that defines the position of the 3D point on the ray. The value of λ is taken as the average value of the back-projection error of a set of known points in 3D. This value is used to obtain the X and Y coordinates of the 3D point using

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_M = T + \lambda\, R^{-1} K^{-1} \tilde{m} \qquad \text{(3.1)}$$

As the used vision system is monocular, it is not possible to compute the value of Z, i.e. the distance between the object and the camera; instead, it is obtained from the robot TCP coordinates and the object model. A minimal code sketch of this back-projection step is given after this list.

4. The next step is to compute the orientation of the object in the robot workspace. This is performed by fitting the detected object region (in the image) to an ellipse; the orientation of the object then corresponds to the orientation of the major axis of the ellipse with respect to its x-axis. Based on this orientation, a final rotation matrix, also called the direction cosine matrix, is computed using the current camera rotation. This rotation matrix is converted into quaternions, which are used to command the robot's final orientation. As the used vision system is monocular, this approach has some limitations, which are explained in Section 3.6.

5. Once the object orientation is computed, the next step is to drive the robot towards the object. The final position coordinates (X, Y, Z) of the object computed in the third step, together with the orientation quaternions computed in the previous step, are used to move the robot to a location from which it can grasp the object. The robot movement and positioning instructions are explained in Section 3.5.1.

6. Once the robot has moved to the grasping location, a suitable grasp type (two- or three-fingered) is selected automatically based on the object's orientation.
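As an illustration of step 3, the sketch below back-projects an image centroid to a 3D point in the robot frame following Eq. (3.1). Eigen is used here only for the matrix algebra and is not part of the original implementation; the calibration values and the scaling factor are placeholders.

// Back-projection of an image point into robot coordinates, Eq. (3.1):
//   [X Y Z]^T = T + lambda * R^-1 * K^-1 * m~
// K, R, T and lambda are placeholder values standing in for the calibration
// results of Appendix C.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Matrix3d K;                 // camera intrinsic parameter matrix
    K << 800, 0, 320,
           0, 800, 240,
           0,   0,   1;

    Eigen::Matrix3d R;                 // camera rotation w.r.t. the robot base
    R << -1, 0, 0,
          0, 1, 0,
          0, 0, -1;

    Eigen::Vector3d T(421, 0, 477);    // camera translation (mm), cf. Appendix C
    double lambda = 480.0;             // scaling factor along the viewing ray

    Eigen::Vector3d m(300, 250, 1);    // homogeneous image point (u, v, 1)
    Eigen::Vector3d M = T + lambda * R.inverse() * K.inverse() * m;

    // With a monocular camera Z is not observable; in the real system it is
    // replaced by a value derived from the robot TCP and the object model.
    std::cout << "object position in robot frame: " << M.transpose() << std::endl;
    return 0;
}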

3.5 Synchronizing arm and gripper motions

The motions of the robot, the gripper and the fixture are controlled independently by sets of predefined functions defined in their respective motion controllers. As both the gripper and fixture motions are controlled in a similar manner, the fixture is not described separately in this section.

3.5.1 Arm motion control

As explained before in Section 2.2, the robot's motion is controlled by the ABB IRC5 controller. The software embedded in this controller provides libraries with predefined functions and instructions, written in RAPID, for arm motion control. For this project, a RAPID program was developed containing all the instructions required to control the robot's motion autonomously and to communicate with the cell computer. The movements of the robot are programmed as pose-to-pose movements, i.e. the robot moves from the current position to a new position and automatically calculates the path between these two positions. The basic motion characteristics (e.g. the type of path) are specified by choosing appropriate positioning instructions. Both the robot and the external axes are positioned by the same instructions. Some of the positioning instructions used for this thesis are shown in Table 3.3, and an example program is shown in Figure 3.10.

Table 3.3: Robot positioning instructions

Instruction    Type of movement (TCP)
MoveC          Moves along a circular path
MoveJ          Joint movement
MoveL          Moves along a linear path
MoveAbsJ       Absolute joint movement

Figure 3.10: Sample code for robot motion control

The syntax of a basic positioning instruction is

MoveL p1, v500, z20, tool1;

This instruction requires the following parameters in order to move the robot:
- Type of path: linear (MoveL), joint motion (MoveJ) or circular (MoveC).
- The destination position, p1.
- Motion velocity, v500 (velocity in mm/s).
- Zone size (accuracy) of the robot's destination position, z20.
- Tool data currently in use, tool1.
The zone size defines how close the robot TCP has to move to the destination position. If it is defined as fine, the robot moves to the exact position. Figure 3.11 illustrates this.


Figure 3.11: Robot zone illustration diagram

3.5.2 Gripper motion control

Gripper motion refers to the finger movements. In order to control these movements, a Galil motion controller is used with the gripper prototype (see Section 2.2). The main task of this Galil controller is to control the speed of the motor associated with each finger. Each motor in the prototype is connected to its own axis encoder in order to record the finger positions. Generally, the motor speeds can be controlled by executing specific commands (Galil commands) on the controller using the Galil tools software. For this thesis, a low-level control program containing the required Galil commands was developed and downloaded to the controller to command the finger movements autonomously during the different stages of the process execution. The finger movements are programmed in different steps, and all these steps together constitute a grasp cycle. These steps are described below.

Initialization is the primary step performed by the gripper fingers before starting the assembly process. During this step all three fingers are commanded to move to their home positions. The information provided by the limit switches (fixed at the home positions) is used to stop the fingers after they reach their home positions. This step is mainly required to provide the camera with a clear view of the workspace.

Pregrasping is executed after a workpiece has been identified by the vision system. During this step the fingers are commanded to move to a pregrasping pose, which is computed based on the workpiece size, shape and orientation. This step is mainly required to avoid collisions of the gripper fingers with other workpieces. After this step the finger motors are turned off.

Grasping and holding are executed when the arm has reached the workpiece and is ready to grasp. During these steps, the fingers start closing and their movement is stopped based on the tactile information received from

the force sensors fixed on the finger tips. After this step the motors are kept on in order to hold the workpiece until it is released.

Releasing is the final and simplest step, executed at the end of every grasp cycle. During this step, the finger motors are turned off in order to release the workpiece. The sketch below shows how such a grasp cycle can be commanded from the cell computer.
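A minimal sketch of commanding one grasp cycle through the Galil C++ communication library is shown below. It assumes the embedded gripper program of Listing E.2 has already been downloaded to the controller; the label names (#INIT, #PRGRDIS, #START3) and the polled digital output are taken from Appendix E, while the IP address, the polling style and the use of the generic MO command for releasing are illustrative assumptions.

// Sketch of one gripper grasp cycle driven from the cell computer via the
// Galil communication library (cf. Appendix E). Assumes the embedded program
// of Listing E.2 is already on the controller.
#include "Galil.h"
#include <iostream>

int main()
{
    try {
        Galil gripper("192.168.200.98");          // gripper controller (cf. Appendix E)

        gripper.command("XQ #INIT");              // initialization: fingers to home
        // poll until the embedded program sets digital output 1 (init done)
        while (gripper.sourceValue(gripper.record("QR"), "@OUT[01]") != 1) {}

        gripper.command("XQ #PRGRDIS");           // pregrasp pose for a small disk
        gripper.command("XQ #START3");            // three-finger grasp, stop on touch
        // ... the arm moves the part to the fixture here ...
        gripper.command("MO");                    // motors off: release the workpiece
    } catch (std::string e) {                     // the library reports errors as strings
        std::cout << e << std::endl;
    }
    return 0;
}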

3.5.3 Interface to synchronize motions

From the previous subsections it is clear that the control programs for each device are developed in different programming languages tied to their motion controllers, and it is difficult to integrate them into a single program. So, for a flexible process execution, a software interface is required that can interact with all devices simultaneously. For this thesis, a software interface program was developed in C++ to communicate with the robot controller as well as to download control programs to the Galil controllers and to execute Galil commands. This interface is executed on the cell computer, which is a normal PC running Ubuntu. The cell computer is also responsible for:
- Controlling the overall process execution.
- Monitoring the process status.
- Communicating with the various devices in the cell for activity coordination.
- Handling assembly errors.
- Executing the object detection programs.
As all the devices and the cell computer are connected under a common LAN with different IP addresses, the interface program can interact with them over Ethernet by connecting to the specific IP address.

Interface interaction with robot controller

Figure 3.12: Robot controller communication architecture


Figure 3.12 shows the robot communication architecture. It is a client-server communication model in which the server program runs on the robot controller and the client program on the cell computer. The server program is written in RAPID and contains all the instructions needed to communicate with the client. The client initiates the connection with the server by connecting to a specific socket address¹ using TCP/IP. The communication process between client and server is described by the sequence diagram shown in Figure 3.13 and can be explained in the following steps:

Figure 3.13: Sequential diagram for client server model

- Initially, sockets are created on both the server and the client (by default, a server socket is created when the server program is executed).
- The client requests a socket connection with the server on a specific port.
- If the requested port is free to use, the server establishes the connection and is ready to communicate with the client.

¹The socket address is a combination of an IP address and a port number.


- Once the connection is established, the client program sends/receives position information to/from the server program.
- Both sockets are closed when the data transfer is successful.

The server program is mainly responsible for receiving the position information from the client and executing it on the robot. This position information is transferred from the client in the form of strings. An example string (taken from the client code in Appendix D) is

[0, 0, 0, 0, 0, 0]1

These strings are preprocessed in the server and converted to float/double values. The movement type, i.e. linear or joint, is decided by the value after the brackets in the passed string (e.g. 1 in the string above): 1 and 2 denote joint movements using MoveJ, and 3 denotes a linear movement using MoveL. Once the instruction has been executed on the robot, the server sends the current TCP position information to the client in the form of


[X, Y, Z][Q1, Q2, Q3, Q4][cf1, cf4, cf6, cfx]

where [X, Y, Z] are the TCP position coordinates, [Q1, Q2, Q3, Q4] are the TCP quaternions and [cf1, cf4, cf6, cfx] are the robot axis configurations. The source code of the server and the client is given in Appendix D.

Interface interaction with Galil controller

The interface for the Galil controllers is developed using specific predefined functions of the Galil communication library. These functions are used to connect to a controller, to download programs to it and to read specific digital outputs. The basic functionalities provided by the communication library are:
- Connecting to and disconnecting from a controller.
- Basic communication.
- Downloading and uploading embedded programs.
- Downloading and uploading array data.
- Executing Galil commands.
- Access to the data record in both synchronous and asynchronous modes.
It is also possible to connect and communicate with more than one Galil controller at the same time. The source code is given in Appendix E.

3.6 Limitations

The main limitations of the developed system are:
- The proposed object identification techniques can only detect workpieces with particular shapes (circles or rectangles).
- The size and shape of the pin require a small pin holder fixed in the workspace to support grasping. Because of this, the vision system cannot recognize the pin, as about 80% of the pin region is covered by the holder, so a fixed assembly plan is used for assembling the pin.
- The developed system cannot identify the pin hole present on the shaft.
- Even though the background subtraction technique produces good results, it is sensitive to illumination changes and to noise present in the background.

Chapter 4

Experiments with the system


This chapter describes the test scenario used to demonstrate the capabilities of the developed system, along with a detailed analysis of the final results.

4.1 Test environment

The test environment used to test the developed system is shown in Figure 4.1.

Figure 4.1: Test environment

4.2 Assembly representation

As stated earlier in Section 3.1, the test assembly consists of a cylindrical shaft, two circular gears and one pin. In order to generate an assembly plan for our test assembly, a computer representation of the mechanical assembly is required. Generally, an assembly of parts can be represented by the individual description of each component and their relationships in the assembly. The knowledge base contains all the related information regarding the geometric models of each component, their spatial orientations and the assembly relationships between them. Based on this knowledge, the assembly model is represented by a graph structure as shown in Figure 4.2, in which each node represents an individual component of the assembly and the connecting links represent the relationships among them. A possible data structure for such a graph is sketched after the figure.

Figure 4.2: Graph structure of test assembly
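As an illustration only (the thesis does not prescribe a concrete data structure), the assembly graph and the per-part knowledge could be held in a structure like the following; all field names are hypothetical.

// Illustrative data structure for the assembly graph of Figure 4.2.
#include <string>
#include <vector>

struct AssemblyRelation {
    std::string otherPart;     // component this part is connected to
    std::string relation;      // e.g. "mounted on", "inserted into"
};

struct ComponentNode {
    std::string name;          // "shaft", "small gear", "big gear", "pin"
    std::string geometry;      // reference to the geometric model
    double orientationDeg;     // required spatial orientation for assembly
    std::vector<AssemblyRelation> relations;   // links of the graph
};

using AssemblyGraph = std::vector<ComponentNode>;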

4.2.1 Assembly sequence selection

The entire assembly of any type should follow a predefined assembly sequence, but with a content specific to the respective components. In order to find a correct and reliable assembly sequence, one should evaluate all possible assembly lines of a given product. This task can be accomplished by using precedence diagrams¹ [Y. Nof et al., 1997; Prenting and Battaglin, 1964]. These diagrams are designed based on the assembly knowledge base. Figure 4.3 shows the precedence diagram for our test assembly; it is described below.

¹Precedence diagrams are the graphical representation of a set of precedence constraints or precedence relations [Lambert, 2006].


Figure 4.3: Precedence diagram of test assembly

Usually this diagram is organized into columns: all assembly operations that can be carried out first are placed in the first column, and so on. Each individual operation is assigned a number and is represented by a circle, and the connecting arrows show the precedence relations. Now let us consider the geometrical design of our test assembly components: the shaft has a pin hole at its top and the small gear has a step on one side. These components can be assembled only if they have a proper orientation. Based on these geometrical constraints, two assembly sequences are derived, as shown in the diagram. In the first sequence the shaft is fixed first and the small gear is mounted on the shaft such that the base of the shaft is fixed and the step side of the gear faces the shaft's base. In the second sequence, the small gear is fixed first with its step side facing up, and the shaft is fixed to it such that the base of the shaft remains above the gear, which is exactly the opposite of the first sequence. Next, in order to fix the big gear, the first sequence is a direct assembly sequence in which the bigger gear can be mounted directly on the existing components, whereas in the second sequence the complete subassembly needs to be turned upside down and re-fixed. All precedence relations are restricted to simple AND relations. A simple AND relation for a specific component means that it can be assembled only if it has a proper orientation and the other required operations have been performed beforehand, e.g. the small gear can be assembled only if it has a proper orientation and the shaft has been assembled properly. Once the precedence diagram has been designed, the cost of each action is estimated and the correct assembly sequence is selected by applying mixed integer programming² [Lambert and M. Gupta, 2002].

²Mixed integer programming is the minimization or maximization of a linear function subject to linear constraints.

After evaluating both sequences, the first sequence is selected for our test assembly, mainly for the following reasons:
- It is a direct assembly sequence.
- The task complexity is reduced.
- The overall assembly time is reduced.
The final assembly sequence along with its tasks is described in Table 4.1, and a minimal sketch of how the precedence relations can be checked in code follows the table.
Table 4.1: Test assembly task description

Task No.   Description
1          Fix shaft's base
2          Mount small gear (step towards shaft's base)
3          Mount big gear
4          Install pin
5          Remove complete assembly
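The sketch below is one hypothetical way to encode the simple AND relations behind this sequence: a task may be executed only when all of its predecessor tasks have been completed and the part has the required orientation. It is an illustration, not the planner actually used in the system.

// Illustrative check of the simple AND precedence relations of the assembly
// plan; task numbers follow Table 4.1.
#include <vector>

struct Task {
    int id;                         // task number from Table 4.1
    std::vector<int> predecessors;  // AND relation: all must be completed first
    bool orientationOk;             // set by the vision system
    bool done;
};

bool ready(const Task& t, const std::vector<Task>& tasks)
{
    if (!t.orientationOk) return false;
    for (int p : t.predecessors)
        if (!tasks[p - 1].done) return false;   // a predecessor is not yet assembled
    return true;
}

int main()
{
    std::vector<Task> tasks = {
        {1, {},  true, false},   // fix shaft's base
        {2, {1}, true, false},   // mount small gear (needs the shaft)
        {3, {2}, true, false},   // mount big gear
        {4, {3}, true, false},   // install pin
        {5, {4}, true, false},   // remove complete assembly
    };
    for (auto& t : tasks)                        // execute tasks in order if ready
        if (ready(t, tasks)) t.done = true;
    return tasks.back().done ? 0 : 1;
}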

4.3 Test scenario

In order to test the developed system, the following test scenario has been developed:

1. The demonstrator starts the assembly process by executing the main control program on the cell computer.

2. Subsequently, the hardware components, i.e. the robot, the gripper and the fixture, start initializing.

3. Once the hardware initialization is successful, the software components, i.e. the vision system and the MATLAB engine, are initialized and the process status is displayed in the cell computer execution window (see Figure 4.7).

4. After all system components are initialized, the robot moves to the three predefined positions (POS1, POS2, POS3; see Section 3.2.1) in order to capture the background images of the workspace.

5. Then the arm moves to its home position and an assembly message stating "submit the workpieces" is displayed in the execution window. During this time the process execution is paused and continues only after the demonstrator gives a command.

6. Now it is the demonstrator's job to place the workpieces arbitrarily in the three subregions. It is not required to place only one workpiece per subregion. Once the workpieces are placed, the demonstrator enters a command at the execution window in order to continue the execution.

7. After the command, the robot starts searching for the workpieces in all subregions, starting from POS1, according to the assembly sequence explained in Section 4.2.1.

8. If the workpiece to be assembled is identified in any of the three positions, the grasp planner computes its orientation from the obtained image information and compares it with the predefined assembly sequence in order to find out whether the workpiece has a proper orientation for assembling. If this condition is satisfied, an overall grasping plan is produced based on the workpiece's spatial position and orientation. Otherwise, an assembly message is displayed in the execution window stating the action required from the demonstrator. This check is performed only for the parts that need a specific orientation in the assembly (e.g. the small gear cannot be assembled if its step side is facing up).

9. Based on the plan generated in the previous step, an assembly cycle consisting of aligning, grasping, moving, fixing and releasing is executed for every workpiece in order to assemble it.

10. If the workpiece is not identified in either of the first two positions (POS1 and POS2), an assembly message stating "workpiece not found" is displayed in the execution window and the robot moves to the next position. If the workpiece is not identified in POS3 either, an assembly error message stating "workpiece not found in the workspace" is displayed in the execution window and the execution stops.

11. Once the assembly is finished, the arm moves over the fixture to grasp the entire assembly and to place it in a desired location.

12. As a final step, a process completion message is displayed in the execution window.

4.3.1 Special cases

- If two similar workpieces are found in one subregion, the system randomly chooses one of them.
- If the recognized small gear has its step side facing up, an assembly message stating "turn the gear upside down and place it again" is displayed in the execution window and the process execution is paused.

- If two small gears are found in one of the subregions, the system searches for the gear with the plain side facing up.

4.3.2 Test results

Based on the above test scenario, tests were conducted on the developed system; the results are shown below.

System initialization

Figure 4.4: (A) Initialized gripper (B) Arm at home position

Figure 4.5: (A) Arm at position 1 (B) Arm at position 2


Figure 4.6: Arm at position 3

Figure 4.7: Screen-shot of execution window

Figure 4.8: Workpieces in robot task space


Assembling shaft

Figure 4.9: Recognized shaft

Figure 4.10: Grasping shaft


Figure 4.11: (A) Robot fixing shaft (B) Fixed shaft

Assembling small gear

Figure 4.12: (A) Searching at POS2 (B) Searching at POS3


Figure 4.13: Recognized small gear

Figure 4.14: Grasping small gear

Figure 4.15: (A) Robot fixing gear (B) Assembled gear


Assembling big gear

Figure 4.16: (A) Searching for the gear (B) Robot grasping the gear

Figure 4.17: (A) Fixing the gear (B) Assembled big gear


Assembling pin

Figure 4.18: (A) Robot grasping the pin (B) Fixing the pin

Figure 4.19: Assembled pin

Figure 4.20: Final assembled product

4.4 Analysis

- The finger design of the current gripper prototype cannot provide a stable grasp for small objects (width and height less than 3 cm) if they are placed horizontally in the workspace. In order to solve this problem, such objects are placed on top of small supporting blocks.
- The vision system cannot recognize the objects if they are placed vertically (standing position) in the line of the camera view.
- As the used vision system is monocular, the system cannot build a 3D model of the object. Because of this limitation, it is not reliable in computing the exact 3D location of objects placed vertically (standing position) in the workspace.
- As the system cannot recognize the pin hole in the shaft, the shaft is rotated manually so that the robot can fix the pin.
- As the total workspace is divided into three subregions, the total operational time is increased (the total time is not a concern of this thesis).
- Because of hardware problems with the gripper, it cannot hold an object for a long time.

Chapter 5

Conclusions
This chapter presents conclusions along with suggestions for future work.

5.1 Summary

The main goal of this thesis was to investigate and implement an eye-in-hand based automatic grasp planning methodology for an industrial assembly operation. This objective has been achieved by implementing a vision-based autonomous grasp planner which computes the possible stable grasps and executes the posture of both the robot arm and the gripper. The developed system is flexible in comparison with conventional assembly cells and has been used in a robot assembly to recognize, position, manipulate and assemble different types of workpieces. The system successfully grasps and assembles various workpieces regardless of their initial placement and orientation. In this system, every product is described as an assembly tree, i.e. a precedence graph, for all parts in the product. Assembly trees are decomposed into simpler operations (e.g. grasp, move, insert, release) for the gripper, the fixture and the robot, respectively. The object identification techniques developed to identify the workpieces do not require any system training and are capable of recognizing workpieces at any given orientation. The main advantage of the developed vision system is its capability of online tracking of workpieces, i.e. changes in the location of the workpieces do not affect the process execution. As this is an industrial assembly operation, the knowledge of the workpieces is available a priori, and this knowledge is used to generate a final execution plan. The interface developed to synchronize the motions of the various machine tools increased the overall flexibility of the process execution. With the completion of this thesis, a reliable communication model for communicating with the robot arm as well as with the grasping system, i.e. with both the flexible gripper and the fixture, has been implemented and tested under the


Linux platform. This model is simple and easy to understand and provides all the necessary functionalities.

5.2 Future work

The following improvements can be made to the existing system:
- The system's capabilities can be increased by incorporating a stereo vision system to estimate the distance between the robot TCP and the workpiece.
- The overall flexibility can be increased by using a 3D model of the object generated by the vision system.
- A blob detection algorithm complementing the background subtraction technique could reduce the limitations of the vision system.

Bibliography
Eye-in-hand system.

Galil motion control.

OpenCV reference manual, 2000.

ABB IRB 140 Product Manual. ABB Automation Technologies AB, Västerås, Sweden, 2004.

Peter K. Allen, Aleksandar Timcenko, Billibon Yoshimi, and Paul Michelman. Automated tracking and grasping of a moving object with a robotic hand-eye system. IEEE Transactions on Robotics and Automation, April 1993.

Anani Ananiev. Flexible Gripper FG2.00.00.00 Operation Manual. Örebro, Sweden, 2009.

Arbib. Perceptual structures and distributed motor control. Handbook of Physiology, Section 2: The Nervous System, Motor Control, II:1449-1480, 1981.

Yalın Baştanlar, Alptekin Temizel, and Yasemin Yardımcı. Improved SIFT matching for image pairs with scale difference. IET Electronics Letters, 46(5), March 2010.

Jean-Yves Bouguet. Camera calibration toolbox for MATLAB, July 2010.

Aihua Chen and Bingwei He. A camera calibration technique based on planar geometry feature. In Mechatronics and Machine Vision in Practice (M2VIP 2007), 14th International Conference on, pages 165-169, 2007.

T. H. Chiu, A. J. Koivo, and R. Lewczyk. Experiments on manipulator gross motion using self-tuning controller and visual information. Journal of Robotic Systems, 3(1):59-70, 1986.

Daniel F. Dementhon and Larry S. Davis. Model-based object pose in 25 lines of code. International Journal of Computer Vision, 15(1-2):123-141, 1995.

Rosen Diankov, Takeo Kanade, and James Kuffner. Integrating grasp planning and visual feedback for reliable manipulation. In Proceedings of the 9th IEEE-RAS International Conference on Humanoid Robots, Paris, France, December 2009.

Festo Didactic GmbH & Co. KG, Festo solution center. MicroFMS flexible manufacturing system.

Mikell P. Groover. Automation, Production Systems and Computer Integrated Manufacturing. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2000.

Yu Huang, Thomas S. Huang, and Heinrich Niemann. Segmentation-based object tracking using image warping and Kalman filtering. In Proceedings of the International Conference on Image Processing, volume 3, pages 601-604, 2002.

Alison E. Hunt and Arthur C. Sanderson. Vision-based predictive robotic tracking of a moving target. Technical Report CMU-RI-TR-82-15, Robotics Institute, Pittsburgh, PA, January 1982.

Seth Hutchinson, Gregory D. Hager, and Peter I. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651-670, October 1996.

Billibon H. Yoshimi and Peter K. Allen. Integrating real-time vision and manipulation. In Proceedings of the Hawaii International Conference on System Sciences, volume 5, pages 178-187, January 1997.

A. J. Koivo and N. Houshangi. Real-time vision feedback for servoing robotic manipulator with self-tuning controller. IEEE Transactions on Systems, Man and Cybernetics, 21(1):134-142, 1991.

A. J. D. Lambert and Surendra M. Gupta. Demand-driven disassembly optimisation for electronic consumer goods. Journal of Electronics Manufacturing, 11(2):121-135, 2002.

Alfred J. D. Lambert. Generation of assembly graphs by systematic analysis of assembly structures. European Journal of Operational Research, 168(3):932-951, February 2006.

Leicester College. Presentation on flexible manufacturing systems, 2009.

John Mitchell. Flexible manufacturing systems for flexible production.

Antonio Morales, Pedro J. Sanz, Angel P. del Pobil, and Andrew H. Fagg. Vision-based three-finger grasp synthesis constrained by hand geometry. Robotics and Autonomous Systems, 54(6):496-512, 2006.

A. M. Okamura, N. Smaby, and M. R. Cutkosky. An overview of dexterous manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2000), volume 1, pages 255-262, 2000.

Mario Prats, Philippe Martinet, A. P. del Pobil, and Sukhan Lee. Vision/force control in task-oriented grasping and manipulation. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 2007.

T. O. Prenting and R. M. Battaglin. The precedence diagram: a tool for analysis in assembly line balancing. The Journal of Industrial Engineering, 15(4):208-213, 1964.

K. Rao, G. Medioni, H. Liu, and G. A. Bekey. Shape description and grasping for robot hand-eye coordination. IEEE Control Systems Magazine, 9(2):22-29, February 1989.

K. Rezaie, S. Nazari Shirkouhi, and S. M. Alem. Evaluating and selecting flexible manufacturing systems by integrating data envelopment analysis and analytical hierarchy process model. In Asia International Conference on Modelling and Simulation, pages 460-464, 2009.

Paul L. Rosin. Techniques for assessing polygonal approximations of curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):659-666, June 1997.

Ken Tabb, Neil Davey, R. G. Adams, and S. J. George. Analysis of human motion using snakes and neural networks. In AMDO '00: Proceedings of the First International Workshop on Articulated Motion and Deformable Objects, pages 48-57, London, UK, 2000. Springer-Verlag.

Geoffrey Taylor and Lindsay Kleeman. Visual Perception and Robotic Manipulation. Springer-Verlag, Berlin Heidelberg, Germany, 2006.

Johan Wiklund and Gösta H. Granlund. Image sequence analysis for object tracking. In Proceedings of the 5th Scandinavian Conference on Image Analysis, pages 641-648, 1987.

Ming Xie. Robotic hand-eye coordination: New solutions with uncalibrated stereo cameras. Machine Vision and Applications, 10(10):136-143, 1997.

Ming Xie. A developmental principle for robotic hand-eye coordination skill. In Proceedings of the 2nd International Conference on Development and Learning, 2002.

Ming Xie. The Fundamentals of Robotics: Linking Perception to Action. World Scientific Publishing Co., 2003.

Shimon Y. Nof, Wilbert E. Wilhelm, and Hans-Jürgen Warnecke. Industrial Assembly. Chapman and Hall, UK, 1997.

Wei Yi. Circle detection using improved dynamic generalized Hough transform (IDGHT). In Geoscience and Remote Sensing Symposium Proceedings (IGARSS '98), 1998 IEEE International, volume 2, pages 1190-1192, 1998.

Appendix A

Source code for shaft identification


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 close all ; background =im2double ( imread ( c o l o r _ b g ) ) ; image =im2double ( imread ( c ol or _ i ma g e ) ) ; [ r o i s mask num]= getROI ( background , image ) ; %======== ERODING THE MASK ======== s e = s t r e l ( s qua r e , 4 ) ; erodedBW = imerode ( mask , s e ) ; erodedBW = imerode ( erodedBW , s e ) ; f i g u r e ; imshow ( mask ) ; a x i s t i g h t on ; hold on ; %======== BOUNDARY EXTRACTION ======= [ B , L ,N, A] = bwboundaries ( erodedBW , h o l e s ) ; tempim = c e l l ( l e n g t h ( B ) , 1 ) ; d if f = zeros ( length (B) ,1) ; S = c e l l ( length (B ) , 1 ) ; %======== CURVES EXTRACTION ========= i f (~ l e n g t h ( B)==0) f o r k =1: l e n g t h ( B ) boundary = B { k } ; R = B{k } ; tempim { k } = z e r o s ( s i z e ( erodedBW ) ) ; [M N] = s i z e (R ) ; f o r i =1:M tempim { k } ( R( i , 1 ) , R( i , 2 ) ) = 1 ; end c ur v e = e x t r a c t _ c u r v e ( tempim { k } , 3 ) ; f o r no =1: l e n g t h ( c ur v e ) i f (~ l e n g t h ( c ur v e { [ no ] } ) = = 0 ) [ c u r v e _ p i x e l s ] = f i n d c u r v e ( c ur v e { no } ) ; %===== FIND SHAFT SUITABLE REGION ====== i f (~ l e n g t h ( c u r v e _ p i x e l s )==0&&l e n g t h ( c u r v e _ p i x e l s ) <200) S { k } = r e g i o n p r o p s ( tempim { k } , a l l ) ; d i f f ( k ) = S { k } . MajorAxisLengthS { k } . MinorAxisLength ; s t r = 1; break ; else s t r = 0; end else

% F i l t e r mask


42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68

s t r = 0; end end end else s t r = 0; end %====== CROSS CHECK FOR SHAFT ========= i f ( s t r ==1) shaft_num = f i n d ( d i f f ==max ( d i f f ) ) ; s h a f t _ c e n t r o i d = S { shaft_num } . C e ntr oi d ; s h a f t _ o r i e n t = S { shaft_num } . O r i e n t a t i o n ; i f ( ( s h a f t _ o r i e n t <40&&s h a f t _ o r i e n t > 0 ) | | ( s h a f t _ o r i e n t > 40&&s h a f t _ o r i e n t < 0 ) ) o r i e n t =0; % HORIZONTAL else o r i e n t =1; % VERTICAL end u = round ( s h a f t _ c e n t r o i d ( 2 ) ) ; v = round ( s h a f t _ c e n t r o i d ( 1 ) ) ; shaft_pix = [ shaft_centroid (1) , shaft_centroid ( 2 ) ] ; check = 1 ; plot (v , u , r* ) ; check = 0 ; end

else

Appendix B

Source code for gear identification


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 # i n c l u d e "cv . h" # i n c l u d e " h i g h g u i . h" #define CAMERA 0

/ / ===============================================================// / / FUNCTION TO DETECT GEARS / / INPUT : Type of t h e o b j e c t (SMALL or BIG ) / / OUTPUT : C e ntr oi d p i x e l c o o r d i n a t e s and O r i e n t a t i o n / / ===============================================================// v oi d d e t e c t _ g e a r ( c o n s t c ha r o b j _ t y p e [ 2 5 6 ] , i n t *u , i n t * v , double * o r i e n t ) { / / ====== Image d e c l a r a t i o n ============== I p l I m a g e * img ; I p l I m a g e * grayImg ; I p l I m a g e * MedianImg ; I p l I m a g e * cannyImg ; IplImage* c olor _ds t ; IplImage* r e s u l t ; i n t px [ 0 ] , py [ 0 ] , pz [ 0 ] ; i n t i t e r =0 , i , U1 , U2 , V1 , V2 , R1=0 ,R2=0 , di s k_ u , d i s k _ v ; float* p; double c o n t a r e a ; CvCapture * cap ; CvMemStorage * c s t o r a g e = cvCreateMemStorage ( 0 ) ;

/ / ======= Capture frame from camera ========= cap = cvCaptureFromCAM (CAMERA) ; i f ( ! cvGrabFrame ( cap ) ) { p r i n t f ( "Could not grab a frame \ n \ 7 " ) ; exit (0) ; } / / ======== Named windows to d i s p l a y r e s u l t s ======== cvNamedWindow ( "LIVE" , CV_WINDOW_AUTOSIZE ) ; cvNamedWindow ( "LIVE Edge F e a t u r e s " , CV_WINDOW_AUTOSIZE ) ;

85

86


42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100

/ / work loop whi l e ( i t e r <50) { / / ========= r e t r i e v e c a p t u r e d frame ======= img = cvQueryFrame ( cap ) ; i f ( ! img ) { p r i n t f ( "bad v i de o \ n" ) ; exit (0) ; } / / ======== Image c r e a t i o n ========= r e s u l t = c v C r e a te Ima g e ( c v G e t S i z e ( img ) , 8 , 1 ) ; d s t = c v C r e a te Ima g e ( c v G e t S i z e ( img ) , 8 , 1 ) ; c o l o r _ d s t = c v C r e a te Ima g e ( c v G e t S i z e ( img ) , 8 , 3 ) ; h i s t i m = c v C r e a te Ima g e ( c v G e t S i z e ( img ) , IPL_DEPTH_8U , 1 ) ; / / C onv e r s i on to g r a y s c a l e grayImg = c v C r e a te Ima g e ( c v S i z e ( img>width , img>h e i g h t ) , IPL_DEPTH_8U , 1 ); cvCvtColor ( img , grayImg , CV_BGR2GRAY ) ; / / Histogram e q u a l i z a t i o n c v E q u a l i z e H i s t ( grayImg , h i s t i m ) ; / / A ppl y i ng Median F i l t e r MedianImg = c v C r e a te Ima g e ( c v G e t S i z e ( img ) , IPL_DEPTH_8U , 1) ; cvSmooth ( h i s t i m , MedianImg ,CV_MEDIAN, 1 1 , 1 1 ) ; / / Computing Edge f e a t u r e s cannyImg = c v C r e a te Ima g e ( c v G e t S i z e ( img ) , IPL_DEPTH_8U , 1) ; cvCanny ( MedianImg , cannyImg , 80 , 120 , 3) ; cvCvtColor ( cannyImg , c o l o r _ d s t , CV_GRAY2BGR ) ; / / ========== S e a r c h f o r hough c i r c l e s ============ CvSeq * c i r c l e s = c v Houg hC i r c l e s ( cannyImg , c s t o r a g e , CV_HOUGH_GRADIENT, cannyImg >h e i g h t / 5 0 , 1 , 35 ) ; i f ( c i r c l e s >t o t a l >=2) { / / f i l t e r to l i m i t onl y <=2 c i r c l e s to draw f o r ( i = 0 ; c i r c l e s >t o t a l >=2? i <2: i < c i r c l e s >t o t a l ; i ++ ) { i f ( i ==0) { p = ( f l o a t * ) cvGetSeqElem ( c i r c l e s , i ) ; c v C i r c l e ( img , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , 3 , CV_RGB( 2 5 5 , 0 , 0 ) , 1, 8 , 0 ) ; c v C i r c l e ( img , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound ( p [ 2 ] ) , CV_RGB( 2 5 5 , 0 , 0 ) , 1 , 8 , 0 ) ; c v C i r c l e ( r o i , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound ( p [ 2 ] ) , CV_RGB(255 , 255 , 255) , 1, 8 , 0 ) ; U1 = cvRound ( p [ 1 ] ) ; V1 = cvRound ( p [ 0 ] ) ; R1 = cvRound ( p [ 2 ] ) ; } else { p = ( f l o a t * ) cvGetSeqElem ( c i r c l e s , i ) ; c v C i r c l e ( img , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , 3 , CV_RGB( 2 5 5 , 0 , 0 ) , 1, 8 , 0 ) ;

87

101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163

c v C i r c l e ( img , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound ( p [ 2 ] ) , CV_RGB( 2 5 5 , 0 , 0 ) , 1 , 8 , 0 ) ; c v C i r c l e ( r o i , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound ( p [ 2 ] ) , CV_RGB(255 , 255 , 255) , 1, 8 , 0 ) ; U2 = cvRound ( p [ 1 ] ) ; V2 = cvRound ( p [ 0 ] ) ; R2 = cvRound ( p [ 2 ] ) ; } } / / ======== Checking f o r s m a l l g e a r ============ i f ( strcmp ( obj _ ty pe , "SMALL" ) ==0) { i f ( f a b s ( R2R1 ) >15 & f a b s ( R2R1 ) < 30) & { i f ( ( R1<R2 )&&(R1>50&&R1<70) ) { di s k_ u =U1 ; d i s k _ v =V1 ; * check = 1 ; break ; } e l s e i f ( ( R2<R1 )&&(R2>50&&R2<70) ) { di s k_ u = U2 ; d i s k _ v = V2 ; * check = 1 ; break ; } else { * check =0; } } e l s e i f ( R1>50&&R1<70) { di s k_ u =U1 ; d i s k _ v =V1 ; * check = 1 ; break ; } e l s e i f ( R2>50&&R2<70) { di s k_ u =U2 ; d i s k _ v =V2 ; * check = 1 ; break ; } else * check = 0 ; } / / ======== Checking f o r b i g g e a r ============ i f ( strcmp ( obj _ ty pe , "BIG" ) ==0) { i f ( f a b s ( R2R1 ) >15 & f a b s ( R2R1 ) < 30) & { i f ( ( R1>R2 )&&(R1>70&&R1<90) ) { di s k_ u =U1 ; d i s k _ v =V1 ; * check = 1 ; break ; } e l s e i f ( ( R2>R1 )&&(R2>70&&R2<90) )

88


164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225

{ di s k_ u = U2 ; d i s k _ v = V2 ; * check = 1 ; break ; } else { } * check =0; } e l s e i f ( R1>70&&R1<90) { di s k_ u =U1 ; d i s k _ v =V1 ; * check = 1 ; break ; } e l s e i f ( R2>70&&R2<90) { di s k_ u =U2 ; d i s k _ v =V2 ; * check = 1 ; break ; } } } e l s e i f ( c i r c l e s >t o t a l ==1) { f o r ( i = 0 ; c i r c l e s >t o t a l >=1? i <1: i < c i r c l e s >t o t a l ; i ++ ) { i f ( i ==0) { p = ( f l o a t * ) cvGetSeqElem ( c i r c l e s , i ) ; c v C i r c l e ( img , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , 3 , CV_RGB( 2 5 5 , 0 , 0 ) , 1, 8 , 0 ) ; c v C i r c l e ( img , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound ( p [ 2 ] ) , CV_RGB( 2 5 5 , 0 , 0 ) , 1 , 8 , 0 ) ; c v C i r c l e ( r o i , c v P o i n t ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound ( p [ 2 ] ) , CV_RGB(255 , 255 , 255) , 1, 8 , 0 ) ; U1 = cvRound ( p [ 1 ] ) ; V1 = cvRound ( p [ 0 ] ) ; R1 = cvRound ( p [ 2 ] ) ; } cvSaveImage ( edgeim , r o i ) ; } / / ======== Checking f o r s m a l l g e a r ============ i f ( strcmp ( obj _ ty pe , "SMALL" ) ==0) { i f ( R1>50&&R1<70) { di s k_ u =U1 ; d i s k _ v =V1 ; * check = 1 ; break ; } else * check = 0 ; } / / ======== Checking f o r b i g g e a r ============ e l s e i f ( strcmp ( obj _ ty pe , "BIG" ) ==0) {

89

226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 }

} } else

i f ( R1>70&&R1<90) { di s k_ u =U1 ; d i s k _ v =V1 ; * check = 1 ; break ; } else * check = 0 ;

* check = 0 ; / / ====== D i s p l a y i n g r e s u l t s ============ cvShowImage ( "LIVE" , img ) ; cvShowImage ( "LIVE Edge F e a t u r e s " , cannyImg ) ; cvWaitKey ( 3 3 ) ; i t e r ++; } * u = di s k_ u ; *v = d i s k _ v ; * or i e nt = 0; c v R e l e a s e Ima g e (&img ) ; / / R e l e a s e image c v R e l e a s e C a p t u r e(&cap ) ; / / R e l e a s e c a p t u r e

Appendix C

Camera calibration procedure


The camera calibration procedure described here is used to develop a mathematical model of the transformation between observed image points and real-world points. For this thesis, the points in the world are expressed in terms of robot base coordinates. In order to accomplish this task, we need to find the camera intrinsic parameter matrix along with the camera's external rotation and translation in the 3D world.

C.1 Camera intrinsic parameters

The intrinsic parameters of any camera are used to describe the internal geometry and optical characteristics of the camera [Xie, 2003]. The intrinsic parameter matrix is computed by projecting a 3D point in the real world onto the image plane. Let us consider a 3D point M (X, Y, Z) in the robot world coordinate system and assume that the frame w is assigned to the world coordinate system and the frame c is assigned to the camera coordinate system as shown in Figure C.1.

Figure C.1: Pinhole camera geometric projection

If $^{c}Q_{w}$ describes the transformation from the world frame w to the camera frame c, then the coordinates of M with respect to c are given by

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = {}^{c}Q_{w} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad \text{(C.1)}$$

As described in Chapter 2, the relationship between the image coordinates (u, v) and the corresponding coordinates (X_c, Y_c, Z_c) in the camera frame is described by a projective mapping matrix K as follows:

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad \text{where } K = \begin{bmatrix} \frac{f}{D_x} & 0 & u_0 \\ 0 & \frac{f}{D_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \text{(C.2)}$$

In the above equation, λ is an unknown scaling factor, f is the focal length of the camera, (D_x, D_y) are the image digitizer sampling steps and (u_0, v_0) are the coordinates of the optical center in the image coordinate system uv. The matrix K is the camera intrinsic parameter matrix. In practice, K is obtained by capturing several images of a chessboard pattern at different orientations and extracting the grid corners in every image using the camera calibration toolbox for MATLAB [Bouguet, 2010].

C.2 Camera extrinsic parameters

By combining C.1 and C.2 we get

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, {}^{c}Q_{w} \begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix} \qquad \text{(C.3)}$$

Equation C.3 describes the forward projective mapping from 3D to 2D, and the elements of the matrix $^{c}Q_{w}$ are called the camera extrinsic parameters. These parameters comprise the camera rotation R (3 × 3) and translation T (3 × 1) in the real world. For this thesis the camera rotation and translation are computed as explained below. Our task is to fix the camera at the midpoint of the gripper,


which is fixed to the robot's TCP. In order to perform this, the camera is first placed at the robot's base with its optical axis pointing upwards. Next it is flipped (rotated 180° around the y-axis) and translated to the gripper. The following equations show the camera rotation and translation when the arm is at POS1. The rotation matrix R is a rotation about the y-axis by 180°, that is,

$$R = \begin{bmatrix} \cos 180^{\circ} & 0 & \sin 180^{\circ} \\ 0 & 1 & 0 \\ -\sin 180^{\circ} & 0 & \cos 180^{\circ} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$

The corresponding translation is computed by subtracting the measured offsets, i.e. the distances from the TCP to the gripper center point (GCP) and from the GCP to the camera center, from the robot TCP coordinates. The TCP coordinates at POS1 are $\begin{bmatrix} 450 & 0 & 647 \end{bmatrix}^{T}$. The measured offset distances are:
- from the TCP to the camera center (z-axis): 170 mm
- from the TCP to the GCP (z-axis): 131 mm
- from the GCP to the camera center (x-axis): 29 mm
Therefore, the final translation is

$$T = \begin{bmatrix} 450 - 29 \\ 0 \\ 647 - 170 \end{bmatrix} = \begin{bmatrix} 421 \\ 0 \\ 477 \end{bmatrix}$$

Whenever the arm moves to a new position, the rotation and translation matrices are updated based on the robot's rotation and translation. The short sketch below shows this construction in code.
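The construction above can be written compactly as follows. Eigen is used only for the matrix types and is not part of the thesis code; the offsets are the measured values quoted above.

// Building the camera extrinsics at POS1 from the measured offsets
// (a sketch only; Eigen is used here just for the matrix types).
#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Rotation: camera flipped 180 degrees about the y-axis of the base frame.
    Eigen::Matrix3d R;
    R << -1, 0, 0,
          0, 1, 0,
          0, 0, -1;

    // Robot TCP coordinates at POS1 (mm) and measured offsets (mm).
    Eigen::Vector3d tcp(450, 0, 647);
    const double offsetGcpToCamX = 29;     // GCP -> camera center, x-axis
    const double offsetTcpToCamZ = 170;    // TCP -> camera center, z-axis

    // Translation of the camera in robot base coordinates.
    Eigen::Vector3d T = tcp - Eigen::Vector3d(offsetGcpToCamX, 0, offsetTcpToCamZ);

    std::cout << "T = " << T.transpose() << std::endl;   // prints 421 0 477
    // When the arm moves, R and T are recomputed from the new TCP pose.
    return 0;
}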

Appendix D

Source code for client and server


Listing D.1: Client code
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 #include #include #include #include #include #include #include #include < s y s / t y p e s . h> < s y s / s o c k e t . h> < s y s / s t a t . h> < s y s / s e n d f i l e . h> < n e t i n e t / i n . h> < netdb . h> < e r r no . h> < f c n t l . h>

u s i n g namespace s t d ; / / c out o s t r i n g s t r e a m v e c t o r s t r i n g / / =========== FUNCTION DECLARATIONS =============================// v oi d e r r o r ( c o n s t c ha r * msg ) ; v oi d s oc k_ c ont ( c ha r * pos ) ; / / =========== DECLARING ARM VARIABLES============================// i n t s oc kfd , portno = 1300 , n ; int t e l l e r ; c ha r b u f f e r [ 2 5 6 ] ; c ha r pos0 [256]= " [ 0 , 0 , 0 , 0 , 0 , 0 ] 1 " ; / / I n t i a l P o s i t i o n f o r ARM c ha r pos1 [256]= " [50.9 ,39 , 10.7 , 34.5 ,18.5 , 14.9]2 " ; c ha r pos2 [256]= " [8. 7 , 48 , 30. 7 , 27. 1 , 48. 5 , 8. 0]1 " ; c ha r pos3 [256]= "[ 27.1 ,25.8 , 11.9 , 52.8 ,48.6 , 22.4]2 " ; s t r u c t s oc ka ddr _ i n s e r v _ a d d r ; s tr uc t hostent *server ; / / ====== MAIN ========= i n t main ( ) { s oc k_ c ont ( pos0 ) ; s oc k_ c ont ( pos3 ) ; s oc k_ c ont ( pos1 ) ; s oc k_ c ont ( pos2 ) ; return 0; } / / ===============================================================// / / FUNCTION TO PRINT ERROR MESSAGE

95

96


40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65

/ / ===============================================================// v oi d e r r o r ( c o n s t c ha r * msg ) { p e r r o r ( msg ) ; exit (0) ; } / / ===============================================================// / / FUNCTION TO CONNECT TO SOCKET AND WRITE POS / / INPUT : STRING ( POS ) / / ===============================================================// v oi d s oc k_ c ont ( c ha r * pos ) { s oc kfd = s o c k e t ( AF_INET , SOCK_STREAM, 0) ; i f ( s oc kfd < 0) e r r o r ( "ERROR opening s o c k e t " ) ; s e r v e r = gethostbyname ( " 192. 168. 200. 91" ) ; i f ( s e r v e r == NULL) { f p r i n t f ( s t d e r r , "ERROR, no such h o s t \ n" ) ; exit (0) ; } bz e r o ( ( c ha r * ) &s e r v _ a ddr , s i z e o f ( s e r v _ a d d r ) ) ; s e r v _ a d d r . s i n _ f a m i l y = AF_INET ; bcopy ( ( c ha r * ) s e r v e r >h_addr , ( c ha r * )&s e r v _ a d d r . s i n _ a d d r . s_addr , s e r v e r > h_length ) ; s e r v _ a d d r . s i n _ p o r t = htons ( portno ) ; i f ( c onne c t ( s oc kfd , ( s t r u c t s oc ka ddr * )&s e r v _ a ddr , s i z e o f ( s e r v _ a d d r ) ) < 0) e r r o r ( "ERROR c onne c bz e r o ( b u f f e r , 2 5 6 ) " ) ; / / WRITING TO SOCKET s t r c p y ( b u f f e r , pos ) ; n = w r i t e ( s oc kfd , b u f f e r , s t r l e n ( b u f f e r ) ) ; i f ( n < 0) e r r o r ( "ERROR w r i t i n g to s o c k e t " ) ; / / READING FROM SOCKET n = r e a d ( s oc kfd , b u f f e r , 2 5 5 ) ; i f ( n < 0) e r r o r ( "ERROR r e a d i n g from s o c k e t " ) ; p r i n t f ( "%s \ n" , b u f f e r ) ; c l o s e ( s oc kfd ) ; / / C l os e s o c k e t

66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 }

Listing D.2: Rapid server code


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 % % % VERSION : 1 LANGUAGE: ENGLISH % % % MODULE main_module VAR i ode v l o g f i l e ; VAR e r r s t r e r t i t l e : ="UNDEFINED MOVEMENT TYPE" ; VAR e r r s t r e r s t r 1 : =" The l a s t e l e me nt of t h e p o s i t i o n s t r i n g s houl d s p e c i f i y t h e movement t y p e " ; VAR e r r s t r e r s t r 2 :="1 MoveJ " ; VAR e r r s t r e r s t r 3 :="2 Compute MoveL from Move J " ; VAR e r r s t r e r s t r 4 :="3 MoveL " ; VAR s t r i n g r e c e i v e _ t r a n s ;

97

16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80

VAR s t r i n g r e c e i v e _ r o t ; VAR s t r i n g r e c e i v e _ s t r i n g ; VAR s t r i n g s e n d _ t r a n s ; VAR s t r i n g s e n d _ r o t ; VAR s t r i n g s e nd_ r obc onf ; VAR r o b t a r g e t C u r r e n t _ p o s i t i o n ; VAR r o b t a r g e t Pos1 ; VAR bool ok1 ; VAR bool ok2 ; VAR bool ok3 ; VAR bool ok4 ; VAR num C1 ; VAR num found1 ; VAR num found2 ; VAR num l e n ; VAR num C2 ; VAR num move_type ; VAR num me s s a g e _ c ount ; VAR s o c k e t d e v s e r v e r _ s o c k e t ; VAR s o c k e t d e v c l i e n t _ s o c k e t ; VAR s t r i n g c l i e n t _ i p ; VAR s t r i n g c l i e n t _ m e s s a g e ; CONST j o i n t t a r g e t j o i n t p o s 0 : = [ [ 0 , 0 , 0 , 0 , 0 , 0 ] , [ 0 , 9 E9 , 9 E9 , 9 E9 , 9 E9 , 9 E9 ] ] ; VAR j o i n t t a r g e t j o i n t p o s 1 ; PROC main ( ) j o i n t p o s 1 . e xta x : = [ 0 , 9 E9 , 9 E9 , 9 E9 , 9 E9 , 9 E9 ] ; ConfJ \ Off ; ! open a f i l e f o r w r i t i n g l o g s ! Open "HOME: " \ F i l e : = " s i ml og .DOC" , l o g f i l e \ Write ; me s s a g e _ c ount : = 1 ; SocketCreate server_socket ! S oc ke tB i nd s e r v e r _ s o c k e t S oc ke tB i nd s e r v e r _ s o c k e t , SocketListen server_socket ; , "130. 243. 124. 172" , 1300; "192. 168. 200. 91" , 1300; ;

WHILE TRUE DO S oc ke tA c c e pt s e r v e r _ s o c k e t , c l i e n t _ s o c k e t \ C l i e n t A d d r e s s : = c l i e n t _ i p ; me s s a g e _ c ount : = me s s a g e _ c ount + 1 ; SocketReceive c l i e n t _ s o c k e t \ Str := r e c e i v e _ s t r i n g ; l e n : = S tr L e n ( r e c e i v e _ s t r i n g ) ; ok1 : = S tr T oV a l ( S t r P a r t ( r e c e i v e _ s t r i n g , l e n , 1 ) , move_type ) ; IF move_type =1 THEN !MOVEJ ok2 : = S tr T oV a l ( S t r P a r t ( r e c e i v e _ s t r i n g , 1 , l e n 1) , j o i n t p o s 1 . robax ) ; MoveAbsJ j o i n t p o s 1 , v1000 , f i n e , t o o l 0 ; ! MoveAbsJ j o i n t p o s 0 , v1000 , f i n e , t o o l 0 ; ELSEIF move_type =2 THEN S i ng A r e a \ W r i s t ; ! Convert from J o i n t T a r g e t to R obta r g e t ok2 : = S tr T oV a l ( S t r P a r t ( r e c e i v e _ s t r i n g , 1 , l e n 1) , j o i n t p o s 1 . robax ) ; Pos1 : = CalcRobT ( j o i n t p o s 1 , tool 0 , \ Wobj : = wobj0 ) ; MoveL Pos1 , v500 , f i n e , t o o l 0 ; ELSEIF move_type =3 THEN ConfL \ On ;

98


81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126

! S i ng A r e a \ W r i s t ; Pos1 . e xta x : = [ 9 E+09 ,9E+09 ,9E+09 ,9E+09 ,9E+09 ,9E + 0 9 ] ; l e n : = S tr L e n ( r e c e i v e _ s t r i n g ) ; found1 : = StrMatch ( r e c e i v e _ s t r i n g , 1 , " ] " ) ; found2 : = StrMatch ( r e c e i v e _ s t r i n g , found1 + 1 , " ] " ) ; ok2 : = S tr T oV a l ( S t r P a r t ( r e c e i v e _ s t r i n g , 1 , found1 ) , Pos1 . t r a n s ) ; C1: = found2found1 ; ok3 : = S tr T oV a l ( S t r P a r t ( r e c e i v e _ s t r i n g , found1 +1 ,C1 ) , Pos1 . r o t ) ; C2: = l e nfound2 1; ok4 : = S tr T oV a l ( S t r P a r t ( r e c e i v e _ s t r i n g , found2 +1 ,C2 ) , Pos1 . r obc onf ) ; MoveL Pos1 , v500 , f i n e , t o o l 0 ; ELSE ErrLog 4800 , e r t i t l e , e r s t r 1 , e r s t r 2 , e r s t r 3 , e r s t r 4 ; ENDIF ! Get and t h e c u r r e n t p o s i t i t i o n C u r r e n t _ p o s i t i o n : =CRobT ( \ Tool : = t o o l 0 \ WObj: = wobj0 ) ; ! Convert i t to 2 s t r i n g s s e n d _ t r a n s : = V a l T oS tr ( C u r r e n t _ p o s i t i o n . t r a n s ) ; s e n d _ r o t : = V a l T oS tr ( C u r r e n t _ p o s i t i o n . r o t ) ; s e nd_ r obc onf : = V a l T oS tr ( C u r r e n t _ p o s i t i o n . r obc onf ) ; ! Send t h e s t r i n g s through S oc ke tS e nd c l i e n t _ s o c k e t S oc ke tS e nd c l i e n t _ s o c k e t S oc ke tS e nd c l i e n t _ s o c k e t the \ Str \ Str \ Str socket := send_trans ; := send_rot ; : = s e nd_ r obc onf ;

! send a s t r i n g with t h e outcome of t h e s i m u l a t i o n SocketClose c l i e n t _ s o c k e t ; ENDWHILE ERROR IF ERRNO=ERR_SOCK_TIMEOUT THEN RETRY; ELSEIF ERRNO=ERR_SOCK_CLOSED THEN RETURN; ELSE ! No e r r o r r e c o v e r y h a n d l i n g ENDIF UNDO SocketClose server_socket ; SocketClose c l i e n t _ s o c k e t ; close lo gfi le ; ENDPROC ENDMODULE
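The server above interprets each received string by its last character, which encodes the move type (1 = absolute joint move, 2 = linear move to a pose computed from joint values, 3 = linear move to a Cartesian pose with configuration data), and it replies with three strings holding the current translation, rotation quaternion and robot configuration. The fragment below is only a minimal sketch of how a client could format these command strings; the helper names are illustrative and it is not the client code of this appendix.

#include <sstream>
#include <string>

// Sketch of the message layout parsed by the RAPID server above.
// Joint move (move_type = 1): "[j1,j2,j3,j4,j5,j6]1"
std::string jointMessage(const double j[6])
{
    std::ostringstream os;
    os << "[" << j[0];
    for (int i = 1; i < 6; ++i)
        os << "," << j[i];
    os << "]" << 1;   // trailing digit = move_type read by the server
    return os.str();
}

// Cartesian move (move_type = 3): "[x,y,z][q1,q2,q3,q4][cf1,cf4,cf6,cfx]3"
std::string poseMessage(const double t[3], const double q[4], const int cf[4])
{
    std::ostringstream os;
    os << "[" << t[0] << "," << t[1] << "," << t[2] << "]"
       << "[" << q[0] << "," << q[1] << "," << q[2] << "," << q[3] << "]"
       << "[" << cf[0] << "," << cf[1] << "," << cf[2] << "," << cf[3] << "]"
       << 3;
    return os.str();
}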

Appendix E

Source code for Galil controller


Listing E.1: Galil interface code
#include "Galil.h"   // vector, string, Galil

#define GRIPPER        "192.168.200.98"
#define FIXTURE        "192.168.200.99"
#define GRIPPERPROGRAM "/.../.../"
#define FIXTUREPROGRAM "/.../.../"

using namespace std;   // cout, ostringstream, vector, string

int main()
{
    try
    {
        // ===== DECLARING IPs FOR BOTH GRIPPER AND FIXTURE =====//
        Galil g(GRIPPER);    // Gripper connection
        Galil g1(FIXTURE);   // Fixture connection
        int init_output = 0, init_output1 = 0;
        vector<char> r;
        vector<char> r1;

        // ===== CONNECT TO THE GALIL CONTROLLERS =====//
        cout << g.connection() << endl;      // GRIPPER
        cout << g.libraryVersion() << endl;
        cout << g1.connection() << endl;     // FIXTURE
        cout << g1.libraryVersion() << endl;
        g.command("CFA");
        g1.command("CFA");

        // ===== PROGRAM DOWNLOAD TO THE GALIL CONTROLLERS =====//
        g.programDownloadFile(GRIPPERPROGRAM);    // Downloads program to controller from file
        g1.programDownloadFile(FIXTUREPROGRAM);

        // ===== MOTION CONTROLLERS INITIALISATION =====//
        g.command("XQ #INIT");
        g1.command("XQ #INIT");
        while (1)
        {
            r = g.record("QR");
            init_output = g.sourceValue(r, "@OUT[01]");   // Reads digital output
            if (init_output == 1)
            {
                cout << ":::::::::: INITIALISATION COMPLETED ::::::::" << endl;
                init_output = 1;
                break;
            }
        }
    }
    catch (string e)   // Catches errors thrown by the Galil library
    {
        cout << e;
    }
}   // END MAIN
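The grasp and pre-grasp routines defined in Listings E.2 and E.3 below are started in the same way as the initialisation above: the PC executes the corresponding label with an XQ command and then polls a digital output that the controller sets (SB) when the motion has finished. The fragment below is only a sketch of this handshake for the two-finger grasp, reusing the calls from Listing E.1; the polled output index (@OUT[05], chosen to match the SB 5 instruction in Listing E.2) is an assumption and not part of the thesis code.

#include "Galil.h"
#include <vector>

// Sketch: trigger the two-finger grasp on the gripper controller and block
// until the controller reports completion on a digital output.
// "g" is assumed to be the connected gripper object from Listing E.1; the
// polled output (@OUT[05]) is an assumption based on "SB 5" in Listing E.2.
void twoFingerGrasp(Galil &g)
{
    g.command("XQ #START2");                      // execute the grasp label
    while (true)
    {
        std::vector<char> rec = g.record("QR");   // query a status record
        if (g.sourceValue(rec, "@OUT[05]") == 1)  // grasp-done flag set by SB 5
            break;
    }
}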

Listing E.2: Gripper control code


#INIT 'Initialization
AL
_ALA=0
_ALB=0
_ALC=0
_ALD=0
i=0
#CB
i=i+1
CB i
JP#CB,i<9
SH ABCD
spd=8000
JG spd,spd,spd,spd
IF (_LFA=1); BG A; ENDIF
IF (_LRB=1); BG B; ENDIF
IF (_LRC=1); BG C; ENDIF
IF (_LRD=1); BG D; ENDIF
IF (_LFA=0); ST A; ENDIF
IF (_LRB=0); ST B; ENDIF
IF (_LRC=0); ST C; ENDIF
IF (_LRD=0); ST D; ENDIF
AM
DP 0,0,0,0
MG "Initialisation Done"
SB 1
WT 1000
JP#MOEN
#START2 'For two finger grasp
MO C
a=@IN[1]
b=@IN[2]
SH ABD
spd=8000
JG spd,spd
BG AB
#3
a=@IN[01]
b=@IN[02]
IF (a=0)
ST A
MG "A STOPPED"
ENDIF
IF (b=0)
ST B
MG "B STOPPED"
ENDIF
JP #3,(a=1)|(b=1)
AM
MG "GRASP DONE"
SB 5
JP #EN
JP #3,(a=1)|(b=1)
AM
MG "GRASP DONE"
SB 6
JP #EN
#START3 'For three finger grasp
a=@IN[1]
b=@IN[2]
SH ABCD
spd=8000
JG spd,spd,spd,spd
BG ABCD
#4
a=@IN[01]
b=@IN[02]
c=@IN[03]
IF (a=0)
ST A
MG "A STOPPED"
ENDIF
IF (b=0)
ST B
MG "B STOPPED"
ENDIF
IF (c=0)
ST C
MG "C STOPPED"
ENDIF
MO D
JP #4,(a=1)|(b=1)|(c=1)
AM
MG "GRASP DONE"
SB 5
JP #EN
JP #4,(a=1)|(b=1)|(c=1)
AM
MG "GRASP DONE"
SB 6
JP #EN
#PRGRDIS 'For small disk pregrasp
CB 4
SH AB
PA 10000,13000
BG AB
AM
MG "DISK PREGRASP READY"
SB 4
JP #EN
#PRGRDI2 'For big disk pregrasp
CB 4
SH AB
PA 16000,13000
BG AB
AM
MG "DISK PREGRASP READY"
SB 4
JP #EN
#PRGRDI3 'For shaft pregrasp
CB 4
SH AB
PA 10000,10000
BG AB
AM
MG "SHAFT PREGRASP READY"
SB 4
JP #EN
#MOEN
MO
MG "Motors OFF"
#EN
MG "END"
EN

Listing E.3: Fixture control code


#INIT 'Label for the Initialisation of the Fixture
i=0
#CB 'Making all used Digital Outputs "0"
i=i+1
CB i
JP#CB,i<5
SH ACD
spd=70000
JG spd,,spd,spd
BG ACD
AM
DP 0,0,0,0
MG "INITIALISATION DONE"
JG spd,,spd,spd
PR 500000,,80000,80000
BG ACD
AM
DP 500000,,80000,80000
MG "PREFIXING DONE!"
SB 3
JP#EN
#FIXING 'Label for the fixing of the shaft
SH A
JGA=70000
BG A
JP#FIXING,_TEA<1000
ST A
AM
MG "FIXING DONE!"
SB 1
JP#EN
#RELEASE 'Label for releasing the assembly
SH ACD
spd=90000
JG spd,,spd,spd
PR 500000,,80000,80000
BG ACD
AM
MG "ASSEMBLY RELEASED!"
SB 2
JP #EN
#EN 'Used for turning OFF the motors
MO
EN
