
Developing and Optimizing strategies for Robotic Peg-In-Hole insertion with a Force/Torque sensor

By

Kamal Sharma
Enrollment No ENGG01201001022

Bhabha Atomic Research Centre

A dissertation submitted to the
Board of Studies in Engineering Sciences
in partial fulfillment of requirements
for the Degree of
MASTER OF TECHNOLOGY
of
HOMI BHABHA NATIONAL INSTITUTE

August, 2011

Homi Bhabha National Institute


Recommendations of the Thesis Examining Committee
As members of the thesis examining committee, we recommend that the dissertation prepared by Kamal Sharma, entitled "Developing and Optimizing strategies for Robotic Peg-In-Hole insertion with a Force/Torque sensor", be accepted as fulfilling the dissertation requirement for the Degree of Master of Technology.

Member-1: Dr. (Smt.) Gopika Vinod
Member-2: Mr. Amitava Roy
Member-3: Dr. A.K. Bhattacharjee
Technical advisor: Mrs. Varsha T. Shirwalkar
Co-guide: Dr. Prabir Kumar Pal
Guide/Convenor: Dr. Prabir Kumar Pal
Chairman:

Signature and Date

Final approval and acceptance of this dissertation is contingent upon the candidate's submission of the final copies of the dissertation to HBNI.

Date:

Place:

DECLARATION
I hereby declare that the investigation presented in this thesis has been carried out by me. The work is original and, to the best of my knowledge and belief, it contains no material previously published or written by another person, nor material which to a substantial extent has been accepted for the award of any other degree or diploma of a university or other institute of higher learning, except where due acknowledgement has been made in the text.

Signature:
Candidate Name: Kamal Sharma
Date:


ACKNOWLEDGEMENTS
I express my sincere thanks and gratitude to Dr. Prabir K. Pal, who has been a marvelous mentor and guide. He provided a caring and innovative environment, and he is never satisfied until a task is made flawless; this encouraged me to develop very robust and accurate applications. Mrs. Varsha T. Shirwalkar proved to be a great Technical Advisor. She taught me to work through all sorts of challenges that arose during my project, and in particular gave me a vast knowledge of Force Control in a very intuitive and easy-to-understand manner. I am thankful to Dr. Venkatesh and Dr. Dwarakanath, who gave me in-depth knowledge of the working of the Force/Torque sensor, and to Mr. A. P. Das, who helped me with mechanical engineering and control-related issues. I thank Mr. Abhishek Jaju, whose expertise on the KUKA robot gave me a thorough understanding of robotics and, especially, of the working of the KUKA robot. I also pay my gratitude to Mr. Shobhraj Singh Chauhan and Mrs. Namita Singh, who provided a very spiritual, motivating and healthy environment that led to fast and smooth completion of the project. Special thanks to Mr. Shishir Kr. Singh, who constantly critiqued my work and provided the continual assessment that helped me make things perfect. Finally, I take this golden opportunity to express my gratitude to Mr. Manjit Singh, Head, DRHR, for teaching me to direct research at application-oriented, real-world problems, and for providing all the facilities and necessary infrastructure that were prerequisites for the initiation and completion of the project.

Kamal Sharma


DEDICATED to Maa, Papa, Mintu & Bajrang Bali Ji


Contents
Abstract viii
List Of Figures ix
List Of Tables xii
Abbreviations xiii
1. Introduction 1
1.2 Problem Statement and Objective 3
1.3 Plan of the thesis 3
2. Industrial Robots & Programming 4
2.1 Industrial robots 4
2.1.1 History 4
2.2 KUKA KR-6 Robot 5
2.2.1 Robot controller 6
2.2.2 3-COM Card 9
2.3 Robot Control 9
2.3.1 Conventional control 9
2.3.2 Programming of KUKA robots (KRL) 10
2.3.3 Real-time control (KUKA.Ethernet RSI XML) 11
3. The Force/Torque Sensor 16
3.1 A General 6-Axis Force/Torque Sensor 16
3.1.1 Stress and Strain 16
3.1.2 Working Of A Strain-Gauge 18
3.1.3 Principle of Strain Measurement 18
3.2 ATI 660-60 F/T Sensor 20
4. Hybrid Position/Force Control 21
4.1 The Theory 21
5. Preliminary Experiments using Force Sensing and Control 23
5.1 Recalibrating the Force/Torque Sensor 23
5.2 Kuka Surface Scanner (KSS) 25
5.3 Kuka Stiffness Finder (KSF) 27
5.4 Lead Through Programming (LTP) Through Force Control 29
5.5 A Continuous Tracing Algorithm (TUSS) 30
5.5.1 Working of TUSS 31
5.5.2 Analyzing TUSS under various cases 34
5.5.5 Improved TUSS to avoid the problem of Deadlock Cycles 36
5.5.6 Some Results With TUSS 37
5.6 Use of such applications in Peg-in-hole insertion 39
6. Peg In The Hole 40
6.1 Hole Search 41
6.1.1 Blind Search Strategies 42
6.1.1.1 Grid Search 42
6.1.1.2 Spiral Search 43
6.1.2 Intelligent Search Strategies 45
6.1.2.1 Neural Peg In The Hole 46
6.1.2.1.1 Neural Networks 46
6.1.2.1.2 Mathematical Model for Parallel Peg 48
6.1.2.1.3 Simulation Results 53
6.1.2.1.4 Tilted Peg Case 55
6.1.2.2 Precession Strategy 56
6.1.2.2.1 Need for Precession 56
6.1.2.2.2 The Precession Strategy 57
6.2 Peg Insertion 58
6.2.1 Gravity Compensation 59
6.2.2 Large Directional Error (LDE) Removal 61
6.2.2.1 LDE Removal Using Information from Hole Search 61
6.2.2.2 LDE Removal Using Moment Information 62


6.2.2.3 Stopping Criterion For LDE Removal 63
6.2.3 Small Directional Error (SDE) Removal 63
7. Optimization 68
7.1 Design Of Experiments 68
7.2 Optimizing the Search 69
7.3 Optimizing the LDE Removal 72
7.4 Optimizing the SDE Removal 74
8. Results and Conclusions 75
8.1 Conclusions 75


Abstract
Almost all robot applications in industry are of the pick-and-place kind. They are robust and accurate only when used in well-structured environments with pre-specified locations for the objects to be handled. But when robots must work in somewhat unstructured environments, where positions of objects are not known precisely in advance while tight tolerances are demanded of the operations, Force/Torque sensing and control is the only way to go. By using a 6-axis Force/Torque sensor at the wrist of the robot, we can read and analyze the forces and torques about the X, Y and Z axes as the robot interacts with an object, and modify its motion accordingly. Peg-in-hole is a benchmark problem for mechanical assembly using force sensing. In this work, we have developed and implemented algorithms for peg-in-hole assembly. The peg and the hole we chose are cylindrical in shape. To start with, the approximate location of the hole is known; it can be obtained, for example, from computer vision. The peg moves to the approximate hole location and starts a search for the hole, so that the centre of the peg falls within the clearance region around the centre of the hole. For this purpose, we have implemented blind search algorithms (Grid and Spiral search) that search for the hole exhaustively. Beyond the blind strategies, we have also implemented intelligent search strategies that lead the peg directly to the hole centre as soon as the hole is sensed by the peg; for these, we have employed a neural strategy as well as a precession strategy. After the positional error between the peg and hole centres is removed by the search, insertion algorithms are needed to remove the orientational error between the peg and the hole.
Here, the Large Directional Error removal algorithm removes the large orientational error, along with any positional error not corrected by the search strategy. We have also implemented the Small Directional Error removal algorithm, which corrects small orientational misalignments so that the peg is inserted smoothly into the hole without jamming. The insertion algorithms are supplemented by a gravity compensation algorithm that makes the Force/Torque sensor readings independent of its orientation. Finally, we optimize the values of various parameters of the search and insertion algorithms to minimize the overall time required for assembly. For this purpose, the Design Of Experiments (DOE) technique was used. Results of these experiments and optimized values of the search and insertion parameters are presented.


List Of Figures
Figure 2.1: KUKA KR-6 Industrial Robot 5
Figure 2.2: KUKA KR-6 Dimensions 7
Figure 2.3: KUKA KR-6 specifications 7
Figure 2.4: KUKA KR C2 controller 8
Figure 2.5: Internals of the KUKA KR C2 controller 8
Figure 2.6: 3-Com LAN card 9
Figure 2.7: KUKA teach pendant 10
Figure 2.8: Example of short robot program syntax 10
Figure 2.9: An example of KRL code 11
Figure 2.10: Functional principle of data exchange 13
Figure 2.11: Data exchange sequences 14
Figure 2.12: The different coordinate systems 15
Figure 3.1: Effects of a tensile and compressive force 16
Figure 3.2: Mechanical properties of some materials 17
Figure 3.3: Converting measured strain into a voltage value 18
Figure 3.4: Converting measured strain into a voltage value 19
Figure 3.5: ATI 660-60 Delta Sensor 20
Figure 4.1: Example of a force-controlled task: turning a screwdriver 21
Figure 4.2: Schematic of a hybrid controller 22
Figure 5.1: Coordinate system followed for the recalibration of the F/T sensor 23
Figure 5.2: GUI for KSS 25
Figure 5.3: Kuka scanning a surface using KSS 25
Figure 5.4: Schematic for the working of KSS 26
Figure 5.5: Profile generated by KSS for the surface shown in Figure 5.3 26
Figure 5.6: Defining stiffness of a material 27
Figure 5.7: KSF working over a mouse pad 28
Figure 5.8: GUI for KSF 28
Figure 5.9: Payload Assist 29
Figure 5.10: A human finger tracing a surface from right to left while maintaining contact 30
Figure 5.11: Black box depicting the human approach 31
Figure 5.12: Robot with a probe, moving over a surface while maintaining contact 31
Figure 5.13: Black box depicting the TUSS algorithm 33
Figure 5.14: Analysing TUSS over various surfaces 34
Figure 5.15: State diagrams of TUSS for various surfaces 35
Figure 5.16: Binary equations for Improved TUSS 37
Figure 5.17: Block diagram for the Improved TUSS algorithm without deadlock cycles 37
Figure 5.18(a): Experimental setup to test the Improved TUSS algorithm 38
Figure 5.18(b): GUI to collect data from the Improved TUSS algorithm 38
Figure 5.19: Comparing the actual contour with the path followed by the Improved TUSS algorithm 39
Figure 6.1: Hole Search 40
Figure 6.2: Depiction of Large Directional Error 41
Figure 6.3: Depiction of Small Directional Error 41
Figure 6.4: Search points for Grid Search 42
Figure 6.5: Grid Search with clearance = 0.5 mm 43
Figure 6.6: Spiral Search pattern 44
Figure 6.7: Spiral Search with clearance = 0.5 mm 45
Figure 6.8: Basic structure of a neurocontroller 46
Figure 6.9: Structure of a neuron 47
Figure 6.10: 3-layer backpropagation neural network 48
Figure 6.11: The parallel peg case 49
Figure 6.12: Case when the center of the peg lies outside the chord of contact 50
Figure 6.13: Case when the center of the peg lies inside the chord of contact 50
Figure 6.14: Computing the moments 51
Figure 6.15: Moments resulting from the mathematical model for the parallel peg case 53
Figure 6.16: Neural network training 54
Figure 6.17: Simulation results for the parallel peg case: a) in 2-D, b) in 3-D 55
Figure 6.18: Tilted peg case 55
Figure 6.19: Contact states for the tilted peg case 56
Figure 6.20: The precession strategy: (a) the peg is initially tilted by θ_tilt, (b) the peg touches the hole surface, and peg height h1 is recorded, (c) As ... 57
Figure 6.21: Peg height with respect to distance from the hole's center 58
Figure 6.22: Need for Gravity Compensation 59
Figure 6.23: Large Directional Error case 61
Figure 6.24: Large Directional Error removal using information from search 62
Figure 6.25: Large Directional Error removal using moment information 62
Figure 6.26: Direction of moment sensed and the direction in which the peg needs to be moved 63
Figure 6.27: Removing Small Directional Error 64
Figure 6.28: a) Rotate until the first wall hits, b) rotate until the second wall hits, c) insert at the midpoint 65
Figure 6.29: Calculating BackJump 65
Figure 6.30: Calculating the upper limit on BackJump 66
Figure 6.31: Insertion with time using SDE Removal 67
Figure 6.32: Insertion and upward force due to jamming 67
Figure 7.1: a) Plot of actual vs. predicted search time (in seconds), b) prediction profiler for a set of 20 experiments, c) prediction profiler for a set of 16 experiments 71
Figure 7.2: a) Plot of actual vs. predicted search time (in seconds), b) prediction profiler 74


List Of Tables
Table 3.1: Specifications for ATI 660-60 Delta Sensor (last row) 20
Table 5.1: Original Calibration Matrix 24
Table 5.2: Calibration Matrix for Recalibrated Sensor 24
Table 5.3: TUSS Algorithm depicted in the Truth Table 33
Table 7.1: DOE for Precession Search 70
Table 7.2: DOE for LDE Removal 73
Table 7.3: Experimental Runs for SDE Removal 74
Table 8.1: Optimized results for Hole Search 75
Table 8.2: Optimized results for LDE Removal 75
Table 8.3: Optimized results for SDE Removal 75


Abbreviations
RCC: Remote Center Compliance
DOF: Degrees Of Freedom
KRL: Kuka Robotic Language
RSI: Robot Sensor Interface
XML: Extensible Markup Language
KSS: Kuka Surface Scanner
KSF: Kuka Stiffness Finder
LTP: Lead Through Programming

TUSS: Trace Using State Space
TCP: Tool Center Point
F/T Sensor: Force/Torque Sensor
BCS: Base Coordinate System
TCS: Tool Coordinate System
LDE: Large Directional Error
SDE: Small Directional Error
DOE: Design Of Experiments


1. Introduction
Robotic operations are usually of the pick-and-place type, often in fields that are remote or hazardous for humans. For such tasks, robots can be programmed by hard-coding the positions or points they must reach to accomplish a particular task. But there are many situations where the environment is not well structured [1], the interactions are not predictable, or the tolerances are very tight and the required accuracy is much higher. One good example of such a task is mechanical assembly, where tolerances are very tight. Suppose two parts need to be mated and the clearance between them is less than a millimetre. The exact location of the female part can be specified through positional measurement or through computer vision. But if the accuracy of position measurement is not good enough for mating such low-clearance parts, the robot may push the male part into the female one inaccurately, and that may develop large forces that can harm the parts or the robot itself. Sometimes camera vision is also impossible because the view is obstructed [2]. So we are left with an approximate idea of the position of one part, and the assembly has to be done with only that much knowledge. This is where force control comes into play: the parts have to be mated using feedback from the forces and torques generated by the interaction of the mating parts. There are two ways to employ force information (force here means both forces and torques) for an assembly task. One is passive sensing. Passive devices for mechanical assembly exist, but their limitation is that they are tailored to a specific assembly and are not general-purpose devices; nor are they flexible enough to adapt to any environment [3].
The other method is active sensing: developing an intelligent controller that continuously takes force feedback and decides its next command to the robot for the assembly to take place. Hybrid Position/Force Control theory [4], which controls the position as well as the forces generated at the manipulator end, provides the method for building such a controller. Peg-in-hole insertion is the essential first step in validating algorithms for robotic assembly. In our work, we started by assembling pegs into holes with 1.5 mm clearance and reached a level of 0.5 mm. For that purpose we developed algorithms for accurately positioning the peg inside the hole as well as properly orienting it for insertion without any jamming. After successfully testing our algorithms, we optimized them for a particular assembly operation.
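To illustrate the active-sensing idea just described, the core of such a controller is a loop that reads the wrench from the Force/Torque sensor and converts it into a small Cartesian correction each control cycle. The Python sketch below is our illustration, not the thesis implementation: the interface names (`read_wrench`, `command_cartesian`), the gain `k`, the simple compliance law, and the target force `f_push` are all assumptions.

```python
# Sketch of one step of an active force-feedback insertion loop.
# All interface names and gains are illustrative assumptions.

def read_wrench():
    """Placeholder: return (fx, fy, fz, mx, my, mz) from the F/T sensor."""
    raise NotImplementedError

def command_cartesian(dx, dy, dz):
    """Placeholder: send a small Cartesian correction to the robot."""
    raise NotImplementedError

def insertion_step(wrench, f_push=5.0, k=0.01):
    """Compute one Cartesian correction (in mm) from the sensed wrench.

    The peg is pressed down with a target contact force f_push (N):
    any force deficit moves the peg further down (negative Z), while
    lateral reaction forces make the peg comply sideways, away from
    the direction of resistance.
    """
    fx, fy, fz, _mx, _my, _mz = wrench
    dz = -k * (f_push - abs(fz))  # restore the target contact force
    dx = -k * fx                  # comply with lateral reaction forces
    dy = -k * fy
    return dx, dy, dz
```

A controller of this kind would run at the robot's interpolation cycle: each cycle it reads the wrench, computes `insertion_step`, and sends the correction via `command_cartesian`.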

1.1 Previous Work


The peg-in-hole problem, being a classical assembly problem, has been addressed by many researchers. W. Haskiya discusses a solution using a passive device [1]. Passive controllers rely on a programmably compliant robot wrist. Usually passive devices do not work with chamferless holes, but Haskiya has devised a solution for chamferless assembly using a passive accommodation approach. W. Newman and Y. Zhao [5] have used feature extraction and moment signals to locate the hole. They learn the location of the hole and the current position of the peg from the moment signals generated by peg-hole contacts, and then train a neural network that can tell the approximate location of the peg with respect to the hole by sensing the moments at any contact state. Phong and Fazel [6] discuss a fuzzy-control based approach to peg-in-hole assembly; the suggested technique operates more like human behaviour during manual assembly, and the method can also learn from teaching data how to perform compliant motion in an assembly task. Gullapalli and Grupen [7] have used associative reinforcement learning for the assembly: such a system learns appropriate actions in various situations through search guided by evaluative performance feedback. Chhatpar [2] used a precession-based strategy for accurately locating the centre of a hole; this strategy uses only force information, without any moment sensing, for the hole search. Various force control strategies are discussed by Shekhar and Khatib [8]. Insertion with small and large directional errors is discussed by Kim and Lim [9]; their method is based on active sensing of moments and appropriately changing the orientation of the peg so as to have a smooth insertion. Design of Experiments (DOE) based optimization of assembly has been discussed by Gravel and Zhang [10].
The statistical nature of assembly and the non-analytical nature of the interactions among the affecting factors make DOE a well-suited optimization technique for assembly tasks. A Genetic Algorithm based approach to assembly optimization has been used by Marvel and Newman [11]. DOE, however, is a specialized technique and must be reworked whenever modifications are introduced into the assembly setup or the robot, while the genetic approach is a generalized machine-learning approach for optimizing the search strategies employed in autonomously assembling components in variable work configurations. In our work, we have simulated the neural network approach using moment information, as proposed by Newman and Zhao [5]. We found that the neural network approach is suitable only when the peg is parallel to the hole surface (which is not the case for real-world assemblies). We implemented the precession strategy of Chhatpar [2] for real-world assemblies, in which the peg can have any orientation with respect to the hole surface. Finally, we optimized the assembly operation using the DOE technique of Gravel and Zhang [10], which is well suited to optimizing assembly operations.

1.2 Problem Statement and Objective


Our research deals with the development and optimization of strategies for robotic peg-in-hole insertion with a Force/Torque sensor. The objective of the project is to develop a robust strategy for robotic peg-in-hole insertion using a 6-axis Force/Torque sensor that provides force feedback along the X, Y and Z directions as well as torque feedback about the same axes. The clearance (i.e., the difference between the peg and hole diameters) was kept as low as 0.5 mm. The assembly has to be done with a 6-DOF (Degrees Of Freedom) Kuka robot. For this purpose, a real-time control program needs to be written that can communicate with the robot and command it to move in Cartesian space so as to accomplish the assembly. After the development and testing of the algorithms, they have to be optimized for a particular assembly. The peg-in-hole problem, in general, can involve pegs and holes of various shapes: square, cylindrical, spline, etc. The cylindrical case is relatively easy to solve because of its rotational invariance, and it is also the most common, so we have developed our algorithms for a cylindrical peg and hole. The holes are assumed to lie on a plane, smooth surface, and it is assumed that only one hole is present at a time within the periphery of the peg.
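As context for the hole-search step, the spiral (blind) search developed later in the thesis can be pictured as a generator of Cartesian probe points around the nominal hole location. The sketch below is ours, not the thesis implementation; the function name, parameter names and default values are illustrative assumptions. The one real constraint it encodes is that the radial pitch of the spiral must not exceed the peg-hole clearance, or the search can step over the hole.

```python
import math

def spiral_search_points(pitch=0.4, turns=5, points_per_turn=24):
    """Archimedean spiral of (x, y) probe points (in mm) around the
    nominal hole centre.

    pitch is the radial growth per full turn; keeping it below the
    peg-hole clearance (0.5 mm in this work) ensures the spiral
    cannot skip over the hole.
    """
    points = []
    for i in range(turns * points_per_turn):
        theta = 2.0 * math.pi * i / points_per_turn
        r = pitch * theta / (2.0 * math.pi)  # radius grows by `pitch` per turn
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

At each generated point the peg would be pressed lightly against the surface; a sudden drop in peg height (or in the reaction force) signals that the peg has fallen into the clearance region of the hole.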

1.3 Plan of the thesis


The thesis starts with a discussion of industrial robots and their programming in Chapter 2. Chapter 3 gives a detailed description of the working of a Force/Torque sensor. Chapter 4 describes the Hybrid Position/Force Control theory that forms the basis of force-controlled applications. Chapter 5 lists and explains the preliminary force-controlled applications we developed, which later form part of the Peg-In-Hole assembly. Chapter 6 presents the implementation of algorithms for the three crucial steps in Peg-In-Hole assembly, viz. Hole Search, Large Directional Error Removal and Small Directional Error Removal. The optimization of parameters for our assembly is dealt with in Chapter 7. Chapter 8 presents the results obtained in our work, and Chapter 9 concludes with a summary and directions for future work.

2. Industrial Robots & Programming


This chapter describes industrial robots, first in general and then in more detail for the robot used in this project.

2.1 Industrial robots


There is a difference between robots in general and industrial robots. Robots in general include all types of robots with special purposes: humanoids (robots constructed to act like a human), cleaning robots, etc. This report, however, focuses on robots used within industry. By the ISO definition, an industrial robot is an automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes [12]. The expression "field of robotics" is often used to cover everything related to robots, normally in industrial applications and typically for production. Robots can be adapted and used for many different tasks. Typical applications for industrial robots are welding, assembly, painting and inspection, performed at high speed and with high precision. Normally the robot is connected through a controller to a laptop or desktop computer on which the programming takes place. Almost all industrial robots come with a so-called teach pendant which can be used to move and control the robot. In other words, the robot can be taught to go to specific points in space and then perform the trajectory it has been taught.

2.1.1 History
The first real robot patents were applied for by George Devol in 1954 [13]. Two years later the first robot was produced by a company called Unimation, founded by George Devol and Joseph F. Engelberger. In these first machines the angles of each joint were stored during a teaching phase and could later be replayed. The accuracy was very high: 1/10000 of an inch. It is worth noting that the most important measure in robotics is not accuracy but repeatability, since robots are normally taught to do one task over and over again. The first 6-axis robot, "the Stanford arm", was invented in 1969 by Victor Scheinman at Stanford University and could mimic the motions of a human arm [13]. The Stanford arm made it possible to follow arbitrary paths in space and permitted the robot to perform more complicated tasks, such as welding and assembly. In Europe, industrial robots took off around 1973 when both KUKA and ABB entered the market and launched their first models, each with six electromechanically driven axes. In the late 1970s the consumer market started showing interest and several American companies entered the market. The real robotics boom came in 1984 when Unimation was sold to Westinghouse Electric Corporation. A few Japanese companies managed to survive (like FANUC, Kawasaki, Honda, etc.), and today the major companies are Adept Technology, Stäubli Unimation, the Swedish-Swiss company ABB (Asea Brown Boveri) and the German company KUKA Robotics.

2.2 KUKA KR-6 Robot


The robot used in this project is a KUKA KR 6 (see Figure 2.1). Its versatility and flexibility make the KUKA KR 6-2 one of the most popular robots. It has a payload of 6 kg. See Figure 2.2 for dimensions of the robot and Figure 2.3 for its specifications.

Figure 2.1: KUKA KR-6 Industrial Robot

Some of the significant robot specifications are:

Accuracy: How close the robot gets to a desired point. When the robot's program instructs it to move to a specified point, it does not actually reach the exact position. Accuracy measures this variance, i.e., the distance between the specified position that the robot is trying to achieve (the programmed point) and the actual X, Y, Z position reached by the robot end effector.

Repeatability: The ability of a robot to return repeatedly to a given position, i.e., the ability of a robotic system or mechanism to repeat the same motion or achieve the same position. Repeatability is a measure of the error or variability when repeatedly reaching for a single position, and is often smaller (better) than accuracy.

Degrees of Freedom (DOF): Each joint or axis on the robot introduces a degree of freedom; each DOF can be a slider, rotary, or other type of actuator. The number of DOF that a manipulator possesses is thus the number of independent ways in which the robot arm can move. An industrial robot typically has 5 or 6 degrees of freedom, of which 3 allow positioning in 3D space (X, Y, Z) while the remaining 2 or 3 orient the end effector (yaw, pitch and roll). Six degrees of freedom are enough to allow the robot to reach all positions and orientations in 3D space; five DOF requires a restriction to 2D space, or else limits orientations. Five-DOF robots are commonly used for handling tools such as arc welders.

Resolution: The smallest increment of motion or distance that can be detected or controlled by the robotic control system. It is a function of the encoder pulses per revolution and the drive (e.g. reduction gear) ratio, and it depends on the distance between the tool center point and the joint axis.

Envelope: A three-dimensional shape that defines the boundaries that the robot manipulator can reach; also known as the reach envelope. The maximum envelope encompasses the maximum designed movements of all robot parts, including the end effector, workpiece and attachments. The restricted envelope is that portion of the maximum envelope to which the robot is restricted by limiting devices. The operating envelope is the part of the restricted envelope that the robot actually uses while performing its programmed motions.

Reach: The maximum horizontal distance from the center of the robot base to the end of its wrist.

Maximum Speed: The speed of a robot moving at full extension with all joints moving simultaneously in complementary directions. The maximum speed is a theoretical value that does not account for loading conditions.

Payload: The maximum payload is the amount of weight the manipulator can carry at reduced speed while maintaining rated precision; the nominal payload is measured at maximum speed while maintaining rated precision. These ratings are highly dependent on the size and shape of the payload due to variation in inertia.

2.2.1 Robot controller


The robot controller used in this project is a KR C2, edition 2005 (see Figure 2.4). The KR C2 runs an embedded version of Windows XP (basically providing the user interface) together with the VxWorks real-time operating system (for actual execution of robot commands) (see Figure 2.5). KRL (KUKA Robot Language) and RSI (Robot Sensor Interface) are installed on the PC of this controller. It has single-axis servo amplifiers, a PC-based controller and an electronic safety system.

Figure 2.2: KUKA KR-6 Dimensions

Figure 2.3: KUKA KR-6 specifications

Figure 2.4: KUKA KR C2 controller

Figure 2.5: Internals of the KUKA KR C2 controller

2.2.2 3-COM Card


A 3-Com LAN card (see Figure 2.6) is required to establish communication between the KUKA KR C2 controller and an external system. This configuration has an advantage over alternatives such as OPC communication because the card is directly connected to the VxWorks RTOS. Execution latency is thus avoided and the robot can be controlled in real time from an external computer.

Figure 2.6: 3-Com LAN card

2.3 Robot Control


In this section we explain common practice in industrial robot control. The first subsection explains how to make the robot perform a task by writing robot code that is read by the robot controller. Section 2.3.2 (Programming of KUKA robots) explains more specifically how KUKA robots are controlled and how the robot language code works. Section 2.3.3 (Real-time control) then describes the type of control method and the tools used in this project.

2.3.1 Conventional control


The common practice for industrial robot control is either to move the robot manually with the robot's teach pendant, or with a program written in the robot's language (in this case KUKA Robot Language, KRL) that is run on the robot controller. Either the programs are written directly in the specific robot language, or the programmer uses some kind of post-processor in combination with CAD/CAM software. The programming of an industrial robot can be divided into two categories: online and offline programming. Offline programming means that the robot code is created offline, with no connection to the physical robot. The generated program is later transferred to the robot and executed in the real environment; normally the code is verified in some sort of simulation software before the program is transferred. Online programming means that the software is directly connected to the physical robot. This can be done with a teach pendant (see Figure 2.7) used for moving the robot through certain positions which are stored, after which a trajectory is created between the stored points. This process of adjusting a position in space is commonly referred to as "jogging" or "inching" the robot.

Figure 2.7: KUKA teach pendant

Most industrial robots have a similar programming structure, telling them how to act. One defines points in space, normally called P1, P2, P3 etc., and then specifies how the robot is supposed to reach these points. An example of program syntax is shown below in Figure 2.8. Many industrial robot manufacturers offer a simulation software package for their robots, which eases the programming and at the same time makes offline programming possible. The benefits of offline programming are many: it prevents costly damage that could happen in the real world, and it can save money since the production need not be interrupted while programming. It also speeds up, for example, the ramp-up time when switching to production of a new model.

Figure 2.8: Example of short robot program syntax

2.3.2 Programming of KUKA robots (KRL)


For programming a KUKA robot, a specific language called KRL (KUKA Robot Language) is used. The structure of the code is similar to most other robot languages: points are specified in space and commands are used to express how the robot is supposed to reach these points. When programming in KRL, a Source file (.SRC) is created containing the actual program code that will be read by the robot controller, along with a Data file (.DAT) containing the declaration of variables and other prerequisites that are read and stated automatically before executing the .SRC file. The two files must always have the same name to be identified as the same program by the robot controller. A small example of the contents of a .SRC file is shown in Figure 2.9. The declaration and initialization parts can also be located in the .DAT file, but are shown here in the same program to explain the different steps more clearly. The PTP HOME in the main section makes the robot perform a point-to-point motion to the defined home position, which is set in the initialization part. The first instruction to the robot is normally called a BCO run (Block Coincidence), which makes the robot go to a predefined point in space. This is done in order to establish correspondence between the current real robot position and the programmed position. The next line makes the robot go to the specified point, which in this case is defined directly in the same row as a point in the base coordinate system specified by six values: X, Y, Z, which define the point in 3D space, and A (rotation around the Z-axis), B (rotation around the Y-axis) and C (rotation around the X-axis), which define the tool orientation.

Figure 2.9: An example of KRLcode
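Along the lines of Figure 2.9, a minimal .SRC file might look like the sketch below; the program name, point values and motion commands are illustrative assumptions, not code from the actual project:

```krl
DEF EXAMPLE()
   ; Declaration (could equally live in the matching .DAT file)
   DECL POS P1

   ; Initialization: X, Y, Z in mm, orientation A, B, C in degrees
   P1 = {X 400, Y 0, Z 600, A 0, B 90, C 0}

   ; BCO run: establish coincidence between real and programmed position
   PTP HOME

   ; Point-to-point motion to P1 in the base coordinate system
   PTP P1
END
```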

2.3.3 Real-time control (KUKA.Ethernet RSI XML)


Normally industrial robots are programmed to perform a single task, for example welding the same part over and over throughout the day; hence they are not controlled in real time. However, real-time control can be used for tasks that are non-repeatable and require the user to alter the robot's movement from time to time. Examples include inspection with a camera, or placing a gripper as the end-effector in order to pick up and move different objects. The KUKA.Ethernet RSI XML software package is required to control the robot from an external computer in real time. The term "external system" refers to the computer(s) connected to the robot controller (in this case the computer that hosts the server with which the robot controller communicates). KUKA.Ethernet RSI XML is an add-on technology package with the following functions [14]:

- Cyclical data transmission from the robot controller to an external system in the interpolation cycle of 12 ms (e.g. position data, axis angles, operating mode, etc.)
- Cyclical data transmission from an external system to the robot controller in the interpolation cycle of 12 ms (e.g. sensor data)
- Influencing the robot in the interpolation cycle of 12 ms
- Direct intervention in the path planning of the robot

The characteristics of the package are the following:

- Reloadable RSI object for communication with an external system, in conformity with KUKA.RobotSensorInterface (RSI)
- Communication module with access to standard Ethernet
- Freely definable inputs and outputs of the communication object
- Timeout monitoring of the data exchange
- Expandable data frame sent to the external system, consisting of a fixed section that is always sent and a freely definable section

KUKA.Ethernet RSI XML enables the robot controller to communicate with the external system via a real-time-capable point-to-point network link. The exchanged data is transmitted as XML strings via the TCP/IP or UDP/IP protocol. For our work, TCP/IP was used: it provides reliable, connection-oriented transfer of packets between the two communicating entities, with responses from the destination back to the host, which UDP/IP lacks [15]. Programming of the KUKA.Ethernet RSI XML package is based on creating and linking RSI objects. RSI objects are small pieces of preprogrammed code that can be executed and that have additional functionality beyond normal KRL code. To be able to communicate externally through Ethernet, a specific standard object (ST_ETHERNET) needs to be created. The code line for creating the ST_ETHERNET object is typically:

err = ST_ETHERNET(A, B, "config_file.xml")

where err is a variable of the string type used by RSI XML (called RSIERR) containing the error code produced when creating the object (normally #RSIOK when it works), A is an integer value containing the specific RSI object ID so that the object can be located and referred to, and B is an integer value for the container to which the RSI object should belong. config_file.xml is a configuration file located in the INIT folder (path C:/KRC/ROBOTER/INIT) on the robot controller that specifies what should be sent and received by the robot controller; its content is explained further down. ST_ETHERNET is the object that can be influenced by external signals; it also sends data back to the external system in the form of XML strings containing different tags with data, for example the robot's actual axis positions or actual Cartesian position. This data must be sent to the server and back within each interpolation cycle of 12 ms. In this project, the communication object ST_ETHERNET was used. When such an object is created and linked correctly, the robot controller connects to the external system as a client. There are many different types of RSI objects, and depending on what you want to do, you have to create and link the correct objects to each other. Besides the standard Ethernet card, an additional card (3-COM) was needed to be able to handle the speed of the transferred data [14].

Figure 2.10: Functional principle of data exchange

The robot controller initiates the cyclical data exchange with a KRC data packet and transfers further KRC data packets to the external system in the interpolation cycle of 12 ms. This communication cycle is called an IPO cycle (Input Process Output) and can be seen in Figure 2.10 above. The external system must respond to each KRC data packet with a data packet of its own.


To be able to influence the robot, one needs to initiate an RSI object for the movements. There are mainly two objects used for this. The first, ST_AXISCORR(A, B), is for specific movements in axes A1 to A6, where A is the specific ID of the created object and B is the container to which the object belongs. The second, ST_PATHCORR(A, B), is for movements in Cartesian coordinates, with A and B as for ST_AXISCORR. A coordinate system (normally BASE, TCP or WORLD) is also needed as a reference for the movements. This is done by creating an RSI object ST_ON1(A, B), where parameter A is a string naming the coordinate system to be used (expressed as #BASE, #TCP or #WORLD) and B is an integer value: 0 if the correction values sent to the robot are to be absolute, 1 if they are to be relative. A schematic picture of the data exchange sequence with the different RSI objects is shown in Figure 2.11.

Figure 2.11: Data exchange sequences

When the robot is delivered from the factory, the BASE coordinate system is the same as WORLD, and both are located in the base of the robot by default. BASE is normally moved to the base of the workpiece on which the robot is working. The differences between the coordinate systems can be seen in Figure 2.12. (NOTE: For the robot used in this thesis, all three systems, WORLD CS, ROBOT CS and BASE CS, have the same origin located at the base of the robot.)


When the robot controller communicates with the external system, it exchanges XML (Extensible Markup Language) strings. The IP address and port to which the robot controller will connect when establishing the connection are set in the configuration file. The sub-tags under <SEND> specify what the robot controller sends to the external system; the most important tag for this project is DEF_RIst, the real position of the robot's end-effector. The tag <RECEIVE> describes what the robot controller expects to receive from the external system, in this case corrections in six values (X, Y, Z, A, B and C), tagged as RKorr.
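A configuration file of this shape might look like the sketch below; the tag names follow the RIst/RKorr convention described above, but the IP address, port and exact attribute layout are illustrative assumptions rather than the project's actual file:

```xml
<ROOT>
  <CONFIG>
    <IP_NUMBER>192.168.0.10</IP_NUMBER>  <!-- external system, illustrative -->
    <PORT>6008</PORT>
    <PROTOCOL>TCP</PROTOCOL>
  </CONFIG>
  <SEND>
    <ELEMENTS>
      <!-- Actual Cartesian position of the end-effector -->
      <ELEMENT TAG="DEF_RIst" TYPE="DOUBLE" INDX="INTERNAL" />
    </ELEMENTS>
  </SEND>
  <RECEIVE>
    <ELEMENTS>
      <!-- Cartesian corrections from the external system (X shown; Y, Z, A, B, C follow the same pattern) -->
      <ELEMENT TAG="RKorr.X" TYPE="DOUBLE" INDX="1" HOLDON="1" />
    </ELEMENTS>
  </RECEIVE>
</ROOT>
```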

Figure 2.12: The different coordinate systems


3. The Force/Torque Sensor


This section describes the working of a general 6-axis Force/Torque Sensor and then provides details of the sensor used for our work.

3.1 A General 6-Axis Force/Torque Sensor


A 6-axis Force/Torque sensor is an electromechanical device that can sense the forces along the three Cartesian axes, i.e. Fx, Fy, Fz, as well as the torques (moments) about the same axes, i.e. Tx, Ty, Tz. Such sensors are strain-gauge based, and their working is explained below.

3.1.1 Stress and Strain


When a material receives a tensile force P, it develops a stress corresponding to the applied force. In proportion to the stress, the cross-section contracts and the length elongates by ΔL from the original length L (see the upper illustration in Figure 3.1).

Figure 3.1: Effects of a tensile and compressive force

The ratio of the elongation to the original length is called the tensile strain and is expressed as:

ε = ΔL/L, where ε is the strain, L the original length, and ΔL the elongation.

If the material receives a compressive force (see the lower illustration in Figure 3.1), it bears a compressive strain expressed in the same way: ε = ΔL/L, with ΔL now the contraction. The relation between the stress and the strain initiated in a material by an applied force follows from Hooke's law:

σ = E·ε, where σ is the stress, E the elastic modulus, and ε the strain.

Stress is thus obtained by multiplying strain by the elastic modulus. When a material receives a tensile force, it elongates in the axial direction while contracting in the transverse direction. Elongation in the axial direction is called longitudinal strain, and contraction in the transverse direction, transverse strain. The absolute value of the ratio between the transverse strain and the longitudinal strain is called Poisson's ratio:

ν = |ε₂/ε₁|, where ν is Poisson's ratio, ε₁ the longitudinal strain, and ε₂ the transverse strain.

Poisson's ratio differs depending on the material. For reference, Figure 3.2 lists the mechanical properties, including Poisson's ratio, of some major industrial materials.

Figure 3.2 : Mechanical Properties of some materials


3.1.2 Working Of A Strain-Gauge


Each metal has its specific resistance. An external tensile (compressive) force increases (decreases) the resistance by elongating (contracting) the element. Suppose the original resistance is R and the strain-initiated change in resistance is ΔR. Then the following relation holds:

ΔR/R = Ks·(ΔL/L) = Ks·ε

where Ks is the gauge factor, the coefficient expressing strain gauge sensitivity. General-purpose strain gauges use copper-nickel or nickel-chrome alloy for the resistive element, and the gauge factor provided by these alloys is approximately 2.

3.1.3 Principle of Strain Measurement


Strain-initiated resistance change is extremely small. Thus, for strain measurement, a Wheatstone bridge is formed to convert the resistance change into a voltage change. Suppose in Figure 3.3 the resistances (Ω) are R1, R2, R3 and R4 and the bridge excitation voltage (V) is E. Then the output voltage e₀ (V) is obtained from:

e₀ = ((R1·R3 − R2·R4) / ((R1 + R2)(R3 + R4))) · E

The strain gauge is bonded to the measuring object with a dedicated adhesive. Strain occurring at the measuring site is transferred to the strain-sensing element via the gauge base.

Figure 3.3: Converting measured Strain into Voltage value


For accurate measurement, the strain gauge and adhesive should match the measuring material and operating conditions, including temperature. If gauge R1 changes by ΔR under strain:

e₀ = (((R1 + ΔR)·R3 − R2·R4) / ((R1 + ΔR + R2)(R3 + R4))) · E

If R1 = R2 = R3 = R4 = R,

e₀ = ((R² + R·ΔR − R²) / ((2R + ΔR)·2R)) · E

Since R may be regarded as much larger than ΔR,

e₀ ≈ (1/4)·(ΔR/R)·E = (1/4)·Ks·ε·E

The output voltage thus obtained is proportional to the change in resistance, i.e. to the strain. This microscopic output voltage is amplified for analog recording or digital indication of the strain. The output voltages of the sensor are therefore proportional to the forces and torques applied to the sensor's gauges. A Force/Torque sensor, in principle, consists of multiple such gauges so as to provide the values of forces and torques about multiple axes. Figure 3.4 shows a dismantled Force/Torque sensor.
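As a quick numerical check of the quarter-bridge approximation above, a short script can compare the exact bridge equation with the (1/4)·Ks·ε·E approximation; the gauge resistance, gauge factor, strain and excitation voltage are arbitrary illustrative values:

```python
def bridge_output_exact(R, dR, E):
    """Exact Wheatstone bridge output with one active gauge (R1 = R + dR)."""
    R1, R2, R3, R4 = R + dR, R, R, R
    return ((R1 * R3 - R2 * R4) / ((R1 + R2) * (R3 + R4))) * E

def bridge_output_approx(Ks, strain, E):
    """Quarter-bridge approximation: e0 = (1/4) * Ks * strain * E."""
    return 0.25 * Ks * strain * E

# Illustrative values: 350-ohm gauge, gauge factor 2, 1000 microstrain, 5 V excitation
R, Ks, strain, E = 350.0, 2.0, 1000e-6, 5.0
dR = Ks * strain * R  # resistance change from the gauge-factor relation
print(bridge_output_exact(R, dR, E), bridge_output_approx(Ks, strain, E))
# both are about 2.5 mV, confirming the approximation
```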

Figure 3.4: A dismantled Force/Torque sensor

Finally, the three components of force and the three components of torque can be calculated as:

[Fx Fy Fz Tx Ty Tz]^T = [A]6x6 × [V1 V2 V3 V4 V5 V6]^T

where V1 to V6 are the six output voltages and A is a 6×6 matrix known as the calibration matrix.
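The decoding step is just a matrix-vector product; a minimal sketch follows, using a made-up diagonal calibration matrix for illustration (a real calibration matrix, such as the one in Chapter 5, is dense):

```python
def decode_ft(calibration, voltages):
    """Multiply the 6x6 calibration matrix by the 6 gauge voltages
    to obtain [Fx, Fy, Fz, Tx, Ty, Tz]."""
    return [sum(row[j] * voltages[j] for j in range(6)) for row in calibration]

# Illustrative calibration matrix: identity scaled by 10
A = [[10.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
V = [0.1, 0.2, 0.0, 0.05, -0.1, 0.3]  # example gauge voltages
print(decode_ft(A, V))  # [1.0, 2.0, 0.0, 0.5, -1.0, 3.0]
```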

3.2 ATI 660-60 F/T Sensor


The sensor used for our work is ATI 660-60 Delta F/T Sensor (Force/Torque Sensor) (see Figure 3.5). The specifications for our sensor are given in Table 3.1.

Figure 3.5: ATI 660-60 Delta Sensor

Table 3.1: Specifications for ATI 660-60 Delta Sensor (Last Row)


4. Hybrid Position/Force Control


This section describes the theory of Hybrid Position/Force Control that is widely used in ForceControlled applications with active sensing.

4.1 The Theory


The theory was given by M. H. Raibert and J. J. Craig in 1981 [4]. Their approach is based on a theory of compliant force and position control of manipulators. Every manipulation task can be broken down into elemental components that are defined by a particular set of contacting surfaces. With each elemental component is associated a set of constraints, called the natural constraints, that result from the particular mechanical and geometric characteristics of the task configuration. For instance, a hand in contact with a stationary rigid surface is not free to move through that surface (position constraint), and, if the surface is frictionless, it is not free to apply arbitrary forces tangent to the surface (force constraint). Figure 4.1 shows a task configuration for which compliant control is useful, along with the associated natural constraints. In general, for each task configuration a generalized surface can be defined in a constraint space having N degrees of freedom, with position constraints along the normals to this surface and force constraints along the tangents. These two types of constraint, force and position, partition the degrees of freedom of possible hand motions into two orthogonal sets that must be controlled according to different criteria. Additional constraints, called artificial constraints, are introduced in accordance with these criteria to specify desired motions or force patterns in the task configuration. That is, each time the user specifies a desired trajectory in either position or force, an artificial constraint is defined. These constraints also occur along the tangents and normals to the generalized surface, but, unlike natural constraints, artificial force constraints are specified along surface normals and artificial position constraints along tangents; in this way, consistency with the natural constraints is preserved.

Figure 4.1: Example of a force-controlled task: turning a screwdriver

Once the natural constraints are used to partition the degrees of freedom into a positioncontrolled subset and a force-controlled subset, and desired position and force control trajectories are specified through artificial constraints, it remains to control the manipulator. The controller is shown in Figure 4.2 below.

Figure 4.2: Schematic of A Hybrid Controller

There are two sub-loops in the controller, one for position control and another for force control. The position control loop continuously senses the manipulator position and corrects it according to the set-point, for those degrees of freedom that have been partitioned for position control. The other sub-loop continuously senses the forces/torques at the manipulator end (through the Force/Torque sensor) and tries to maintain the set-point values of force and torque, for the degrees of freedom that have been partitioned for force control.
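One cycle of such a controller can be sketched in code. The selection vector s plays the role of the partitioning described above; the proportional gains, the helper names and the three-axis example are illustrative assumptions only, not the actual controller of Raibert and Craig:

```python
def hybrid_control_step(s, x, x_des, f, f_des, kp_pos=0.5, kp_force=0.01):
    """One cycle of a hybrid position/force controller.

    s[i] = 1 if axis i is force-controlled, 0 if position-controlled.
    x, x_des : actual and desired positions per axis
    f, f_des : sensed and desired forces per axis
    Returns a per-axis correction command."""
    cmd = []
    for i in range(len(s)):
        if s[i]:  # force-controlled axis: act on the force error
            cmd.append(kp_force * (f_des[i] - f[i]))
        else:     # position-controlled axis: act on the position error
            cmd.append(kp_pos * (x_des[i] - x[i]))
    return cmd

# Example: X force-controlled (press with 10 N), Y and Z position-controlled
s = [1, 0, 0]
cmd = hybrid_control_step(s, x=[0.0, 5.0, 2.0], x_des=[0.0, 6.0, 2.0],
                          f=[4.0, 0.0, 0.0], f_des=[10.0, 0.0, 0.0])
print(cmd)  # approximately [0.06, 0.5, 0.0]
```

In a real application these corrections would be sent to the robot each interpolation cycle, e.g. as RKorr values over the RSI link.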


5. Preliminary Experiments using Force Sensing and Control


This section describes the initial-stage experiments done to gain familiarity with the force control scheme.

5.1 Recalibrating the Force/Torque Sensor


Just before this project started, our F/T sensor was partly damaged in a collision, and one of its output voltages (V1) showed signal saturation. We decided to recalibrate it and, until the new sensor arrived, use it for some simpler applications that require force and torque values in fewer than six dimensions. We recalibrated the sensor by keeping the force in one direction constant; we assumed that for our applications Fy, referring to the coordinate system shown in Figure 5.1, would remain constant. The original calibration matrix of the sensor is shown in Table 5.1.

Figure 5.1: Coordinate System followed for the recalibration of F/T sensor


Table 5.1: Original Calibration Matrix

 -0.03943    0.046096    6.642287   -7.92787    89.45006     4.510241
145.2416     0.052026    5.769174    0.160803   -5.80604   144.9416
 -0.1381    -0.2505     -2.62329   -82.7549     -3.25622    86.55779
-50.2394    -8.00946    -0.2009    -46.9585      2.268988   -6.48073
146.0442     4.96702    -5.00965     0.304252   -2.82034     0.011043
  0.21388   -2.88219     0.295451   -2.92291    -2.78102     0.205997

So, [Fx Fy Fz Tx Ty Tz]^T = [A]6x6 × [V1 V2 V3 V4 V5 V6]^T. We assumed that Fz would remain constant for our experiments, and eliminated that variable, together with the dependency on V1, from the system of equations given above. We thus arrive at a new calibration matrix, shown in Table 5.2.

Table 5.2: Calibration Matrix for the Recalibrated Sensor

  0.044519    6.68164   -82.7566     -3.21656    86.55561
-50.6765     -0.19803    89.13315    12.42173    -0.13602
 -0.01988   -47.3123     10.24067    -5.06157     0.306574
  4.914707   -8.57759     0.268465   -8.68325     0.613597
 -2.91404    -2.61686     0.053409   -2.77385     0.044305

And now, [Fx Fy Tx Ty Tz]^T = [A]5x5 × [V2 V3 V4 V5 V6]^T. Our experiments thus become independent of the signal V1, provided that Fz can be kept constant. Chapter 5 employs the recalibrated sensor for its experiments, while from Chapter 6 onwards we have used the new sensor, because in the later experiments it was almost impossible to keep the force component Fz constant.


5.2 Kuka Surface Scanner (KSS)


We developed a discrete-step surface scanner that uses one-dimensional force feedback to generate the profile of any arbitrary surface over which it is run. Figure 5.2 shows the GUI for KSS and Figure 5.3 shows the robot scanning a surface.

Figure 5.2: GUI for KSS

Figure 5.3: KUKA scanning a surface using KSS

The KSS has to be supplied with two Cartesian points, a Start Point and an End Point (see Figure 5.2), which are the two extremes of a volume enclosing the surface to be scanned. The robot holds a probe; starting from the Start Point, it moves down until it experiences a threshold force in the upward direction due to contact of the probe with the surface (see Figure 5.4). The robot then scans the surface discretely in a grid-like fashion up to the End Point, with the step size (quantum) being the value specified by the user (see Figure 5.2).
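The grid scan just described can be sketched as follows. The touch_down callback is a hypothetical stand-in for the robot motion plus force-threshold loop, and the grid logic is a simplification of the actual KSS application:

```python
def scan_surface(start, end, step, touch_down):
    """Discrete grid scan: at each (x, y) grid point, descend until a
    threshold contact force is felt, and record the contact height z.

    touch_down(x, y) is a caller-supplied function standing in for the
    robot motion + force-feedback loop; it returns the z of first contact."""
    profile = []
    y = start[1]
    while y <= end[1]:
        x = start[0]
        while x <= end[0]:
            profile.append((x, y, touch_down(x, y)))
            x += step
        y += step
    return profile

# Illustrative use: a fake inclined "surface" z = 0.1*x + 0.2*y replaces the probe
fake_surface = lambda x, y: 0.1 * x + 0.2 * y
points = scan_surface((0, 0), (2, 2), 1, fake_surface)
print(len(points))  # 9 grid points for a 3x3 scan
```

The recorded profile list corresponds to the points that are stored and plotted to produce a surface plot like Figure 5.5.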

Figure 5.4: Schematic for working of KSS

The discrete points at which the probe stops are stored and plotted to generate the profile of the scanned surface. The profile generated for the surface shown in Figure 5.3 is shown in Figure 5.5.

Figure 5.5: Profile generated by KSS for the surface shown in Figure 5.3

5.3 Kuka Stiffness Finder (KSF)


After KSS, we developed another application that also needs one-dimensional force feedback. The KUKA Stiffness Finder finds the stiffness of an unknown material placed under the probe. The stiffness of a material is an extensive mechanical property and is defined as follows (see Figure 5.6):

Figure 5.6: Defining the stiffness of a material

The stiffness, k, of a body is a measure of the resistance offered by an elastic body to deformation. For an elastic body with a single degree of freedom (for example, stretching or compression of a rod), the stiffness is defined as

k = F/δ, where F is the force applied on the body and δ is the displacement produced by the force along the same degree of freedom (for instance, the change in length of a stretched spring) [16]. The KUKA Stiffness Finder first identifies contact with the material surface, and then presses the material until a predefined force is sensed. In the process, it records the penetration caused and calculates the stiffness as defined in the equation above. KSF also maintains a log of stiffness values for all the materials that have been tested with it. It uses this log to compare a new material placed under it with the materials already recorded, and tries to predict what the new material is. We tested KSF over a mouse pad twice (see Figure 5.7 and Figure 5.8); the second time, it correctly predicted the material as the mouse pad.
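The measure-and-match logic of KSF can be sketched as below; the material log, the tolerance and the numerical values are invented for illustration, and the real application obtains force and penetration from the robot and sensor:

```python
def stiffness(force, penetration):
    """k = F / delta: applied force divided by the displacement it produced."""
    return force / penetration

def predict_material(k, log, tolerance=0.15):
    """Return the logged material whose stiffness is closest to k,
    if it is within a relative tolerance; otherwise None (unknown material)."""
    best = min(log, key=lambda name: abs(log[name] - k))
    if abs(log[best] - k) <= tolerance * log[best]:
        return best
    return None

log = {"mouse pad": 800.0, "rubber sheet": 2500.0}  # N/m, illustrative
k = stiffness(force=8.0, penetration=0.01)          # 8 N caused 10 mm penetration
print(k, predict_material(k, log))                  # matches "mouse pad"
```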


Figure 5.7: KSF working over a mouse pad

Figure 5.8: GUI for KSF

5.4 Lead Through Programming (LTP) Through Force Control


Lead Through Programming, or LTP, is an application in which the manipulator is driven through the various motions needed to perform a given task, recording the motions into the robot's computer memory [17]. LTP through force control is the easiest form of LTP, because the operator can lead the robot, which is otherwise rigid, through any path by a push or pull. This allows direct teaching of any arbitrary path the operator has in mind. It is also useful when a heavy load has to be transferred from one place to another (Payload Assist): the robot need not be taught the destination positions again and again; instead, the operator can lift the load in the robot's gripper and take it anywhere with a slight push or pull. In our application, we have kept the distances traversed by the robot proportional to the force applied by the operator, so that the harder the push or pull, the greater the displacement. Figure 5.9 shows a heavy mass being transferred very easily by an operator using this application. (Note that we performed this experiment in the X-Z plane only, thereby keeping Fy constant, since we are using the recalibrated sensor (Section 5.1).)

Figure 5.9: Payload Assist
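One control cycle of this force-proportional motion might look like the sketch below; the gain and deadband values are illustrative assumptions, not the ones used on the robot.

```python
def ltp_step(fx, fz, k_gain=0.05, deadband=1.0):
    """Map the operator's push/pull (N) to a displacement command (mm) in the
    X-Z plane: zero inside a noise deadband, proportional to force outside it."""
    def axis(f):
        return 0.0 if abs(f) < deadband else k_gain * f
    return axis(fx), axis(fz)
```

A deadband is included so that sensor noise does not move the robot when the operator is not touching it.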


5.5 A Continuous Tracing Algorithm (TUSS)


TUSS, or Trace Using State Space, is an improvement over KSS. Since KSS is a discrete-step analyzer, we moved on to develop an algorithm that can guide the robot to continuously trace an arbitrary surface while maintaining contact with that surface using force control. Most of the work done in robot force control assumes that a model of the environment is known a priori (Demey, Bruyninckx and De Schutter [18]; Masoud and Masoud [19]). But generally, it is difficult to obtain a correct model of the environment, and sometimes force control is required on an unknown surface. TUSS does not assume any model of the environment. We have implemented hybrid force/position control as defined by M.H. Raibert and J.J. Craig [4]. Here we have kept the orientation of the robotic tool constant. The contour tracing of the surface is done by applying a constant force in the downward x direction while the robot motion is in the y direction. The tracing tool makes a point contact with the surface. Hence, out of six Cartesian degrees of freedom, x is force controlled whereas y, z, a, b, c are position controlled. Many applications require the orientation of the tool to be normal to the surface of the work piece; this is addressed by several authors using velocity/force control (Kazerooni and Her [20]; Goddard, Zheng, and Hemami [21]). Orientation control in the normal direction is not considered here. If a person has to move a finger over a surface keeping the orientation of the finger fixed, two-dimensional force feedback suffices, as shown in Figure 5.10:

Figure 5.10: A human finger tracing a surface from right to left while maintaining the contact

This approach can be stated in steps as under:
1. Move down until touch is sensed or there is an obstacle in the downward direction. This indicates that the surface to be traced is reached.
2. Move left while feeling the touch in the downward direction and there is no obstacle in the direction of motion.
3. Move slightly up if there is any obstacle on the surface.
The three steps stated above are executed in parallel.


So, to implement this approach, we have to create a black-box as shown in Figure 5.11:

Figure 5.11: Black Box depicting the human approach

5.5.1 Working of TUSS


We have to implement the black box shown in Figure 5.11 on the robot side. For our algorithm, we have used the hybrid control approach, in which force control is done along the z axis and position control along the x axis. Suppose we have to move over a surface as shown in Figure 5.12:

Figure 5.12: Robot with a probe, moving over a surface while maintaining the contact

We have fixed a 2-D coordinate system on our aluminum probe, i.e. the tool (Figure 5.12). Some basic elements of the TUSS algorithm are:


1. Inputs:
FZ: To sense the feeling of touch, we need to monitor the force in the upward direction. FZ is a binary variable that, when set, represents the feeling of touch, i.e. the force in the positive Z direction. So, FZ=1 when the force in the +Z direction crosses a threshold value; else FZ=0.
FX: To sense an obstacle while in motion, we need to detect the force in the positive X direction (remember that we are moving in the negative X direction). FX is a binary variable that, when set, represents the feeling of an obstacle, i.e. the force in the positive X direction. This can be given as under: FX=1 when Force in +X > FXThreshold; else FX=0.
2. Outputs:
UP: A binary variable that, when set, commands the robot to move in the upward direction, i.e. the positive Z direction. When UP is reset, there is no motion of the probe in the positive Z direction.
DOWN: A binary variable that, when set, commands the robot to move in the downward direction, i.e. the negative Z direction. When DOWN is reset, there is no motion of the probe in the negative Z direction.
LEFT: A binary variable that, when set, commands the robot to move in the left direction, i.e. the negative X direction. When LEFT is reset, there is no motion of the probe in the negative X direction.
So, our black-box now looks as shown in Figure 5.13:


Figure 5.13: Black-Box depicting the TUSS Algorithm

Now, we can simply state our algorithm as:
Move down until touch is sensed and there is no obstacle, i.e. DOWN=1 when FZ=0 and FX=0; else DOWN=0.
Move left if touch is felt and there is no obstacle, i.e. LEFT=1 when FZ=1 and FX=0; else LEFT=0.
Move up if there is any obstacle, i.e. UP=1 when FX=1; else UP=0.
Our algorithm can be presented as a truth table, as shown in Table 5.3:

Table 5.3: TUSS Algorithm depicted in the Truth Table

Therefore, the downward motion is controlled by monitoring the force in the +Z direction, and the left and upward motions are controlled by monitoring the force in the +X direction. We have made the distance to be moved dependent on the corresponding forces, i.e.:

When DOWN=1, move in the -Z direction by: KDOWN × (FZThreshold - Current Force in +Z direction)
When UP=1, move in the +Z direction by: KUP × (Current Force in +X direction - FXThreshold)
When LEFT=1, move in the -X direction by: KLEFT
(Note: The motion in the -X direction is kept a positive constant so as to have a net motion from right to left, while the forces in Z and X are maintained by moving up and down.)

where KDOWN, KUP and KLEFT are respective positive constants. This leads to a constant-speed motion until some touch or obstacle is sensed, after which the motion is like that of a spring.
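One cycle of the TUSS logic above can be sketched as follows. The thresholds and gains are the values used in the experiment of Section 5.5.6; the single-command-per-cycle structure is a simplification of the parallel execution described above.

```python
FZ_THRESHOLD = 4.0                      # N, touch threshold in +Z
FX_THRESHOLD = 4.0                      # N, obstacle threshold in +X
K_DOWN, K_UP = 0.01, 0.005              # mm/N
K_LEFT = 0.1                            # mm, constant leftward step

def tuss_step(force_z, force_x):
    """Return (dx, dz) in mm for one cycle: dz > 0 is up, dx < 0 is left."""
    fz = force_z > FZ_THRESHOLD         # FZ: touch sensed
    fx = force_x > FX_THRESHOLD         # FX: obstacle sensed
    if fx:                              # UP when FX = 1
        return 0.0, K_UP * (force_x - FX_THRESHOLD)
    if fz:                              # LEFT when FZ = 1 and FX = 0
        return -K_LEFT, 0.0
    return 0.0, -K_DOWN * (FZ_THRESHOLD - force_z)   # DOWN otherwise
```

The proportional UP and DOWN distances reproduce the spring-like behaviour near contact, while LEFT stays a fixed step.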

5.5.2 Analyzing TUSS under various cases


Any arbitrary surface can be divided into four basic kinds. So, we will discuss the working of the TUSS algorithm in these four cases using a state-space approach.

Figure 5.14: Analysing TUSS over various surfaces

Case 1: Flat Surface, Fully Horizontal. This case is shown in Figure 5.14(a). (Note: The arrows depict the motion of the probe.) The state diagram for this case is shown in Figure 5.15(a). In this case, the probe moves on the surface smoothly while maintaining continuous contact.


Figure 5.15: State Diagrams of TUSS for various surfaces

Case 2: Flat Surface, Fully Vertical. This case is shown in Figure 5.14(b), and its state diagram in Figure 5.15(b). In this case, the probe moves on the surface smoothly while maintaining continuous contact. Case 3: Slant Surface, Type 1. This case is shown in Figure 5.14(c), and its state diagram in Figure 5.15(c). From the directions of the arrows in Figure 5.14(c), we can see that the probe is not in continuous contact with the surface; it tries to maintain contact in steps. Even a human finger loses contact in such a case and, as soon as it detects this (the fingertip reaction time [22]), tries to make contact again. Case 4: Slant Surface, Type 2. This case is shown in Figure 5.14(d), and its state diagram in Figure 5.15(d). This case is the most interesting one. Although the probe will maintain continuous contact, there is a problem of Deadlock Cycles. Here, states A and B can lead to a deadlock cycle ABABAB..., causing a cycle of up and down motions at the same point and thereby no net motion from right to left. So, reaching either of the states A and B may lead to a deadlock that may last indefinitely.


The state-space solution we devised is a recurrent Markovian system (or Markov chain) [23][24] due to the following characteristics: 1. Since the model of the environment is unknown a priori, the occurrence of the next state is stochastic. 2. Since the robot has a repeatability of 0.1 mm, even the cyclic motion UP-DOWN-UP-DOWN may or may not lead to graph cycles [25][26]. Due to the unknown surface environment and the repeatability error, the next state is independent of the past states, which satisfies the memoryless property. Explanation of the Deadlock Cycle: On a slant as shown in Figure 5.14(d), the probe experiences forces in both the +Z and +X directions. The probe, being in state A, will go up and reach state B; from B, it will move down and reach state A. There is a probability that state A transits to state D, or state B to state C (depending upon the threshold forces and the values of the constants KDOWN, KUP and KLEFT), which can lead to motion; otherwise, if the probe gets stuck in the deadlock cycle ABABAB..., it will perform up and down motion at the same point. Similarly, BDBDBD... constitutes another deadlock cycle. To avoid this problem, we suggest an improvement over this algorithm, by considering the previous history, in the next section.

5.5.5 Improved TUSS to avoid the problem of Deadlock Cycles


There are basically two deadlock cycles identified: one is ABABAB... and the other is BCBCBC... So, our aim is to break these two deadlock cycles, i.e. there should be no contiguous Up and Down motions. To accomplish this, we incorporate two binary flags in our algorithm: LastWasUp and LastWasDown. LastWasUp=1 denotes that the last motion command was an Up motion command, and LastWasDown=1 denotes that the last motion command was a Down motion command. So, whenever we get an Up motion command, we check the flag LastWasDown. If it is reset, then we can perform the Up motion (also setting the LastWasUp flag and resetting the LastWasDown flag); otherwise we perform a Left motion and reset both the flags. Similarly, when we receive a Down motion command, we check the flag LastWasUp. If it is reset,

then we can perform the Down motion (also setting the LastWasDown flag and resetting the LastWasUp flag); otherwise we perform a Left motion and reset both the flags. This can be stated through the binary equations shown in Figure 5.16:

Figure 5.16: Binary equations for Improved TUSS
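The flag logic above can be sketched as a small filter over motion commands; this is an illustrative reconstruction of the rule, not the robot-side code.

```python
class DeadlockFilter:
    """Replace any Up command that directly follows a Down (or vice versa)
    with a Left move, using the LastWasUp / LastWasDown flags."""

    def __init__(self):
        self.last_was_up = False
        self.last_was_down = False

    def filter(self, cmd):
        """cmd is 'UP', 'DOWN' or 'LEFT'; return the command to execute."""
        if cmd == 'UP':
            if self.last_was_down:              # would start an A-B-A-B... cycle
                self.last_was_up = self.last_was_down = False
                return 'LEFT'
            self.last_was_up, self.last_was_down = True, False
            return 'UP'
        if cmd == 'DOWN':
            if self.last_was_up:
                self.last_was_up = self.last_was_down = False
                return 'LEFT'
            self.last_was_up, self.last_was_down = False, True
            return 'DOWN'
        self.last_was_up = self.last_was_down = False
        return 'LEFT'
```

A raw command stream UP, DOWN, DOWN, UP then executes as UP, LEFT, DOWN, LEFT: every contiguous Up/Down pair is broken by a Left move, so the deadlock cycles cannot persist.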

Therefore, our system finally becomes a Mealy machine [27], whose output depends upon the current state as well as the inputs (LastWasUp, LastWasDown), as shown in Figure 5.17:

Figure 5.17: Block Diagram for the Improved TUSS Algorithm without Deadlock Cycles

5.5.6 Some Results With TUSS


We chose a semicircular contour (since a circle has a continuously varying slope) to test the Improved TUSS algorithm. The experimental setup is shown in Figure 5.18(a):


Figure 5.18(a): Experimental Setup to test Improved TUSS Algorithm

Figure 5.18(b): GUI To Collect Data From The Improved TUSS Algorithm

The probe is a chamfered one with radius = 5.00 mm. The direction of motion is from right to left, with FZThreshold = 4.0 N, FXThreshold = 4.0 N, KDOWN = 0.01 mm/N, KUP = 0.005 mm/N and KLEFT = 0.1 mm. Figure 5.19 compares the actual contour with the path followed by the Improved TUSS algorithm:


Figure 5.19: Comparing the actual contour with the path followed by Improved TUSS Algorithm

Since we have taken a chamfered probe, the point of contact changes while tracing, so there is an initial mismatch of 5.00 mm, i.e. the radius of the probe, between the actual contour and the path traced.

5.6 Use of such applications in Peg-in-hole insertion


The simple contour tracing algorithms mentioned in this section form an essential part of the Peg In The Hole assembly. The approach used in the Kuka Stiffness Finder (Section 5.3) can be used to determine the stiffness of the materials involved in the assembly, so that we can keep safe values for the forces that do not harm the materials as the assembly progresses. LTP (Section 5.4) is used in assembly to bring the peg nearer to the hole by hand so as to reduce the blind search (Section 6.1.1). TUSS is also heavily employed in the blind search strategies.


6. Peg In The Hole


This section describes the development of algorithms to achieve a Peg In The Hole assembly. The Peg-In-Hole problem is the benchmark problem for robotic assembly: given the nominal position and orientation of the hole, we have to use the signals from the F/T sensor to position and align the peg for insertion into the hole. In general, there is a three-step solution to the problem: 1. Initially, the peg can be guided either through position control or vision control to approximately reach and hit the hole. Then the search for the hole's center begins, so as to bring the center of the peg within the clearance area around the center of the hole [2]. This removes the positional error between the peg and the hole (see Figure 6.1).

Figure 6.1: Hole Search

2. Now, since the peg is sufficiently at the center of the hole, the directional or orientational error between the peg and the hole has to be removed so that the peg easily inserts into the hole. The first case involves the removal of Large Directional Error, where the peg may still be outside the clearance region of the hole [9] (see Figure 6.2). 3. Then comes the removal of Small Directional Error, where the peg is accurately within the clearance range of the hole and only directional manipulation of the peg's orientation needs to be done so as to have a smooth peg insertion without any jamming [9] (see Figure 6.3).


Figure 6.2: Depiction of Large Directional Error

Figure 6.3: Depiction of Small Directional Error

6.1 Hole Search


The aim of hole search is to place the peg's center within the clearance region of the hole's center, which is a circular area of radius equal to the clearance c between the peg and the hole (see Figure 6.1).

There are basically two types of Hole search strategies: a) Blind Search: That deals with exhaustive search within the search space until the goal is reached. b) Intelligent Search: That deals with the intelligent and decision based search to reach the goal without exhaustive search.

6.1.1 Blind Search Strategies


We have implemented two blind search strategies, viz. Grid Search and Spiral Search. These strategies use the Hybrid Control Scheme (discussed in Chapter 4), where the Force Control is done in the direction of the hole and the position control is done in the plane of the surface containing the hole.

6.1.1.1 Grid Search


In this type of search, a continuous tracing of the surface where the hole is assumed to be present is done in a grid-like fashion (see Figure 6.4).

Figure 6.4: Search Points for Grid Search

For the search to be exhaustive and to ensure that the peg does not miss the hole, the spacing between the search points should not be greater than √2·c [2], where c is the clearance between the peg and the hole and is defined as:

c = (D - d)/2,

where D is the hole diameter and d is the peg diameter. This kind of search can be done using one-dimensional force feedback (assuming that the hole surface is a plane). The peg is moved down until it touches the hole surface, and then either a discrete or a continuous trace path is followed until the peg descends into the hole. This search can be used for both cases: a) When the peg is parallel to the hole surface: the peg will descend into the hole when it comes within the clearance range of the hole. b) When the peg is tilted: the peg may descend into the hole even when it has not reached the center of the hole; in such a case, goal achievement is identified when the peg descends the most. Please note that the tilted peg may hit the hole walls, so a two-dimensional force feedback continuous tracing algorithm like TUSS is required for this search in the tilted-peg case. We implemented a grid search with a clearance of 0.5 mm. The path traced is shown in Figure 6.5.

Figure 6.5: Grid Search with clearance=0.5 mm. The dip at the center denotes that the peg has reached the center of the hole.
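The clearance computation and the exhaustiveness check can be sketched as below. The √2·c spacing bound follows the reading that the worst-case point sits at a cell centre, s/√2 from the nearest search point; treat it as an illustration of the criterion rather than a definitive statement.

```python
import math

def clearance(hole_diameter, peg_diameter):
    """c = (D - d) / 2, as defined above."""
    return (hole_diameter - peg_diameter) / 2.0

def within_clearance(peg_xy, hole_xy, c):
    """True when the peg centre lies inside the clearance circle of radius c."""
    return math.hypot(peg_xy[0] - hole_xy[0], peg_xy[1] - hole_xy[1]) <= c

def max_grid_spacing(c):
    """Largest spacing s for which every point of the plane lies within c of
    some grid point: the worst case sits at a cell centre, s/sqrt(2) away."""
    return math.sqrt(2.0) * c
```

Any spacing at or below this bound makes the grid search exhaustive; a larger spacing can let the peg step over the clearance circle entirely.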

6.1.1.2 Spiral Search


Spiral search is another kind of blind search, involving tracing of the surface in a spiral fashion. Spiral search is better than grid search because it involves a much shorter search path and has no sharp changes in the direction of search, both factors leading to a shorter search time. For our work, we chose the Archimedean spiral. In polar coordinates (r, θ) it can be described by the equation

r = a + b·θ

with real numbers a and b. Changing the parameter a turns the spiral, while b controls the distance between successive turnings. The pitch of such a spiral is defined as p = 2πb; it refers to the space between the turns of the spiral. For the spiral search to be exhaustive, the criterion is that the pitch should be less than or equal to the assembly clearance c [2]. Figure 6.6 shows an Archimedean spiral; the pitch p should be equal to the clearance so as to make the search exhaustive. The rate at which the spiral needs to progress so as to move at a constant path speed is given by:

ω = v / (r² + p²/4π²)^(1/2)

Here, ω represents the angular velocity, r is the current radius of the spiral, p is the pitch and v is the constant speed of the spiral motion.
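The spiral equations above can be turned into a waypoint generator; this is an illustrative sketch (with a = 0 and hypothetical parameter values), stepping θ by ω·Δt so the tool advances at roughly constant path speed.

```python
import math

def spiral_waypoints(pitch, r_max, v, dt):
    """(x, y) points along r = b * theta with b = pitch / (2*pi), stepping
    theta by omega * dt where omega = v / sqrt(r^2 + p^2 / (4*pi^2))."""
    b = pitch / (2.0 * math.pi)
    theta, r, pts = 0.0, 0.0, []
    while r <= r_max:
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        omega = v / math.sqrt(r * r + pitch * pitch / (4.0 * math.pi ** 2))
        theta += omega * dt
        r = b * theta
    return pts

waypoints = spiral_waypoints(pitch=0.5, r_max=5.0, v=2.0, dt=0.01)
```

With the pitch set equal to the clearance c, consecutive turns are c apart, so the peg centre is guaranteed to pass within the clearance circle of the hole.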

Figure 6.6: Spiral Search pattern

Again, this search can be used for both cases: a) When the peg is parallel to the hole surface: the peg will descend into the hole when it comes within the clearance range of the hole.

b) When the peg is tilted: the peg may descend into the hole even when it has not reached the center of the hole; in such a case, goal achievement is identified when the peg descends the most. Please note that the tilted peg may hit the hole walls, so a two-dimensional force feedback based continuous tracing algorithm like TUSS is required for this search in the tilted-peg case. Figure 6.7 shows the path traced in spiral search with clearance = 0.5 mm.

Figure 6.7: Spiral Search with clearance=0.5 mm. Again, the dip at the end denotes that the peg has reached the centre of the hole.

6.1.2 Intelligent Search Strategies


The intelligent search strategies provide an estimate of the hole's center as soon as the hole is sensed. We have implemented two intelligent search strategies, viz. search using torque information and neural networks, and search using the precession strategy. The first is implemented in simulation, while the second is implemented on the actual robot.

(Note: Until now, we were able to use the damaged F/T sensor under the assumption that one force component is kept constant. The further experiments are free from this assumption; therefore, we used the new F/T sensor for the subsequent work.)


6.1.2.1 Neural Peg In The Hole


When the peg moves over the hole while maintaining contact, the moment profile changes. So, for a particular position of the peg center with respect to the hole center, there is a specific value of the moments sensed by the peg. This model, giving the moments from the position of the peg with respect to the hole, can be derived analytically, as done in Section 6.1.2.1.2. To find the position of the peg from the moments sensed, we need the inverse of this mapping. The inverse model could again be sought analytically, but it is very difficult to obtain for the purpose of control in practice because of the highly complex and nonlinear nature of the function. To approximate such a mapping, artificial neural networks were used by Wyatt S. Newman and Yonghong Zhao [5] because of the powerful nonlinear computational ability of neural networks.

6.1.2.1.1 Neural Networks

For a general system, there are internal relations among its different states and measurable features. These relations can be written in the mathematical form

y = f(x)

where y ∈ R^m is an m-dimensional vector that denotes the state of the system and x ∈ R^n is an n-dimensional vector of measured physical quantities. The mapping from x to y is represented by the function f. If x is measurable and y is observable, then we can estimate the state y from a model f* of the function f. If a goal state is given, we can attempt to control the system to the desired state by some control strategy using the identified mapping f*. But in practice, the system may be too complex for an analytic approach to succeed: the mapping may be highly nonlinear and difficult to model mathematically. Due to the powerful nonlinear computational ability of neural networks, we choose to use a neural net to construct an approximate mapping for the function f instead of attempting an analytic derivation. We expect a neural net to generate a mapping from measured features to the system state of the form

y* = g(x)

where y* is an estimate of the state from the neural-net mapping g, which is an approximation of the function f. When this neural-net mapping is used in control, we get the system measurements x from sensors and then estimate the current state of the system by the neural-net mapping.

Figure 6.8: Basic structure of a neurocontroller

By computing the difference between the current state and the goal state, we can derive a control action from an action generator. In response to the appropriate control actions, the system state will evolve to converge on the goal state. The combination of the neural-net mapping and the action generator is called a neurocontroller. The structure of the neurocontroller is illustrated in Figure 6.8. The mapping in which we are interested is that from the moments or torques to the position of the peg with respect to the hole. The basic processing unit of a neural network is called a neuron or node. A neural network is formed through weighted connections among the neurons. A neuron consists of multiple inputs, an activation function and an output, as shown in Figure 6.9.

Figure 6.9: Structure of a neuron

The neuron's inputs come from external inputs or the outputs of other neurons. The weighted sum of these inputs drives the neuron's activation function. An output is produced by the activation function, which has different forms for different kinds of neural networks. The weights shown in Figure 6.9 are the storage elements of the neural network. Before the neural net is trained, they are assigned random values. Training the neural net consists of adjusting these weights according to some example data from the system. The example data is called a pattern or training set for the neural network. Each pattern is a pair of input and output vectors. The input is a vector of measurable features, and the output, in our case, is a vector that describes the location of the hole. After learning, the weights store the information of the system, resulting in an approximate mapping from the input space to the output space. It is thus useful to rewrite our mapping in a different form, recognizing the vector of weights as another input:

y* = g(x, w)

where w is the vector of weights. There exists a variety of methods for seeking an optimal set of weights to best approximate the desired mapping. In all cases, though, the goal is to adjust the weights w to model the system as precisely as possible. Here, we introduce the neural-net methods used in this thesis. The most commonly used method is the traditional multi-layer feedforward neural net with a backpropagation learning algorithm [28]. The architecture of this kind of neural net is shown in Figure 6.10.

Figure 6.10: 3-layer Backpropagation Neural Network

It consists of an input layer, one or more hidden layers, and an output layer. The neurons between any two adjacent layers are fully interconnected in the feedforward direction. The weight of each connection is adjusted during training. The activation function can be Gaussian, logistic or a sigmoid function for the hidden layers. We choose a linear function for the output layer nodes. To simulate the functional mapping precisely through a BP neural network, we must select the proper number of hidden nodes, the parameters for the activation function and the connection weights.
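A minimal forward pass of such a network can be sketched in pure Python as below: a sigmoid hidden layer feeding a linear output layer. The weights here are random placeholders standing in for trained values, and the layer sizes are assumptions (two moments in, a two-component position estimate out).

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Sigmoid hidden layer, linear output layer, as in Figure 6.10."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_hidden, b_hidden)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(w_out, b_out)]

random.seed(0)
n_in, n_hidden, n_out = 2, 8, 2     # (mx, my) in, estimated offset (dx, dy) out
w_h = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b_h = [random.gauss(0, 1) for _ in range(n_hidden)]
w_o = [[random.gauss(0, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b_o = [random.gauss(0, 1) for _ in range(n_out)]
estimate = forward([0.1, -0.2], w_h, b_h, w_o, b_o)
```

Backpropagation adjusts w_h, b_h, w_o and b_o from (moment, position) training pairs; only the trained forward pass is needed at control time.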

6.1.2.1.2 Mathematical Model for the Parallel Peg

The basic peg-in-hole problem is shown in Figure 6.11. In this model, we assume that the surface of the subassembly and the bottom surface of the peg are parallel to each other. So when the peg moves in contact with the subassembly, there is surface-to-surface contact (except for some conditions we will discuss later). As the peg moves towards the hole, the contact state changes, and this change is reflected in the reaction forces and moments.

Figure 6.11: The Parallel Peg case

As shown in Figure 6.12, when the center of the peg is outside the line between points A and B, the reaction moments and forces provide no information about the peg's location relative to the hole. Here, "outside the line" means the distance from the hole center to the peg center is greater than the distance from the hole center to the chord AB. As the peg moves inside this line, the reaction force due to contact must be off-center with respect to the peg center, leading to a measurable reaction moment. (The peg will tilt slightly relative to the subassembly surface under this condition, as shown in Figure 6.13.) As the peg position changes, the direction and magnitude of the moment will differ. The neural-net controller can use this moment information as a clue to the peg position and then guide the peg to move towards the desired destination. Given an arbitrary position of the peg, we want to know how large the moments are in this position. If our torque sensor is located at point r_sensor, and if the contact force vector f_contact acts through point r_contact, then the resulting moment at the sensor, m_sensor, will be:

m_sensor = (r_contact - r_sensor) × f_contact

If we ignore the friction forces fx and fy, and let the downward force fz exerted by the robot be constant, then the moment is related to the distance between point P (the peg's center) and E (the midpoint of the two contact points), as shown in Figure 6.14. Here we will deduce this relationship.

Figure 6.12: Case when the center of peg lies outside the chord of contact

Figure 6.13: Case when the center of peg lies inside the chord of contact

In Figure 6.14, point P, (xp, yp), denotes the peg's center and point H, (xh, yh), denotes the hole center. A, (xa, ya), and B, (xb, yb), are the two points at the intersections between the circular boundaries of the peg and the hole. To compute a reaction moment, we need to know the location of the resultant contact force. The distribution of the contact pressure over the region of overlap between the peg and subassembly is unknown. However, if the peg tips even infinitesimally into the hole, then the contact forces must be concentrated at points A and B. In this case, the resultant contact force must act through a point lying on the line A-B. If the concentrated reaction forces at A and B are balanced, then the resultant force will act at point E, midway between A and B. Under these assumptions, we can compute the relationship between the measured moments and the relative location of the center.

Figure 6.14: Computing the moments

To obtain the forward mapping, we assume knowledge of the coordinates of P and H, then derive the moment based on the computed coordinates of point E. First, we compute the side lengths of triangle APH (note that l_AP = r_peg and l_AH = r_hole, since A lies on both circles):

l_PH = sqrt((xp - xh)² + (yp - yh)²),    s = (l_AP + l_AH + l_PH)/2

where l_PH is the distance from point P to point H and the value s is defined as half the perimeter of triangle APH. We can compute the area of triangle APH, A_APH, by Heron's formula [29] as follows:

A_APH = sqrt(s · (s - l_AP) · (s - l_AH) · (s - l_PH))

Because the area of APH is also equal to half of l_AE times l_PH, we derive l_AE and l_HE as follows:

l_AE = 2 · A_APH / l_PH,    l_HE = sqrt(l_AH² - l_AE²)

Then, we can get the coordinates of point E:

x_E = (l_HE · xp + l_PE · xh) / l_PH,    y_E = (l_HE · yp + l_PE · yh) / l_PH,    with l_PE = l_PH - l_HE

Finally, we can get the moments in the x and y directions [5]:

m_x = -fz · (y_E - yp),    m_y = fz · (x_E - xp)

Moments in the x and y directions are non-zero only within a limited region. If the peg moves out of this range, the moments are both zero or at least provide no information regarding the hole location. One boundary of this region corresponds to the peg falling into the hole, which occurs when:

l_PH < r_hole - r_peg

A second boundary corresponds to the center of the peg moving outside the line A-B (i.e. points H and P lie on opposite sides of line AB), which occurs when

l_PH > l_HE = sqrt(l_AH² - l_AE²)

Under the second condition, the moments are both zero. The plots in Figure 6.15 show the computed moments as a function of the peg coordinates relative to the hole center. The computations are based on the parameters r_hole = 50 mm, r_peg = 47 mm, and fz = 1 N. We can see that the moments in the x and y directions have similar maps, except for a 90-degree rotation about the z axis.
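The forward model of Section 6.1.2.1.2 can be collected into one function; this is a sketch under the sign convention m = (r_E - r_P) × F with F = (0, 0, -fz), which may differ from the original by an overall sign.

```python
import math

def peg_moments(xp, yp, xh, yh, r_hole, r_peg, fz):
    """Moments (mx, my) at the peg centre for the parallel-peg model.
    Returns (0, 0) outside the informative region."""
    l_ph = math.hypot(xp - xh, yp - yh)
    if l_ph < r_hole - r_peg:                 # first boundary: peg falls in
        return 0.0, 0.0
    if l_ph >= r_hole + r_peg:                # circles do not intersect at all
        return 0.0, 0.0
    l_ap, l_ah = r_peg, r_hole                # A lies on both circles
    s = (l_ap + l_ah + l_ph) / 2.0            # half perimeter of triangle APH
    area = math.sqrt(s * (s - l_ap) * (s - l_ah) * (s - l_ph))   # Heron
    l_ae = 2.0 * area / l_ph
    l_he = math.sqrt(l_ah ** 2 - l_ae ** 2)
    if l_ph > l_he:                           # second boundary: P outside AB
        return 0.0, 0.0
    xe = (l_he * xp + (l_ph - l_he) * xh) / l_ph
    ye = (l_he * yp + (l_ph - l_he) * yh) / l_ph
    return -fz * (ye - yp), fz * (xe - xp)
```

With the document's parameters (r_hole = 50 mm, r_peg = 47 mm, fz = 1 N), a peg centred a few millimetres along +x from the hole gives mx = 0 and a non-zero my, and the moments vanish once the peg centre moves outside the chord AB.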



Figure 6.15: Moments resulting from the mathematical model for the parallel peg case. a) Moment in X. b) Moment in Y.

We can see from Figure 6.15 that the moments in x and y tend to increase as the peg approaches the center line of the hole and suddenly reverse direction as it passes through, as expected.

6.1.2.1.3 Simulation Results

The neural network was trained using the mathematical model, as shown in Figure 6.16:


Figure 6.16: Neural Network Training

The two moments mx and my form a unique pair, giving a one-to-one mapping to the relative position of the peg center with respect to the hole center. The result of using the neural network is shown in Figure 6.17:


Figure 6.17: Simulation results for the parallel peg case. a) In 2-D. b) In 3-D.

As shown in Figure 6.17, as soon as the moments are sensed, the moment values are sent to the trained neural network. The neural network then provides the relative peg position with respect to the hole, and the peg directly jumps to the center of the hole.

6.1.2.1.4 Tilted Peg Case

The previous model assumed an ideal condition. In fact, the surface of the subassembly cannot be perfectly parallel to the bottom surface of the peg; there is always a tilt between the peg and the subassembly surface. Thus, in most positions there is only one contact point between the peg and the subassembly (as shown in Figure 6.18).

Figure 6.18: Tilted Peg Case

In this case, when the peg moves around the surface, the moments are not zero even if the peg does not overlap the hole. But this moment information cannot be used to guide the assembly, because it does not change unless the contact point moves relative to the peg. This can only

occur when the peg overlaps the hole, as illustrated in Figure 6.19. For positions 1 and 3 of Figure 6.19, the moments are identical, although in position 1, the peg is above the hole while in position 3 it is not. In position 2, however, the contact point is at a different location relative to the peg, which results in a different reaction moment.

Figure 6.19: Contact States for Tilted Peg Case

From Figure 6.19, we also see that there is a lowest point in the Z direction on the peg's bottom surface. This lowest point will contact the subassembly surface unless it is within the region of the hole. In the latter case, there are two possible contact points between the peg and the hole; the one that is lower in the Z direction will be the actual contact point. Note that we made an assumption here. We could not find a precise inverse model for the tilted-peg case, because in this model different peg positions correspond to the same contact point. For example, consider a position of the peg for which the contact point on the peg coincides with a point on the rim of the hole; call this point E on the peg. If we subsequently move the peg in a circular arc such that point E traces the rim of the hole, then over at least part of this arc, point E remains the contact point. Thus, over a range of positions we would detect identical moments, and therefore the moment function is non-invertible. As a result, the training error is relatively large and the control result is not as good as in the parallel model [5]. Since this solution is not appropriate for the tilted case (and a tilt is most often present), we dropped the idea of implementing it and moved to another approach that is robust and suited to the tilted case.

6.1.2.2 Precession Strategy


6.1.2.2.1 Need for Precession

Precession is a change in the orientation of the rotation axis of a rotating body [30]. The main emphasis is on the state in which the peg makes two-point contact with the hole. In this state, the peg is oriented toward the hole center, and if the peg is moved while maintaining two-point contact, the center of the hole will be reached. Thus we use precession to make a two-point contact.

6.1.2.2.2 The Precession Strategy

The precession strategy is an intelligent localization strategy based on measurements of the peg position as it precesses while maintaining contact with the hole [2]. To execute a precise precession trajectory, the robot needs to be under stiff position control in the (x, y, θx, θy) dimensions, i.e., positions along and rotations about the x- and y-axes. On the other hand, to maintain soft contact between the peg and hole, the robot needs to be under compliant control along the vertical (z) axis. This combination of position and compliant control on selective axes is achieved using the hybrid control scheme described in Section 4. The precession strategy is described below in the context of a circular peg-in-hole assembly with hole position uncertainty in (x, y). The first step is to tilt the peg about a tilt axis, by the tilt angle θtilt. As shown in Figure 6.20 (a), the tilt axis, initially aligned with the negative x-axis, passes through the bottom center of the peg. Next, the peg is lowered into contact with the hole surface (Figure 6.20 (b)). Using the hybrid compliant controller described in Section 4, a steady downward force is applied by the robot through the peg, while the tilt axis is rotated about the vertical axis, so that the tilted peg precesses as shown in Figure 6.20 (c).

Figure 6.20: The precession strategy: (a) The peg is initially tilted by θtilt. (b) The peg touches the hole surface, and peg height h1 is recorded. (c) As the tilt axis is rotated, the peg precesses. (d) The peg dips into the hole; height h2 < h1.

Consider the initial condition shown in Figure 6.20 (b). The point of contact between the peg and the hole is on the hole surface. As the peg precesses, the contact point moves along the perimeter of the peg, and on a corresponding circular path on the hole surface, until it reaches the hole edge. During this interval, the nominal height of the peg is constant. As the peg dips into the hole, the peg height decreases until a critical point where the peg is in contact with the hole edge in two places. With further precession, the peg rises out of the hole and the peg

height increases. This change in peg height reveals not only the direction of the hole center relative to the peg position, but also the distance. The direction of the hole center is given by the vector perpendicular to the tilt axis at the moment of minimum peg height. The distance of the hole center from the peg center can either be calculated analytically from the decrease in peg height, or looked up from a table of sampled peg height values corresponding to peg-hole distances. A visualization of such a table is shown in Figure 6.21: the minimum peg height values recorded during precession for different relative peg-hole positions are plotted. During an experiment, the measured minimum peg height is matched against this table to obtain the possible peg positions relative to the hole. With the relative peg-hole position localized to two possible values, we can proceed in a variety of ways. One option is to select one of the two values, use it to compute the hole configuration with respect to the peg, and attempt assembly; if assembly fails, we know for sure that the other value is the actual relative peg-hole position. Another option is to move the peg to a different position and repeat the precession strategy; the results of the two experiments analyzed together are sufficient to localize the relative peg-hole position. For the precession strategy to be successful, the precessing peg has to pass over the hole. For this to happen, we can initially use any of the blind search strategies described in Section 6.1.1. As soon as a specified value of dip occurs, the hole is sensed and the precession starts.
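The table lookup described above can be sketched as follows. The dip table here is illustrative only (the real table of Figure 6.21 is built experimentally), and both function names are ours:

```python
import math

# Illustrative lookup table (hypothetical values, NOT measured data):
# (peg-hole centre distance in mm, minimum dip in mm recorded during
# precession). A larger overlap with the hole gives a deeper dip.
DIP_TABLE = [(0.0, 3.2), (5.0, 2.6), (10.0, 1.8), (15.0, 0.9), (20.0, 0.2)]

def distance_from_dip(dip_mm):
    """Linearly interpolate the peg-hole distance for a measured minimum dip."""
    pts = sorted(DIP_TABLE, key=lambda p: p[1])       # ascending by dip
    for (d0, z0), (d1, z1) in zip(pts, pts[1:]):
        if z0 <= dip_mm <= z1:
            t = (dip_mm - z0) / (z1 - z0)
            return d0 + t * (d1 - d0)
    raise ValueError("dip outside table range")

def hole_direction(precession_deg_at_min_height):
    """Unit vector toward the hole centre: perpendicular to the tilt axis at
    the moment of minimum peg height (angle measured from the x-axis)."""
    a = math.radians(precession_deg_at_min_height)
    return (math.cos(a), math.sin(a))
```

Note that, as discussed above, one measured dip may still correspond to two candidate peg-hole positions; this sketch returns only the interpolated distance for one branch of the table.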

Figure 6.21: Peg height with respect to distance from the hole's center

6.2 Peg Insertion


After the search is complete and we are confident that the peg has reached the center of the hole, the next step is its insertion. The peg needs to be inserted smoothly into the hole without any jamming. This requires correcting the orientation of the peg with respect to the hole. How this is achieved is explained in the following subsections.

6.2.1 Gravity Compensation


Till now, in all the experiments we performed, the orientation of the robot's Tool Center Point (TCP) was kept constant. But now, for peg insertion, we need to vary the orientation of the TCP. Since the F/T sensor is mounted with the robot tool, i.e., the peg in our case, the sensor also orients along with the peg. This changes the readings from the sensor, as the forces and torques arising from the load on the sensor, i.e., due to the robot gripper and the peg, are redistributed in the new orientation. The load due to mass m (see Figure 6.22) remains the same with respect to the Base Coordinate System (BCS) but changes with respect to the Tool Coordinate System (TCS) of the sensor. A load that was purely along Y of the Tool Coordinate System may, after rotation of the sensor, become a load purely along X of the Tool Coordinate System. Since the F/T sensor is mounted along with the tool, it gives readings in the Tool Coordinate System; therefore the readings change with the sensor or tool orientation. This calls for a correction that makes the sensor readings independent of the orientation of the sensor. This correction process is known as Gravity Compensation.

Figure 6.22: Need for Gravity Compensation

KUKA uses the Euler angle ZYX convention for the representation of its TCP's orientation, i.e., if the orientation of the TCP is given by the triplet {a, b, c}, it means that the tool has rotated first about the Z-axis by a degrees, then about the Y-axis by b degrees, and finally about the X-axis by c degrees, all rotations performed with respect to the Base Coordinate System. This is equivalent to a rotation about the X-axis by c degrees, then about the Y-axis by b degrees, and finally about the Z-axis by a degrees, performed with respect to the Tool Coordinate System. Since the load vector always remains the same with respect to the Base Coordinate System, to find the load in the new TCS we need to find the new vector components of the load as viewed from the new TCS. The load can be considered as a point in the coordinate space, with (x, y, z) representing the three components of the net force or the net torque. The new coordinates of the same point in the new TCS are then found as follows:

T2xyz = R⁻¹ · T1xyz · R, where R = Rz(a) · Ry(b) · Rx(c) and R⁻¹ = Rx(−c) · Ry(−b) · Rz(−a)

Here, Rx, Ry, Rz are the rotation matrices about X, Y and Z respectively, given by:

Rx(θ) = [ 1     0       0    ]
        [ 0   cos θ  −sin θ  ]
        [ 0   sin θ   cos θ  ]

Ry(θ) = [  cos θ   0   sin θ ]
        [    0     1     0   ]
        [ −sin θ   0   cos θ ]

Rz(θ) = [ cos θ  −sin θ   0  ]
        [ sin θ   cos θ   0  ]
        [   0       0     1  ]

T1xyz and T2xyz are 4×4 homogeneous translation matrices (with the rotation matrices above extended to 4×4 homogeneous form in the product):

T = [ 1 0 0 x ]
    [ 0 1 0 y ]
    [ 0 0 1 z ]
    [ 0 0 0 1 ]

Here, x, y, z are the respective components of the vector.

In T1xyz, we place the three components of the net force or torque measured when the TCS was orientationally aligned with the BCS (these remain constant); T2xyz then provides the new components of the net force or torque in the new TCS. If we subtract these new components from the sensor readings, what we get are the gravity-compensated force and torque readings, purely due to the contact forces and torques.
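A minimal sketch of this compensation in Python, assuming (as above) that R = Rz(a)·Ry(b)·Rx(c) maps tool-frame vectors into the base frame, so a base-frame load is seen from the tool frame through Rᵀ. The function names are ours, not KUKA's:

```python
import numpy as np

def rot_x(deg):
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(deg):
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def gravity_compensate(raw_tcs, load_bcs, a, b, c):
    """Subtract the tool weight, re-expressed in the current TCS, from a raw
    sensor reading.

    raw_tcs  : measured force (or torque) vector in the TCS
    load_bcs : static load vector recorded once in the BCS-aligned pose
    a, b, c  : KUKA ZYX Euler angles of the TCP, in degrees
    """
    r = rot_z(a) @ rot_y(b) @ rot_x(c)    # tool axes expressed in the BCS
    load_tcs = r.T @ load_bcs             # the fixed BCS load seen from TCS
    return raw_tcs - load_tcs
```

For example, with the tool rotated 90° about Y, a pure −Z gravity load in the BCS shows up along +X of the tool, and the compensation removes exactly that component.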

6.2.2 Large Directional Error (LDE) Removal


Even after the hole search is complete, we cannot say for sure that the peg center lies within the clearance range of the hole center. Such a case is shown in Figure 6.23, along with the forces experienced by the peg. This usually occurs when the peg has a large tilt (Large Directional Error) with respect to the hole, or when the clearance is very small.

Figure 6.23: Large Directional Error Case

In such a case, the peg is outside the hole and makes three-point contact with it. Not only orientational but also positional correction is then required to put the peg inside the hole. This can be done in two ways:

6.2.2.1 LDE Removal Using Information from Hole Search


If we use the information from the Hole Search, we know the direction in which the center of the hole lies. This is shown in Figure 6.24.


From the direction of the hole center, we can get the direction (perpendicular to the direction of the hole center) in which the peg needs to be rotated to align it with the hole. Now, using the Hybrid Control Scheme (discussed in Chapter 4), position control is done about the rotational axis defined by the Direction of Rotation in Figure 6.24, and force control is done along Z and X of the TCP (or the peg; see Figure 6.23) to maintain the three-point contact. Gradually, the three-point contact will turn into a two-point contact, as shown in Figure 6.3, which calls for the next step, i.e., Small Directional Error Removal.

Figure 6.24: Large Directional Error Removal Using Information From Search

6.2.2.2 LDE Removal Using Moment Information


Since the peg makes three-point contact with the hole, the direction of the moment gives us the direction of the hole center: the hole center lies on the line perpendicular to the direction of the net moment (see Figure 6.25).

Figure 6.25: Large Directional Error Removal Using Moment Information

Then we can use the same approach as in Section 6.2.2.1.

6.2.2.3 Stopping Criterion For LDE Removal


To test whether the peg is completely inside the hole and is making two-point contact with it, we perform a Back-Hit Test. We perform back-hits in the X direction of the peg (see Figure 6.23) to test for a wall at the back. If there is a wall (i.e., when the peg is completely inside the hole), a force is felt in the +X direction of the peg, which marks the end of LDE Removal and calls for Small Directional Error Removal.
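The back-hit test can be sketched as a simple loop. Here move_tool_x and read_force_x are placeholder robot/sensor interfaces (not real KUKA or F/T-sensor API calls), and the step size and threshold are illustrative:

```python
# Sketch of the back-hit test. move_tool_x(dx) steps the peg along its tool-X
# axis by dx mm; read_force_x() returns the gravity-compensated force along
# tool X in N. Both are hypothetical placeholders.

def back_hit_test(read_force_x, move_tool_x,
                  step_mm=0.2, max_steps=10, force_threshold=2.0):
    """Step the peg backwards along tool X and watch for a +X reaction force.

    Returns True if a wall is felt behind the peg, i.e. the peg is completely
    inside the hole and LDE Removal can stop."""
    for _ in range(max_steps):
        move_tool_x(-step_mm)                 # small back-step
        if read_force_x() > force_threshold:
            return True                       # reaction in +X => wall behind peg
    return False
```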

6.2.3 Small Directional Error (SDE) Removal


When the peg is completely inside the hole and makes two-point contact with it, it needs to be oriented according to the hole for smooth insertion without jamming. The directional error can be corrected using the direction of the moment sensed by the sensor. However, the direction of the moment changes as the tilt angle of the peg varies. When the tilt angle is small, the direction in which the peg must be moved is the same as the direction of the moment sensed; for a large tilt angle, however, the peg must be moved in the direction opposite to the sensed moment [9]. Figure 6.26 shows the relations between the direction of the moment sensed and the direction to be moved for the six possible cases.

Figure 6.26: Direction of moment sensed and the direction in which the peg needs to be moved


Since the direction of the moment does not by itself specify the direction in which the peg needs to be moved, we rotate the peg slightly in both directions about the line of the net moment and record the moments obtained in both rotations. Comparing the two, the direction in which the moment decreases is the direction in which the peg needs to be rotated to align with the hole. Again we use the Hybrid Control Scheme (discussed in Section 4): position control is done about the rotational axis defined by the direction of the net moment, and force control is done in the downward direction (see Figure 6.27).
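This two-way probing step can be sketched as follows. probe_moment and rotate_about_moment_axis are placeholder interfaces (not real robot calls), and the toy moment model in the test is an assumption:

```python
# Sketch of the moment-probing step. probe_moment() returns the magnitude of
# the net sensed moment; rotate_about_moment_axis(deg) rotates the peg by the
# given angle about the line of the net moment. Both are hypothetical.

def choose_rotation_direction(probe_moment, rotate_about_moment_axis,
                              probe_angle_deg=0.5):
    """Try a small rotation either way and keep the direction (+1 or -1)
    in which the sensed moment decreases."""
    rotate_about_moment_axis(+probe_angle_deg)
    m_pos = probe_moment()
    rotate_about_moment_axis(-2 * probe_angle_deg)   # swing to the other side
    m_neg = probe_moment()
    rotate_about_moment_axis(+probe_angle_deg)       # restore the start pose
    return +1 if m_pos < m_neg else -1
```

As the text notes, this active sensing is slow, which motivates the wall-hit approach adopted next.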

Figure 6.27: Removing Small Directional Error

But this approach requires a lot of active sensing and is time consuming. Therefore, we follow a different approach (see Figure 6.28). A BackJump is taken to avoid jamming during rotation, and the wall hits are sensed through the force values. The value of the BackJump can be calculated as shown in Figure 6.29. To avoid the peg's corner hitting the wall, the BackJump can be taken as the arc length l. So, BackJump = l = θ·r, where r is the radius of the peg and θ is the tilt angle. The BackJump should not be too large, to avoid the peg getting out of the hole. The maximum limit of the BackJump can be calculated as follows (see Figure 6.30):


Figure 6.28: a) Rotate until first wall hits. b) Rotate until second wall hits. c) Insert at the midpoint.

Figure 6.29: Calculating BackJump

According to Figure 6.30, the BackJump should be less than l. Since l·sin θ = 2(R − r·cos θ) and BackJump = θ·r, the condition BackJump ≤ l gives θ·r ≤ 2(R − r·cos θ)/sin θ.


Therefore, θ·r·sin θ ≤ 2(R − r·cos θ), i.e., θ·sin θ + 2·cos θ ≤ 2R/r, where R and r are the hole and peg radii respectively.
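The BackJump and its upper-limit check above can be evaluated numerically; a small sketch (θ in degrees, dimensions in mm):

```python
import math

def backjump(tilt_deg, peg_radius_mm):
    """BackJump = l = theta * r, the arc length cleared by the peg's corner."""
    return math.radians(tilt_deg) * peg_radius_mm

def backjump_within_limit(tilt_deg, peg_radius_mm, hole_radius_mm):
    """Check the derived bound: theta*sin(theta) + 2*cos(theta) <= 2R/r."""
    t = math.radians(tilt_deg)
    return t * math.sin(t) + 2 * math.cos(t) <= 2 * hole_radius_mm / peg_radius_mm
```

For the 56.5 mm peg and 57 mm hole used in the experiments below (r = 28.25 mm, R = 28.5 mm) and a 2° tilt, backjump(2, 28.25) is roughly 0.99 mm, and the limit check passes.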

Figure 6.30: Calculating the Upper Limit On BackJump

Figures 6.31 and 6.32 show the experimental results obtained using SDE Removal on a hole of diameter 57 mm and a peg of diameter 56.5 mm. We can see from Figure 6.31 that there is more than one back-jump and insertion step in SDE Removal. This is due to inaccuracy in the direction of the moments sensed; as soon as we get the correct direction of the moments, the insertion is done in a single step. As we see from Figure 6.32, the insertion proceeds until the upward force Fz crosses a threshold value (see the red-marked points in Figure 6.32). As soon as the upward force exceeds the threshold due to jamming, a back jump is taken and the proper orientation for smooth insertion is searched for.

Stopping Criterion for SDE Removal

SDE Removal stops when the peg reaches a pre-specified depth inside the hole.


Figure 6.31: Insertion with Time using SDE Removal

Figure 6.32: Insertion and Upward Force Due to Jamming


7. Optimization
Several approaches to designing appropriate force control parameters have been presented in the literature. They can be classified as: (a) analytical approaches, (b) experimental approaches, and (c) learning approaches based on human skill. In the analytical approaches, the necessary and sufficient conditions on the force control parameters for successful operation are derived by geometric analysis of the target tasks. However, the analytical approaches cannot be used to obtain parameters that achieve operations efficiently, since the cycle time cannot be estimated analytically [31]. Further, it is difficult to derive these necessary or sufficient conditions by geometric analysis for complex-shaped objects. In the experimental approaches, optimal control parameters are obtained by learning or by exploration based on the results of iterative trials. In these approaches, the cycle time is measurable because the operations are performed either actually or virtually. We have used Design of Experiments (DOE) for optimization of our assembly task. Given the statistical nature of the assembly task and DOE's increasing popularity in manufacturing quality control, DOE has been used for robot assembly parameter optimization [10]. We have used DOE to optimize the time of search as well as insertion, with a statistical analysis tool (JMP [32]) that creates the design as well as analyzes the data to obtain optimal values for the parameters affecting the time of search and insertion. Since we have implemented the search using the Precession Strategy and the insertion using LDE Removal and SDE Removal, and these three algorithms have different factors affecting their time of completion, we optimize the three tasks separately.

7.1 Design Of Experiments


Design of Experiments (DOE) is a family of experimental methods used to quantify statistically the effects of factors, and the interactions between factors, through observation of forced changes made methodically as directed by mathematically systematic tables. DOE means designing our experiments in such a manner that we can analyze the direct effects as well as the interaction effects of factors or parameters on the optimization goal. DOE offers various kinds of designs, such as full factorial designs, custom designs, etc. Full factorial designs include all possible combinations of the factor values. Thus, if there are n factors, each with two levels (High and Low), there will be 2^n trials in the design. The custom designer of our statistical tool starts with a random set of points inside the range of each factor. The computational method is an iterative algorithm called coordinate exchange [33]. Each iteration of the algorithm tests every value of every factor in the design to determine whether replacing that value increases the optimality criterion. If so, the new value replaces the old. This process continues until no replacement occurs for an entire iteration.
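A full factorial design can be generated mechanically from the factor ranges; a small sketch, using the search factors and limits defined later in this chapter:

```python
from itertools import product

def full_factorial(levels):
    """All combinations of factor levels: 2**n runs for n two-level factors.

    levels: dict mapping factor name -> tuple of its levels (low, high)."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# Factor names and two-level ranges as used for the search optimization:
design = full_factorial({
    "Dip":          (2.0, 2.5),    # mm
    "ContactForce": (2.0, 5.0),    # N
    "AngularSpeed": (0.05, 0.2),   # degrees per command
})
# 3 two-level factors -> 2**3 = 8 runs
```

A custom design, by contrast, starts from random points and improves them by coordinate exchange, which full enumeration does not capture; this sketch covers only the full factorial case.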


7.2 Optimizing the Search


The factors affecting the search are:
a) Dip: the amount by which the peg descends into the hole to sense it, after which the precession starts.
b) Contact Force for Search: the force maintained by the peg to keep contact with the hole.
c) Angular Speed for Precession: the speed with which the peg precesses.

We consider each of the parameters above as a two-level parameter. For that, we define the lower and upper limits for each parameter (lower limits are denoted by subscript L and upper limits by subscript H):
a) DipL = 2.0 mm, DipH = 2.5 mm
b) ContactForceL = 2 N, ContactForceH = 5 N
c) AngularSpeedL = 0.05° per command, AngularSpeedH = 0.2° per command

Now we design our experiments to analyze the direct effects as well as the interaction effects of the factors affecting the Time of Search. So we model the Time of Search as:

TOS = a1·Dip + a2·CF + a3·AS + a4·Dip·CF + a5·Dip·AS + a6·CF·AS + a7·Dip·CF·AS + a8·Dip² + a9·CF² + a10·AS²

where TOS is the Time of Search, CF is the Contact Force and AS is the Angular Speed. In this formulation, we try to capture the direct effects of the factors (given by the coefficients a1, a2, a3), the quadratic effects (given by a8, a9, a10) and the interaction effects (given by a4 to a7). If we wished to capture only linear effects, a full factorial design containing 2^3, i.e., 8 experiments would have sufficed, but to visualize the quadratic and interaction effects of the factors on the search time, we made a custom design of experiments that could capture such effects too. We first made a design for our experiments that constituted 16 runs with three replicates for each run. To reduce the error range (which was 42.9 s, as shown in Figure 7.1 (c)), we performed some more experiments and increased the set to 20 experiments. This led to a reduction in the error range (which became 31 s, as shown in Figure 7.1 (b)). The three replicates were taken to record the time data consistently; finally we take the average of the times recorded in the three runs. Table 7.1 shows the DOE for search:

Table 7.1: DOE for Precession Search

Dip to be    ContactForce     AngularSpeed       Search Time   Search Time   Search Time   Mean Search
sensed       for Hole Search  for Precession     (1st Run)     (2nd Run)     (3rd Run)     Time
(mm)         (N)              (deg per command)  (s)           (s)           (s)           (s)
2.5          2                0.2                103           106.79        124.59        111.46
2            2                0.2                94.18         88.79         81.14         88.04
2.25         2                0.125              152           136.18        152.98        147.05
2            2                0.05               249.98        276.57        267.37        264.64
2.5*         2                0.05               385.78        584.35        367.78        376.78
2.4          2                0.15               122           113.59        134.39        123.32
2.25         3.5              0.2                74.59         76.40         68.40         73.13
2            3.5              0.125              90.96         85.40         82.20         86.19
2.25         3.5              0.125              83.39         95.39         91.40         90.06
2.25         3.5              0.05               159.20        158           136.96        151.39
2.5          3.5              0.125              87.40         99.79         94.39         93.86
2.25         3.5              0.125              91            84.79         90.59         88.79
2.4          3.5              0.15               94.59         84.35         91.61         90.18
2.5*         5                0.05               158.79        197.18        196.59        196.89
2.5          5                0.2                86.59         87            89.98         87.85
2            5                0.2                67.20         66.20         55.39         62.93

(Note: In the rows marked with *, we have taken the best two runs to evaluate the mean.)

We then analyze the results using our statistics tool, whose Prediction Profiler helps us visualize the effects of the various parameters on our optimization goal. The results are shown in Figure 7.1.


Figure 7.1: a) The plot of actual vs. predicted search time (in seconds). b) The Prediction Profiler for the set of 20 experiments. c) The Prediction Profiler for the set of 16 experiments.


The Prediction Profiler uses standard least-squares regression to fit the values and calculate the coefficients a1 to a10. The value of RSq (see Figure 7.1 (a)) would be 1 for an ideal fit; the nearer it is to unity, the better the fit. For our experiments, RSq is 0.90, which signifies a good fit. The red straight line in Figure 7.1 (a) is the regression line. The dotted blue line represents the mean of the search time data. The dotted red lines show the deviation of the predicted values from the actual values; the scattered black points allow the predicted and actual values to be compared. In the Prediction Profiler (see Figure 7.1 (b)), the black lines show the fitted dependence of the search time on the various factors. The blue dotted lines represent the 95% confidence interval. The profiler works on a desirability function: the minimum search time is obtained where the desirability is maximum (represented by the horizontal dotted red line). The optimized values of the factors (represented by the vertical dotted red lines) are those corresponding to the minimum search time. This means that if we repeat our experiments with the optimal set of factor values, 95% of the runs would give a search time in the range 46.4 ± 31 seconds. The Prediction Profiler (see Figure 7.1 (b)) predicted that the search time will be minimum at Dip = 2.1 mm, ContactForce (Fx) = 3.7 N and AngularSpeed = 0.19° per command, with a predicted minimum search time of 46.4 seconds. When we ran the assembly with these values, the search time came out to be 69 seconds, which is well within the range predicted by the confidence interval.
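The least-squares fit that JMP performs internally can be sketched directly from the response-surface model given earlier; the column layout below (linear, interaction, then quadratic terms for a1 to a10) is our assumption about how the terms are ordered:

```python
import numpy as np

def quadratic_design_matrix(dip, cf, asp):
    """Columns a1..a10 of the response-surface model for the search time."""
    dip, cf, asp = (np.asarray(v, float) for v in (dip, cf, asp))
    return np.column_stack([
        dip, cf, asp,                    # a1..a3: direct effects
        dip * cf, dip * asp, cf * asp,   # a4..a6: two-way interactions
        dip * cf * asp,                  # a7: three-way interaction
        dip**2, cf**2, asp**2,           # a8..a10: quadratic effects
    ])

def fit_search_time(dip, cf, asp, tos):
    """Ordinary least-squares estimate of the coefficients a1..a10."""
    x = quadratic_design_matrix(dip, cf, asp)
    coef, *_ = np.linalg.lstsq(x, np.asarray(tos, float), rcond=None)
    return coef
```

With the fitted coefficients in hand, the predicted minimum can be found by evaluating the model over a grid of factor values within their limits, which is essentially what the profiler's desirability search does.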

7.3 Optimizing the LDE Removal


The factors affecting the LDE Removal are:
a) Contact Force: the force maintained by the peg against the hole.
b) Angular Speed: the degrees by which the peg rotates per command.

The limiting values for these factors are: ContactForceL = 2.0 N, ContactForceH = 5.0 N; AngularSpeedL = 0.05° per command, AngularSpeedH = 0.2° per command. Again we want to analyze the individual effects as well as the interaction effects of these parameters. For that, we designed our experiments as shown in Table 7.2.


The Prediction Profiler (see Figure 7.2 (b)) predicted that the LDE Removal time will be minimum at ContactForce = 3.75 N and AngularSpeed = 0.2° per command, with a predicted minimum LDE Removal time of 50.6 seconds. When we ran the assembly with these values, the LDE Removal time came out to be 58.9 seconds.

Table 7.2: DOE for LDE Removal

ContactForce  AngularSpeed       Time for LDE   Time for LDE   Time for LDE   Mean Time for
(N)           (deg per command)  Removal        Removal        Removal        LDE Removal
                                 (1st Run) (s)  (2nd Run) (s)  (3rd Run) (s)  (s)
3.5           0.125              78.4           75.2           79.3           77.6
2.51          0.2                59.7           57.9           58.9           58.8
3.5           0.05               181.3          167.8          175.1          174.8
5             0.05               206            201.5          206.2          204.5
2             0.125              90.7           89.3           87.2           89
5             0.2                56.5           62.2           65.3           61.3
3.5           0.125              77.6           80             78.3           78.6
2             0.05               137.7          169            168            158

a)

b)

Figure 7.2: a) The plot of actual vs. predicted LDE Removal time (in seconds). b) The Prediction Profiler.

7.4 Optimizing the SDE Removal


When we analyze SDE Removal, we can see that as soon as the peg gets the correct direction of the moment, it searches for the two wall hits and the insertion is completed in a single step (though there may be more than one insertion step, as noted in Section 6.2.3, due to inaccuracies in the moments sensed). So we could not find meaningful parameters affecting SDE Removal. We nevertheless took two parameters, ContactForce and BackJump, and examined their effects on SDE Removal; the results are shown in Table 7.3. We can see from Table 7.3 that these two parameters had almost no effect on the time for SDE Removal, so for the final assembly we took one of the combinations from Table 7.3 at random.

Table 7.3: Experimental Runs for SDE Removal

ContactForce  BackJump  Time for SDE   Time for SDE   Time for SDE   Mean Time for
(N)           (mm)      Removal        Removal        Removal        SDE Removal
                        (1st Run) (s)  (2nd Run) (s)  (3rd Run) (s)  (s)
8             6         41.9           35.1           41.3           39.4
8             5         27.9           43.2           47.5           39.5
8             7         32.2           32.6           32.4           32.4
10            6         47.2           26.4           44.3           39.3
10            6         52.7           32.3           30.3           38.4
10            5         40             25.6           32.2           32.6


8. Results and Conclusions


There were three basic steps identified for the peg-in-hole assembly. All three were optimized for a peg diameter of 56.5 mm and a hole diameter of 57 mm. The optimized values of the factors affecting the three processes are shown in Tables 8.1, 8.2 and 8.3.

Table 8.1: Optimized results for Hole Search

Affecting Parameter    Optimized Value
Dip                    2.1 mm
Contact Force          3.7 N
Angular Speed          0.19° per command

Table 8.2: Optimized results for LDE Removal

Affecting Parameter    Optimized Value
Contact Force          3.75 N
Angular Speed          0.2° per command

Table 8.3: Optimized results for SDE Removal

Affecting Parameter*   Optimized Value
Contact Force          8 N
Back Jump              6 mm

(* Note: SDE Removal was found to be independent of the parameters shown in Table 8.3, so the optimized values were taken at random from the allowable ranges.)

8.1 Conclusions
Before directly attacking the Hole Search problem, we first got acquainted with the working of the Force/Torque sensor and built some simple applications using Hybrid Force-Position Control. We also successfully recalibrated the damaged F/T sensor to work for applications where one force component could be kept constant. To acquire knowledge of the stiffness of the materials in the working environment, we developed the KSF; and since continuous tracing forms an elementary part of the hole search, we developed TUSS, which was successfully tested on some standard surfaces. For the hole search, we tried blind strategies (Grid Search and Spiral Search) as well as intelligent strategies (Neural Network based and Precession based). We found that the neural network approach is suited only to the parallel-peg case, which is too idealized to occur in practice. Therefore, to accomplish the real-world assembly task, we moved to the Precession Strategy, which proved very accurate for the hole search. We found that blind or exhaustive searches are very time consuming and ineffective for real-world assemblies; intelligent search strategies take over there. The insertion task required changing the orientation of the peg. Initially, the insertion algorithms did not give successful results; we then found that Gravity Compensation was needed. We added the Gravity Compensation module developed by us to our algorithms and then successfully tested the insertion algorithms. Both the LDE Removal and SDE Removal algorithms were successfully tested for various peg sizes, hole sizes and clearances. We then identified parameters affecting the performance of the search and insertion algorithms and optimized the time for search and insertion using the DOE technique. DOE proved to be a good optimization technique for our assembly, as it provided very consistent results. One overall observation was that the more we know about the model of our working environment, the faster the assembly; otherwise we need to resort to blind and time-consuming strategies to acquire the system model. The tilted-peg case was not solved convincingly using the neural network approach.
We plan to obtain a neural network solution for the tilted-peg case, as it is the normal case in real-world assembly. The neural approach would avoid storing and maintaining the database of values required by the precession strategy.


References
1. W. Haskiya, K. Maycock and J. Knight, "Robotic assembly: chamferless peg-hole assembly", Robotica, vol. 17, pp. 621-634, 1999.
2. S. R. Chhatpar and M. S. Branicky, "Localization for robotic assemblies with position uncertainties", Proc. 2001 IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems.
3. Dimitry M. Gorinevsky, Alexander M. Formalsky and Anatoly Yu. Schneider, Force Control of Robotics Systems, CRC Press LLC, 1997.
4. M. H. Raibert and J. J. Craig, "Hybrid Position/Force Control of Manipulators", Transactions of the ASME, vol. 102, June 1981.
5. Wyatt S. Newman, Yonghong Zhao and Yoh-Han Pao, "A Force Guided Approach for Robotic Peg-in-Hole Assembly", Department of Electrical Engineering and Computer Science, Case Western Reserve University.
6. Phuong Nguyen and Fazel Naghdy, "Fuzzy Control of Automatic Peg-In-Hole Insertion", Department of Electrical and Computer Engineering, University of Wollongong.
7. Vijaykumar Gullapalli, Roderic A. Grupen and Andrew G. Barto, "Learning Reactive Admittance Control", Proc. 1992 IEEE International Conference on Robotics & Automation, France, May 1992.
8. Shashank Shekhar and Oussama Khatib, "Force strategies in Real Time Fine Motion Assemblies", ASME Winter Annual Meeting, 1987.
9. In-Wook Kim and Dong-Jin Lim, "Active Peg-in-hole of Chamferless Parts using Force/Moment Sensor", Proc. 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems.
10. Dave Gravel, George Zhang, Arnold Bell and Biao Zhang, "Objective Metric Study for DOE-Based Parameter Optimization in Robotic Torque Converter Assembly", Advanced Manufacturing Technology Development, Ford Motor Company, Livonia, MI.
11. Jeremy A. Marvel, Wyatt S. Newman, Dave P. Gravel, George Zhang, Jianjun Wang and Tom Fuhlbrigge, "Automated Learning for Parameter Optimization of Robotic Assembly Tasks Utilizing Genetic Algorithms", Electrical Engineering and Computer Science Dept., Case Western Reserve University, Cleveland, Ohio.
12. ISO Standard 8373:1994, "Manipulating Industrial Robots - Vocabulary".
13. The Editors of Encyclopaedia Britannica Online, 2008, Article: Robot (Technology). Available at: http://www.eb.com
14. http://www.kuka-robotics.com/usa/en/products/industrial_robots/low/kr6_2/
15. http://en.wikipedia.org/wiki/Transmission_Control_Protocol
16. http://en.wikipedia.org/wiki/Stiffness
17. http://www.britannica.com/EBchecked/topic/333644/lead-through-programming
18. Sabine Demey, Herman Bruyninckx and Joris De Schutter, "Model-Based Planar Contour Following in the Presence of Pose and Model Errors", Intl. J. Robotics Research, pp. 840-858, 1997.


19. A. Masoud and S. Masoud, "Evolutionary action maps for navigating a robot in an unknown, multidimensional, stationary environment, part II: Implementation results", Proc. IEEE Intl. Conf. on Robotics and Automation, NM, Apr. 21-27, 1997, pp. 2090-2096.
20. H. Kazerooni and M. G. Her, "Robotic deburring of two dimensional parts with unknown geometry", IEEE International Symposium on Intelligent Control, August 1998.
21. Ralph E. Goddard, Yuan F. Zheng and Hooshang Hemami, "Dynamic Hybrid Velocity/Force Control of Robot Compliant Motion over Globally Unknown Objects", IEEE Transactions on Robotics and Automation, vol. 8, no. 1, February 1992.
22. http://hypertextbook.com/facts/2006/reactiontime.shtml
23. Jain and Rawat, Statistics, Probability and Random Processes, CBC Publications, Jaipur, India.
24. Y. N. Gaur and Nupur Srivastava, Statistics and Probability Theory, ISBN 978-81-88870-28-8, Genius Publications, Jaipur, India.
25. Jain and Rawat, Discrete Mathematical Structures, CBC Publications, Jaipur, India.
26. V. B. L. Chaurasia and Amber Srivastava, Discrete Mathematical Structures, ISBN 81-88870-12-9, Genius Publications, Jaipur, India.
27. K. L. P. Mishra and N. Chandrasekaran, Theory of Computer Science, ISBN 81-203-1271-6, Prentice Hall of India, New Delhi, India.
28. D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning Representations by Back-Propagating Errors", Nature (London), vol. 323, pp. 533-536, 1986.
29. D. Zwillinger, CRC Standard Mathematical Tables & Formulae, 30th edition, p. 462, 1996.
30. http://en.wikipedia.org/wiki/Precession
31. Natsuki Yamanobe, Hiromitsu Fujii, Tamio Arai and Ryuichi Ueda, "Motion Planning by Integration of Multiple Policies for Complex Assembly Tasks", Cutting Edge Robotics 2010, InTech.
32. http://www.jmp.com
33. R. K. Meyer and C. J. Nachtsheim, "The Coordinate-Exchange Algorithm for Constructing Exact Optimal Experimental Designs", Technometrics, vol. 37, pp. 60-69, 1995.

