
Calibration of Cartesian Robot Based on Machine Vision

Wang Rui1*, Qu Huawei1, Sui Yuyang1, Lv Jianliang1
1. School of Naval Architecture and Ocean Engineering, Harbin Institute of Technology at Weihai
Weihai, China
wrhit@163.com, quhuawei@foxmail.com, 626667745@qq.com, 476670487@qq.com

Abstract—Machine vision makes robots more intelligent, but most Cartesian robots rely on manual programming to perform operations. This paper presents a new nine points calibration method for Cartesian robots based on machine vision and realizes the automatic assembly of the Cartesian robot. Firstly, the affine transformation model of the Cartesian robot was established; the homogeneous matrices of four affine transformations form a closed chain. Secondly, HALCON image processing was used to extract the image coordinates of the center points of interest and to record the current mechanical coordinates. Thirdly, from the obtained image coordinates and mechanical coordinates, the transformation relationship between the self-made calibration board and the base coordinate system was obtained. Finally, the effectiveness of the proposed calibration was validated through a case study.

Keywords—Cartesian robot; machine vision; nine points calibration; self-made calibration plate; HALCON

I. INTRODUCTION
Machine vision is an important research field in robotics [1-
3]. Industrial robot localization based on machine vision
technology obtains the advantages of high positioning accuracy,
high automation and intelligence, low labor cost and so on [4].
Therefore, it has been widely used in positioning, grasping and
assembly. The Cartesian robot manipulator is widely used in
industry because of its high carrying capacity, high speed and
high repeatability and positioning accuracy [5].
Against the background of Made in China 2025, and along with the transformation and upgrading of China's manufacturing industry, the mechanical industry is rapidly developing toward automation, intelligence and high precision, so the traditional manually programmed Cartesian robot has been unable to meet current industrial demand. In order to achieve automatic and intelligent motion control of the Cartesian robot, a new method of Cartesian robot calibration based on machine vision is proposed in this paper. The method, based on nine points calibration, has the advantages of simple operation, low cost and a high degree of automation.

II. MACHINE VISION CALIBRATION MODEL OF CARTESIAN ROBOT

A. The Physical System Structure of Cartesian Robot

The Cartesian robot is designed to install the radiating elements of a radar array in the laboratory. The radar array antenna is composed of a number of radiation unit groups, and each radiation unit group consists of eight radiating units and a bottom plate.

In order to meet the need of installing radiation units in the array radar assembly, the Cartesian robot has been developed independently. The Cartesian robot comprises an industrial computer, a motion control card, three sets of motor lead screw modules, an industrial camera, a light source, a guide rail, a cylinder, an air claw, a radiation unit, a substrate, and so on, as shown in Fig. 1.

Fig. 1. System structure of Cartesian robot.

The robot X axis, Y axis and Z axis are respectively composed of the three sets of motor lead screw modules, and any two axes are perpendicular to each other. In particular, the radiation unit mounting substrate is fastened to the X axis, while the end effector (the air claw) and the industrial camera are mounted on the shaft.

B. The Mathematical Coordinate Transformation Model of Cartesian Robot

According to the physical system structure of the Cartesian robot, its mathematical coordinate transformation model is established, as shown in Fig. 2. There are four coordinate systems: the base coordinate system, the calibration plate coordinate system, the tool coordinate system and the camera coordinate system. The calibration of the Cartesian robot vision system is exactly to determine the pose of the object in the robot base coordinate system, which is the most important and fundamental task for the system [6]. The calibration method proposed in this paper can be described by formula (1) below, based on the coordinate transformation model of Fig. 2.

Fig. 2. Mathematical coordinate transformation model of Cartesian robot.
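To make the closed chain of Fig. 2 concrete before it is written out formally, the short NumPy sketch below composes three homogeneous 2-D transforms into the pose of the calibration board in the base frame (the fourth matrix of the closed chain). All matrix values and the helper name hom are placeholders chosen for illustration; they are not measurements or code from the paper.

```python
import numpy as np

def hom(a11, a12, a21, a22, tr, tc):
    """Build a 3x3 homogeneous 2-D transform from a linear part and a translation."""
    return np.array([[a11, a12, tr],
                     [a21, a22, tc],
                     [0.0, 0.0, 1.0]])

# Placeholder poses along the chain of Fig. 2 (illustrative values only).
base_H_tool = hom(1.0, 0.0, 0.0, 1.0, 120.0, 35.0)    # tool pose in the base frame
tool_H_cam  = hom(1.0, 0.0, 0.0, 1.0, -15.0, 40.0)    # camera frame relative to the tool
cam_H_cal   = hom(0.056, 0.0, 0.0, 0.056, 2.5, -7.0)  # calibration board pose in the camera frame

# Closing the chain gives the pose of the calibration board in the base frame.
base_H_cal = base_H_tool @ tool_H_cam @ cam_H_cal
print(base_H_cal)
```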

One of the most common problem formulations is based on the closed chain of transformations [7]. The closed chain consisting of four homogeneous matrices can be obtained from Fig. 2 in order to express the position of the black square center of the self-made calibration board in the base coordinate system [8-9], which can be obtained by the following transformation:

$$ {}^{base}H_{cal} = {}^{base}H_{tool} \cdot {}^{tool}H_{cam} \cdot {}^{cam}H_{cal} \qquad (1) $$

Where ${}^{base}H_{tool}$ means the pose of the tool in the Cartesian robot base coordinate system, ${}^{tool}H_{cam}$ represents the relationship between the camera coordinate system and the tool coordinate system, and ${}^{cam}H_{cal}$ stands for the pose of the self-made calibration board in the camera coordinate system.

C. The Simplified Mathematical Model

If the position of the object on the mechanism and its rotation angle cannot be kept constant, we need to correct the translation and rotation of the object. Because the distance between the camera and the object changes, the size of the object in the image also changes obviously. The transformations used in these cases are called affine transformations [1], which can be described by the following formula (represented by two coordinates, called inhomogeneous coordinates):

$$ \begin{pmatrix} r' \\ c' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} r \\ c \end{pmatrix} + \begin{pmatrix} t_r \\ t_c \end{pmatrix} \qquad (2) $$

In this formula, the affine transformation consists of a linear part given by the $2 \times 2$ matrix and a translation part. The expression is a bit tedious because the translation part always has to be handled separately. To solve this problem, we introduce a third coordinate with the constant value 1 on top of the original coordinates (homogeneous coordinates). This representation allows us to express an affine transformation as a single matrix multiplication:

$$ \begin{pmatrix} r' \\ c' \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r \\ c \\ 1 \end{pmatrix} \qquad (3) $$

Where the translation part is represented by the elements $a_{13}$ and $a_{23}$ of the matrix ${}^{base}H_{cal}$.

To determine ${}^{base}H_{cal}$, we need at least three sets of data that contain both image coordinates and mechanical coordinates. If more than three point correspondences are passed, the transformation is overdetermined. In this case, the returned transformation is the one that minimizes the distances between the transformed input points $(P_x, P_y)$ and the target points $(M_x, M_y)$, as described in the following equation:

$$ \sum_i \left\| \begin{pmatrix} M_{xi} \\ M_{yi} \\ 1 \end{pmatrix} - {}^{base}H_{cal} \cdot \begin{pmatrix} P_{xi} \\ P_{yi} \\ 1 \end{pmatrix} \right\|^2 \rightarrow \text{minimum} \qquad (4) $$
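HALCON offers built-in routines for fitting such a transformation from corresponding point lists, but the paper does not spell out the exact operator calls. Purely as an illustration of how the overdetermined problem (4) can be solved, a minimal NumPy sketch might look as follows; the function name and array layout are ours, not the paper's.

```python
import numpy as np

def fit_affine_homogeneous(P, M):
    """Least-squares fit of a 2-D affine map in homogeneous form (formula (4)).

    P : (n, 2) array of input points (e.g. image row/column coordinates)
    M : (n, 2) array of target points (e.g. mechanical X/Y coordinates)
    Returns the 3x3 matrix H with last row (0, 0, 1) minimizing sum ||M_i - H*P_i||^2.
    """
    P = np.asarray(P, dtype=float)
    M = np.asarray(M, dtype=float)
    n = P.shape[0]
    # Each correspondence gives two linear equations in the six unknowns
    # a11, a12, a13, a21, a22, a23.
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0:2] = P
    A[0::2, 2] = 1.0
    b[0::2] = M[:, 0]
    A[1::2, 3:5] = P
    A[1::2, 5] = 1.0
    b[1::2] = M[:, 1]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[x[0], x[1], x[2]],
                     [x[3], x[4], x[5]],
                     [0.0, 0.0, 1.0]])
```

Each correspondence contributes two linear equations, so the nine calibration points of Section III lead to an 18 x 6 system solved in the least-squares sense.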

III. THE METHOD OF NINE POINTS CALIBRATION BASED ON MACHINE VISION SYSTEM

A. The Process of Nine Points Calibration

The calibration of the Cartesian robot machine vision system is actually the process of obtaining the relationship between the calibration board coordinate system and the robot coordinate system, that is, ${}^{base}H_{cal}$. In order to obtain the transformation matrix ${}^{base}H_{cal}$, the nine points calibration method is creatively introduced in this paper. The nine points calibration process is shown in Fig. 3.

Fig. 3. Calibration flow chart of nine points.
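The flow chart of Fig. 3 amounts to a loop over nine target positions: move the robot, grab an image, extract the black-square center, and record the mechanical coordinates reported by the motion control card. The sketch below is a hypothetical outline of that loop; move_robot_xy, grab_image and extract_square_center stand in for the motion-control and HALCON calls actually used and are not APIs from the paper.

```python
def collect_nine_point_data(positions_xy, move_robot_xy, grab_image, extract_square_center):
    """Collect the nine (image coordinate, mechanical coordinate) pairs of Fig. 3.

    positions_xy           : nine (x, y) robot positions covering the camera's field of view
    move_robot_xy(x, y)    : commands the motion control card (hypothetical callback)
    grab_image()           : acquires a picture from the industrial camera (hypothetical callback)
    extract_square_center  : returns the (row, col) center of the black square in an image
    """
    image_points, machine_points = [], []
    for x, y in positions_xy:
        move_robot_xy(x, y)                    # drive the camera to the next position
        img = grab_image()                     # capture the black square of the calibration board
        row, col = extract_square_center(img)  # image coordinates of the square center
        image_points.append((row, col))
        machine_points.append((x, y))          # mechanical coordinates reported by the controller
    return image_points, machine_points
```

The collected pairs can then be fed to a least-squares affine fit, such as the one sketched after formula (4), to obtain the calibration matrix.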

B. The Notes of Nine Points Calibration


The Cartesian robot reset operation and the correction of the industrial camera distortion are completed before the nine points calibration. Then the industrial camera is moved to nine positions through the motion control card, as shown in Fig. 4.

Fig. 4. Nine points are calibrated at nine different positions. Panels (a) to (i) represent nine positions relative to the camera's center of view, and pictures of the black square are collected at these nine locations respectively. For example, (a) shows the black square collected at the top left of the camera's field of view.
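The nine positions of Fig. 4 can be thought of as a 3 x 3 grid around the starting pose. A minimal sketch of generating such a grid is given below; the step size and visiting order are assumptions made for illustration, not values taken from the paper.

```python
def nine_grid_positions(x0, y0, step=20.0):
    """Return nine (x, y) robot positions on a 3x3 grid centered at (x0, y0).

    step is the grid spacing in millimetres; it should be chosen so that the
    calibration square stays inside the camera's field of view at every position.
    """
    offsets = (-step, 0.0, step)
    return [(x0 + dx, y0 + dy) for dy in offsets for dx in offsets]
```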

For each of the collected images, the central point of the black square is extracted, as shown in Fig. 5. Through the powerful mathematical operation functions of HALCON, we can obtain the pose of the black square in the camera coordinate system. The mechanical coordinates of the current Cartesian robot are recorded by the motion control card, as shown in Table I.

Fig. 5. The process of image processing by HALCON, where (a) is the original collected picture, (b) represents the position of the square, and (c) shows the central point coordinates of the black square in the image coordinate system.
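The paper performs this extraction with HALCON operators, which are not reproduced here. Purely as an illustration, an equivalent center extraction could be written with OpenCV as follows; the threshold value and the assumption that the largest dark blob is the calibration square are ours.

```python
import cv2

def black_square_center(image_path, thresh=60):
    """Estimate the (row, col) center of a dark square on a bright background.

    OpenCV stand-in for the HALCON processing of Fig. 5:
    threshold the dark region, keep the largest contour, take its centroid.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Dark square -> invert the threshold so the square becomes the foreground.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no dark region found")
    square = max(contours, key=cv2.contourArea)  # assume the largest dark blob is the square
    m = cv2.moments(square)
    col = m["m10"] / m["m00"]
    row = m["m01"] / m["m00"]
    return row, col
```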

TABLE I. IMAGE COORDINATES AND MECHANICAL COORDINATES OF NINE POINTS CALIBRATION RECORDS

Points   Image coordinate               Mechanical coordinate
No.      Row/pixel      Column/pixel    X-axis/mm      Y-axis/mm
1 203.185 199.959 29.547 56.292
2 199.046 830.322 29.5465 20.762
3 210.93 1402.25 29.5465 -11.5015
4 564.341 202.357 9.2395 55.9935
5 552.551 785.297 9.2395 22.9525
6 569.944 1398.33 9.2395 -11.353
7 986.452 210.336 -14.1815 55.355
8 1002.33 869.192 -14.1815 18.8005
9 984.343 1406.28 -14.1815 -11.8995
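As a self-contained check, the nine correspondences of Table I can be fed directly to a least-squares solution of formula (4); under that reading, the script below should reproduce a matrix of the same form as the calibration result given in formula (5).

```python
import numpy as np

# Nine (row, col) image points and (X, Y) mechanical points from Table I.
image_pts = np.array([
    [203.185,  199.959], [199.046,  830.322], [210.93,  1402.25],
    [564.341,  202.357], [552.551,  785.297], [569.944, 1398.33],
    [986.452,  210.336], [1002.33,  869.192], [984.343, 1406.28],
])
machine_pts = np.array([
    [ 29.547,   56.292 ], [ 29.5465,  20.762 ], [ 29.5465, -11.5015],
    [  9.2395,  55.9935], [  9.2395,  22.9525], [  9.2395, -11.353 ],
    [-14.1815,  55.355 ], [-14.1815,  18.8005], [-14.1815, -11.8995],
])

# Solve formula (4): two linear equations per correspondence, least squares overall.
n = len(image_pts)
A = np.zeros((2 * n, 6))
b = np.zeros(2 * n)
A[0::2, 0:2] = image_pts
A[0::2, 2] = 1.0
b[0::2] = machine_pts[:, 0]
A[1::2, 3:5] = image_pts
A[1::2, 5] = 1.0
b[1::2] = machine_pts[:, 1]
x, *_ = np.linalg.lstsq(A, b, rcond=None)
H = np.array([x[0:3], x[3:6], [0.0, 0.0, 1.0]])
print(np.round(H, 6))  # 3x3 affine calibration matrix, compare with formula (5)
```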
C. The Result of Calibration

The relation between the calibration plate coordinate system and the robot coordinate system is obtained from formula (4) and the data shown in Table I, and the calibration matrix is generated:

$$ \begin{pmatrix} r' \\ c' \\ 1 \end{pmatrix} = \begin{pmatrix} -0.0555372 & 0.000199703 & 40.5789 \\ 0.000106907 & -0.0562974 & 67.4644 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r \\ c \\ 1 \end{pmatrix} \qquad (5) $$

IV. EXPERIMENTS AND RESULTS

A. The Process of Experiment

In order to verify the accuracy of the machine-vision-based calibration method on the Cartesian robot, the following experiment is designed. Through the experiment, the center of the region of interest (ROI) is made to coincide with the Z axis of the camera coordinate system.

Firstly, the image coordinates of the central point of the ROI are collected. Secondly, in order to move the central point of the ROI to coincide with the Z axis of the camera coordinate system, the distances the X axis and Y axis are required to move are calculated from the image center coordinates and the calibration result obtained previously. Thirdly, the image coordinates of the ROI are collected again after moving the X axis and Y axis. Finally, the accuracy of the nine points calibration can be obtained by comparing the coordinates of the obtained central point with the ideal center coordinates of the camera.

Then, the image coordinates of the center point of the circle in the ideal state can be obtained, denoted as $(P_{Row}, P_{Col})$. The accuracy of the nine points calibration can be determined by the following formula:

$$ N = \frac{1}{n} \sum_{i=1}^{n} \sqrt{(P_{Row} - C_{Row})^2 + (P_{Col} - C_{Col})^2} \qquad (6) $$

Where N is the measure of calibration accuracy and n is the number of samples collected. The camera resolution is 1200 x 1800; therefore, $(C_{Row}, C_{Col})$ is (600, 800).

B. An Example of Collecting Samples

Firstly, a circle of the radiation unit in the camera's field of view is randomly selected, and the coordinates of the center of the circle in the image coordinate system are extracted, as shown in Fig. 6. Secondly, the mechanical coordinates of the circle center $(P_{CenterRow}, P_{CenterCol})$ in the base coordinate system are obtained by the affine transformation. Thirdly, the camera image coordinate center $(C_{Row}, C_{Col})$ is transformed to obtain $(P_{600}, P_{800})$.
By subtracting the $(P_{600}, P_{800})$ coordinates from the $(P_{CenterRow}, P_{CenterCol})$ coordinates, the required X axis and Y axis movements can be obtained. When the movement is finished, the center point of the circle is extracted again, as shown in Fig. 7. The image coordinates of the circle center are obtained and compared with the ideal center coordinate of the camera, (600, 800). The error of the calibration method can then be obtained by formula (6), where the row and column coordinates play the roles of x and y respectively.

Fig. 6. The center coordinates of a circle are collected at random.

Fig. 7. After the camera moves to the center of the circle, the coordinates of the center of the circle are extracted again.
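A minimal sketch of the centering step just described, assuming a 3 x 3 calibration matrix that maps image (row, column) coordinates to mechanical (X, Y) coordinates as in formula (5). The matrix values below are illustrative round-offs, not the paper's exact calibration result, and the sign convention of the returned move depends on the robot's axis directions.

```python
import numpy as np

def centering_move(H, circle_row, circle_col, center_row=600.0, center_col=800.0):
    """Return the (dx, dy) robot move that brings the image center onto the circle.

    H maps homogeneous image coordinates (row, col, 1) to mechanical (X, Y, 1),
    as in formula (5). The move is the difference of the two transformed points.
    """
    circle_mech = H @ np.array([circle_row, circle_col, 1.0])
    center_mech = H @ np.array([center_row, center_col, 1.0])
    dx, dy = (circle_mech - center_mech)[:2]
    return dx, dy

# Illustrative calibration matrix of the same form as formula (5).
H = np.array([[-0.0555,  0.0002, 40.58],
              [ 0.0001, -0.0563, 67.46],
              [ 0.0,     0.0,     1.0]])
print(centering_move(H, 563.874, 611.631))  # first Table II sample, before centering
```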

C. Results
Based on the example of collecting samples, ten sets of data collected at random are shown in Table II. The calibration accuracy can be obtained by formula (6). The results are shown in Table III.

TABLE II. RANDOM COLLECTION OF TEN SETS OF DATA

Points   Image coordinate of substrate hole    Image coordinate of substrate hole
No.      before centering operation            after centering operation
         Row/pixel      Column/pixel           Row/pixel      Column/pixel
1 563.874 611.631 600.107 801.306
2 576.562 736.023 599.021 799.375
3 271.277 236.681 600.466 798.827
4 574.323 747.375 600.747 799.361
5 501.729 786.785 599.602 799.015
6 489.073 790.069 601.132 799.024
7 718.435 237.139 601.251 800.402
8 521.604 754.456 599.782 801.012
9 439.358 817.751 601.234 799.569
10 494.128 1102.257 600.982 800.782

TABLE III. EXPERIMENTAL RESULT

Calibration parameter   Number of samples collected   Center point in camera coordinate system   Calibration accuracy/pixel
Calibration result      n = 10                        (600, 800)                                  N ≈ 1.2006
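As a cross-check, formula (6) can be evaluated on the after-centering coordinates of Table II. Reading (6) as the mean Euclidean distance between the extracted and ideal centers, the sketch below gives a value in the neighbourhood of the reported 1.2006 pixels; small deviations can come from rounding of the tabulated coordinates.

```python
import numpy as np

# After-centering image coordinates of the ten samples in Table II.
centered = np.array([
    [600.107, 801.306], [599.021, 799.375], [600.466, 798.827],
    [600.747, 799.361], [599.602, 799.015], [601.132, 799.024],
    [601.251, 800.402], [599.782, 801.012], [601.234, 799.569],
    [600.982, 800.782],
])
ideal = np.array([600.0, 800.0])  # camera image center (C_Row, C_Col)

# Formula (6): mean Euclidean distance between extracted and ideal centers.
N = np.mean(np.linalg.norm(centered - ideal, axis=1))
print(round(N, 4))
```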

V. CONCLUSION

In this paper, the object model and mathematical model of the Cartesian robot are established. Through these models, the homogeneous matrix transformation chain between the base coordinate system and the calibration plate coordinate system is obtained by a new nine points calibration method. With the aid of the HALCON image processing software, the center points are extracted during the nine points calibration and the current mechanical coordinates are recorded. Then, the transformation matrix ${}^{base}H_{cal}$ between the base coordinate system and the calibration board coordinate system of the Cartesian robot is obtained by the affine transformation between the image coordinates and the mechanical coordinates. Finally, the accuracy of the calibration can reach 1.2006 pixel after the nine points calibration, and the positioning accuracy of the Cartesian robot can reach 5.4027 μm.

It is proved that the nine points calibration is simple, effective and practical. It is suitable for Cartesian robots in industries that require high precision grasping, handling and assembling.

ACKNOWLEDGMENT

This work was supported by the Preliminary Research Fund for Military Equipment of the Central Military Commission under Grant No. 41423010401.

REFERENCES

[1] P. Corke, "The Machine Vision Toolbox," IEEE Robotics & Automation Magazine, vol. 12, pp. 16-25, 2005.
[2] W. I. Clement and K. A. Knowles, "An instructional robotics and machine vision laboratory," IEEE Transactions on Education, vol. 37, pp. 87-90, 1994.
[3] C. Steger et al., Machine Vision Algorithms and Applications, Ringgold Inc., Conshohocken, vol. 3, pp. 89-354, 2008.
[4] D. I. Lee, "Robotics," Journal of Endourology, vol. 25, pp. 1242, 2011.
[5] D. Lee et al., "Development of a vision-based automatic vaccine injection system for flatfish," Aquacultural Engineering, vol. 54, pp. 78-84, 2013.
[6] X. Deng et al., "Calibration of a robot vision system coupled with structured light: Method and experiments," Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8918, pp. 256-264, 2014.
[7] J. Heller, M. Havlena and T. Pajdla, "Globally Optimal Hand-Eye Calibration Using Branch-and-Bound," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, pp. 1027-1033, 2016.
[8] M. Ulrich and C. Steger, "Hand-eye calibration of SCARA robots using dual quaternions," Pattern Recognition and Image Analysis, vol. 26, pp. 231-239, 2016.
[9] M. Shah, "Solving the Robot-World/Hand-Eye Calibration Problem Using the Kronecker Product," Journal of Mechanisms and Robotics - Transactions of the ASME, vol. 5, pp. 31007, 2013.

