
VISUAL CONTROL OF 6 DOF ROBOTS WITH CONSTANT

OBJECT SIZE IN THE IMAGE BY MEANS OF ZOOM CAMERA


Oliver Lang, Axel Graser

University of Bremen
Faculty of Electrical Engineering, Institute of Automation (IAT)
Bremen, D-28359, Germany
phone: +49-421-218-7523/7326, fax: +49-421-218-4596
olang@iat.uni-bremen.de

In this paper a new approach for robust visual-servoing in 6 degrees of freedom (DOF) is described. The procedures of image-based visual-servoing provide calibration robust methods for the visual control of robots. One of the main problems of visual-servoing with a gripper-mounted camera is the variable object size in the image due to camera movements. This leads to very high demands on the object recognition algorithms. Furthermore, with the decrease of the object size in the image, the influence of measurement noise increases. In this paper, an approach using a zoom camera to obtain constant object sizes during movement is described. Object identification is simplified and the influence of measurement noise is reduced. Because of the variable focal length, objects may leave the field of view of the camera due to gripper movements. Pose control in 6 DOF is used to overcome this problem. The control strategy is divided into the phases exploration, rough-approach, alignment, and fine-approach. Marked objects are introduced to the system by Teaching-by-Showing. The whole process is calibration robust and therefore suitable for only roughly calibrated camera and robot systems. No object knowledge, except the distance of the object to the camera while teaching, is used for adaptation of the system model. The approach and its verification by a simulation are described in this paper.

I. Introduction

To increase the autonomy of service robots, these robots are often equipped with CCD cameras. They should allow an action of the robot in a natural environment. The calibration of the system represents a central problem in addition to object detection in the digitized image of the CCD camera. In order to be robust against calibration errors it is useful to control the robot in an image-based visual-servoing loop [1, 2]. If the camera is fixed at the gripper (hand-mounted), the image of the environmental objects changes with every movement of the robot.

In order to find the object more easily in the image, the object is provided with a tag. The tag consists of four dark marker points on a light ground. The centres of these points serve as features for the image-based control loop. On the area between the points, information about the object can be placed. This may be an identification number (ID), which must also be extracted by image processing.

Small tags (e.g. 20mm x 20mm) which can be attached to small objects are practically useful. If the gripper (i.e. the camera) is at a larger distance from the object, marker points (e.g. 3mm diameter) and IDs can no longer be extracted safely. Already at a distance of 1m and a focal length of 5mm, the projection of the tag on the sensor is only 0.1mm in width, which is equivalent to 16 pixels in the image. The point diameter in the image decreases to only 2 pixels. At this small tag size in the image, no safe detection is possible.
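These numbers follow directly from the pin-hole projection model. The short calculation below reproduces them; the 20mm tag width and 3mm point diameter are taken from the text, while the pixel pitch of 6.25 um is an assumption chosen so that 0.1mm on the sensor corresponds to the quoted 16 pixels.

    # Pin-hole projection of the tag at 1 m distance with f = 5 mm (all lengths in mm).
    f = 5.0                 # focal length
    z = 1000.0              # distance between camera and tag
    tag_width = 20.0        # physical tag width
    point_diameter = 3.0    # physical marker point diameter
    pixel_pitch = 0.00625   # assumed sensor pixel pitch (6.25 um), not stated in the paper

    tag_on_sensor = f * tag_width / z           # 0.1 mm
    point_on_sensor = f * point_diameter / z    # 0.015 mm
    print(tag_on_sensor / pixel_pitch)          # approx. 16 pixels tag width
    print(point_on_sensor / pixel_pitch)        # approx. 2.4 pixels point diameter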



In this paper, an approach which overcomes these problems is presented. Resolution is increased by the use of a zoom lens. The tag is scaled to the desired size in the image. The influence of the measurement noise is decreased, since the signal-to-noise ratio is enlarged. The whole approach is calibration robust and therefore suitable for only roughly calibrated camera and robot systems.

In chapter II, some aspects of zoom cameras and visual-servoing are presented, and the control process is divided into four phases to realise visual control of the robot in 6 DOF with constant object size. In chapter III, the realisation of the four phases is described in detail, and the approach is tested in a simulation in chapter IV.

II. Visual-Servoing with Constant Object Size

A Use of a Zoom Camera

The use of a zoom camera instead of a camera with a fixed focal length in robot vision has the following advantages and disadvantages:

- By taking two images at different focal lengths, depth information can be gained [3]; however, the attainable accuracy is very small [4].
- Camera movements and focal length changes are approximately redundant in the long-distance area [5], i.e. both can cause similar changes in the image. This effect is used in chapter III.
- The resolution can be influenced. Objects can be scaled in the image by a change of the focal length. In an image-based visual-servoing system, the size of the image features may be kept almost constant during the control sequence. This is shown in [6] for three DOF and in this paper for 6 DOF.
- The calibration of zoom cameras is very complex and inaccurate and therefore they are hardly suitable for look-and-move applications. Further zoom camera model parameters also change depending on the adjusted focal length.

Hosoda et al. [5] proved that the control of a zoom camera in a closed loop is possible and verified this on a robot with 3 DOF. In their approach they also stored the value of the focal length in the course of Teaching-by-Showing as a desired value. During execution, image features and focal length are both controlled to the desired values. In their case, the focal length changed in open loop control using an exponential function and without respect to the image features. Hosoda et al. extended the image Jacobian so that the rate of the focal length change is taken into consideration. The control of the focal length is not coordinated with the control of the robot position. A practical application of the approach is not mentioned.

B Teaching by Showing

Image-based control of the gripper is executed with a camera mounted on the gripper. The task of the controller is to drive the gripper into a desired pose relative to an object based on visual information of the camera. For this, the automatic controller uses an image-based error, defined in the two-dimensional image coordinate system {I}, to calculate the new pose of the robot in the robot base system {R}.

From the camera image relevant features are extracted with an image processing system. The automatic controller computes the change of the robot pose from a comparison between current and desired features. In the new pose another image is taken and the control algorithm is repeated until the task is fulfilled.

First of all, the desired image features must be defined. This is done by showing (Teaching-by-Showing, [1]). For this purpose, the gripper and the marked object are put into the desired relative pose to each other, the corresponding image is taken and features are extracted. The features are stored as desired features Ix_d, the focal length as desired focal length f_d and the distance of the camera to the tag as Cz_d. Cz_d must be known only roughly. The left superscript characters characterise the respective coordinate system ({I} image, {S} sensor, {C} camera, {R} robot base).

During control the current image features are driven to the desired image features. If the actual image features and focal length are equal to the desired values, the camera (and therefore the gripper) is in the same relative pose to the object as during teaching, even if the robot calibration and the interior and exterior camera calibration are inaccurate.
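The reference data recorded during Teaching-by-Showing can be kept in a very small structure. The sketch below only illustrates what has to be stored (Ix_d, f_d and the rough distance Cz_d); the function and field names are hypothetical, not taken from the paper.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class TeachingData:
        features_d: np.ndarray  # Ix_d: desired image features of the four tag points
        f_d: float              # desired focal length [mm]
        cz_d: float             # roughly known camera-tag distance while teaching [mm]

    def teach(extract_features, read_focal_length, rough_distance_mm):
        """Record the reference data while gripper and object are in the desired relative pose."""
        return TeachingData(features_d=extract_features(),
                            f_d=read_focal_length(),
                            cz_d=rough_distance_mm)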
C Some Aspects of Image-Based Visual Servoing

Before we describe the four phases, some aspects of the image-based process are described. The pin-hole camera model is used for modelling the camera [2]. A point x described in the camera coordinate system {C} is projected onto the sensor plane {S}.

The automatic controllers used are based on the image Jacobian J. This matrix represents a linear transformation of a change of the camera pose ΔCp to the corresponding change of the projection ΔSx. The change of the camera pose (cause) is described in camera coordinates, the change of the projection (effect) is described in sensor coordinates.

J is a function of the current sensor output x and y, of the current coordinate z of the point in the camera coordinate system {C} and of the current focal length f. Consequently, J depends on the operating point. Since the tag is small, the distance z is assumed to be equal for all points on the tag. In order to be robust against the always existing measurement noise, the number of image features should be higher than the number of DOF. In this case, the approximation for the unknown vector ΔCp results from the use of the pseudo-inverse J+.

The components of ΔCp and ΔSx are velocities. Since the system is a sampled-data control system, the following control algorithm with a diagonal amplification matrix K_M is used:

    ΔCp = K_M · J+ · (Sx_d - Sx)    (1)
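A sketch of this control law for the four tag points is given below. The image Jacobian used here is the standard pin-hole interaction matrix of a point feature in sensor coordinates (one common sign convention, cf. [1]); the gain matrix K_M and the feature layout are illustrative assumptions.

    import numpy as np

    def point_jacobian(x, y, z, f):
        """Image Jacobian of one point feature (x, y) on the sensor with depth z and focal length f."""
        return np.array([
            [-f / z,  0.0,    x / z,  x * y / f,      -(f + x * x / f),  y],
            [ 0.0,   -f / z,  y / z,  f + y * y / f,  -x * y / f,       -x],
        ])

    def control_step(features, features_d, z, f, K_M):
        """One sampling step of (1): dCp = K_M * J^+ * (Sx_d - Sx), stacking all four points."""
        J = np.vstack([point_jacobian(x, y, z, f) for (x, y) in features])   # 8 x 6 matrix
        error = (np.asarray(features_d) - np.asarray(features)).reshape(-1)  # stacked feature error
        return K_M @ np.linalg.pinv(J) @ error                               # 6-DOF pose change dCp

A call could look like control_step(features, features_d, z, f, K_M=np.diag([0.5]*6)); the over-determined 8x6 system is what makes the loop robust against measurement noise.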

In our set-up the robot is controlled in robot base coordinates. In principle it is also possible to control the robot in joint coordinates. Since, however, most of the commercially available industrial and service robots are equipped with internal inverse kinematics, the structure chosen by us is practical.

The manipulated variable for the robot ΔRp must be transformed from camera coordinates to robot base coordinates. The automatic controller computes a manipulated variable ΔCp which represents the new pose of the camera coordinate system {C} described in the old pose of the camera coordinate system {C}. The cause of the movement of {C} is a movement of the tool coordinate system. The new robot pose can be computed from the old robot pose, the exterior camera parameters and the manipulated variable [2].
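With homogeneous transformation matrices this computation can be sketched as follows; T_tool_cam stands for the (only roughly known) exterior camera parameters and T_cam_delta for ΔCp written as a homogeneous transform. The function name and interface are illustrative, not the authors' implementation.

    import numpy as np

    def new_tool_pose(T_base_tool, T_tool_cam, T_cam_delta):
        """New robot (tool) pose in {R} from the camera-frame manipulated variable.
        T_base_tool: current tool pose, T_tool_cam: exterior (hand-eye) camera parameters,
        T_cam_delta: new camera frame {C} described in the old camera frame {C}."""
        T_base_cam_new = T_base_tool @ T_tool_cam @ T_cam_delta  # new camera pose in {R}
        return T_base_cam_new @ np.linalg.inv(T_tool_cam)        # corresponding tool pose in {R}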
An image error e_I is defined as a quantity. It is computed from the control errors of the image features which result from the four tag points [2]. Moreover, a pose error e_P is defined. It is a measure for the control error of the actual camera pose, described in the desired pose of the camera. It can only be calculated in a simulation and is not used for the control algorithms.
D Control in Four Phases

The object may be located anywhere in the working space of the robot and should be grasped. For this, the gripper must be driven to the desired pose relative to the object. To get a stable movement from any start position the following conditions should be considered:

- For simplification of object identification and for an increase of the signal-to-noise ratio, the tag should appear in the image with almost constant size during control.
- The tag must always be in the image of the camera. Therefore, no large movements in all six DOF should be executed simultaneously.
- Finally, the gripper should approach the object in the direction of the z-axis (fine approach) in order to get the object suitably between the gripper jaws.
- To obtain calibration robust control, all phases should be executed completely image-based.
- The image Jacobian is operating-point-dependent. To get an optimal control result, the image Jacobian should be adapted. To keep calibration and calculation efforts small, only image information and the distance Cz_d should be used for this.

Since a simultaneous image-based control of all six DOF considering the above demands is not practical, the control process is divided into four phases (see Fig. 1 and the sketch following it):

1. Exploration: In the exploration phase, the object (i.e. the tag) is searched with a small focal length in the working space of the robot. If a possible tag is sighted, the camera is aligned to the tag image-based (change of camera orientation) and the tag is zoomed in on (increase of focal length). If the object is the searched object, the exploration phase is concluded.

2. Rough-Approach: The camera is moved towards the object. Position and roll angle of the camera are controlled image-based using the three translational DOF and the angle Cα_z. The tag size in the image is kept almost constant by the zoom mechanism. The phase ends if the distance to the tag has decreased to a specific multiple of the desired distance.

3. Alignment: In the third phase, the camera pose is controlled image-based in six DOF. The distance to the object is kept constant. The object is driven round so far that in phase 4 the camera needs to execute movements almost only in z-direction.

4. Fine-Approach: The pose of the camera is controlled image-based in all six DOF. The camera is moved mainly along the z-axis in the direction of the object, with constant tag size in the image.

Fig. 1: Division into four phases
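Read as a sequencer, the phase switching can be sketched in a few lines; the predicate names below (tag_found, correct_id, aligned) are hypothetical and only mirror the conditions listed above.

    def next_phase(phase, tag_found, correct_id, distance, cz_d, r, aligned):
        """Advance through exploration, rough-approach, alignment and fine-approach."""
        if phase == "exploration" and tag_found and correct_id:
            return "rough-approach"
        if phase == "rough-approach" and distance <= r * cz_d:   # r times the desired distance
            return "alignment"
        if phase == "alignment" and aligned:                     # remaining error mainly along z
            return "fine-approach"
        return phase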
III. Realisation of the Four Phases

In this section the realisation of the four phases is described in detail.

A Exploration (Phase 1)

Alignment to the tag centre and scaling of the tag in the image are done one after another.

1) Orientation Toward the Object

The optical axis of the camera is oriented toward the tag centre by a change of the angles Cα_x and Cα_y. The image Jacobian employed for this is represented in (2). The position of the tag centre (Ix_C, Iy_C) in the image is controlled. The desired values are (Ix_C,d, Iy_C,d) = (0, 0). The image Jacobian is adapted by use of the current values of Ix_C and Iy_C. The focal length is constant and equal to the desired focal length f_d.

The control error in image coordinates ΔIx is transformed back to a control error in sensor coordinates ΔSx making use of the interior camera parameters. Making use of the image Jacobian, a manipulated variable ΔCp is computed, which is transformed into the new robot pose Rp.

2) Control of the Object Size

The tag size is obtained from the camera image and is also used as an image feature. Once the camera is aligned, the tag size in the image Ia is changed to the desired tag size Ia_d using the focal length f as control variable. The tag size Sa is calculated as the mean of the two lengths l1 and l2 indicated in Fig. 2:

    Sa = (l1 + l2) / 2    (3)

Fig. 2: Calculation of tag size a and tag angle α in the image

For fast control, a one-step output to reach the desired value is used. In the ideal case the control error is corrected in one sampling step:

    Δf_size = f · (Sa_d / Sa - 1) = -f · (Sa - Sa_d) / Sa    (4)

If the currently investigated object is the searched object, the exploration phase is concluded. In the next three phases, the tag size in the image is kept almost constant at Ia_d.
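A minimal sketch of this one-step adjustment is given below. It assumes that l1 and l2 in (3) are the two diagonals of the four (ordered) marker points and that the requested focal length is clipped to an assumed zoom range; both assumptions are illustrative.

    import numpy as np

    def tag_size(p1, p2, p3, p4):
        """Tag size as the mean length of the two diagonals of the point pattern, cf. (3)."""
        return 0.5 * (np.linalg.norm(p3 - p1) + np.linalg.norm(p4 - p2))

    def one_step_focal_length(f, a, a_d, f_min=5.0, f_max=50.0):
        """One-step output (4): choose the focal length that scales the tag size a to a_d
        in a single sampling step; f_min and f_max are assumed zoom limits."""
        return float(np.clip(f * a_d / a, f_min, f_max))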
B Rough-Approach (Phase 2)

In phase 2 a translational movement with constant tag size in the image, based on the approach described in [6], is realised. The image-based closed loop to control the robot is supplemented with a second closed loop (Fig. 3). This closed loop controls the tag size in the image with the focal length as a manipulated variable. Moreover, Cα_z is controlled. For this, the tag angle (see Fig. 2) is used as an image feature.

Fig. 3: Control structure

1) Control of the Object Size

The upper closed loop (object size control loop) in Fig. 3 secures an almost constant tag size Sa in the image by altering the focal length. On the one hand, an existing tag size error must be brought to zero (4); on the other hand, the computed camera movement ΔCz along the optical axis must be compensated:

    Δf_motion = f · ΔCz / Cz    (5)

The necessary focal length change results from (4) and (5):

    Δf = Δf_motion + Δf_size = f · (ΔCz / Cz + Sa_d / Sa - 1)    (6)

Cz is estimated with equation (7); the signal flow for the calculation of Cz is not depicted in Fig. 3.

    Cz = (f · Sa_d) / (f_d · Sa) · Cz_d    (7)

2) Control of the Camera Position

In the lower closed loop in Fig. 3, the compensation of the focal length change is executed in the feedback. With (8), the features which would appear by using the desired focal length f_d instead of f are computed. These focal-length-corrected values are marked with *.

    Sx* = (f_d / f) · Sx    (8)

In order to achieve a damped exponential decrease of the error in Cz, Sb* is chosen as a further focal-length-corrected image feature as follows:

    Sb* = (Sa*)^(1/4)    (9)

Since the tag centre is in the middle of the image, a change of the tag angle corresponds directly to a turn of the camera around the Cz axis.
The image Jacobian for the control of the focal-length-corrected values is given in (10). This phase is stopped if the distance to the object has been reduced to r times the desired distance Cz_d. This is achieved when the desired value for the focal-length-corrected tag size is chosen as Sa*_d = Sa_d / r. Sx_C* and Sy_C* are controlled to zero and Cz is computed with (7). As the final focal length of phase 2, approximately f_e = r · f_d will be adjusted.

C Alignment (Phase 3)

The drive around the object is achieved by image-based control in 6 DOF. Sx_d are used as the desired values. The focal length from phase 2 is kept. Since the values are not focal-length-corrected, the camera remains at a distance z_e = r · Cz_d from the object. The image Jacobian is filled with the constant values z_e and f_e and adapted with the current values Sx.

D Fine-Approach (Phase 4)

The focal length is adjusted to the desired value f_d in five steps. The robot compensates the focal length change by a movement of the camera and finally achieves the desired pose with the desired focal length.

The final pose control of the camera is executed image-based in all six DOF. In order to keep the tag size in the image constant, the focal length change Δf is considered in addition. In accordance with (11), the focal length change can be compensated by a movement ΔCz_Δf of the robot. Cz is computed with (7) again. The manipulated variable is the sum of the results from (1) and (11):

    ΔCp_phase4 = ΔCp + (0  0  ΔCz_Δf  0  0  0)^T    (12)
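A sketch of this coupling: the focal length is ramped towards f_d and every ramp step is compensated by an additional motion along the optical axis. The compensation ΔCz_Δf = Cz · Δf / f used below is the reconstruction of (11) implied by the constant-size condition; the interface and sign convention are illustrative.

    import numpy as np

    def fine_approach_step(dcp, f, f_d, cz, steps_left):
        """One sampling step of phase 4: ramp f towards f_d (five steps in the paper) and add
        the compensating z-motion to the image-based manipulated variable dcp, cf. (12)."""
        df = (f_d - f) / max(steps_left, 1)   # focal length ramp step
        dcz_f = cz * df / f                   # depth change that keeps the tag size constant
        dcp_phase4 = dcp + np.array([0.0, 0.0, dcz_f, 0.0, 0.0, 0.0])
        return dcp_phase4, f + df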
IV. Results

The approach was tested in a simulation. Teaching-by-Showing was done with the following marker point coordinates in [mm], described in {C}: (-10, -10, 100), (10, -10, 100), (-10, 10, 100), and (10, 10, 100). The focal length was f = 5mm. After the teaching process the camera was moved by (200mm, -200mm, -1000mm) and turned by (-20, -35, -20) degrees. The feature amplifications for the point coordinates were 0.5 each and for b 0.002. In phase 3 (the most critical phase), the amplification was only 0.3.

The aim of the control was the movement of the camera back to the teaching pose. This took 42 sampling steps altogether: Exploration (8+1), Rough-Approach (18), Alignment (10) and Fine-Approach (5).

Fig. 4: Image error and pose error

Fig. 5: Tag size and focal length

Fig. 4 shows the image error e_I and the pose error e_P during the four phases: In the exploration phase, the image error is reduced by the orientation towards the object and minimised abruptly by the subsequent zooming. e_I remains small in the following phases. The pose error of the camera (and consequently of the gripper) finally decreases completely in phase 4. The desired pose is achieved.

At the beginning the tag size is much too small, but it is almost constant during phases 2-4 (Fig. 5, above). This is achieved by the adaptation of the focal length (Fig. 5, below). Teaching values are represented as dotted lines.

Fig. 6: Camera position in {C_desired}

Fig. 7: Camera orientation in {C_desired}

Fig. 8: Path of the marker points in the image

Fig. 6 shows the course of the translational DOF and Fig. 7 that of the rotational DOF: The camera is aligned in phase 1 by changing the orientation. The control of the position and the roll angle of the camera in phase 2 can be recognised clearly. The error of Cz decreases damped exponentially. By driving around the object in phase 3, all DOF are used. In phase 4, finally, the desired values are achieved in all DOF. Fig. 8 shows the path of the marker points in the image. In phases 2-4 the image is almost constant, although the camera is moved.

V. Summary

In this paper a new approach for robust visual-servoing in 6 degrees of freedom (DOF) was described. The object size in the image was kept almost constant during the robot movement. Object identification was simplified and the influence of measurement noise was reduced. The image-based robot control loop is supplemented with a second closed loop that controls the object size in the image with the focal length as a manipulated variable. The compensation of the focal length change is executed in the feedback of the robot loop. The control strategy is divided into the phases exploration, rough-approach, alignment, and fine-approach. No object knowledge, except the distance of the object to the camera while teaching, is used for the adaptation of the automatic controller. The whole approach is calibration robust and therefore suitable for roughly calibrated camera and robot systems. The approach was verified by a simulation.

In further investigations the sensor data were disturbed with Gaussian noise (0.5 pixel). The results are comparable with the ones presented here.

Literature

[1] S. Hutchinson, G. Hager, P. Corke, A Tutorial on Visual Servo Control, IEEE Transactions on Robotics and Automation, vol. 12, no. 5, Oct. 1996
[2] O. Lang, T. Lietmann, Visual Servoing - Ein Ansatz zur kalibrierungsrobusten visuellen Regelung von Robotern, in B. Lohmann, A. Gräser (Hrsg.), Methoden und Anwendungen der Automatisierungstechnik, Shaker Verlag, 1999 (in German)
[3] J. Ma, S. I. Olsen, Depth from Zooming, Journal of the Optical Society of America, Vol. 7, No. 10, Oct. 1990, pp. 1883-1890
[4] M. Ima, T. Uno, Qualitative Depth from Zooming, ACCV'95, Proceedings of the Second Asian Conference on Computer Vision, Singapore, 1995, Vol. 2
[5] K. Hosoda, H. Moriyama, M. Asada, Visual Servoing Utilizing Zoom Mechanism, Proc. of the IEEE Int. Conf. on Robotics and Automation, 1995
[6] O. Lang, A. Graser, Regelung eines teilautonomen Roboters mittels Zoomkamera, in D. Henrich (Hrsg.), Autonome Mobile Systeme, Informatik Aktuell, Springer-Verlag, 1998 (in German)
