
CHAPTER 7

Machine Vision for Visual Testing

Zheng Liu, National Research Council Canada, Ottawa, Ontario, Canada
Hiroyuki Ukida, University of Tokushima, Tokushima-shi, Japan

PART 1. System Architecture of Machine Vision System

Machine vision is the application of computer vision to industry and manufacturing. It is a specialization within systems engineering, which encompasses computer science, optics, mechanical engineering and industrial automation. One definition of machine vision is "the use of devices for optical, noncontact sensing to automatically receive and interpret an image of a real scene in order to obtain information and/or control machines or processes."1

For nondestructive inspection, visual inspection is usually performed by an experienced inspector. However, a tedious and difficult task may cause the inspector to tire prematurely and degrade the quality of inspection. Repetitive and dangerous inspection tasks call for machine vision to replace human inspection so that precise information can be extracted and interpreted consistently. With technological advances in computers, cameras, illumination and communication, widespread application of machine vision systems to nondestructive testing is foreseen.

The basic architecture of a personal computer based machine vision system is given in Fig. 1. The main components include the light source, detector, optics, frame grabber and computer. Machine vision does not necessarily mean the use of a computer: specialized image processing hardware is capable of even higher processing speed and can replace the computer.1 Modern approaches may use a camera that interfaces directly with a personal computer, a system designed on an image processing board, or a vision engine that plugs into a personal computer.2

A smart camera is a self-contained, standalone unit with communication interfaces. A typical smart camera may consist of the following components: image sensor, image digitization circuit, memory, digital signal processor, communication interface, input/output ports and a built-in illumination device. An embedded vision computer, which is a standalone box with frame storage and intelligence, is intermediate between the personal computer based vision system and the smart camera.2 It differs from a smart camera in that the camera is tethered to the unit rather than self-contained. Different system configurations have their own advantages for different applications. A personal computer based machine vision system has the greatest flexibility and can handle the widest range of applications.

FIGURE 1. Typical architecture of a machine vision system: light source, specimen, optics, cameras, frame grabber and computer.

FIGURE 2. Four basic parameters for optics: working distance, depth of field, resolution and field of view.


Optics and Lighting

Optics

Optics is one of the basic elements of a machine vision system. The optics creates an image such that there is a correspondence between object points and image points, and it contributes to object enhancement.3 However, the optics may also introduce distortions and aberrations. The optics includes lenses, mirrors, beam splitters, prisms, polarizers, color filters and gratings. Optics has three functions in a machine vision system: to produce a two-dimensional image of the scene at the sensor, to eliminate some of the undesired information from the scene image with various filters, and to transfer or modify the light before it arrives at the scene.3

There are four basic parameters for specifying the optics of a vision system: field of view, resolution, working distance and depth of field.4 They are illustrated in Fig. 2. Field of view is the extent of the observable scene, measured as an angle subtended from the focal point. The resolution of a system is the minimum size of a distinguishable feature of the object under inspection. The depth of field of a lens is its ability to maintain a desired resolution as the object is positioned closer to or farther from the lens focus.

The second element of a machine vision system is lighting. Application specific lighting or illumination can yield a consistent appearance, which is essential to the success of a machine vision system. The lighting should maximize the contrast of features of interest while minimizing all other features.

Illumination

Illumination can be provided by one or more of the following techniques: front lighting, backlighting, coaxial lighting, structured illumination, strobed illumination or polarized light.

As illustrated in Fig. 3a, the bright field mode for front lighting uses any light source in the line of sight of the camera upon direct reflection from the test surface. Matte surfaces will appear darker than specular surfaces because the scattering of the matte surface returns less light to the camera, whereas sharp reflection returns more light. Dark field is any light source that is outside the line of sight of the camera upon direct reflection; in a dark field, light scattering from a matte surface reaches the camera and creates a bright region. Similarly, a bright field for backlighting is any light source in the line of sight of the camera upon direct transmission from the source, whereas dark field is outside the line of sight of the camera upon direct transmission (Fig. 3b).

FIGURE 3. Bright and dark field modes: (a) front lighting; (b) backlighting.

Front lighting is the most convenient configuration of illumination for machine vision systems. A front lighting setup may be a single point source, a combination of lighting configurations, or may encompass an entire dome. Figure 4 shows directional lighting, diffuse dome lighting and oblique lighting. Light from one direction can create high contrast transitions to highlight an object's features. A diffuse dome light is one in which the light scatters in many different directions, which makes it useful for lighting curved and specular surfaces. Oblique lighting is obtained by restricting the light to low incoming angles. With this technique, flat specular surfaces reflect light away from the camera while small raised or depressed areas reflect light back.

In backlighting (Fig. 5a), the light source and camera are placed on opposite sides of the object under inspection. This arrangement creates dark silhouettes against a bright background.

A beam splitter is used to create coaxial, or on-axis, illumination as shown in Fig. 5b. On-axis or coaxial illumination is particularly effective for enhancing differentially angled, textured or topographic features on a flat object.

Structured illumination is the projection of a light pattern at a known angle onto an object so that dimensional information can be acquired.5 The light pattern can be a plane, a grid or other, more complex shapes. The intersection of the pattern and the object can be translated into geometric information; thus, the shape of the object can be determined.

Strobed illumination is also known as strobe lighting. A flash of light illuminates the test object momentarily.6 Strobe lighting is also applied in alarm systems, theatrical lighting and high visibility running lights.

Unpolarized light is an electromagnetic wave vibrating in various directions. Polarization limits such vibration to a single plane, which includes the optical axis. Polarization techniques include transmission, reflection, refraction and scattering. Polarization can improve the optical front end of a machine vision system.

FIGURE 4. Some configurations of front lighting: (a) directional lighting; (b) diffuse dome lighting; (c) oblique lighting.

FIGURE 5. Illumination: (a) backlighting; (b) coaxial illumination.

Light Shapes

The shapes of light sources include point, line and area.7 Compared to the environment in which it stands, a point source can be approximated by an extremely small sphere. The point source has two types of models: a nearby point source and a point source at infinity.7 A line source has the geometry of a line; a tubular fluorescent light bulb is an example. An area source is an area that radiates light. Area sources are often modeled as surface patches whose emitted radiance is independent of position and direction.


Lighting Arrangement

Machine vision commonly uses lighting sources of the following types: fluorescent, quartz halogen, light emitting diode, metal halide (mercury), xenon and sodium.8 Table 1 gives a brief description of each type of lighting source.

TABLE 1. Brief description of lighting sources.

Fluorescent lighting: illumination using electricity to excite mercury vapor to produce short wave ultraviolet radiation, which causes a phosphor to fluoresce, producing visible light.
Quartz halogen lamp: incandescent light bulb with an envelope made of quartz and a filament surrounded by halogen gas.
Light emitting diode (LED): semiconductor diode that emits light when electrical current is applied.
Metal halide lamp: lamp that produces light by passing an electric arc through a high pressure mixture of argon, mercury and various metal halides.
Xenon: element used in arc and flash lamps. Xenon arc lamps use ionized xenon gas to produce bright white light; xenon flash lamps are electric glow discharge lamps that produce flashes of very intense, incoherent, full spectrum white light.
Sodium: element used in some vapor lamps. Sodium gas discharge lamps use sodium in an excited state to produce light. There are two types: low pressure and high pressure lamps.

The choice of a light source depends on the requirements for brightness and spectral content. Each type has its advantages and disadvantages. An inspection must find the lighting solution suitable for the application.

Cameras

A camera is used to capture images of still or moving objects. Traditional cameras capture light on photographic film or a photographic plate. Digital cameras use a solid state imaging device, that is, a charge coupled device or a complementary metal oxide semiconductor (CMOS) sensor, to capture images, which can be stored in computer memory for later use.

Charge Coupled Device

A charge coupled device (CCD) is an electronic detector consisting of many square photosensitive pixels,9 known as a capacitor array (photoactive region) or transmission region. An image is projected onto the capacitor array through a lens. Each capacitor accumulates an electric charge proportional to the light intensity at that location. A control circuit transfers the charge to a charge amplifier, which converts the charge into a voltage.

Silicon based charge coupled devices are monochromatic in nature. Three techniques are commonly used to extract color information for a given scene, as shown in Fig. 6.10 Color sequencing alternates optical filters with the desired red, green and blue characteristics. A color image can also be created using an integral color filter array (CFA) over the charge coupled device.11 A color filter array registers the intensity of a single color at each pixel.11 By interpolation with the color intensities of adjacent pixels, the intensity of each color at each pixel can be estimated.

A three–charge coupled device camera has three separate charge coupled devices and provides higher image quality than does a one–charge coupled device camera. Each charge coupled device takes a separate measurement of red, green and blue light for each pixel: light is split by a trichroic prism assembly, and a corresponding color filter is placed in each of the three imaging planes. A color image is obtained by synchronizing the outputs of the three charge coupled devices.

Complementary Metal Oxide Semiconductor

In a complementary metal oxide semiconductor camera, each pixel has its own charge-to-voltage conversion. Most functions, such as amplification, noise correction and digitization circuits, are integrated into the chip, which can output digital bits.12 Although less flexible, such a design makes a complementary metal oxide semiconductor camera more reliable.

Charge Injection Devices

Charge injection device (CID) cameras have a detection mechanism similar to that of a charge coupled device; the difference lies in the readout system. The collected charge in a charge injection device camera does not transfer from site to site in the array. Instead, a displacement current, which is proportional to the stored charge, is read when charge packets are shifted between capacitors within individually selected pixels.13 The displacement current is then amplified, converted to a voltage and fed out as a digitized signal. To clear the array for new frame integration, the electrodes in each pixel are momentarily switched to ground.


FIGURE 6. Three techniques to extract color information: (a) color sequential capture with a color wheel; (b) integral color filter array; (c) three-chip color capture with a beam splitter and three charge coupled devices.

Camera Calibration

The most common camera model is the pinhole model shown in Fig. 7.14 The focal length f represents the distance between the center of projection O and the image center o. The three-dimensional reference frame XYZ is called the camera frame. The purpose of camera calibration is to estimate the intrinsic and extrinsic parameters of the camera model.

1. Intrinsic parameters link the pixel coordinates of an image point with the corresponding coordinates in the camera reference frame.

2. Extrinsic parameters define the location and orientation of the camera reference frame with respect to a known world reference frame.

FIGURE 7. Pinhole camera: (a) model; (b) transformation between camera frame coordinates and pixel coordinates.14 Legend: f = focal length; O = center of projection; o = center of image; OZ = optical axis; subscript c denotes the camera frame; subscript w denotes the world frame.

In detail, the intrinsic parameters include the focal length f, the transformation between camera frame coordinates and pixel coordinates, and the geometric distortion introduced by the optics. The extrinsic parameters include a translation vector T and a rotation matrix R, which relate the coordinates of a point P in the world frame, Pw, and in the camera frame, Pc:14

(1) Pc = R(Pw − T)
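To make Eq. 1 concrete, the following sketch (in Python with NumPy) maps an assumed world point into the camera frame and then projects it with an ideal pinhole model; the rotation, translation, focal length and point coordinates are all illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative extrinsic parameters (assumed values):
# a rotation of 30 degrees about the Z axis and a translation T.
theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.1, -0.2, 0.5])   # meters, assumed
f = 0.025                        # focal length in meters, assumed

def world_to_camera(P_w):
    """Equation 1: Pc = R (Pw - T)."""
    return R @ (np.asarray(P_w) - T)

def pinhole_project(P_c):
    """Ideal pinhole projection of a camera frame point onto the image plane."""
    X, Y, Z = P_c
    return np.array([f * X / Z, f * Y / Z])

P_w = np.array([0.5, 0.3, 2.0])  # a world point, assumed
P_c = world_to_camera(P_w)
print("camera frame:", P_c, "image plane:", pinhole_project(P_c))
```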


Numerous approaches are available to estimate these parameters.15,16

Camera Interface

The camera interface is the hardware that connects different image sensors and provides a standard output. A frame grabber is such a device: it can capture individual frames from an analog or digital video stream and store and/or compress these frames in real time. Analog frame grabbers accept and process analog video signals while digital ones deal with digital video streams. The physical interface standards include Camera Link®, USB (universal serial bus), GigE Vision® and IEEE 1394.17,18 Table 2 gives basic information about these standards.19-22

The distinguishing difference between these interfaces is speed.18 Other differences include cable length, whether power is supplied over the cable, and the level of software support. Table 3 shows the major differences between these interfaces.23

TABLE 2. Summary of camera interfaces.

Camera Link®: issued by the Automated Imaging Association, Ann Arbor, MI; serial communication protocol that extends the base technology of Channel Link® for vision applications.19
USB (universal serial bus): issued by the USB Implementers Forum.
GigE Vision®: issued by the Automated Imaging Association, Ann Arbor, MI; based on the gigabit ethernet standard with fast data transfer, allowing standard, long, low cost cables.21
IEEE 1394: issued by IEEE, New York, NY; interface standard for high speed communication and isochronous (real time) data transfer for high performance and time sensitive applications.22

TABLE 3. Comparison of camera interfaces.

Topology: Camera Link®, master and slave; USB,a master and slave ("on the fly"); GigE Vision®, networked, peer to peer; IEEE 1394,b peer to peer.
Maximum bit rate:c Camera Link®, 2380 Mbps; USB, 480 Mbps; GigE Vision®, 1000 Mbps; IEEE 1394, ~400 to ~800 Mbps.
Isochronous mode: Camera Link®, yes; USB, yes; GigE Vision®, no; IEEE 1394, yes.
Maximum sustained bit rate: Camera Link®, 2380 Mbps; USB, 432 Mbps; GigE Vision®, 930 Mbps; IEEE 1394, ~320 to ~640 Mbps.
Cable distance (copper): Camera Link®, 10 m; USB, 5 m; GigE Vision®, 25 m; IEEE 1394, ~4.5 to ~100 m.
Bus power: Camera Link®, none; USB, up to 0.5 A; GigE Vision®, none; IEEE 1394, up to 1.5 A.

a. USB = universal serial bus.
b. IEEE = IEEE (formerly Institute of Electrical and Electronics Engineers), New York, NY.
c. Mbps = 10^6 bits per second.


PART 2. Algorithms and Software

Software has been developed to implement algorithms for a variety of machine vision tasks: image processing, image segmentation, geometric transformations, pattern recognition and nondestructive testing.6,24-26

Image Processing

Convolution Operator

In image processing, the convolution operator is the most useful and basic operation. Figure 8 shows an example of this operation.

Let I(x,y) be an image to process. To apply the convolution operator, a small image w(i,j), which holds the weights multiplied by each pixel of I(x,y), is prepared; w(i,j) is also called the operator, mask or kernel. When the size of w(i,j) is M × N, the convolution operator at a coordinate (x,y) is described:

(2) $I'(x,y) = \sum_{i=-M/2}^{M/2} \sum_{j=-N/2}^{N/2} w\!\left(i + \frac{M}{2},\, j + \frac{N}{2}\right) I(x+i,\, y+j)$

In this calculation, w(i,j) and I(x,y) overlap so that the center of w(i,j) and the coordinate (x,y) are at the same position. Corresponding pixels are multiplied and summed. This calculation is the discrete version of an orthogonal transformation; hence, the convolution operator is related to the fourier transform in the frequency domain. The following sections describe image processing using convolution operators.

FIGURE 8. Arrows depict movement of the convolution operator w(i,j) over the image I(x,y).

Edge Detection

An edge in the image is a part in which the pixel value (intensity or color) changes drastically. Therefore, edges in the image are estimated by calculating the differentiation of pixel values. However, image data cannot be expressed by mathematical functions, so the difference is usually used instead of the derivative; moreover, the convolution operator is used to calculate the difference. Figure 9 shows a simple difference operator. Figure 9a is the difference along the X axis. The image data are spread on a two-dimensional plane, hence the operator along the Y axis (Fig. 9b) is also used.

FIGURE 9. Differential operators: (a) X direction, with kernel [0 0 0; 0 1 −1; 0 0 0]; (b) Y direction, with kernel [0 0 0; 0 1 0; 0 −1 0].

Here, Ix(x,y) and Iy(x,y) are the results of differentiation along the X and Y axes respectively. The value √(Ix²(x,y) + Iy²(x,y)) is the magnitude of the gradient, and tan⁻¹[Iy(x,y)/Ix(x,y)] is the direction of the edge. A simpler version of the gradient magnitude is |Ix(x,y)| + |Iy(x,y)|.
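A minimal sketch of the convolution operator of Eq. 2 in Python with NumPy follows. The border handling (output left at zero where the kernel does not fit) and the random test image are assumptions made for brevity.

```python
import numpy as np

def convolve2d(image, kernel):
    """Sliding-window implementation of the convolution operator of Eq. 2.

    The kernel w(i,j) is centered on each pixel (x,y); border pixels for
    which the kernel does not fit are left at zero.
    """
    M, N = kernel.shape
    m, n = M // 2, N // 2
    out = np.zeros_like(image, dtype=float)
    for y in range(m, image.shape[0] - m):
        for x in range(n, image.shape[1] - n):
            region = image[y - m:y + m + 1, x - n:x + n + 1]
            out[y, x] = np.sum(kernel * region)
    return out

# Difference operators of Fig. 9 (X and Y directions)
dx = np.array([[0, 0, 0], [0, 1, -1], [0, 0, 0]], dtype=float)
dy = np.array([[0, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)

image = np.random.rand(64, 64)       # stand-in for a test image
Ix, Iy = convolve2d(image, dx), convolve2d(image, dy)
magnitude = np.sqrt(Ix**2 + Iy**2)   # gradient magnitude
direction = np.arctan2(Iy, Ix)       # edge direction
```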


Moreover, there are other operators to calculate differences, such as the roberts, prewitt and sobel operators (Fig. 10). The roberts operator can detect edges in the direction of a slant. The edges detected by the prewitt and sobel operators tend to be thick.

FIGURE 10. Operators for edge detection: (a) roberts operators, [0 0 0; 0 1 0; 0 0 −1] and [0 0 0; 0 0 1; 0 −1 0]; (b) prewitt operators, [−1 0 1; −1 0 1; −1 0 1] and [−1 −1 −1; 0 0 0; 1 1 1]; (c) sobel operators, [−1 0 1; −2 0 2; −1 0 1] and [−1 −2 −1; 0 0 0; 1 2 1].

Edge parts can also be extracted by using second order differentiation in addition to the first order. A typical operator is the laplacian operator; three commonly used small kernels are given in Fig. 11. This operator expresses the magnitude of the edge, combining the x and y directions, and there are some variations according to the definition used. The position of the edge is at the zero crossing point because large gradient values are found around the edge points.

FIGURE 11. Laplacian operators: (a) [0 1 0; 1 −4 1; 0 1 0]; (b) [1 1 1; 1 −8 1; 1 1 1]; (c) [−1 2 −1; 2 −4 2; −1 2 −1].

Edge parts correspond to the high frequency parts of the image intensity. Therefore, edge extraction is a high pass filtering operation in the frequency domain. (This can be proved mathematically.)

Noise Reduction

Image data contain various kinds of noise, of which so-called salt and pepper noise is typical. Such noise appears as pixels at random, not fixed, positions whose intensities differ from those of the surrounding pixels, so the noise is conspicuous. Salt and pepper noise is caused by flicker of the illumination and by variation in the performance of imaging sensor elements.

The operation called smoothing is a simple way to reduce such noise. This process computes the locally averaged intensity value, a calculation that can be performed by the convolution operation. Figure 12 shows a 3 × 3 smoothing operator. Smoothing blurs the image. The smoothing operation is a kind of low pass filtering in the frequency domain.

FIGURE 12. Smoothing operator: a 3 × 3 kernel with all weights equal to 1/9.

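The named kernels of Figs. 10 to 12 can be applied with a library convolution as sketched below; the use of scipy.ndimage, the border mode and the random test image are assumptions.

```python
import numpy as np
from scipy import ndimage

# Kernels from Figs. 10 to 12
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
smoothing = np.full((3, 3), 1.0 / 9.0)           # box filter of Fig. 12

image = np.random.rand(128, 128)                 # stand-in test image

Ix = ndimage.convolve(image, sobel_x, mode='nearest')
Iy = ndimage.convolve(image, sobel_y, mode='nearest')
edges = np.abs(Ix) + np.abs(Iy)                  # simpler gradient magnitude
second_order = ndimage.convolve(image, laplacian, mode='nearest')
smoothed = ndimage.convolve(image, smoothing, mode='nearest')  # low pass
```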

An effective technique to remove noise is median filtering. In this technique, the pixels in a local region are sorted by intensity value, and the median is picked as the new intensity for the pixel at the center position (x,y) (Fig. 13). If noise is included in the local region, it will be placed first or last after sorting. Unlike the convolution operator, the median filter is nonlinear. Smoothing blurs the edges of an image, but the median filter does not.

FIGURE 13. Process of median filtering: the nine values in a 3 × 3 neighborhood (4, 4, 3, 2, 10, 3, 5, 2, 4) are sorted (2, 2, 3, 3, 4, 4, 4, 5, 10) and the median, 4, replaces the center value.

Image Segmentation

Binarization

Binarization is used to identify pixels in the image as belonging either to the object or to the background. From the assumption that the pixels of the object are brighter (or darker) than the background, binarization can be expressed as follows. For an object with a higher intensity than the background:

(3) I(x,y) ≥ t → B(x,y) = 1

(4) I(x,y) < t → B(x,y) = 0

For an object that is darker than the background:

(5) I(x,y) ≤ t → B(x,y) = 1

(6) I(x,y) > t → B(x,y) = 0

where I(x,y) is the original monochrome image and B(x,y) is the binary image. B(x,y) = 1 denotes the object and B(x,y) = 0 the background. The boundary value t is the threshold that divides the object from the background.

The important question in binarization is how to estimate the threshold value. Two popular techniques for estimating the threshold, the P tile technique and discriminant analysis, are described below.

P Tile Technique

The P tile technique can estimate the threshold when the number of pixels (the rate of areas) of the object and the background of the image is known. When this technique is applied to a monochrome image, the histogram (the frequency distribution) of the intensity in the image is obtained. The area of the object occupies P percent of the histogram, counted from the bright (or dark) side; therefore, the threshold is decided from the histogram (Fig. 14).

FIGURE 14. P tile technique: the threshold is the intensity at which the accumulated histogram reaches P percent.

In the P tile technique, the rate of area P must be known in order to estimate the threshold. For example, a document image contains only letters, and the rate of black pixels (letters) is about 5 percent of the whole image; but if figures, tables and photographs are included in the image, it is difficult to define the rate P.

Discriminant Analysis

Discriminant analysis can estimate a threshold from the image data alone. This technique also uses the histogram of pixel intensity. The histogram is divided into two classes, dark and bright pixels. Using the between-class variance σ²bc and the within-class variance σ²wc, the separation metric σ²bc(σ²wc)⁻¹ is defined. The threshold is the value at which the separation metric reaches its maximum.

Assuming that the histogram is divided into two classes by a threshold t (Fig. 15), let ω1 be the number of pixels, m1 the average and σ1² the variance in the dark pixel class. In the bright pixel class, ω2, m2 and σ2² are defined similarly, while m and σ² denote the average and variance of the whole image.

FIGURE 15. Discriminant analysis technique: the intensity histogram is divided by threshold t into class 1 (ω1, m1, σ1) and class 2 (ω2, m2, σ2).

The within-class variance σ²wc is expressed:

(7) $\sigma_{wc}^2 = \frac{\omega_1 \sigma_1^2 + \omega_2 \sigma_2^2}{\omega_1 + \omega_2}$

The between-class variance σ²bc is:

(8) $\sigma_{bc}^2 = \frac{\omega_1 (m_1 - m)^2 + \omega_2 (m_2 - m)^2}{\omega_1 + \omega_2} = \frac{\omega_1 \omega_2 (m_1 - m_2)^2}{(\omega_1 + \omega_2)^2}$

There is the following relation between σ², σ²wc and σ²bc:

(9) $\sigma^2 = \sigma_{wc}^2 + \sigma_{bc}^2$

From the above equations, the separation metric is formulated:

(10) $\frac{\sigma_{bc}^2}{\sigma_{wc}^2} = \frac{\sigma_{bc}^2}{\sigma^2 - \sigma_{bc}^2}$

Because σ² is fixed, the separation metric is maximized when σ²bc reaches its maximum value. Therefore, in the discriminant analysis technique, σ²bc is calculated for each candidate value of t, and the threshold is the value that maximizes σ²bc.

By using discriminant analysis, the threshold is estimated uniquely for any monochrome image. Although only the case of binarization (two classes, black and white) is demonstrated here, this technique can also be applied to estimate multiple thresholds.
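A minimal sketch of the two threshold selection techniques described above, assuming an 8 bit monochrome image held in a NumPy array; the object fraction P and the synthetic test image are illustrative.

```python
import numpy as np

def p_tile_threshold(image, p, bright_object=True):
    """P tile technique: choose t so the object occupies about p percent
    of the pixels, counted from the bright (or dark) end of the histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    target = p / 100.0 * image.size
    order = range(255, -1, -1) if bright_object else range(256)
    count = 0
    for level in order:
        count += hist[level]
        if count >= target:
            return level
    return 0

def discriminant_threshold(image):
    """Discriminant analysis: maximize the between-class variance of Eq. 8
    over all candidate thresholds t."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w1, w2 = hist[:t].sum(), hist[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        m1 = (levels[:t] * hist[:t]).sum() / w1
        m2 = (levels[t:] * hist[t:]).sum() / w2
        var_bc = w1 * w2 * (m1 - m2) ** 2 / (w1 + w2) ** 2
        if var_bc > best_var:
            best_var, best_t = var_bc, t
    return best_t

image = np.random.randint(0, 256, size=(128, 128))    # stand-in image
t_p = p_tile_threshold(image, p=5)                     # assumes P is known
t = discriminant_threshold(image)
binary = (image >= t).astype(np.uint8)                 # Eqs. 3 and 4
```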

Mathematical Morphology

Mathematical morphology is a logical process applied to a binary image; it can perform noise reduction, restoration of lacunae and so on. Some practical techniques of this process are described below.

Dilation

Dilation expands the object region in a binary image X using the structuring element Y. The structuring element is a small binary image with an arbitrary shape; one pixel in the structuring element is defined as the origin of the element. The operation of dilation is defined as:

(11) X ⊕ Y = {z | z = x + y for x ∈ X, y ∈ Y}

In this formulation, the origin of the structuring element, yorg, is placed at each pixel x ∈ X. Examples of dilation are shown in Fig. 16. When the structuring elements are different, the results have different shapes.

FIGURE 16. Examples of dilation with different structuring elements: (a) 2 × 2 set; (b) four connected set.


Erosion

Erosion is the operation opposite to dilation: it shrinks the binary image according to the structuring element. The operation of erosion is defined as:

(12) X ⊗ Y = {z | z + y ∈ X for ∀y ∈ Y}

Note that in this formulation the origin of the structuring element, yorg, is placed at z. Examples of erosion are shown in Fig. 17. As in the case of dilation, different structuring elements give different resulting shapes.

FIGURE 17. Examples of erosion with different structuring elements: (a) 3 × 3 set; (b) four connected set.

Opening and Closing

The operations that combine dilation and erosion are called the opening and closing operations. They are defined as follows. The opening operation eliminates isolated noise and small regions:

(13) X ∘ Y = (X ⊗ Y) ⊕ Y

The closing operation fills holes:

(14) X • Y = (X ⊕ Y) ⊗ Y
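A short sketch of dilation, erosion, opening and closing (Eqs. 11 to 14) using scipy.ndimage with a four connected structuring element; the library choice and the synthetic binary image are assumptions.

```python
import numpy as np
from scipy import ndimage

# Four connected structuring element (as in Figs. 16b and 17b)
Y = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]], dtype=bool)

X = np.zeros((20, 20), dtype=bool)      # binary image with a small object
X[5:12, 6:14] = True
X[8, 9] = False                         # a one pixel hole

dilated = ndimage.binary_dilation(X, structure=Y)   # Eq. 11
eroded = ndimage.binary_erosion(X, structure=Y)     # Eq. 12
opened = ndimage.binary_opening(X, structure=Y)     # Eq. 13: erosion then dilation
closed = ndimage.binary_closing(X, structure=Y)     # Eq. 14: dilation then erosion, fills the hole
```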


When dilation is applied iteratively, the object region becomes larger; when erosion is iterated, the object region becomes smaller. The opening and closing operations conserve the size of the main parts of the image even when applied iteratively.

Geometric Transformation

Geometric transformation (for example, changing the object size, correcting a slant and so on) is a major technique in image processing. Many geometric transformations can be expressed as affine transformations using a 3 × 3 matrix. Let (x,y) be the coordinates before transformation and (x′,y′) the transformed coordinates. The affine transformation is formulated:

(15) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

FIGURE 18. Geometric transformations: (a) original image; (b) scaling (1.2x, 0.6y); (c) translation; (d) rotation.

The following equations describe geometric transformations of the original image in Fig. 18a. Equation 16 gives scaling by rates α and β along the X and Y axes respectively (Fig. 18b):

(16) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

Equation 17 gives translation by amounts tx and ty (Fig. 18c):

(17) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

Equation 18 gives rotation by an angle θ from the X axis around the origin (Fig. 18d):

(18) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

The skewing in Eq. 19 uses slants p and q from the X or Y axis (Fig. 19):

(19) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & p & 0 \\ q & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

FIGURE 19. Skewed geometric transformations of the image in Fig. 18a: (a) skewing (p = tan θp, q = 0); (b) skewing (p = 0, q = tan θq).

Figure 20 depicts reflection in three directions: along the X axis (Eq. 20), along the Y axis (Eq. 21) and across the line y = x (Eq. 22):

(20) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

(21) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

(22) $\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

FIGURE 20. Reflected geometric transformations of the image in Fig. 18a: (a) reflection (y); (b) reflection (y = x); (c) reflection (x).

By multiplying several matrices together, a composite transformation can be obtained. Because the composite transformation is a product of multiple matrices, changing the sequence of the matrices gives a different transformation result.

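The composition of affine transformations, and the fact that the order of multiplication matters, can be illustrated with the short sketch below; the numeric values are arbitrary.

```python
import numpy as np

def scaling(alpha, beta):                       # Eq. 16
    return np.array([[alpha, 0, 0], [0, beta, 0], [0, 0, 1]], dtype=float)

def translation(tx, ty):                        # Eq. 17
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):                            # Eq. 18
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

p = np.array([2.0, 1.0, 1.0])                   # homogeneous point (x, y, 1)

# Rotate then translate, versus translate then rotate: different results.
A = translation(5, 0) @ rotation(np.pi / 2)
B = rotation(np.pi / 2) @ translation(5, 0)
print(A @ p)   # [ 4.  2.  1.]
print(B @ p)   # [-1.  7.  1.]
```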

Pattern Recognition

Template Matching

Template matching is usually used to search for an object in an image. In this technique, a small image of the object, called the template, is prepared first. The next step is to match the template against the whole image: the similarity is estimated at each location of the image, and the object is considered to be located where the similarity reaches its maximum value over the whole image (Fig. 21).

FIGURE 21. Template matching technique: a template T(i,j) of size tw × th is slid over the input image I(x,y).

Let the size of the template image T(i,j) be tw × th. To calculate the similarity at (x,y) in the input image I(x,y), Eqs. 23 to 25 are applied. Equation 23 expresses the sum of absolute differences (SAD):

(23) $R_{SAD}(x,y) = \sum_{i=0}^{t_w - 1} \sum_{j=0}^{t_h - 1} \left| I(x+i,\, y+j) - T(i,j) \right|$

Equation 24 expresses the sum of squared differences (SSD):

(24) $R_{SSD}(x,y) = \sum_{i=0}^{t_w - 1} \sum_{j=0}^{t_h - 1} \left[ I(x+i,\, y+j) - T(i,j) \right]^2$

Equation 25 expresses the normalized cross correlation (NCC):

(25) $R_{NCC}(x,y) = \frac{\sum_{i=0}^{t_w - 1} \sum_{j=0}^{t_h - 1} I(x+i,\, y+j)\, T(i,j)}{\sqrt{\sum_{i=0}^{t_w - 1} \sum_{j=0}^{t_h - 1} I^2(x+i,\, y+j) \times \sum_{i=0}^{t_w - 1} \sum_{j=0}^{t_h - 1} T^2(i,j)}}$

In Eqs. 23 and 24, the smaller the value of the sum of absolute differences or the sum of squared differences, the higher the similarity. For normalized cross correlation, a higher value indicates a higher similarity.
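A direct sketch of Eqs. 23 to 25 follows; the exhaustive double loop is written for clarity rather than speed, and the test image and template are synthetic.

```python
import numpy as np

def match_scores(I, T):
    """Return SAD, SSD and NCC maps for template T over image I (Eqs. 23 to 25)."""
    th, tw = T.shape
    H, W = I.shape
    sad = np.zeros((H - th + 1, W - tw + 1))
    ssd = np.zeros_like(sad)
    ncc = np.zeros_like(sad)
    t_energy = np.sum(T ** 2)
    for y in range(sad.shape[0]):
        for x in range(sad.shape[1]):
            patch = I[y:y + th, x:x + tw]
            diff = patch - T
            sad[y, x] = np.sum(np.abs(diff))
            ssd[y, x] = np.sum(diff ** 2)
            ncc[y, x] = np.sum(patch * T) / np.sqrt(np.sum(patch ** 2) * t_energy)
    return sad, ssd, ncc

I = np.random.rand(64, 64)                 # stand-in input image
T = I[20:30, 15:27].copy()                 # template cut from the image
sad, ssd, ncc = match_scores(I, T)
best = np.unravel_index(np.argmax(ncc), ncc.shape)
print("best match (y, x):", best)          # expected near (20, 15)
```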


The similarity estimation is carried out over the whole input image, and the position at which the highest similarity is obtained is the location of the object.

The computational costs of the sum of absolute differences and the sum of squared differences are small, so they can be evaluated rapidly. However, if the brightness (or gain) of the input image differs from that of the template image (in other words, if the input and template images were acquired under different illumination conditions), the similarity will be low and an accurate match will not be achieved. The normalized cross correlation estimates the correlation between the input and template images, which is less affected by illumination changes, but comes with a higher computational cost.

For template matching, the size and pose of the object in the template image and those of the corresponding pattern in the input image need to be the same. If they are not, the similarity becomes low and an accurate match will not be obtained. In that case, it is necessary to apply the geometric transformations described in the previous section to the template image. However, this step requires estimating the transformation parameters, so it is not efficient. When the size and pose of the object in the template image vary from those in the input image, it is still possible to do the matching with color information or other high dimensional features.

a-b Hough Transform

The template matching technique can be applied to any pattern of an object. If the object can be expressed by a mathematical model (such as a line or circle), it can be searched for more efficiently in the input image with a hough transform.

Line detection with the hough transform is described first. A straight line in x-y space is modeled by parameters a and b:

(26) y = ax + b

This formulation can be transformed as:

(27) b = (−x)a + y

This formulation describes another straight line in a-b space. Here, the a-b space is called the parameter space.

As shown in Fig. 22a, when a line crosses two points (x1,y1) and (x2,y2) in x-y space, the parameters â and b̂ of this line can be expressed:

(28) $(\hat{a}, \hat{b}) = \left( \frac{y_2 - y_1}{x_2 - x_1},\; \frac{x_2 y_1 - x_1 y_2}{x_2 - x_1} \right)$

On the other hand, the points (x1,y1) and (x2,y2) in x-y space correspond to the following lines in the parameter space:

(29) b = (−x1)a + y1

(30) b = (−x2)a + y2

The crossing point of these lines is equal to (â, b̂) (Fig. 22b).

FIGURE 22. a-b hough transform: (a) x-y image space; (b) a-b parameter space.

Therefore, points on a straight line in an image (x-y space) correspond to lines that cross at a single point in the parameter space. By estimating the crossing point of the straight lines in the parameter space, the parameters a and b of a straight line in the image (x-y space) can be obtained. However, there are many crossing points of straight lines in the parameter space, so it is difficult to estimate the adequate points analytically. To overcome this problem, the crossing points are obtained by a voting process in the hough transform.


In this technique, the parameter space is divided into small rectangular regions along each axis. Each small rectangular region is called a cell, and the cell works as a counter: if a line crosses a cell, the counter is increased by one. Finally, the coordinates of the cell with the maximum number of votes are the estimated parameters of the line in the image. The voting process in the parameter space can be considered a parameter estimation process for a line that crosses (or is supported by) many points.

The least squares technique can also be used to estimate line parameters, but it basically estimates only one line. The hough transform can estimate multiple lines simultaneously by picking up all cells whose number of votes is beyond a threshold.

ρ-θ Hough Transform

Generally, the range of the line parameters, and hence of the parameter space, is from −∞ to +∞. The range of the image (in x-y space) is limited, but the range of the parameter a (the slant of the line) is from −∞ to +∞, and it is difficult to prepare cells over such a range for computation. Therefore, the model of a straight line can be rewritten as:

(31) ρ = x cos θ + y sin θ

where ρ is the signed distance from the origin to the line and θ is an angle representing the orientation of the line. When the range of the image (x-y space) is limited to 0 ≤ x ≤ w, 0 ≤ y ≤ h, the ranges of ρ and θ are:

(32) $-\sqrt{w^2 + h^2} \leq \rho \leq \sqrt{w^2 + h^2}$

(33) 0 ≤ θ ≤ π

For a coordinate (x1,y1) in the image (x-y space), Eq. 31 becomes:

(34) ρ = A sin(θ + α)

where:

(35) $A = \sqrt{x_1^2 + y_1^2}$

and:

(36) $\alpha = \tan^{-1} \frac{x_1}{y_1}$

This means that a point in x-y space corresponds to a sine wave in the ρ-θ parameter space. By using the voting process along this trajectory, the crossing points can be estimated (Fig. 23).

FIGURE 23. ρ-θ hough transform: (a) x-y image space; (b) ρ-θ parameter space.

The ρ-θ hough transform has a limited parameter space, but the computational cost is still high because of the calculation of sine waves. To avoid this problem, another technique has been proposed: the γ-ω hough transform uses piecewise line segments in the parameter space to perform the voting process rapidly.27 Moreover, because the coordinates of the image plane are discrete values, line parameters can be estimated more accurately by considering cell redivision in the parameter space.28

If the parameter space is expanded to three dimensions, circles can be detected in an image, and if higher dimensional parameter spaces are considered, it is possible to detect various patterns. However, the higher the dimension of the parameter space, the greater the computational cost: huge numbers of cells are needed and it becomes difficult to perform pattern detection. To overcome this problem, a generalized hough transform, which detects the position and pose of a pattern by the voting process, is an option.
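A minimal sketch of the ρ-θ voting process (Eqs. 31 to 33) for a binary edge image; the number of cells along each axis and the synthetic test line are assumptions.

```python
import numpy as np

def hough_rho_theta(edges, n_theta=180, n_rho=200):
    """Vote in the rho-theta parameter space (Eq. 31) for a binary edge image."""
    h, w = edges.shape
    rho_max = np.hypot(w, h)                                   # Eq. 32 bound
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)  # Eq. 33
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    accumulator = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        for j, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)        # Eq. 31
            i = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
            accumulator[i, j] += 1                             # the cell acts as a counter
    return accumulator, rhos, thetas

# A synthetic edge image containing one straight line, y = 0.5 x + 10
edges = np.zeros((100, 100), dtype=bool)
for x in range(100):
    edges[min(99, int(0.5 * x + 10)), x] = True

acc, rhos, thetas = hough_rho_theta(edges)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
print("detected line: rho =", rhos[i], "theta =", thetas[j])
```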


Machine Vision for Nondestructive Testing

Nondestructive testing is one of the important applications of machine vision. Nondestructive testing using radiation beyond the visible spectrum, such as infrared, ultraviolet and X-ray radiation, is described in other volumes of the NDT Handbook and in the introductory chapter of this volume. This chapter focuses on the use of visible light in a machine vision system. In nondestructive testing, the purpose of a machine vision system is to capture and characterize anomalies of the object under inspection, that is, to inspect for structural and surface quality.29

Three types of results may be obtained from a machine vision system. The first type is an enhanced image in which the discontinuities are highlighted or intuitively presented so that the inspector can easily make a subjective assessment. One example is the edge-of-light surface inspection technique, which uses the edge of light to highlight surface slope or deformation.30 Figure 24 shows the result of edge-of-light inspection of an aircraft lap joint. Figure 24a shows the lap joint; in the corresponding edge-of-light scan (Fig. 24b), bright and dark regions present the surface deformation. Such deformation implies potential hidden corrosion between the two layers.

FIGURE 24. Enhanced surface inspection of aircraft lap joint: (a) aircraft lap joint; (b) image resulting from edge-of-light technique.

A similar technique, double-pass retroreflection surface inspection, has been applied to the same application.31,32 Figure 25 also shows the inspection result of an aircraft lap joint: Fig. 25a shows a picture of the specimen whereas Fig. 25b is the double-pass retroreflection image. These two techniques implement enhanced visual inspection through the design of a special machine vision system.

FIGURE 25. Enhanced surface inspection of aircraft lap joint: (a) aircraft lap joint; (b) image resulting from double-pass retroreflection technique.

Image processing techniques can also produce an enhanced image to facilitate the inspection. A three-dimensional stereoscopic visual system has been built to inspect aircraft skin,33 and algorithms to enhance monoscopic and stereoscopic images were developed. A high frequency emphasis algorithm consists of two steps as illustrated in Fig. 26.27 The live image was passed through low pass and high pass filters; then, a fraction of the low frequency content was added back to the high pass filtered image. This algorithm emphasized the high frequency features while attenuating the background low pass information, so potential surface flaws or cracks were highlighted.

FIGURE 26. High frequency emphasis algorithm: the live image is high pass filtered and a fraction of the low pass filtered image is added back to form the high frequency emphasized image.27

For stereoscopic image enhancement (Fig. 26), the high frequency emphasis algorithm was applied to the left and right images of the stereoscopic pair. An augmented stereoscopic, high frequency emphasis algorithm was implemented as follows (a sketch of the filtering step follows this list):

1. High frequency emphasis algorithms are applied to the left and right images.

2. To identify the features of interest, the images are dynamically threshold filtered.

3. The original left and right images are overlaid with the depth offset desired for the identified features.

4. The processed images are displayed stereoscopically on the screen. The eyewear of the inspector or operator can help highlight features of interest.
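The filtering step of the high frequency emphasis algorithm (Fig. 26) can be sketched as below, assuming a gaussian low pass filter and an arbitrary weighting fraction; this illustrates the idea only and is not the implementation of the cited study.

```python
import numpy as np
from scipy import ndimage

def high_frequency_emphasis(image, sigma=3.0, fraction=0.3):
    """Fig. 26: high pass filter the live image, then add back a fraction
    of the low frequency content to keep some background context."""
    low = ndimage.gaussian_filter(image, sigma=sigma)   # low pass filter
    high = image - low                                  # high pass filter
    return high + fraction * low                        # emphasized image

left = np.random.rand(240, 320)      # stand-in left image of a stereo pair
right = np.random.rand(240, 320)     # stand-in right image
enhanced = (high_frequency_emphasis(left), high_frequency_emphasis(right))
```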


The second type of result is binary, that is, crack or noncrack. Binary results are useful for the inspection of a specific part, where a binary accept/reject decision may follow. As described in one study,34 a crack detection algorithm, shown in Fig. 27, was developed to identify surface cracks on aircraft skin. Cracks frequently occur near rivets; therefore, the first step is to detect rivets by detecting the circular arcs in the image. Once the edge maps of the rivets are detected, the region of interest can be determined from the centroid of the rivet.

FIGURE 27. Surface crack detection algorithm: rivet detection and region-of-interest identification, multiscale edge detection, edge linking, feature vector calculation and classification.

Once the region of interest is identified, multiscale edge detection is applied to the region of interest to generate a list of edges at different scales. This technique helps discriminate cracks from noncracks according to the size of a typical crack in comparison to other objects, such as scratches and repair plates, appearing on the surface. A coarse-to-fine edge linking process traced an edge from the coarse resolution (high scale) to a fine resolution (low scale). The propagation depth of all edges present at scale one was found; here, the propagation depth means the number of scales in which the edge appears. A feature vector was generated for each edge at scale one so that the edges of cracks could be discriminated from those of noncracks. The feature vector includes the following: the average wavelet magnitude of active pixels, which belong to the edges; the propagation depth number; the average wavelet magnitudes of any linked edges in scale two and scale four; the signs of the sums of WX and WY, where WX and WY are the wavelet coefficients in the x and y directions of an active pixel at scale one; and the number of active pixels.34

A neural network as shown in Fig. 28 was trained to classify the inputs, the feature vectors of edges in the region of interest, into cracks and noncracks. The feature vectors used for the training may represent the cracks that need immediate repair. In this case, a classification result indicating a crack calls for further investigation of the corresponding region of interest for repair. An accuracy rate of 71.5 percent and a false alarm rate of 27 percent were reported for the neural network based classification.

FIGURE 28. Neural network used for crack classification: input layer, hidden layer and output.
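The classification step can be sketched with a small neural network as below; the feature vectors and labels are random stand-ins, not the wavelet based edge features of the cited study, and scikit-learn is an assumed library choice.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in six-dimensional feature vectors for edges in a region of interest
X_train = rng.normal(size=(200, 6))
y_train = rng.integers(0, 2, size=200)        # 1 = crack, 0 = noncrack (synthetic labels)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

X_new = rng.normal(size=(5, 6))               # feature vectors from new edges
print(clf.predict(X_new))                     # a 1 flags a region for further investigation
```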


The third type of result is more informative, allowing quantitative information about the discontinuity to be derived. For the application of aircraft inspection, corrosion detection is crucial to the risk assessment of structural integrity. One type of corrosion occurs on the interior, hidden surface of aircraft lap joints if the sealant and corrosion protection systems break down. The corrosion product has a much higher volume than the original material, and this causes an expansion of the skins between rivets. This phenomenon is known as pillowing; an example is shown in Fig. 29. Figure 29a shows pillowing on a lap joint whereas Fig. 29b shows the corroded area on the faying surface.

FIGURE 29. Pillowing deformation: (a) on aircraft lap joints; (b) on the hidden, faying surface.

Another type of corrosion occurs on the outer surface and can be detected by its suggestive texture captured by a machine vision system. In a procedure for surface corrosion detection,34 the image was first decomposed into subimages with a discrete wavelet transform. Figure 30 shows a three-level decomposition, which consists of ten subimages. Let Wj(k,l) be the wavelet coefficient at (k,l) in the subimage Wj. The original image was divided into nonoverlapping blocks of 8 × 8 pixels each. For each block B(i), a ten-dimensional feature vector was created. Its elements Ej(i) (j = 1, …, 10) are the corresponding energy functions in the subimages and can be expressed:

(37) $E_j(i) = \sum_{(k,l) \in B(i)} \left| W_j(k,l) \right|^2$

FIGURE 30. Wavelet decomposition of an image: (a) three-level decomposition into ten subimages (low-low, high-low, low-high and high-high bands); (b) procedure for classification: image, wavelet transform, feature extraction, classification, postprocessing, result.

Then, a nearest neighbor classifier was trained to classify the original image into corrosion and corrosion free regions. A detection rate of 95 percent on the test set was reported.34 Once the original image is classified, postprocessing can be carried out to calculate the corrosion area; therefore, information about the size of the corroded area is available.

A more general procedure is shown in Fig. 31. The image is first preprocessed for enhancement and noise removal so that the features of the targeted objects (discontinuities) are highlighted. The discrimination of different objects is achieved in a feature space. Extraction of image features can be done in the spatial domain and in the frequency domain, and numerous approaches are available for this purpose. Some of these techniques have been described in the previous section of this chapter.

FIGURE 31. General procedure for classification: image preprocessing (enhancement, denoising, segmentation and others); feature extraction (spatial domain, transform domain); classification; postprocessing; result.
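A sketch of the block energy features of Eq. 37 followed by a nearest neighbor classifier, assuming PyWavelets and scikit-learn, a haar wavelet and synthetic training labels; the cited study's actual features and training data are not reproduced here.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def block_energy_features(image, block=8, level=3, wavelet='haar'):
    """Ten-dimensional feature vector per block: the energy of Eq. 37 in each
    of the ten subimages of a three-level wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    h, w = image.shape
    features = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vec = []
            for sb in subbands:
                sy, sx = h // sb.shape[0], w // sb.shape[1]   # scale factor of this subband
                region = sb[by // sy:(by + block) // sy, bx // sx:(bx + block) // sx]
                vec.append(np.sum(region ** 2))               # Eq. 37 energy
            features.append(vec)
    return np.array(features)

image = np.random.rand(128, 128)                   # stand-in surface image
X = block_energy_features(image)
y = np.random.randint(0, 2, size=len(X))           # synthetic corrosion labels
classifier = KNeighborsClassifier(n_neighbors=1).fit(X, y)
labels = classifier.predict(X)                     # corrosion / corrosion free per block
```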


Sometimes, postprocessing is needed to further characterize the classified results, as described in the example above. The measurement results can also be compared with calibrated samples for quantitative analysis; such comparison can also be done in the feature space.

Conclusion

This chapter provides a general description of machine vision techniques for nondestructive testing. Both the system architecture and the algorithm implementations for machine vision are described. A good understanding of the application's requirements is essential to the success of a machine vision system. Technical advances in machine vision make it applicable to varied nondestructive test applications. The capability of a machine vision system can be further expanded and enhanced by incorporating multiple image modalities or other nondestructive test techniques, which may provide complementary information.


References

1. Batchelor, B.G. and P.F. Whelan. Intelligent Vision Systems for Industry. Bruce G. Batchelor, Cardiff, United Kingdom; Paul F. Whelan, Dublin, Republic of Ireland (2002).
2. Zuech, N. "Smart Cameras vs. PC-Based Machine Vision Systems." Machine Vision Online. Ann Arbor, MI: Automated Imaging Association (October 2002).
3. Zuech, N. "Optics in Machine Vision Applications." Machine Vision Online. Ann Arbor, MI: Automated Imaging Association (August 2005).
4. Fales, G. "Ten Lens Specifications You Must Know for Machine-Vision Optics." Test and Measurement World. Web page. Waltham, MA: Reed Elsevier (27 October 2003).
5. "What is Structured Light?" Web page. Salem, NH: StockerYale (2009).
6. Casasent, D.[P.], Y.F. Cheu and D. Clark. Chapter 4, Part 4, "Machine Vision Technology." Nondestructive Testing Handbook, second edition: Vol. 8, Visual and Optical Testing. Columbus, OH: American Society for Nondestructive Testing (1993): p 92-107.
7. Forsyth, D.A. and J. Ponce. Computer Vision: A Modern Approach. Upper Saddle River, NJ: Prentice Hall (2002).
8. Martin, D. Practical Guide to Machine Vision Lighting. Web pages. Austin, TX: National Instruments Corporation (November 2008).
9. Hainaut, O.R. "Basic Image Processing." Web pages. Santiago, Chile: European Organisation for Astronomical Research in the Southern Hemisphere, European Southern Observatory (December 1996).
10. Users Manual MTD/PS-0218, Kodak Image Sensors. Revision 2.0. Rochester, NY: Eastman Kodak (July 2008).
11. Peterson, C. "How It Works: The Charged-Coupled Device, or CCD." Journal of Young Investigators. Vol. 3, No. 1. Durham, NC: Journal of Young Investigators, Incorporated (March 2001).
12. Litwiller, D. "CCD vs. CMOS: Facts and Fiction." Photonics Spectra. Vol. 35, No. 1. Pittsfield, MA: Laurin Publishing (January 2001): p 154-158.
13. Charge Injection Device Research at RIT. Web site. Rochester, NY: Rochester Institute of Technology, Center for Imaging Science (2009).
14. Trucco, E. and A. Verri. Introductory Techniques for 3-D Computer Vision. Upper Saddle River, NJ: Prentice Hall (1998).
15. Bouguet, J.-Y. Camera Calibration Toolbox for Matlab. Web pages. Pasadena, CA: California Institute of Technology (2009).
16. Wang, J., F. Shi, J. Zhang and Y. Liu. "A New Calibration Model of Camera Lens Distortion." Pattern Recognition. Vol. 41, No. 2. Amsterdam, Netherlands: Elsevier, for Pattern Recognition Society (February 2008): p 607-615.
17. IEEE 1394, High-Performance Serial Bus. New York, NY: IEEE (2008).
18. Wilson, A. "Camera Connections." Vision Systems Design. Tulsa, OK: PennWell Corporation (April 2008).
19. Specifications of the Camera Link Interface Standard for Digital Cameras and Frame Grabbers. Ann Arbor, MI: Automated Imaging Association (Annex D, 2007).
20. "Universal Serial Bus." Web site. Beaverton, OR: USB Implementers Forum (2008).
21. "GigE Vision®." Web page. Machine Vision Online. Ann Arbor, MI: Automated Imaging Association (2009).
22. 1394 Technology. Web page. Southlake, TX: 1394 Trade Association (2008).
23. Sgro, J. "USB Advantages Offset Other Interfaces." Vision Systems Design. Tulsa, OK: PennWell Corporation (September 2003).
24. Jain, R., R. Kasturi and B.G. Schunck. Machine Vision. New York, NY: McGraw-Hill (1995).
25. Gonzalez, R.C. and R.E. Woods. Digital Image Processing. Upper Saddle River, NJ: Prentice Hall (2002).
26. Otsu, N. "A Threshold Selection Method from Gray-Level Histograms." IEEE Transactions on Systems, Man, and Cybernetics. Vol. SMC-9, No. 1. New York, NY: IEEE (January 1979): p 62-66.
27. Wada, T., T. Fujii and T. Matsuyama. "γ-ω Hough Transform — Linearizing Voting Curves in an Unbiased ρ-θ Parameter Space" [in Japanese]. IEICE Transactions D-II. Vol. J75, No. 1. Minato-ku, Tokyo, Japan: Institute of Electronics, Information and Communication Engineers (1992): p 21-30.
28. Wada, T., M. Seki and T. Matsuyama. "High Precision γ-ω Hough Transform Algorithm to Detect Arbitrary Digital Lines" [in Japanese]. IEICE Transactions D-II. Vol. J77, No. 3. Minato-ku, Tokyo, Japan: Institute of Electronics, Information and Communication Engineers (1994): p 529-530.
29. Malamas, E.N., E.G.M. Petrakis, M.E. Zervakis, L. Petit and J.D. Legat. "A Survey on Industrial Vision Systems, Applications and Tools." Image and Vision Computing. Vol. 21, No. 2. Amsterdam, Netherlands: Elsevier (February 2003): p 171-188.
30. Komorowski, J.P. and D.S. Forsyth. "The Role of Enhanced Visual Inspection in the New Strategy for Corrosion Management." Aircraft Engineering and Aerospace Technology. Vol. 72, No. 1. Bingley, United Kingdom: Emerald Group Publishing (2000): p 5-13.
31. Komorowski, J.P., N.C. Bellinger, R.W. Gould, A. Marincak and R. Reynolds. "Quantification of Corrosion in Aircraft Structures with Double Pass Retroreflection." Canadian Aeronautics and Space Journal. Vol. 42, No. 2. Kanata, Ontario, Canada: Canadian Aeronautics and Space Institute (1996): p 76-82.
32. Komorowski, J.P., S. Krishnakumar, R.W. Gould, N.C. Bellinger, F. Karpala and O.L. Hageniers. "Double Pass Retroreflection for Corrosion Detection in Aircraft Structures." Materials Evaluation. Vol. 54, No. 1. Columbus, OH: American Society for Nondestructive Testing (January 1996): p 80-86.
33. Siegel, M. and P. Gunatilake. "Remote Inspection Technologies for Aircraft Skin Inspection." ETVSIM'97, IEEE Workshop on Emergent Technologies and Virtual Systems for Instrumentation and Measurement [Niagara Falls, Ontario, Canada, May 1997]. New York, NY: IEEE (1997): p 1-10.
34. Gunatilake, P., M.W. Siegel, A.G. Jordan and G.W. Podnar. "Image Understanding Algorithms for Remote Visual Inspection of Aircraft Surface." Machine Vision Applications in Industrial Inspection V [Orlando, FL, April 1997]. Proceedings Vol. 3029. Bellingham, WA: International Society for Optical Engineering (Society of Photo-Optical Instrumentation Engineers) (1997): p 2-13.
