
Chapter 2

Machine Vision Online Measurements

Abstract With rising expectations for food products of high quality and safety
standards, the need for accurate, fast, and objective quality determination of these
characteristics in food products continues to grow. Machine vision systems are
automated, nondestructive, and cost-effective, and are ideally suited for the routine
inspection and quality assurance tasks that are common in the food and agro-products
industries. Machine vision is a technology that allows the automation of visual
inspection and measurement tasks using digital cameras and image analysis techniques.
A machine vision system generally consists of five basic components: a light source,
an image-capturing device, an image capture board (frame grabber), and the appropriate
computer hardware and software. The potential of computer vision in the food industry
has long been recognized, and the food industry is now ranked among the top ten
industries using this technology. Traditional visual quality inspection performed
by human inspectors has the potential to be replaced by computer vision systems
for many tasks, and there is increasing evidence that machine vision is being adopted
at the commercial level. This chapter highlights the construction and image processing
of online detection by machine vision. First, an introduction to the image acquisition
system, including the lighting system, camera, and lens, is given. Then, image
processing, which includes image segmentation, interpretation, and classification,
is discussed. Finally, three examples of online food quality detection are introduced.

Keywords Detection · Food · Fruit · Imaging · Interpretation · Machine vision · Online · Segmentation

Abbreviations
ADO ActiveX Data Objects
ANN Artificial Neural Network
BP-ANN Back-propagation artificial neural network
CCD Charge-coupled device
CMOS Complementary metal-oxide semiconductor
DCT Discrete cosine transform
DSP Digital signal processor
FOV Field of View
FP Feature parameters

Science Press, Beijing and Springer Science+Business Media Dordrecht 2015


X. Zou, J. Zhao, Nondestructive Measurement in Food and Agro-products,
DOI 10.1007/978-94-017-9676-7_2

HSI Hue-saturation-intensity
HSV Hue, saturation, value
IBPGR International Board for Plant Genetic Resources
LDA Linear discriminant analysis
LED Light-emitting diode
MIR Mid-infrared
MV Machine vision
NIR Near infrared
NTSC National Television Standards Committee
PAL Phase Alteration Line
PC Personal computer
PCA Principal component analysis
RAM Random access memory
RGB Red, green, and blue
ROI Region of interest
SVM Support vector machine
TV Television

2.1 Introduction

Machine vision (MV) comprises the technology and methods used to provide imaging-based
automatic inspection and analysis for applications such as automatic inspection,
process control, and robot guidance in the food industry. Recent advances in hardware
and software have aided this expansion by providing low-cost, powerful solutions,
and the field of MV, or computer vision, has been growing at a fast pace [1–3].
The technology aims to duplicate the effect of human vision by electronically
perceiving and understanding an image. Table 2.1 summarizes the benefits and
drawbacks associated with this technology.
For the food industry, when consumers buy food, their perception of it is largely
limited to visual perception. This visual sensation is often the only direct
information the consumer receives from the product. The appearance, together with
the former experiences and cultural background of the consumer, directs the decision
to purchase the product. The visual sensation is a mix of the color, shape, and
size of the product. Image processing is therefore an important tool for quantifying
the external appearance of food. Imaging techniques have been developed as an
inspection tool for quality and safety assessment of a variety of agricultural food
products. Imaging is generally nondestructive, reliable, and rapid, depending on
the specific technique used. These techniques have been successfully applied to
fruit [4], meat [5, 6], poultry [7, 8], and grain [9, 10].
Perception theory assumes that the human vision system is able to estimate the
size of an object independently of the distance between the eye and the object when

Table 2.1 Benefits and drawbacks of machine vision

Advantages:
- Generation of precise descriptive data
- Quick and objective
- Consistent, efficient, and cost-effective
- Nondestructive and undisturbing
- Robust and competitively priced sensing technique
- Permanent record, allowing further analysis later

Disadvantages:
- Object identification considerably more difficult in unstructured scenes
- Artificial lighting needed for dim or dark conditions

enough distance cues are available; nevertheless, this size constancy can be reduced
if less environmental information is provided. For example, in wholesale stores,
apples are presented in boxes without size cues. Marketing numbers show that for
a given color quality, the largest purchase volume is obtained for apples with a
diameter between 75 and 80 mm. Consequently, the farmer gets the highest price
for apples graded into the size class of 75–80 mm. Although machines mechanically
sort apples by weight, a feature strongly correlated with apple size [2, 3, 11],
the question arises of how well test persons can distinguish apples by size and
how quality grading by size can incorporate human visual perception abilities. It is
the consumer at the end of the commercial chain that assigns quality to the products
and evaluates whether or not he will purchase the product. As a result, automated
visual inspection is undergoing substantial growth in the food industry because of
its cost-effectiveness, consistency, superior speed, and accuracy.
The grading of food such as apples using MV can be broadly divided into an image
acquisition system and an image processing system. The image acquisition system,
as shown in Fig. 2.1, is composed of the lighting system, camera, lens, computer,
controller, and conveyor. The design of the conveyor and controller should be adapted
to the food being inspected. The lighting system, camera, and lens are introduced as follows.

2.2 Image Acquisition System

2.2.1 Lighting System

The purpose of the lighting system is to provide radiant light with suitable spectral
characteristics and a uniform spatial distribution. As with the human eye, vision
systems are affected by the level and quality of illumination. By adjusting the
lighting, the appearance of an object can be radically changed, with the feature of
interest clarified or blurred. Therefore, the performance of the illumination system
greatly influences the quality of the image and plays an important role in the
overall efficiency and accuracy of the system [12]. It should be noted that

Fig. 2.1 Machine vision system (camera, lens, lighting system, computer, controller, conveyor)

a well-designed illumination system can help to improve the success of image analysis
by enhancing image contrast. Good lighting can reduce reflections, shadows, and some
noise, thereby decreasing processing time. Various aspects of illumination, including
location, lamp type, and color quality, need to be considered when designing an
illumination system for applications in the food industry [12].
Most lighting arrangements can be grouped as either front or back lighting. Front
lighting (reflective illumination) is used in situations where surface feature
extraction is required. In contrast, back lighting (transmitted illumination) is
employed to produce a silhouette image for critical edge dimensioning or for
subsurface feature analysis. Light sources also differ and may include incandescent
and fluorescent lamps, lasers, X-ray tubes, and infrared lamps. The choice of lamp
affects image quality and image analysis performance. Eliminating the effects of
natural light from the image collection process is considered important, and most
modern systems have built-in compensatory circuitry [12].
The illumination system, along with its associated optical components, is the
principal determinant of contrast. There are two principles for the illumination
system: (1) provide stable and symmetrical lighting and (2) make the object stand
out from the background. The lighting type could be an incandescent lamp, a
high-frequency fluorescent lamp, a fiber halogen lamp, or a light-emitting diode
(LED) light, as shown in Fig. 2.2. Advances in solid-state lighting have led to the
increasing use of LEDs. The illumination system was calibrated by taking the image
of a color pattern that had different regions painted with solid colors (red, green,
blue, and yellow). Using the vision system, the average red, green, blue (RGB)
values of each region were calculated and stored. The color pattern was presented
to the vision system before each experiment in order to check whether recalibration
of the color camera was necessary.

2.2.2 Camera

The camera plays the role of the human eye in an apple-sorting machine. Many
different sensors can be used to generate an image, including ultrasound, X-ray,
and near-infrared (NIR) devices. Images can also be obtained using displacement
devices and document scanners. The image sensors used in MV, however, are
typically based on solid-state charge-coupled device (CCD) camera

Fig. 2.2 Commonly used visible lighting types. a Incandescent lamp; b high-frequency fluorescent lamp; c fiber halogen lamp; and d LED (light-emitting diode)

technology. Compared to a complementary metal-oxide-semiconductor (CMOS) sensor,
a CCD has higher light sensitivity, which translates into better images in low-light
conditions. However, a CCD can also consume as much as 100 times more power than an
equivalent CMOS sensor. Yang used monochrome cameras [13, 14], Wen and Tao (1998)
used a monochrome CCD camera equipped with a 700-nm long-pass filter [15, 16],
while many others acquired color images [17–27]. Some guidelines for selecting a
camera are given as follows.
First, a CCD camera can be either color or monochrome. Monochrome cameras have a
single sensor that outputs grayscale images; each pixel carries information only
about intensity. In a single-chip color camera, by contrast, a mosaic filter is
required, which limits the resolution of the sensor. Monochrome cameras typically
have about 10% higher resolution than comparable single-chip color cameras, as
well as a better signal-to-noise ratio, increased light sensitivity, and greater
contrast than similarly priced color cameras. In some cases, color filters can be
used with monochrome cameras to differentiate colored objects. When a
high-resolution color image is necessary, it is beneficial to use a three-chip
camera. These cameras offer the best of both worlds, yielding greater spatial
resolution and dynamic range than single-chip color cameras. The RGB output from
a three-chip camera is considered superior to the standard National Television
Standards Committee (NTSC), Phase Alternation Line (PAL), and Y/C formats.
Second, a camera can be analog or digital. Analog cameras are less expensive and
less complicated, but they have upper limits on both resolution (number of TV lines)
and frame rate, and they are also very susceptible to electronic noise. In a digital
camera, the video signal is exactly the same when it leaves the camera as when it
reaches an output device. Compared to their analog counterparts, digital cameras
typically offer higher resolution, higher frame rates, less noise, and more features.
These advantages come at a cost: digital cameras are generally more expensive than
analog ones, may involve more complicated setup even for video systems that require
only basic capabilities, and are limited to shorter cable lengths in most cases.
Third, the scanning type of the camera should be considered. Interlaced

scanning and progressive scanning are the two techniques available today for read-
ing and displaying information produced by image sensors. Interlaced scanning is
used mainly in CCDs. Progressive scanning is used in either CCD or CMOS sen-
sors. Interlaced scanning is a transfer of data in which the odd-numbered lines of
the source are written to the destination image first, then the even-numbered lines
are written (or vice versa). Progressive scanning is a transfer of data in which the
lines of the source are written sequentially into the destination image. Each line of
an image is put on the screen one at a time in perfect order.
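The two transfer schemes can be illustrated with a short Python sketch (the function names are our own, and NumPy arrays stand in for frames): an interlaced capture reads the even and odd lines as two separate fields, and weaving them back together reproduces the frame exactly only when nothing moved between the two field exposures.

```python
import numpy as np

def split_fields(frame):
    """Interlaced capture: even and odd lines are read at different times."""
    return frame[0::2], frame[1::2]   # even field, odd field

def weave(even_field, odd_field):
    """Reassemble a full frame from the two fields (interlaced display)."""
    h = even_field.shape[0] + odd_field.shape[0]
    frame = np.empty((h,) + even_field.shape[1:], dtype=even_field.dtype)
    frame[0::2] = even_field
    frame[1::2] = odd_field
    return frame
```

If the object moves between the two field captures, the woven frame mixes two positions of the object, which is precisely the comb/tearing artifact discussed below.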
When interlaced video is shown on a progressive scan monitor, such as a computer
monitor, which scans the lines of an image consecutively, artifacts become
noticeable. These artifacts, seen as tearing, are caused by the slight delay
between odd and even line refreshes: only half the lines keep up with a moving
image while the other half waits to be refreshed. Tearing is especially noticeable
when the video is stopped and a freeze frame is analyzed. Figure 2.3 shows an
interlaced scan image on a progressive (computer) monitor at left and a progressive
scan image on a computer monitor at right. Moving objects are therefore better
presented on computer screens using the progressive scan technique (Fig. 2.3).
In an online sorting MV system, this can be critical when viewing details of a
moving subject (e.g., a fruit moving with the conveyor).
Finally, CCD cameras are of either the array type or the line scan type. Array or
area-type cameras consist of a matrix of minute photosensitive elements (photosites)
from which the complete image of the object is obtained, based on output proportional
to the amount of incident light. Alternatively, line scan cameras use a single
line of photosites that is repeatedly scanned up to 2000 times per second to
provide an accurate image of the object as it moves under the sensor.

2.2.3 Lens

The lens is also very important for online MV detection, yet it is often overlooked
in the literature. If a camera accepts exchangeable lenses, it is important to select
a lens suitable for the camera. A lens (or, more precisely, an objective containing
several lenses) is always designed for certain parameters; it is always a compromise
among magnification, field of view (FOV), focal number (f-number), spectral range,
image size, aberrations, and, finally, cost.
First, the size of a lens should be considered. A lens made for a 1/2-in image
sensor will work with 1/2-, 1/3-, and 1/4-in image sensors, but not with a 2/3-in
image sensor. Figure 2.4 shows different lenses mounted onto a 1/3-in image sensor.
If a lens is made for a smaller image sensor than the one actually fitted inside
the camera, the image will have black corners (see the left-hand illustration in
Fig. 2.4). If a lens is made for a larger image sensor than the one actually fitted
inside the camera, the field of view will be smaller than the lens capability,
since part of the information will be lost outside the image sensor (see the
right-hand illustration). This situation creates a telephoto effect, as it makes
everything look zoomed in.
2.2Images Acquisition System 17

Fig. 2.3 Interlaced scanning and progressive scanning

Fig. 2.4 Different lenses mounted onto a 1/3-in image sensor

Second, it is also important to know what type of lens mount the camera has. There
are two main standards used on cameras: CS-mount and C-mount. Both have a 1-in
thread and look the same; what differs is the distance from the lens to the sensor
when fitted on the camera:
CS-mount: The distance between the sensor and the lens should be 12.5 mm.
C-mount: The distance between the sensor and the lens should be 17.526 mm.
It is possible to mount a C-mount lens on a CS-mount camera body by using a 5-mm
spacer (C/CS adapter ring). If it is impossible to focus a camera, it is likely
that the wrong type of lens is being used.
Third, in low-light situations, particularly in indoor environments, an important
factor to look for in a camera is the lens light-gathering ability. This can be
determined by the lens f-number, also known as the f-stop. The f-number defines how
much light can pass through a lens; it is the ratio of the lens focal length to the
diameter of the aperture (iris), that is,

f-number = focal length / aperture diameter

The smaller the f-number (either a short focal length relative to the aperture, or
a large aperture relative to the focal length), the better the lens light-gathering
ability; i.e., more light can pass through the lens to the image sensor. In low-light
situations, a smaller f-number generally produces better image quality. (There may be some sensors,

however, that may not be able to take advantage of a lower f-number in low-light
situations due to the way they are designed.) A higher f-number, on the other hand,
increases the depth of field, which is explained below. A lens with a lower f-number
is normally more expensive than a lens with a higher f-number.
F-numbers are often written as F/x. The slash indicates division. An F/4 means
that the iris diameter is equal to the focal length divided by 4; so if a camera has
an 8-mm lens, light must pass through an iris opening that is 2 mm in diameter.
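These relationships are simple enough to compute directly. The sketch below (the helper names are illustrative, not any camera vendor's API) encodes the f-number definition and the standard optics rule of thumb that gathered light scales with the inverse square of the f-number:

```python
def iris_diameter(focal_length_mm, f_number_value):
    """Aperture (iris) diameter implied by a given f-number."""
    return focal_length_mm / f_number_value

def f_number(focal_length_mm, aperture_mm):
    """f-number = focal length / aperture diameter."""
    return focal_length_mm / aperture_mm

def relative_light(n1, n2):
    """Light gathered at f-number n1 relative to n2 (aperture area ~ 1/N^2)."""
    return (n2 / n1) ** 2
```

For the F/4 example above, `iris_diameter(8, 4)` gives the 2-mm opening, and `relative_light(2, 2.8)` shows that opening up from F/2.8 to F/2 roughly doubles the light reaching the sensor.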
While lenses with an automatically adjustable iris (auto-iris) have a range of
f-numbers, often only the maximum light-gathering end of the range (smallest
f-number) is specified.
A lens light-gathering ability or f-number, and the exposure time (i.e., the length
of time an image sensor is exposed to light) are the two main elements that control
how much light an image sensor receives. A third element, the gain, is an amplifier
that is used to make the image brighter. However, increasing the gain also increases
the level of noise (graininess) in an image, so adjusting the exposure time or iris
opening is preferred.
Fourth, limits to the exposure time and gain can be set in some online detection
environments. The longer the exposure time, the more light an image sensor receives.
Bright environments require shorter exposure times, while low-light conditions
require longer exposure times. It is important to be aware that increasing the
exposure time also increases motion blur, while increasing the iris opening has the
downside of reducing the depth of field, which is explained in the section below.
When deciding upon the exposure, a shorter exposure time is recommended when there
is rapid movement or when a high frame rate is required. A longer exposure time
improves image quality in poor lighting conditions, but it may increase motion blur
and lower the total frame rate, since a longer time is required to expose each frame.
There are three main types of lenses:
Fixed Lens Such a lens offers a fixed focal length; that is, only one field of
view (normal, telephoto, or wide angle). A common focal length of a fixed network
camera lens is 4 mm.
Varifocal Lens This type of lens offers a range of focal lengths, and hence
different fields of view. The field of view can be manually adjusted, but whenever
it is changed, the user has to manually refocus the lens.
Zoom Lens Zoom lenses are like varifocal lenses in that they enable the user to
select different fields of view. However, with zoom lenses, there is no need to
refocus the lens if the field of view is changed: focus can be maintained within a
range of focal lengths, for example, 6–48 mm. Lens adjustments can be either manual
or motorized for remote control. When a lens states, for example, 3× zoom capability,
it is referring to the ratio between the lens's longest and shortest focal lengths.
Fifth, the spectral range of the camera should also be taken into account. Basler
cameras, for example, cover a spectral range from 400 to 1000 nm. This is more than
the human eye is able to see (human eyes detect roughly 400–800 nm). Color cameras

Fig. 2.5 Iris opening and depth of field

usually have a Bayer pattern in front of the sensor; note that the effective
resolution of the chip then has to be divided by two in each direction. The blue
channel is sensitive from 400 to 500 nm, the green from 500 to 600 nm, and the red
above 600 nm. Unfortunately, all three channels also respond to NIR light above
700 nm. To avoid incorrect colors (e.g., green leaves appearing yellow or orange),
an infrared (IR) cut filter is required. For C-mount cameras, it can be mounted in
front of the sensor. Some lenses are corrected for the visible range; some include
correction for the NIR.
Finally, a criterion that may be important to an online detection application is
depth of field. Depth of field refers to the distance in front of and beyond the
point of focus in which objects appear to be sharp simultaneously. Depth of field
is affected by three factors: focal length, iris diameter, and distance of the
camera to the subject. A long focal length, a large iris opening, or a short
distance between the camera and the subject will limit the depth of field.
Figure 2.5 illustrates the depth of field for different f-numbers at a focus
distance of 2 m (7 ft). A large f-number (smaller iris opening) enables objects to
be in focus over a longer range. (Depending on the pixel size, very small iris
openings may blur an image due to diffraction.)

2.3 Image Processing

Image processing and image analysis are recognized as being the core of computer
vision. Image analysis and MV have a common goal of extracting information from
digital images. They differ mostly in what objects or parts they are applied to and
the type of information extracted. Both use image processing: computations that

modify an input image to make image elements more obvious. Image processing can be
divided into four main steps: image acquisition, segmentation, interpretation, and,
finally, classification. For example, the grading of apples into quality classes is
a complex task involving different stages. The first step is image acquisition,
performed by CCD cameras while the fruit moves on an adapted commercial machine.
It is followed by a first segmentation to locate the fruit on the background and a
second one to find possible defects. Once the defects are located, they are
characterized by a set of features including color, shape, and texture descriptors
as well as the distance of the defects to the nearest calyx or stem end. These data
are accumulated for each fruit and summarized in order to transform the dynamic
table into a static table. The grading is then performed using quadratic
discriminant analysis. Image processing/analysis can be broadly divided into three
levels: low-level processing, intermediate-level processing, and high-level
processing, as described in reference [12]. Low-level processing includes image
acquisition and preprocessing. Intermediate-level processing involves image
segmentation, image representation, and description. High-level processing involves
recognition and interpretation of the region of interest, typically using
statistical classifiers or multilayer neural networks. These steps provide the
information necessary for process/machine control for quality sorting and grading.

2.3.1 Image Segmentation

The images resulting from the acquisition step present from one to four planes. The
two most common configurations are the monochrome images (one plane) and the
color images (three planes, the red, green, and blue channels). The result of the image
segmentation can be expressed as a monochrome image with the different regions
having different gray levels. Image segmentation is one of the most important steps
in the entire image processing technique, as subsequent extracted data are highly
dependent on the accuracy of this operation. Its main aim is to divide an image into
regions that have a strong correlation with objects or areas of interest.
Segmentation can be achieved by three different techniques, as shown in Fig. 2.6:
thresholding, edge-based segmentation, and region-based segmentation [28].
Thresholding is a simple and fast technique for characterizing image regions based
on constant reflectivity or light absorption of their surfaces. Edge-based
segmentation relies on edge detection by edge operators, which detect
discontinuities in gray level, color, texture, etc. Region-based segmentation
involves grouping similar pixels together to form regions representing single
objects within the image; the criteria for grouping pixels can be based on gray
level, color, and texture. The segmented image may then be represented as a
boundary or a region. Boundary representation is suitable for the analysis of size
and shape features, while region representation is used in the evaluation of image
texture and defects. Image description (measurement) deals with the extraction of
quantitative information from the previously segmented

Fig. 2.6 Typical segmentation techniques. a Thresholding. b Edge-based segmentation. c Region-based segmentation [12]

image regions. Various algorithms are used for this process with morphological,
textural, and photometric features quantified so that subsequent object recognition
and classifications may be performed.
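As a concrete example of the thresholding route, Otsu's classic method picks the gray level that best separates a bimodal histogram into two classes; this is a standard textbook algorithm offered here as an illustration, not the specific procedure used in the studies cited in this chapter. A NumPy sketch:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the gray level that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of the dark class
    mu = np.cumsum(p * np.arange(256))      # cumulative mean gray level
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # ignore empty classes at the ends
    return int(np.argmax(sigma_b))

def segment(gray):
    """Binary mask: bright foreground (e.g., fruit) vs. darker background."""
    return gray > otsu_threshold(gray)
```

On a fruit image with a dark background, `segment` yields a binary mask marking the fruit pixels, which can then feed the boundary or region representations described above.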
For fruit grading images, these regions are the background, the healthy tissue of
the fruit, the calyx and stem ends, and possibly some defects. The contrast between
the fruit and the background should be high to simplify the localization of the
fruit; this is usually achieved with a simple threshold. Nevertheless, because
defects or the calyx and stem ends can have luminances comparable to the background,
defect detection remains an interesting research problem. It is necessary to
distinguish the defects from the calyx and stem ends, which may present similarities
in terms of luminance and shape. This step is the separation of the defects from
the healthy tissue. On monochrome images, the apple appears in light gray; the mean
luminance of the fruit varies with its color and decreases from the center of the
fruit toward the boundaries. The lenticels appear as small irregularities that can
be mistaken for noise. The defects are usually darker than the healthy tissue, but
their contrast, shape, and size may vary strongly. For these reasons, simple
techniques such as thresholding or background subtraction give poor results, while
standard pattern recognition techniques are unusable.

2.3.2 Image Interpretation and Classification

The next steps extract the relevant information from the regions segmented earlier
and synthesize it for the whole fruit, i.e., over several images. Most researchers
did not consider how to manage several images representing the whole surface of the
fruit; it seems that each image was treated separately and that the fruit was
classified according to the worst result of the set of representative images.
Studies like those on apples used global measurements (computed on the whole fruit,
without segmentation of the defects) to evaluate fruit quality, but these techniques
seem too simple to be efficient if the reflectance of the fruit is uneven, as for
bicolor apples or for apples randomly presented to the camera [29].
Computer-generated artificial classifiers that are intended to mimic human deci-
sion making for product quality have recently been studied intensively. The opera-
tion and effectiveness of intelligent decision making is based on the provision of a

complete knowledge base, which in MV is incorporated into the computer. Algorithms
such as neural networks, fuzzy logic, and genetic algorithms are some of the
techniques for building knowledge bases into computer structures. Such algorithms
involve image understanding and decision-making capacities, thus providing system
control capabilities. Combined with high-technology handling systems, consistency
is the most important advantage the artificial classifiers provide in the classification
of agricultural commodities. In addition, the advantages of automated classification
over conventional manual sorting are objectivity, zero or low labor requirements,
and a reduction in tedious manual sorting. Pattern recognition techniques have the
capability of imaging the distribution of quality classes in feature space. As a
result, different pattern recognition algorithms have been studied for some time
for the classification of agricultural products. The number of features plays a key
role in determining the efficiency of pattern classification in terms of time and
accuracy. Currently, a number of pattern-recognition methods such as linear function
analysis, nonlinear function analysis, and artificial neural networks (ANNs) have
emerged in the field of apple image processing [14, 30–33]. These methods have
their own advantages and disadvantages. For example, linear function analysis, such
as multiple linear regression and principal component analysis, can be used only in
situations where the patterns are linearly separable. When the patterns have
irregular shapes, nonlinear analysis should be considered, but the form of the
nonlinear function must be known in advance, which is difficult to obtain. An ANN
is good at nonlinear mapping, but how to select the number of hidden units and
hidden layers is not clear, and the learning procedure is lengthy.
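To make the classification step concrete, here is a deliberately minimal pattern classifier: a nearest-centroid rule that assigns a fruit's feature vector (e.g., color, shape, and texture descriptors) to the quality class with the closest class mean in feature space. Real grading systems use LDA, SVMs, or ANNs, so treat this purely as an illustration of classification in feature space:

```python
import numpy as np

def fit_centroids(features, labels):
    """Mean feature vector (centroid) for each quality class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(sample, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))
```

The decision boundary of this rule is linear between each pair of classes, which is why, as noted above, such linear methods only work when the quality classes are linearly separable in the chosen feature space.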

2.4 Applications of Machine Vision in Food and Agricultural Products

2.4.1 Applications

Computer vision systems are being used increasingly in the food industry for quality
assurance purposes. They offer the potential to automate manual grading practices,
thus standardizing techniques and eliminating tedious human inspection tasks. From
vegetables and fruit to meat and fish, from poultry carcasses to prepared consumer
foods and containers, MV has been meeting the ever-expanding requirements of the
food industry, as described in Table 2.2.

2.4.2 Online Machine Vision Applications

Computer vision has proven successful for the objective, online measurement of
several food products with applications ranging from routine inspection to the

Table 2.2 Machine vision in food quality detection


Foods Quality indices Accuracy (%) Reference
Vegetable and fruit
Potato Shape 90% [34]
Brightness and blemishes 80% [35]
Maturity 84% [36]
Quality inspection and grading >97.6% [37]
Sorting of irregular potatoes 100% [38, 39]
Irregularity evaluation 98.1% [40]
Mushroom Color and shape of the cap 80% [41]
Mechanical damage and diseases 81% [42]
Bruise detection 79–100% [43]
Freeze damage detection 95% [44]
Carrot Defects [45]
Broccoli Head size 85% [46]
Onion Defects 80–90% [47]
Apple Defects 94% [48]
Bruise detection 86% [49]
Chilling injury detection 98.4% [50]
Water core damage 92–95% [51]
Grading by external quality 78% [52]
Color classification 100% [53]
Banana Seven ripening stages 98% [54]
Color measurement 97% [55]
Stone fruit Maturity discrimination 95.83% [56]
Pear Color classification 100% [53]
Shape identification 90% [57]
Peach Split pits 98% [58]
Sorting by color and size 90% [59]
Tomato Color homogeneity, bruises, shape 88% [60]
Shape grading 89% [61]
Citrus Sugar content and acid content 89 and 91% [62]
Stem calyx area 90–98% [63]
Inspection and classification >94% [64]
Color evaluation R² = 0.925 [65]
Peel disease 95% [66]
Pistachio nut Closed shell 95–100% [67]
Strawberry Shape and size 98% [68]
Grading by size and shape 98.6% [69]
Grading by external quality 88.8% [70]
Grading by external quality 90% [70]
Bruise detection 100% [71]

Table 2.2 (continued)
Foods Quality indices Accuracy (%) Reference
Prepared consumer food
Bread Height and slope of the top [72]
Internal structure [73]
Chocolate chip cookies Size, shape, baked dough color [74]
Muffin Color 96% of pregraded and 79% of ungraded [75]
Meat and fish
Pork Pork loin chops 90% [76]
Evaluation of fresh pork loin color R=0.75 [77]
Prediction of color scores 86% [76]
Beef Prediction of color scores R² = 0.86 [78]
Prediction of sensory color 100% [79]
responses
Fish Fish species recognition 95% [80]
Prediction of color score assigned R=0.95 [81]
by a sensory panel
Detection of bones in fish and 99% [82]
chicken

complex vision-guided robotic control. Table 2.3 shows the online applications of
MV in food industries.
Visual inspection is used extensively for the quality assessment of meat and fish products, applied to processes from initial grading to consumer purchase. Color, marbling, and textural features were extracted from meat and fish images and analyzed using statistical regression and neural networks; textural features were a good indicator of tenderness [88]. MV has been used in the analysis of pork loin chop images: more than 200 pork loin chops were evaluated using color MV [76]. Agreement between the vision system and the panelists was as high as 90% at a speed of 1 sample per second. Storbeck and Daan [80] also measured a number of features of different fish species as they passed on a conveyor belt at a speed of 0.21 m/s perpendicular to the camera; a neural network classified the species from the input data with an accuracy of 95%. Jamieson [82] used an X-ray vision system for the detection of bones in chicken and fish fillets. This commercial system operates on the principle that the absorption coefficients of the two materials differ at low energies, allowing the defect to be revealed. The developed system has a throughput of 10,000 fillets per hour and can correctly identify remaining bones with an accuracy of 99%.
External quality is considered of paramount importance in the marketing and
sale of fruit and some vegetables. The appearance, i.e., size, shape, color, and the

Table 2.3 Online applications of machine vision in food and agricultural industries
Area of use Speed/processing Accuracy (%) Reference
time
Pork loin chops 1 sample/s 90 [76]
Fish identification 0.21 m/s conveyor 95 [80]
Detection of bones in fish and chicken 10,000/h 99 [82]
Estimation of cabbage head size 2.2 s/sample [83]
Location of stem root joint in carrots 10/s [84]
Apple defect sorting 3,000/min 94 [48]
Sugar content of apples 3.5 s/fruit 78 [85]
Pinhole damage in almonds 66 nuts/s 81 [86]
Bottle inspection 60,000/h [87]

presence of blemishes influences consumer perceptions and therefore determines the level of acceptability prior to purchase. The consumer also associates desirable internal quality characteristics with a certain external appearance; this learned association of internal with external quality affects future purchases. To meet the quality requirements of customers, computer vision is being implemented for the automated inspection and grading of fruit and vegetables to increase product throughput and improve the objectivity of the industry. Computer vision has been shown to be a viable means of meeting these increased requirements for the vegetable
and fruit industry. Shape, size, color, blemishes, and diseases are important aspects
that need to be considered when grading and inspecting vegetables. Three image
processing algorithms to recognize cabbage head and to estimate head size were
developed for the construction of a selective harvester [83]. From the projected
area, the head size could be estimated in a processing time of 2.2s with an error of
between 8.6 and 17.9mm. Two algorithms for analyzing digital binary images and
estimating the location of stem root joints in processing carrots were developed by
Batchelor and Searcy [84]. Both algorithms were capable of estimating the stem/
root location with a standard deviation of 5mm; however, the midpoint technique
could feasibly attain speeds exceeding 10 carrots per second. A novel adaptive
spherical transform was developed and applied in an MV defect sorting system
[48]. The transform converts a spherical object image to a planar object image al-
lowing fast feature extraction, giving the system an inspection capacity of 3000
apples/min from the three cameras, each covering 24 apples in the field of view. A
94% success rate was achieved for sorting defective apples from good ones for the
600 samples tested. Steinmetz et al. [85] combined two nondestructive sensors to
predict the sugar content of apples. A spectrophotometer and computer vision sys-
tem implemented online resulted in an accuracy of 78% for the prediction of sugar
content with a processing time of 3.5s per fruit. X-ray imaging in combination with

MV was used to detect pinhole damage in almonds [86]. By processing scanned
film images, pinhole damage had an 81% correct recognition ratio compared to
65% for line-scanned images. The computation rate, if implemented online, was
estimated to be 66 nuts per second. Container inspection covers a large number of
different areas. These include inspection of bottles for thread, sidewall, and base
defects, with returned bottles inspected by vision systems to determine shape and
to check for foreign matter. Filled bottles are also inspected for fill level, correct
closure, and label position at an operating speed of up to 60,000 bottles per hour
[87]. A bottle cap inspection system used information feedback to help to reduce
the number of defects produced, resulting in a reduction from 150 to 7 or 8 defects
in an 8-h period [89].

2.5 Machine Vision for Apples Grading

Mechanical nondestructive devices for online measurement of weight and size, working at high speed (several fruits/s), are common in current packinghouses [3]. Since the 1980s, camera- or photoelectric-cell-based MV has been used for grading fruit by size and color.
The first step consists of acquiring images of the surface of the fruit, while it goes
through the grading machine. In order to grade apples, two requirements have to be
met: The images should cover the whole surface of the fruit; a high contrast has to
be created between the defects and the healthy tissue, while maintaining a low vari-
ability for the healthy tissue.
The MV system is composed of a chamber, an illumination system, a camera, a
grabber, and an image adapter as shown in Fig.2.7. The digitized apple images were
received and stored in a computer [30].

2.5.1 Machine Vision System for Apple Shape and Color Grading

Characterization of apple features includes the presence of defects, the size, the
shape, and the color. Descriptive variables include roundness, diameter, average
green color on the surface, and the color properties of defect spots. Many attempts
have been made to implement these algorithms in online sorting machines [90].
Size grading is very popular around the world. Grading by the color and shape of apples, however, poses a serious problem because misjudgment occurs frequently, owing to seasonal fluctuations in grading criteria and differences among production areas.
Image analysis can be used to extract external quality properties from digitized
video images. Identifying the shapes and color of fruit is easy for human eyes and
brains, but it is difficult for a computer. Human descriptions of shape and color are
often abstract or artistic, not quantitative. Researchers developed image processing
algorithms to measure objectively the shape and color features of horticultural
products.

[Figure 2.7 components: CCD camera, grabber, computer, chamber, and apple on a cone-shape apple roller]
Fig. 2.7 The schematic of machine vision system

2.5.1.1 Apple Shape Grading by Fourier Expansion

Shape uniformity of fruit and vegetables is important whether they are to be fresh marketed or processed. To achieve the desired uniformity, fruit must be inspected and classified. To date, most of the research on describing fruit shape has been two-dimensional (2D), and this section focuses on 2D shape analysis.
Shape is an inherent characteristic of the phenotypic appearance of apples and is affected by many factors, such as the conditions during production, the market situation, and the attitudes of consumers. Today, shape evaluation is still done in a merely subjective way by grading workers.
In early research, most shape algorithms quantified the roundness, rectangularity, triangularity, or elongation of the product by calculating ratios of the projected area to the width of the product. After Segerlind and Weinberg applied Fourier expansion to the identification of different grain kernels, more and more research focused on shape characterization by Fourier transformation and inverse Fourier transformation [33, 91–94]. Fourier transformation and principal component analysis (PCA) were used to characterize different types of apple shape according to the International Board for Plant Genetic Resources (IBPGR); however, this approach could not be used for apple shape grading. More recently, some results demonstrated the use of Fourier transformation and an ANN to distinguish different grades of Huanghua pears according to their shape. However, it is difficult to select the number of hidden units and hidden layers, and the learning procedure is lengthy for an ANN [33]. Therefore, an image processing algorithm was developed to characterize the apple shape objectively and to identify different grades. Here, we introduce a Fourier expansion for shape feature extraction.
Horizontal line image scanning and detection of the minimum and maximum x
coordinate at each yth row resulted in about 1000 edge points. The apple shape char-
acterization was based on the extraction of the apple profile from digitized images,
as illustrated in Fig.2.8b. For the boundary of an apple in an image, the most im-
portant information is the positions of the pixels that constitute the boundary. Other
information, such as the brightness of the boundary pixels can be ignored. The co-
ordinates of the centroid (point O: xo, yo) can be found only based on the boundary
information. The edge points were centered around the centroid (xo, yo) of all (x, y)
coordinates:

Fig. 2.8 Edge extraction and transformations. a Apple image. b Edge extraction and transformation

x_o = \frac{\sum_{k=0}^{n} [ y_k (x_k^2 - x_{k-1}^2) - x_k^2 (y_k - y_{k-1}) ]}{2 \sum_{k=0}^{n} [ y_k (x_k - x_{k-1}) - x_k (y_k - y_{k-1}) ]}   (2.1)

y_o = \frac{\sum_{k=0}^{n} [ y_k^2 (x_k - x_{k-1}) - x_k (y_k^2 - y_{k-1}^2) ]}{2 \sum_{k=0}^{n} [ y_k (x_k - x_{k-1}) - x_k (y_k - y_{k-1}) ]}   (2.2)

This resulted in a shift of the origin of the (x, y) vector space to the centroid (x_o, y_o). In the following step, the Cartesian (x1, y1) coordinates were transformed into polar (r, θ) coordinates. The polar vector space was rotated by assigning the smallest angle to the smallest radius. Finally, the (r, θ) coordinates were normalized to a constant average radius of 3 cm to exclude size effects:

r_1 = 3.0 \, \frac{r}{r_{av}}   (2.3)

where r_1 is the normalized radius and r_{av} is the average radius. Thus, the shape of an apple can be mathematically described as a periodic function with a period of 2π: r_1(θ + 2π) = r_1(θ). A periodic function can be expressed as a combination of trigonometric functions with different frequencies using a Fourier series. Fourier expansion was used to characterize the shape of objects by writing the normalized radius r_1 as a function of the angle θ, using a sum of sine and cosine functions with period 2π [95]. Only the first period was considered, implying that the fundamental frequency is equal to 1. The Fourier expansion describes the apple shape as follows:

r_1 = f(\theta) = \frac{a_0}{2} + \sum_{m=1}^{\infty} \left( a_m \cos(m\theta) + b_m \sin(m\theta) \right)   (2.4)

Here, r_1 is the normalized radius and m is the harmonic index.
The coefficients a_m are obtained from:

a_m = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos(m\theta) \, d\theta   (2.5)

The coefficients b_m are obtained from:

b_m = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \sin(m\theta) \, d\theta   (2.6)
The Fourier coefficients were calculated by the fast Fourier transform algorithm. Only the first 16 cosine coefficients a_m and the first 16 sine coefficients b_m were calculated, because this greatly reduces the computation while still describing the shape of an apple. For apples, this study verified through experiments the conclusion that the first two principal components of the first 16 cosine terms a_m and the first 16 sine terms b_m represent the height-to-width ratio and how conical the shape is.
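The boundary-to-coefficients pipeline above can be sketched in Python with numpy. This is a minimal sketch under stated assumptions, not the authors' implementation: the centroid is approximated by the mean of the boundary points rather than the polygon integrals of Eqs. 2.1 and 2.2, the rotation of the polar vector space is omitted, and the 256-point resampling grid is an assumption.

```python
import numpy as np

def fourier_shape_features(x, y, n_harmonics=16):
    """Center the edge points on the centroid, convert to polar coordinates,
    normalize the radius (Eq. 2.3), and keep the first 16 cosine/sine
    coefficients of the Fourier expansion (Eq. 2.4)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Mean of the boundary points as an approximation of Eqs. 2.1-2.2.
    xo, yo = x.mean(), y.mean()
    # Shift the origin to the centroid and convert to polar (r, theta).
    r = np.hypot(x - xo, y - yo)
    theta = np.arctan2(y - yo, x - xo)
    order = np.argsort(theta)            # angle must increase monotonically
    r, theta = r[order], theta[order]
    # Normalize to a constant average radius of 3 cm (Eq. 2.3).
    r1 = 3.0 * r / r.mean()
    # Resample at equally spaced angles so the FFT sees a uniform grid.
    grid = np.linspace(-np.pi, np.pi, 256, endpoint=False)
    r1u = np.interp(grid, theta, r1, period=2 * np.pi)
    # FFT -> Fourier series coefficients a_m, b_m (Eqs. 2.4-2.6).
    F = np.fft.rfft(r1u) / len(r1u)
    a = 2 * F.real[: n_harmonics + 1]      # a_0 .. a_16
    b = -2 * F.imag[1 : n_harmonics + 1]   # b_1 .. b_16
    return a, b

# Example on a circle of radius 3: only a_0 should be non-negligible.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
a, b = fourier_shape_features(3 * np.cos(t), 3 * np.sin(t))
```

For a perfect circle the normalized radius is constant, so a_0/2 equals the 3 cm average radius and all higher harmonics vanish, which is a quick sanity check on the implementation.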
Leemans et al. [96] concluded that the amplitudes of F(h) have a precise physical meaning that can be used to quantify the shape of apples. For a Golden Delicious apple to be classified as class I (the best category contemplated in that work), and considering a side view of the apple in upright position, F(2) should not be too high, since high values of F(2) imply an excessive fruit elongation. Analogously, F(3) should be high enough, since low values of F(3) imply a lack of conicity or triangularity. F(4) should be high enough, since this implies that the apple can be inscribed in a square. As regards the stem view, i.e., the view in which an observer would watch the apple from above when the apple lies on a horizontal surface in upright position, F(1) should be low, since high values of F(1) entail an excessively elliptical apple cross section. Abdullah et al. [93] observed that four-pointed, five-pointed, and six-pointed star fruits peaked in F(4), F(5), and F(6), respectively. Following the rationale in Leemans et al. [96], it follows that the four-pointed star fruit can be inscribed in a square, while the five-pointed and six-pointed ones can be inscribed in a pentagon and a hexagon, respectively. Xiaobo Zou [97] used 33 coefficients, a_0, the first 16 cosine terms a_m (a_1, a_2, ..., a_16), and the first 16 sine terms b_m (b_1, b_2, ..., b_16), to identify the shape of apples; the grade judgment ratios for "extra", category II, and "reject" were high, but the ratio for category I was not.

2.5.1.2 Apple Color Grading

The strong correlation between fruit color and maturity makes it feasible to evalu-
ate the maturity level based on color. Among all the image analysis based methods,
color image processing techniques played an important role in inspections for many
different fruits. Some color-based techniques for fruit inspection extract features from the RGB or hue, saturation, value (HSV) images of the fruit together with other features, e.g., size and texture, and classify fruits with machine learning or artificial intelligence algorithms.
The composite video signal of an apple collected by an image processor was processed to the 256 color gradations of the three primary colors in each pixel. Then, the average color gradations (R̄, Ḡ, B̄), the variances (V_R, V_G, V_B), and the color coordinates (r, g, b) were calculated from the three primary colors in the following manner [14, 20, 30, 98–102]. For example, for red:

\bar{R} = R / n   (2.7)

V_R = \sum_{i=1}^{n} (R_i - \bar{R})^2 / n   (2.8)

r = \bar{R} / (\bar{R} + \bar{G} + \bar{B})   (2.9)

where R is equal to \sum_{i=1}^{n} R_i and n is the total number of pixels in the image data. Therefore, nine color characteristic data were obtained for one entire apple [51–55].
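Equations 2.7–2.9 can be sketched with a few lines of numpy. This is a minimal illustration, not the original implementation; the (n, 3) pixel-array input format is an assumption, and the chromaticity coordinates are computed from the channel means as Eq. 2.9 is written.

```python
import numpy as np

def rgb_color_features(pixels):
    """Compute the nine color characteristics of Eqs. 2.7-2.9 for an apple
    image given as an (n, 3) array with one RGB row per pixel: channel
    means, channel variances, and chromaticity coordinates."""
    pixels = np.asarray(pixels, dtype=float)
    means = pixels.mean(axis=0)                      # (R̄, Ḡ, B̄), Eq. 2.7
    variances = ((pixels - means) ** 2).mean(axis=0)  # (V_R, V_G, V_B), Eq. 2.8
    coords = means / means.sum()                     # (r, g, b), Eq. 2.9
    return np.concatenate([means, variances, coords])

# Example: two pixels, pure red and pure green.
feats = rgb_color_features([[255, 0, 0], [0, 255, 0]])
```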
Color representation in hue, saturation, intensity (HSI) provides an efficient scheme for statistical color discrimination. These attributes are the closest approximation to the human interpretation of color, so the RGB signals of the apple were transformed to HSI for color discrimination. For a digitized color image, the hue histogram represents the color components and the amount of area of each hue in the image. Therefore, color evaluation of apples can be achieved by analyzing the hue histogram. The hue values of Fuji apple images lie mainly between 0° and 100°. The hue field in 0–80° can be divided into eight equal intervals. The number of pixels in each interval, divided by 100, was treated as an apple color feature c_i (i = 1, ..., 8). Thus, eight color features were obtained. The hue curves of the different classes of apples are presented in Fig. 2.9. The maximum feature appeared in 0–20° for extra Fuji apples, 20–40° for class I apples, and 40–60° for the substandard degree; there is no maximum feature for class II apples [26]. Four images, one for every rotation of 90°, were taken from each apple. Seventeen color feature parameters (FPs) were extracted from each apple in the image processing: the average color gradations (R̄, Ḡ, B̄), the variances (V_R, V_G, V_B), the color coordinates (r, g, b), and c_1, c_2, ..., c_8.
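The eight hue-interval features c_1..c_8 can be sketched as follows. The conversion through the standard-library colorsys module and the exact 10°-wide bins are assumptions, since the text does not specify the RGB-to-hue formula used.

```python
import colorsys
import numpy as np

def hue_features(pixels):
    """Convert each RGB pixel to a hue angle in degrees, histogram the
    0-80 degree range into eight equal intervals, and scale the counts
    by 1/100 as described in the text."""
    hues = []
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hues.append(h * 360.0)  # colorsys returns hue in [0, 1)
    counts, _ = np.histogram(hues, bins=8, range=(0.0, 80.0))
    return counts / 100.0

# Example: 100 orange pixels (hue near 30 degrees) fall in the fourth bin.
c = hue_features([(255, 128, 0)] * 100)
```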
Three hundred and eighteen apples used in this study were sent directly to our laboratory from a farmer. Classification experiments were done under controlled circumstances, in a room illuminated by halogen lamps, with the apples placed against a black background. The color of each apple was graded by a trained quality inspector according to the grading standards in China. The quality grades for the ex-
ternal appearance of apples are classified into four categories: class extra, of which
more than 66% of the surface is deep red, and orange in the background; class I, of
which 5066% area of the surface is red, and the background is yellowish orange;
class II, of which 2550% of the surface is red, and the background is yellowish

Fig. 2.9 Hue curves of different sorts of Fuji apples

Table 2.4 Three hundred and eighteen apples in training set and test set were classified into
four classes
Class Samples
Training set Test set
Accepted apple Class extra 50 20
Class I 50 41
Class II 50 40
Rejected apple The reject 50 17

green; the reject, of which less than 25% of the surface is red, and the background
is light green or unevenly red colored, and an injured part can be seen on the apple's surface. The 318 Fuji apples were divided into two sets. An initial experiment was conducted with 200 fruits (the training set). The samples were inspected by the MV system, and reference measurements for color were then taken. An independent set of 118 samples (the test set) was fed into the robotic device to assess the efficiency of the online MV procedure and to test the precision of the online MV process. The apples in the training set and test set were classified into class extra, class I, class II, and the reject, as Table 2.4 shows.
Although many methods have been proposed for apple color grading, we have been unable to investigate the performance of all of them. One example, however, is a three-layer back-propagation ANN (BP-ANN) that has been considered for apple color grading [18]. As a comparison, a BP-ANN was built for apple color grading.
The 17 normalized apple color FPs were chosen as the input values for the neural network. The apples' four color grades were coded to serve as the output layer of the neural network: extra (1,0,0,0), class I (0,1,0,0), class II (0,0,1,0), and reject (0,0,0,1). Other parameters of the BP-ANN were a logistic activation function, a learning rate of 0.02, and a momentum of 0.9. The ANN was trained with the 200 samples in the training set for 20,000 cycles. It was then used to classify the test set, which consisted

Table 2.5 The BP-ANN training cycle and classification accuracy as the number of nodes in the hidden layer changed. BP-ANN back-propagation artificial neural network
Structure (input–hidden–output) Total error (training 20,000 times) Classification accuracy of training set (%) Classification accuracy of test set (%)
17–4–4 1.374 66 56.8
17–6–4 1.333 67.5 59.4
17–8–4 1.306 68.5 63.6
17–10–4 1.295 69 65.3
17–12–4 1.273 72.5 71.2
17–14–4 1.260 73.5 72.9
17–16–4 1.205 75.5 74.6
17–18–4 1.198 79 76.3
17–20–4 1.187 82.5 77.9
17–22–4 1.189 83 76.3
17–24–4 1.190 83 76.3

of 118 Fuji apples with different color grades. The data in Table 2.5 show the training error, the classification accuracy for the training set, and the classification accuracy for the test set as the structure of the ANN changed. It can be seen from Table 2.5 that the total error decreased as the number of nodes in the hidden layer increased, whereas the classification accuracy increased at first and then did not change significantly as the number of hidden nodes grew further. More nodes in the hidden layer obviously result in a longer computation time. Therefore, the network with a structure of 17–20–4 was selected in this study because it yielded the highest accuracy with a relatively small network structure.
It can be seen that the construction of the neural network (the number of layers and neurons) is an empirical process similar to conventional approaches, and requires considerable trial and error. Furthermore, an ANN is prone to overfitting; that is, its classification accuracy on the training set can be very high while its classification accuracy on the test set is unacceptable.
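The described three-layer BP-ANN can be sketched in plain numpy. This is a hedged illustration, not the authors' network: the apple color data are not reproduced, so X and the grade labels below are synthetic stand-ins; bias terms and a reduced number of training cycles are added for the demo; only the 17–20–4 structure, logistic activation, learning rate of 0.02, and momentum of 0.9 follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 17))            # stand-in for 17 normalized color FPs
labels = X[:, :4].argmax(axis=1)     # synthetic grade per sample (demo only)
Y = np.eye(4)[labels]                # one-hot codes: extra / I / II / reject

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# 17-20-4 network with logistic activation, lr 0.02, momentum 0.9.
W1 = rng.normal(0.0, 0.5, (17, 20)); b1 = np.zeros(20)
W2 = rng.normal(0.0, 0.5, (20, 4));  b2 = np.zeros(4)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
vb1 = np.zeros_like(b1); vb2 = np.zeros_like(b2)
lr, mom, n = 0.02, 0.9, len(X)

for epoch in range(5000):            # far fewer cycles than the 20,000 in the text
    H = logistic(X @ W1 + b1)        # hidden layer (20 nodes)
    O = logistic(H @ W2 + b2)        # output layer (4 grades)
    dO = (O - Y) * O * (1 - O)       # squared-error output delta
    dH = (dO @ W2.T) * H * (1 - H)   # back-propagated hidden delta
    vW2 = mom * vW2 - lr * (H.T @ dO) / n; W2 += vW2
    vb2 = mom * vb2 - lr * dO.mean(axis=0); b2 += vb2
    vW1 = mom * vW1 - lr * (X.T @ dH) / n; W1 += vW1
    vb1 = mom * vb1 - lr * dH.mean(axis=0); b1 += vb1

pred = logistic(logistic(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
acc = (pred == labels).mean()
```

Changing the hidden-layer width in this sketch reproduces the kind of trial-and-error structure search summarized in Table 2.5.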

2.5.2 Apple Defect Detection by a Three-Color-Camera System

2.5.2.1 Literature Review of Fruit Defect Detection

On the common systems, the fruits placed on rollers are rotating while moving.
They are observed from above by one camera. In this case, the parts of the fruit near
the points where the rotation axis crosses its surface (defined as rotational poles)
are not observed [15, 16]. This can be overcome by placing mirrors on each side of
the fruit lines oriented to reflect the pole images to the camera. Another system used
three cameras observing the fruit rolling freely on ropes. On more sophisticated

systems, two robot arms were used to manipulate the fruit [103]. The study stated
that it was possible to observe 80% of the fruit surface with four images, but the
classification rate remained limited to 0.25 fruit per second.
Traditional mechanical, image processing, and structured lighting methods have proved unable to solve this problem because of their limitations in accuracy, speed, and so on. The cameras used by different researchers were mainly CCD cameras [104]. Because the rotational poles are not observed on the common roller systems, and because it is hard to distinguish apple stem ends and calyxes from defects by image processing, the detection of apple defects remains a problem [13, 14, 18, 30, 33, 92, 95, 103, 105–136].
A machine vision sorting system was developed that utilizes the difference in
light reflectance of fruit surfaces to distinguish the defective and good apples [29].
To accommodate to the spherical reflectance characteristics of fruit with curved
surface like apple, a spherical transform algorithm was developed that converts the
original image to a nongradient image without losing defective segments on the
fruit. To prevent high-quality dark-colored fruit from being classified into the defec-
tive class and increase the defect detection rate for light-colored fruit, an intensity
compensation method using maximum propagation was used. Leemans et al. [103] presented a method based on color information to detect defects on Golden Delicious apples. In a first step, a color model based on the variability of the normal color is described. To segment the defects, each pixel of an apple image is compared with the model; if it matches, the pixel is considered as belonging to healthy tissue, otherwise as a defect. Two further steps refine the segmentation, using either parameters computed on the whole fruit or values computed locally. Wen
and Yang [16] developed a method based on dual-wavelength infrared imaging us-
ing both NIR and mid-infrared cameras. This method enables a quick and accurate
discrimination between tree defects and stem ends/calyxes. The obtained results
have significant meanings to automated apple defect detection and sorting. A novel
adaptive spherical transform was developed and applied in a machine vision apple
defect sorting system [90]. The image transformation compensates the reflectance
intensity gradient on curved objects and provides flexibility in coping with fruits
natural variations in brightness and size. Guyer and Yang used genetic ANNs and
spectral imaging for defect detection on cherries [137]. ANN classifiers success-
fully separated apples with defects from nondefected apples without confusing the
stem/calyx with defects [33]. Wen and Tao [110] developed a novel method which
incorporates an NIR camera and a mid-infrared (MIR) camera for simultaneous im-
aging of the fruit being inspected. The NIR camera is sensitive to both the stem-end/
calyx and true defects; whereas the MIR camera is only sensitive to the stem-end
and calyx. True defects can be quickly and reliably extracted by logical comparison
between the processed NIR and MIR images.
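The logical comparison between the NIR and MIR images can be illustrated with binary masks; the mask contents below are invented for illustration, not data from the cited study.

```python
import numpy as np

# The NIR image responds to both true defects and stem-ends/calyxes,
# while the MIR image responds only to stem-ends/calyxes, so true
# defects are the NIR detections with the MIR detections masked out.
nir_mask = np.zeros((8, 8), dtype=bool)
nir_mask[1:3, 1:3] = True        # a bruise (true defect), dark in NIR
nir_mask[5:7, 5:7] = True        # the calyx, also dark in NIR

mir_mask = np.zeros((8, 8), dtype=bool)
mir_mask[5:7, 5:7] = True        # only the calyx shows up in MIR

true_defects = nir_mask & ~mir_mask   # logical comparison of the two images
```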
More recently, multispectral and hyperspectral imaging have been used for fruit defect detection. Aleixos et al. developed a multispectral camera able to acquire visible and NIR images of the same scene, together with specific algorithms implemented on a board based on two digital signal processors (DSPs) working in parallel, which allows the inspection tasks to be divided between the processors, saving processing time [64]. The MV system was mounted on a commercial conveyor and is able to inspect the size, color, and presence of defects in citrus at a minimum rate of 5 fruits/s. The hardware improvements needed to increase the inspection speed to 10 fruits/s were also described. Mehl et al. [121]
applied hyperspectral image analysis to the development of multispectral tech-
niques for the detection of defects on three apple cultivars: Golden Delicious, Red
Delicious, and Gala. Two steps were performed: (1) hyperspectral image analysis to
characterize spectral features of apples for the specific selection of filters to design
the multispectral imaging system and (2) multispectral imaging for rapid detection
of apple contaminations. Good isolation of scabs, fungal, soil contaminations, and
bruises was observed with hyperspectral imaging using either principal component
analysis (PCA) or the chlorophyll absorption peak. This hyperspectral analysis al-
lowed the determination of three spectral bands capable of separating normal from
contaminated apples. These spectral bands were implemented in a multispectral im-
aging system with specific band-pass filters to detect apple contaminations. Spatial
and transform features were evaluated for their discriminating contributions to fruit
classification based on bruise defects [116]. Stepwise discriminant analysis was used
for selecting the salient features. Spatial edge features detected using Roberts edge
detector, combined with the selected discrete cosine transform (DCT) coefficients
proved to be good indicators of old (one month) bruises. Separate ANN classifiers
were developed for old (one month) and new (24 h) bruises. An NIR transmission
system was developed to inspect defects and ripeness of moving citrus fruits [138].
The system consisted of light source and NIR transmission spectrophotometer. Four
100W halogen lamps were used as the light source and an NIR spectrometer was
used to measure NIR transmission spectra of the citrus fruits. Ripeness inspection
results of the NIR transmission spectrum system for 100 Unshiu citrus fruits were
compared with results of the visual inspection. Analysis of the spectra showed that
ripeness could be evaluated using the peak near 710nm wavelength band. Spectra
of the ripe fruits had a peak at 710 nm and those of immature fruits had a peak
at 713nm. The wavelength shift of the peak was assumed to be caused by varia-
tions of chlorophyll contents, which absorb light near 678nm. Ripeness inspection
model was developed by using the wavelength difference as a ripeness criterion.
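The peak-shift ripeness criterion can be sketched as follows. The 711.5 nm cut-off midway between the reported ripe (710 nm) and immature (713 nm) peaks, and the synthetic Gaussian spectra, are illustrative assumptions rather than values from the cited study.

```python
import numpy as np

def peak_wavelength(wavelengths, spectrum, band=(700.0, 720.0)):
    """Locate the transmission peak within the given wavelength band (nm)."""
    sel = (wavelengths >= band[0]) & (wavelengths <= band[1])
    return wavelengths[sel][np.argmax(spectrum[sel])]

wl = np.arange(650.0, 750.0, 0.5)               # wavelength grid in nm
ripe = np.exp(-((wl - 710.0) ** 2) / 20.0)      # synthetic ripe spectrum
immature = np.exp(-((wl - 713.0) ** 2) / 20.0)  # synthetic immature spectrum

is_ripe = peak_wavelength(wl, ripe) <= 711.5    # hypothetical cut-off
```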
Leemans and Destain presented a hierarchical grading method applied to Jonagold
apples [125]. Several images covering the whole surface of the fruits were acquired,
thanks to a prototype grading machine. These images were then segmented and the
features of the defects were extracted. During a learning procedure, the objects were
classified into clusters by k-mean clustering. The classification probabilities of the
objects were summarized, and on this basis, the fruits were graded using quadratic
discriminant analysis. Bennedsen and Peterson [139] developed a system for apple
surface defect identification in NIR images through two optical filters at 740 and
950nm. A multispectral vision system including four wavelength bands in the vis-
ible/NIR range was developed [126]. Multispectral images of sound and defective
fruits were acquired tending to cover the whole color variability of this bicolor apple

variety. Defects were grouped into four categories: slight defects, more serious de-
fects, defects leading to the rejection of the fruit and recent bruises. Stem ends/ca-
lyxes were detected using a correlation pattern matching algorithm. The efficiency
of this method depended on the orientation of the stem-end/calyx according to the
optical axis of the camera. Defect segmentation consisted of a pixel classification
procedure based on the Bayes theorem and nonparametric models of the sound and
defective tissue. Fruit classification tests were performed in order to evaluate the
efficiency of the proposed method. No error was made on rejected fruits and high
classification rates were reached for apples presenting serious defects and recent
bruises. Fruits with slight defects presented a more important misclassification rate
but those errors fitted, however, the quality tolerances of the European standard.
An integrated approach using multispectral imaging in reflectance and fluorescence
modes was used to acquire images of three varieties of apples [136]. Eighteen im-
ages from a combination of filters ranging from the visible region through the NIR
region and from three different imaging modes (reflectance, visible light induced
fluorescence, and ultra violet (UV) induced fluorescence) were acquired for each
apple as a basis for pixel-level classification into normal or disorder tissue. ANN
classification models were developed for two classification schemes: a two class
and a multiple class. In the two-class scheme, pixels were categorized into normal
or disordered tissue, whereas in the multiple-class scheme, pixels were categorized
into normal, bitter pit, black rot, decay, soft scald, and superficial scald tissues. A
tenfold cross validation technique was used to assess the performance of the neural
network models. The integrated imaging model of reflectance and fluorescence was
effective on the Honeycrisp variety, whereas single imaging models of reflectance or
fluorescence were effective on Redcort and Red Delicious. AdaBoost and support
vector machine (SVM) were also used to improve pecan defect classification ac-
curacy [140]. Kavdir and Guyer evaluated different pattern recognition techniques
for apple sorting [127]. The features used in classification of apples were hue angle
(for color), shape defect, circumference, firmness, weight, blush percentage (red
natural spots on the surface of the apple), russet (natural netlike formation on the
surface of an apple), bruise content, and the number of natural defects. Different
feature sets including four, five, and nine features were also tested to find out the
best classifier and feature set combination for an optimal classification success. The
effects of using different feature sets and classifiers on classification performance
were investigated.

2.5.2.2 The Hardware

The lighting and image acquisition system were designed to be adapted on an exist-
ing single row grading machine (prototype from Jiangsu University, China). Six
lighting tubes (18W, type 33 from Philips, Netherlands) were placed at the inner
side of a lighting box, while three cameras (color 3CCD uc610 from Uniq, USA),
two inclined at about 60° and one directly above, observed the grading line in the box,
as shown in Fig. 2.10. The lighting box is 1000 mm in length and 1000 mm in width.

Fig. 2.10 Hardware system of apple in-line detection. a System hardware. b Schematic of three
cameras system

The distance between apple and camera is 580 mm; there are thus three apples
in the field of view of each camera, at a resolution of 0.4456 mm per pixel.
The images were grabbed using three Matrox/meteorII frame-grabbers (Matrox,
Canada) in three computers. The standard image treatment functions were based on
the Matrox libraries (Matrox, Canada) and the other algorithms were implemented
in C++. A local network was built among the computers in order to exchange
result data. The central processing unit of each computer was a Pentium 4 (Intel,
USA) clocked at 3 GHz. The fruits, placed on rollers, rotate while moving. The
rotational speed of the rollers was adjusted so that a spherical object with a
diameter of 80 mm made a full rotation in exactly three images. The moving
speed could be adjusted in the range of 0–15 apples per second by the stepping motor.

2.5.2.3 Image Preprocessing

Image preprocessing includes background segmentation, image de-noising, child
image segmentation, and sequential image processing.
The background is relatively complicated. To remove it, a multi-threshold method
was proposed, taking into account both the R value in RGB and the S value in HSI
color space. The segmentation rule is as follows:

            { background pixel : R < 90 or (S < 0.20 and R < 200)
  p(x, y) = {                                                          (2.10)
            { apple pixel : otherwise
There may still be some noise in the image after background removal, so this
chapter introduces a median filter to remove it.
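The segmentation rule of Eq. (2.10) can be sketched in a few lines of NumPy (an illustrative reconstruction, not the original implementation; the standard HSI saturation formula S = 1 − 3·min(R, G, B)/(R + G + B) is assumed):

```python
import numpy as np

def segment_background(rgb):
    """Multi-threshold background removal following Eq. (2.10):
    a pixel is background if R < 90 or (S < 0.20 and R < 200).
    S is computed with the standard HSI saturation formula (assumed)."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / np.maximum(r + g + b, 1e-9)
    background = (r < 90) | ((s < 0.20) & (r < 200))
    return ~background  # True where the pixel belongs to the apple
```

A small (e.g., 3 × 3) median filter would then be applied to suppress the remaining noise.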
There are at most three apples waiting for measurement in the field of view. In
order to extract the information of each individual apple, single-apple division
becomes a necessary operation. The minimum enclosing rectangle of each single
apple was used to divide the view image into three child images, as shown in
Fig. 2.11.
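This division step can be sketched as follows (a hypothetical helper, not the original code): since the apples lie side by side on the rollers, the binary mask can be split at empty columns, and each run is then bounded by its minimum enclosing rectangle:

```python
import numpy as np

def split_child_images(mask):
    """Split a binary apple/background mask into child images, one per
    apple: each connected run of nonempty columns is taken as one apple,
    bounded by its minimum enclosing rectangle (y0, y1, x0, x1).
    Illustrative sketch, not the authors' original implementation."""
    cols = mask.any(axis=0)
    boxes = []
    start = None
    for x, filled in enumerate(cols):
        if filled and start is None:
            start = x
        elif not filled and start is not None:
            rows = mask[:, start:x].any(axis=1)
            y0, y1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
            boxes.append((y0, y1, start, x))
            start = None
    if start is not None:  # run touching the right edge
        rows = mask[:, start:].any(axis=1)
        y0, y1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
        boxes.append((y0, y1, start, mask.shape[1]))
    return boxes
```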
Continuous image grabbing forms a group of sequential images, and the information
in successive child images is related. In this chapter, the

Fig. 2.11 Single child apple images segmentation

Fig. 2.12 Sequential image and the single child image representation

processing of the sequential images is based on the sequence of appearance at the
three different positions, one by one (a position may or may not contain an apple).
A 2D array R was used to represent the information of the three single child
images, as shown in Fig. 2.12. Three conclusions can be drawn from Fig. 2.12:
First, among the three child images, the left child image represents an apple's
first image, the middle image its second image, and the right image its third
image; this rule does not change as the number of trigger grabs increases.
Second, the subscripts of array R for apple No. 6 (trigger-grabbing count I = 6)
are the same as those for apple No. 3 (I = 3), so a new cycle begins. The cycle
variable is X = I mod 3.
Third, I = 1 and I = 2 are special cases, for which the cycle variable should be:

X = (I - 1) mod 3.
The information of an apple in array R must be saved once the apple has appeared
three times; otherwise, it will be overwritten by the information of the following
apples. ActiveX Data Objects (ADO) is used to save the information into a
database.
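The index bookkeeping above can be sketched as follows (hypothetical helper name; the special-case formula for I = 1 or 2 is taken as (I − 1) mod 3, as read from the text):

```python
def child_row_index(i):
    """Cycle variable X mapping the trigger-grabbing count i to the row
    of array R holding the current child images: X = i mod 3, with the
    special case X = (i - 1) mod 3 for i = 1 or 2 (as read from the text)."""
    if i in (1, 2):
        return (i - 1) % 3
    return i % 3
```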

2.5.2.4 Blemish (or Defects) Segmentation and Recognition

There exist several image analysis methods for blemish detection, such as global
gray-level or gradient thresholding, simple background subtraction, statistical
classification, and color classification [13]. Blemish segmentation is a difficult
problem in image analysis, because various types of blemishes with different sizes
and extents of damage may occur on fruit surfaces. If a blemish appears as a very
dark mark on a fruit surface, simple thresholding of the gray-level intensity of
reflected light may allow a direct segmentation of the blemish. However, in most
cases, the light reflectance from both blemished and nonblemished surfaces varies
considerably, and it is impossible to set a single threshold value for the segmenta-
tion. For example, a patch of good surface with a relatively dark color can have
similar reflectance as a slightly discolored blemish on a light colored surface. In this
case, the thresholding method will fail.
An image analysis scheme for accurate detection of fruit blemishes proposed by
Qingsheng Yang [141] is used in this study. The detection procedure consists of two
steps: initial segmentation and refinement. In the first step, blemishes are coarsely
segmented out with a flooding algorithm and in the second step an active contour
model, i.e., a snake algorithm, is applied to refine the segmentation so that the
localization and size accuracy of detected blemishes is improved. However, Yang's
algorithms were tested on monochrome images of mono-color fruits, whereas here
the images are color images of bicolor fruit.
The appearances of calyxes and stem ends also resemble patch-like defects; these
patches were defined as regions of interest (ROIs). The ROIs are generally darker
than their surrounding nondefective surfaces and, in the image gray-level
landscape, usually appear as significant concavities in the sense of a topographic
representation. The median filtering mentioned in image preprocessing improved
the success of the flooding algorithm. This smoothing naturally distorts the
gray-level surface and thus has the drawback that the segmented areas are larger
than the true ones. Since the size of an ROI is important for
grade decision making, a refinement of the defect detection is necessary. A closed
loop snake has been implemented to improve the boundary localization of detected
ROI. Then, the minimum enclosing rectangle of each single ROI was used to mea-
sure the size of ROI. If the dimensions of the rectangle exceed 5 pixels (0.4456mm
per pixel), the measured ROI area is taken into account. The R channel signals were
used to detect the defects, because the tests for sample apple R channel images have
shown better results than other channel images.
The defect recognition steps are as follows:

Fig. 2.13 Precise segmentation of ROIs area. ROIs region of interests

First, the number of ROIs is counted in each single child apple image.
Second, a logical recognition rule was developed: since the calyx and stem end
cannot appear in a single child image at the same time, an apple is defective
if any one of its nine images contains two or more ROIs. Figure 2.13 shows an
example of an apple image with two ROIs.
Third, the defect detection described above is based on data acquired by three
computers; consequently, an apple's characteristic parameters are formed by
integrating them into a single source. One of the three computers acts as server
and the other two as clients. Figure 2.14 shows the data exchange and
synchronization of online grading.
Since nine images are sufficient to encompass the whole surface of the apple,
any defect on the surface can be detected by this method. The disadvantage of
this method is that it cannot distinguish different defect types: bruising, scab,
fungal growth, and disease are treated as equivalent. The apples were then graded
as rejected or accepted.
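The counting rule above can be expressed compactly (an illustrative sketch; whether "exceed 5 pixels" applies to both rectangle sides is an assumption):

```python
def is_defective(apple_images_rois, min_side=5):
    """Logical recognition rule: an ROI counts only if its minimum
    enclosing rectangle exceeds min_side pixels (both sides assumed);
    the apple is defective if any child image has two or more counted
    ROIs, since the calyx and stem end account for at most one ROI."""
    for rois in apple_images_rois:   # one list of (w, h) boxes per child image
        counted = [wh for wh in rois if wh[0] > min_side and wh[1] > min_side]
        if len(counted) >= 2:
            return True
    return False
```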

2.5.2.5 Fruits Grading

All the fruits used in this experiment were selected and came from the same grower.
Three hundred and eighteen fruits were used in a single experiment, and each fruit
was presented only once to the machine, to avoid any additional bruises. The
apples were classified into two classes: accepted (199 apples) and rejected (with
blemish, 119 apples).
The proposed system was tested with a laboratory three-CCD-camera setup on Fuji
apples. The results obtained by the three-color-camera grading line are given in
Table 2.6. The total error rate reached 11%, occurring mostly in the accepted
batch. When these errors were analyzed, half of them were apples with
over-segmentation of healthy tissue, especially of tissue near the boundaries
during defect segmentation. The other half was attributed to two reasons. First,
spot blush on the surface of good apples is segmented as defective, and the apple
is classified into the rejected class. As the flooding algorithm used

Fig. 2.14 Three computers image processing and synchronization online grading

Table 2.6 The results obtained by the three-color cameras grading line
True groups graded in         Accepted (199 apples)   Rejected (with defects, 119 apples)
I (accepted)                  169                     5
Rejected                      30                      114
Classification error          15.07%                  4.2%
Global classification error   11%

by Yang [142] was designed to detect catchment basins, i.e., areas with lower
luminance, large spot blushes were easily segmented as ROIs. Second, errors occur
because apples with defects are accepted. These errors were due to defects that
are difficult to segment, such as russet and bruises, present near the stem ends
and calyxes of apples. They have almost the same appearance as the russet around
the stem end and, because of the proximity in position and appearance, were
probably confused with the latter: the defect is localized together with the stem
end and counted as one ROI. Therefore, such an apple was classified as a good one.
Comparing different configurations, the results of a sorting line with only one
camera (the overhead camera) and of a sorting line with the two inclined cameras
are shown in Tables 2.8 and 2.7, respectively. With one camera, 21.8% of the
apples with defects are misclassified (i.e., they are accepted), whereas this
number reduces significantly, from 14.3% with two cameras to 4.2% with three
cameras. However, at the same time, the classification error for good apples
increases from 11% for one camera (three images), via 12.5% for two cameras
(six images), to 15.07% for three cameras (nine

Table 2.7 The results obtained by the two inclined color cameras grading line
True groups graded in         Accepted (199 apples)   Rejected (with defects, 119 apples)
I (accepted)                  172                     17
Rejected                      25                      102
Classification error          12.5%                   14.3%
Global classification error   13.2%

Table 2.8 The results obtained by the single camera (the overhead camera) grading line
True groups graded in         Accepted (199 apples)   Rejected (with defects, 119 apples)
I (accepted)                  177                     26
Rejected                      22                      93
Classification error          11%                     21.8%
Global classification error   15.1%

images). This is mainly caused by information loss. A statistical test was carried
out on the loss of information when different numbers of cameras were used in the
sorting line. Fifteen to twenty percent of the apple's surface cannot be observed
in the three images obtained by the single overhead camera, and five to ten
percent of the surface information is lost when using the two inclined cameras.
Statistical analysis of the individual child images obtained by the three CCD
cameras gave a probability of 28.4% for a defect to be present in only one child
image, after testing 318 apples (318 × 9 = 2862 child images). However, the nine
images obtained by the three cameras could cover the whole surface of the apple.
With defective apples, more images provide more opportunities to detect the
defects, thus leading to a lower classification error. With good apples, more
images mean more chances for spot blush to be classified as a defect, and hence
more of them will be misclassified. This is caused by the defect detection
algorithm: some defects are not darker than their surroundings and can thus not
be recognized, while some healthy parts of the fruit are darker than their
surroundings. There are also other reasons for errors. Fewer defective apples in
the accepted bin give higher prices, which can compensate for the slightly
increased loss of good apples. With three cameras, the class of accepted apples
has 174 apples, of which five still have defects (some 2.87%), whereas with one
camera the accepted bin has 203 apples, 26 of them defective (some 13%). Compared
with many former works [135, 143, 144], several images representing the whole
surface of the fruit are considered in this work, and the defect recognition
algorithm is simpler and faster.

2.6 Machine Vision Online Sorting Maturity of Cherry Tomato

A cherry tomato is a smaller garden variety of tomato. With its high nutritional
value and good appearance, the cherry tomato has become one of the most popular
fruits in the world. Nowadays, cherry tomatoes are sorted by hand on many farms.
However, the manual inspection process is not only labor intensive and tedious,
but also subject to human error, which results in poor quality. Farmers want an
automated grading device to facilitate this work. The cherry tomato variety
Little Angel was selected for the experiment. The samples were hand harvested on
23 November 2007 from the experimental orchard of the Jinrui Institute of
Agriculture, Zhenjiang. Cherry tomatoes were selected completely at random from
the same plant, with each fruit as an experimental unit. All fruits of each sample
were individually numbered. Without any further procedure, five assessors with
previous experience in tomato assessment were invited to classify the cherry
tomatoes into three maturity states (immature, half-ripe, and full-ripe), each
with 30 samples. A total of 90 MV measurements were performed. For validation,
414 cherry tomatoes of the same variety, Little Angel, were inspected.

2.6.1 Hardware of the Detection System

The MV system, as shown in Fig. 2.15, was composed of a CCD color camera
(SenTec STC-1000) and a frame grabber (GRABLINK Value) connected to a
compatible personal computer (Pentium 2.8 GHz, 512 MB random access memory
(RAM)). The system provides images of 768 × 576 pixels. The frame grabber
digitized and decoded the composite video signal from the camera into three
user-defined buffers in RGB coordinates. The lighting system was composed of
two ring-shaped LEDs inside a chamber, with a hole in the top to place the
camera.
The vision system was part of the robotic system for automatic inspection and
sorting. Before entering the inspection chamber, the fruits were lined up one by
one. Each fruit was then presented to the camera in three different,
nonoverlapping positions, in order to inspect as much of the fruit surface as
possible. The entire system, as shown in Fig. 2.15, is made up of four parts:
(1) mechanical conveyor, (2) CCD camera combined with PC, (3) executive
mechanism, and (4) electronic device.

2.6.2 Image Analysis

Figure 2.16 shows the flowchart of the online grading software. It mainly includes
image acquisition, segmentation, and feature extraction.


Fig. 2.15 Cherry tomato online sorting device

Fig. 2.16 Online grading software flowchart

Online operation started with the acquisition of the first image. Three images at
different angles are obtained from each cherry tomato, allowing the inspection of
approximately 90% of the fruit surface (Fig. 2.17a).
The second step consisted of image segmentation using a fixed threshold:

              { 0,   f(x, y) < T
  f_t(x, y) = {                                                        (2.11)
              { 255, f(x, y) ≥ T
Cherry tomatoes were separated from background as shown in Fig.2.17b.


The third step is feature extraction. Color is one of the most significant
inspection criteria related to fruit quality, in that the surface color of a fruit
indicates maturity. Color representation in RGB provides an efficient scheme for
statistical color discrimination. Therefore, color evaluation of the cherry
tomatoes was achieved by analyzing the RGB values of each fruit. A total of nine
features were extracted per cherry tomato, because three images are obtained from
each cherry tomato.
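The nine-feature vector can be sketched as follows (illustrative; the chapter does not state the exact statistic, so the mean R, G, B values over the fruit pixels of each of the three views are assumed):

```python
import numpy as np

def rgb_features(images, masks):
    """Build a nine-element feature vector from the three views of one
    cherry tomato: the mean R, G, B over the fruit pixels of each view
    (mean statistic is an assumption, not stated in the chapter)."""
    feats = []
    for img, m in zip(images, masks):   # three (H, W, 3) images + fruit masks
        for ch in range(3):
            feats.append(float(img[..., ch][m].mean()))
    return np.array(feats)              # shape (9,)
```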
PCA is a linear, unsupervised pattern recognition technique used for analyzing,
classifying, and reducing the dimensionality of numerical datasets in a
multivariate problem. This method permits extraction of useful information from the
data, and exploration of the data structure, the relationship between objects, the re-
lationship between objects and variables, and the global correlation of the variables.
The main features of PCA are the coordinates of the data in the new base (scores
plot) and the contribution to each component of the nine features (loads plot). The
score plot is usually used for studying the classification of the data clusters, while
the loads plot can provide information on the relative importance of the feature ar-
ray to each principal component and their mutual correlation.
The linear discriminant analysis (LDA) calculates the discriminant functions and,
similar to PCA, gives a 2D or 3D display of the training set data. The difference
between PCA and LDA is that PCA does not consider the relation of a data point to

Fig. 2.17 Results of image processing. a Raw image of cherry tomato. b Result of image segment

the specified classes, while the LDA calculation uses the class information that was
given during training. The LDA utilizes information about the distribution within
classes and the distances between them. Therefore, the LDA is able to collect infor-
mation from all sensors in order to improve the resolution of classes.
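A minimal PCA sketch for the nine-feature vectors, using only NumPy (illustrative; the chapter does not specify the software used for the PCA/LDA analysis):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Minimal PCA via SVD: center the data, project onto the leading
    principal components, and report the fraction of total variance
    each component explains (as quoted for PC1/PC2 in the text)."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # coordinates in the new base
    explained = (s ** 2) / np.sum(s ** 2)      # explained variance ratios
    return scores, explained[:n_components]
```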

2.6.3 Sorting Results

To investigate whether the MV system was able to distinguish between different
ripeness states, PCA and LDA were applied to the 90 samples. The analysis results
are shown in Fig. 2.18 on a 2D plane: principal component 1 (PC1) versus
principal component 2 (PC2) in Fig. 2.18a, and the first and second linear
discriminants, LD1 and LD2, in Fig. 2.18b.
PCA is a linear combinatorial method which reduces the complexity of the dataset;
the inherent structure of the dataset is preserved while the resulting variance is
maximized. Figure 2.18a shows a clear discrimination among the various clusters
representing the cherry tomato ripeness states: each group was clearly
distinguishable from the others by PCA. The processed data show a shift of the
different maturity states coinciding with the classification by the trained
profile panel. PC1 explains 60.10% of the total variation, while 36.03% of the
total variance is explained by PC2, as shown in Fig. 2.18a, so the system has
enough resolution to describe the tomato ripeness states. PCA showed the variation
of each group along the abscissa (PC1) with a clear trend, and the ripe and
immature groups show clear displacements in the negative and positive directions
on the ordinate axis (PC2), respectively, moving these groups away from the
others.
The LDA analysis was applied to the same dataset and showed a very clear
discrimination among the clusters representing the different cherry tomato
ripeness states: all cherry tomatoes were perfectly classified (Fig. 2.18b). In
this plot, about 93.3% of the total variance of the data is displayed; LDA
function 1 (LD1) and function 2 (LD2) accounted for 84.6 and 8.7% of the
variance, respectively, as shown in Fig. 2.18b.
Using PCA and LDA, it is possible to classify the fruit into three maturity
states; better classification rates were observed when the MV system used LDA.
Validation analysis was performed using 414 samples. The tomatoes were of the
same variety, from the experimental orchard of the Jinrui Institute

Fig. 2.18 PCA and LDA analysis for tomato ripeness. a PCA. b LDA. PCA principle component
analysis, LDA linear discriminant analysis

of Agriculture, Zhenjiang. The results are shown in Table 2.9. The main error is
caused by the half-ripe class.
This study has presented a method for cherry tomato maturity detection by MV.
The main conclusions are as follows: (1) Three images of each cherry tomato were
acquired by the CCD camera during the motion of the fruit on the grading line;
these three images cover about 90% of the fruit surface. (2) The cherry tomato is
segmented from the background by a fixed-threshold method, which allowed fruits
to be precisely distinguished from the background; nine features (RGB values)
were extracted from the three images of each cherry tomato. (3) PCA and LDA were
used to investigate whether the nine features were able to distinguish among
different ripeness states. Results indicated that, using LDA, it is possible to
differentiate and classify the different cherry tomato maturity states, and this
method was able to classify 94.9% of the total samples into their respective
groups. Furthermore, the grading speed of the sorting line reaches seven cherry
tomatoes per second. The sorting line can be used on most cherry tomato farms
and, with a slight change of software, can also be used to sort other miniature
fruits.

2.7 Machine Vision Online Detection of Soft Capsule Quality

Soft capsules are produced in a single production step: filled and then closed
off. The name "soft capsule" is used because the shell of the capsule contains
plasticizers in addition to the gelatine. The actual degree of softness and
elasticity depends on the

Table 2.9 The detection accuracy rate of cherry tomato
                Ripe    Half-ripe   Immature
Total           211     120         83
Correct         205     121         88
Error           5       11          5
Repeatability   94.9%

type and amount of plasticizer used, the residual moisture, and the thickness of
the capsule shell. Soft capsule shells are generally somewhat thicker than hard
capsule shells. Glycerol, sorbitol, or a combination of both are common
plasticizers. Soft capsules are generally manufactured by the so-called rotary
die process invented by Robert Pauli Scherer at the end of the 1920s: two dyed,
highly elastic gelatine bands are fed through two counter-rotating drums in
opposite directions; a film is formed, capsules are made, and these are then
filled with the pharmaceutical active ingredient provided.
In China, soft capsules are a new kind of capsule in which oily functional
material, liquor, suspension mash, or even powder is sealed. The soft capsule
industry is developing very fast: more than 60,000 million soft capsules, worth
US$400 million, are produced every year worldwide, of which 300 million are
produced in China. These capsules are exported to Japan, Southeast Asia, the USA,
Europe, Singapore, etc. As most soft capsule contents are viscous, a fraction of
the content adheres to the injector and filling pump as it flows into the wedge
injector and is pushed into the two pieces of colloidal film by the filling pump
of the automatic rotary capsule machine. This process causes fluctuation of the
soft capsule weight, which is closely correlated to efficacy. Therefore, soft
capsules need measurement to keep their weight uniform for dose control.
Nowadays, many companies use workers trained to estimate soft capsule weight
according to size. The grading accuracy and repeatability are low, because the
grading process is based on the workers' personal experience. Hand grading is
also labor intensive, expensive, and inefficient, and cannot meet industrial
production needs. To our knowledge, this is the first soft capsule grading
equipment to be developed. Mimicking the human grading process, MV is proposed
to grade the capsules.

2.7.1 The Hardware of Soft Capsule Online Grading System

A soft capsule online grading system was developed as shown in Fig. 2.19. It
consisted of a feeding unit, an MV system, a grading unit, and an electric
control unit. The basic feeding conveyor transported the soft capsules to the
uniform-spacing conveyor. Then, the capsules were fed one by one to the MV system
for defect inspection. Finally, the automatic sorting unit accomplished the soft
capsule grading operation.

Fig. 2.19 Soft capsule online sorting device

Fig. 2.20 Capsule image segmentation. a Before. b After

The MV system included a lighting chamber providing the desired spectrum and
light distribution for soft capsule illumination, a CCD camera, and an image
grabbing card with four input channels (provided by the Euresys company),
inserted in a microcomputer (processor speed: 1.66 GHz).

2.7.2 Image Processing

The first step is image background removal. There are many ways to remove the
background of an image [145]. According to the histogram of the soft capsule
images, the gray distribution is bimodal, so Otsu's method (maximization of
inter-class variance) was chosen to remove the background. Figure 2.20a is the
source image and Fig. 2.20b the result processed by Otsu's method; the soft
capsule is segmented completely.
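Otsu's method itself is short enough to sketch directly (a standard histogram-based implementation, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the histogram of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan           # exclude degenerate splits
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))
```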
The second step is noise removal. After background removal, the image still
contains some noise which would influence further processing. There are many
methods to remove noise from an image, such as mean smoothing, low-pass
filtering, and

median filtering. In this research, a 3 × 3 weighted mean smoothing filter with
kernel (1/16)[1 2 1; 2 4 2; 1 2 1], a low-pass filter, and a median filter were
investigated for noise removal [146]. The results of these smoothing filters are
shown in Fig. 2.21; comparing them, Fig. 2.21d was the best image for further
processing.
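The 3 × 3 weighted mean filter can be applied by direct convolution (a sketch; edge replication is assumed for the border, as the chapter does not state the border rule):

```python
import numpy as np

KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0   # the 3x3 weighted mean kernel above

def smooth(gray):
    """Convolve a gray image with the 3x3 weighted mean kernel,
    replicating edge pixels at the border (assumed border rule)."""
    padded = np.pad(gray.astype(float), 1, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + gray.shape[0],
                                           dx:dx + gray.shape[1]]
    return out
```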
The third step is image feature extraction. In order to keep the whole soft
capsule region during background removal, some background pixels with similar
gray values were retained, so before feature extraction, region labeling [146]
must be performed to find the correct soft capsule region in the image. Many MV
software packages include region-labeling algorithms; here, the blob analysis
function included in the eVision software was chosen. The result is shown in
Fig. 2.22: the soft capsule region is the biggest one in the image, and in this
research a region with more than 50,000 pixels is taken as the soft capsule
region.
Fourth, after the region of the soft capsule has been found in the image, its
features are extracted. In this research, area, girth, altitude diameter, and
latitude diameter were used to represent the soft capsule shape. Their
definitions, illustrated in Fig. 2.23, are:
1. Area (S): the number of pixels whose gray value is 0.
2. Girth (L): the number of pixels on the edge of the soft capsule region.
3. Altitude diameter (H): the distance between the leftmost and the rightmost pixel.
4. Latitude diameter (W): the distance between the topmost and the bottommost pixel.
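The four features can be measured on a binary capsule mask as follows (an illustrative sketch; 4-neighbourhood boundary pixels are assumed for the girth, and the diameters follow the definitions above):

```python
import numpy as np

def capsule_features(mask):
    """Measure (area, girth, H, W) on a binary capsule mask
    (True = capsule pixel). Girth counts pixels with at least one
    missing 4-neighbour (assumed convention); H and W are the
    bounding-box extents along x and y, per the definitions above."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)                       # False border
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    girth = int((mask & ~interior).sum())          # boundary pixel count
    ys, xs = np.nonzero(mask)
    H = int(xs.max() - xs.min() + 1)               # left-right extent
    W = int(ys.max() - ys.min() + 1)               # top-bottom extent
    return area, girth, H, W
```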

2.7.3 Sorting Results

Five hundred and forty soft capsules (180 unqualified and 360 qualified) were
chosen, and their area, girth, altitude diameter, and latitude diameter were
extracted to build a linear regression model. Figure 2.24 shows the relationship
between area and weight. Fifteen thousand four hundred and sixty soft capsules
produced by the Hengshun company were then tested by the online grading system
based on the linear regression model; the grading accuracy is shown in
Table 2.10. The soft capsules were first weighed manually using an electronic
scale (FA1604) and sorted into two classes: accepted and rejected (Fig. 2.24).
The detection accuracy of the regression model was 94.1%, as shown in
Table 2.10. Compared with manual detection by human eyes (accuracy rate 74.9%),
the machine detection accuracy is much higher.
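The grading model can be sketched as an ordinary least-squares fit of weight against the four shape features (illustrative; the chapter only states that a linear regression model was built, so the coefficient layout is an assumption):

```python
import numpy as np

def fit_weight_model(features, weights):
    """Least-squares fit of weight ~ b0 + b1*area + b2*girth + b3*H + b4*W
    on the (n, 4) feature matrix (coefficient layout is an assumption)."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
    return coef

def predict_weight(coef, feature_row):
    """Predicted weight for one capsule's (area, girth, H, W) row."""
    return float(coef[0] + np.dot(coef[1:], feature_row))
```

A capsule would then be accepted or rejected by comparing the predicted weight against the qualification limits.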

Summary

Over the past decade, MV has been applied much more widely, uniformly, and
systematically in the food industry. This chapter presents the recent
developments and applications of MV in the food industry, and highlights the
construction and image

Fig. 2.21 Effect of mean smoothing, low-pass filter, and median filter. a Source image. b Pro-
cessed by mean smoothing. c Processed by low-pass filter, d Processed by median filter

Fig. 2.22 Region labeling

Fig. 2.23 Characters of soft


capsule

processing of online detection by MV. The basic components and technologies
associated with MV, together with three examples of online food detection, were
introduced. Automated, objective, rapid, and hygienic inspection of diverse raw
and processed foods can be achieved by the use of computer vision systems.
Computer vision has the potential to become a vital component of automated
food processing operations as increased computer capabilities and greater pro-
cessing speed of algorithms are continually developing to meet the necessary on-
line speeds. This has been ensured by continual developments in the constituent
methodologies, namely image processing and pattern recognition. At the same time,
advances in computer technology have permitted viable implementations to be
achieved at lower cost. The flexibility and nondestructive nature of this technique
also help to maintain its attractiveness for application in the food industry. To some
extent, progress is now being held up by the need for tailored development in each
application: Hence, future algorithms will have to be made trainable to a much
greater extent than is currently possible.



Fig. 2.24 The relation between the area (s) and weight (w) of capsules

Table 2.10 The detection accuracy rate of capsule by machine vision
Total number of samples   Number of accepted   Number of rejected   Accuracy of detection by machine vision (%)
15,460                    14,547               913                  94.1

References

1. Alfatni MSM, Shariff ARM, Abdullah MZ, Marhaban MHB, Saaed OMB. The application
   of internal grading system technologies for agricultural products: a review.
   J Food Eng. 2013;116:703–25.
2. Ying Y, Zhang W, Jiang Y, Zhao Y. Application of machine vision technique in
   automatic harvesting and processing of agricultural products. Nongye Jixie
   Xuebao/Trans Chin Soc Agric Mach. 2000;31:112–5.
3. Brosnan T, Sun DW. Inspection and grading of agricultural and food products by
   computer vision systems: a review. Comput Electron Agric. 2002;36:193–213.
4. Xu H, Ying Y. Detection of citrus in a tree canopy using infrared thermal
   imaging. Providence, RI, United States, 2004; The International Society for
   Optical Engineering. p. 321–327.
5. Daley WD, Doll TJ, McWhorter SW, Wasilewski AA. Machine vision algorithm
   generation using human visual models. Proc SPIE Int Soc Opt Eng.
   1999;3543:65–72.
6. Purnell G, Brown T. Equipment for controlled fat trimming of lamb chops.
   Comput Electron Agric. 2004;45:109–24.
7. Pellerin C. Machine vision in experimental poultry inspection. Sens Rev.
   1995;15:23–4.
8. Chao K, Chen Y-R, Hruschka WR, Gwozdz FB. On-line inspection of poultry
   carcasses by a dual-camera system. J Food Eng. 2002;51:185–92.
9. Igathinathane C, Pordesimo LO, Columbus EP, Batchelor WD, Methuku SR. Shape
   identification and particle size distribution from basic shape parameters
   using ImageJ. Comput Electron Agric. 2008;63:168–82.
10. Zapotoczny P. Discrimination of wheat grain varieties using image analysis
    and neural networks. Part I. Single kernel texture. J Cereal Sci.
    2011;54:60–8.
References 51

11. Edan Y. Design of an autonomous agricultural robot. Appl Intell. 1995;5:4150.


12. Brosnan T, Sun D-W. Improving quality inspection of food products by computer visiona
review. J Food Eng. 2004;61:316.
13. Yang Q. Approach to apple surface feature detection by machine vision. Comput Electron
Agric. 1994;11:24964.
14. Tao Y, Heinemann PH, Varghese Z, Morrow CT, Sommer HJ III. Machine vision for color
inspection of potatoes and apples. Trans ASAE. 1995;38:155561.
15. Tao Y. Closed-loop search method for on-line automatic calibration of multi-camera inspec-
tion systems. Trans ASAE. 1998;41:154955.
16. Wen Z, Tao Y. In Dual-wavelength imaging for on-line identification of stem ends and ca-
lyxes, San Diego, CA, United States, 1998; The International Society for Optical Engineer-
ing: San Diego, CA, United States, p.249253.
17. Batchelor BG, Whelan PF. Real-time colour recognition in symbolic programming for ma-
chine vision systems. Mach Vis Appl. 1995;8:38598.
18. Nakano K. Application of neural networks to the color grading of apples. Comput Electron
Agr. 1997;18:10516.
19. Pla F, Sanchez JS, Sanchiz JM. On-line machine vision system for fast fruit colour sorting
using low-cost architecture. Proc SPIEInt Soc Opt Eng. 1999;3836:24451.
20. Zadeh A, Schrieber M. Color systems see what grayscales miss. Mach Des. 2002;73:304.
21. Annamalai P, Lee WS, Burks TF. In Color vision system for estimating citrus yield in real-
time, Ottawa, ON, Canada, 2004; American Society of Agricultural Engineers, St. Joseph, MI
49085 9659, United States: Ottawa, ON, Canada, p.39934004.
22. Chong VK, Kondo N, Ninomiya K, Monta M, Namba K. In Comparison on eggplant fruit
grading between nir-color camera and color camera, Kyoto, Japan, 2004; American Soci-
ety of Agricultural Engineers, St. Joseph, MI 49085 9659, United States: Kyoto, Japan,
p.387393.
23. Guo F, Cao Q. In Study on color image processing based intelligent fruit sorting system,
Hangzhou, China, 2004; Institute of Electrical and Electronics Engineers Inc., Piscataway,
United States: Hangzhou, China, p.48024805.
24. Ying Y, Fu F. Color transformation model of fruit image in process of non-destructive qual-
ity inspection based on machine vision. Nongye Jixie Xuebao/Trans Chin Soc Agric Mach.
2004;35:85.
25. Rao X, Ying Y. In Color model for fruit quality inspection with machine vision, Boston, MA,
United States, 2005; International Society for Optical Engineering, Bellingham WA, WA
98227-0010, United States: Boston, MA, United States, p.59960.
26. Xiaobo Z, Jiewen Z, Yanxiao L. Apple color grading based on organization feature parameters.
Pattern Recognit Lett 2007, 28, 204653.
27. Kang SP, East AR, Trujillo FJ. Colour vision system evaluation of bicolour fruit: A case study
with b74 mango. Postharvest Biol Technol. 2008;49:7785.
28. Cheng HD, Jiang XH, Sun Y, Wang J. Color image segmentation: Advances and prospects.
Pattern Recognit. 2001;34:225981.
29. Wen Z, Tao Y. In Intensity compensation for on-line detection of defects on fruit, San Diego,
CA, United States, 1997; The International Society for Optical Engineering: San Diego, CA,
United States, p.474481.
30. Zou XB, Zhao JW, Li YX. Apple color grading based on organization feature parameters.
Pattern Recognit Lett 2007;28:204653.
31. Rao X, Ying Y. In A method of size inspection for fruit with machine vision, Boston, MA,
United States, 2005; International Society for Optical Engineering, Bellingham WA, WA
98227-0010, United States: Boston, MA, United States, p.59961.
32. Lu N, Tredgold A, Fielding ER. In Use of machine vision and fuzzy sets to classify soft fruit,
Wuhan, China, 1995; Society of Photo-Optical Instrumentation Engineers, Bellingham, WA,
USA: Wuhan, China, p.663669.
52 2 Machine Vision Online Measurements

33. Kavdir I, Guyer DE. In Artificial neural networks, machine vision and surface reflectance
spectra for apple defect detection, Milwaukee, WI., United States, 2000; American Society
of Agricultural Engineers: Milwaukee, WI., United States, p.937953.
34. Tao Y, H.P.H, Sommer HJ. Machine vision for colour inspection of potatoes and apples. T
Asae. 1995;5:94957.
35. Guizard CGJM. Automatic potato sorting system using colour machine vision, vol.98. In-
ternational Workshop on Sensing Quality of Agricultural Products, Motpellier, France 1998,
p.20310.
36. Wooten JR, White JG, Thomasson JA, Thompson PG. In 2000 asae annual international
meeting, vol.98,paper no.001123. St.Joseph, Michigan, USA:ASAE. 2000.
37. Noordam JC, Otten GW, Timmermans TJ, Zwol BHv. High-speed potato grading and quality
inspection based on a color vision system, electronic imaging. Int Soc Opt Photonics. 2000;
20617.
38. ELMasry G, Cubero S, Molto E, Blasco J. In-line sorting of irregular potatoes by using auto-
mated computer-basedmachine vision system. J Food Eng. 2012;12:608.
39. Elmasry G, Kamruzzaman M, Sun DW, Allen P. Principles and applications of hyper-
spectral imaging in quality evaluation of agro-food products: a review. Crit Rev Food Sci.
2012;52:9991023.
40. Zhang BH, Huang WQ, Li JB, Liu CL, Huang DF. Research of in-line sorting of irregular
potatoes based on i-relief and svm method. J Jilin University Eng Technol. 2014.
41. Heinemann PH, Hughes R, Morrow CT, Sommer HJ, Beelman RB, Wuest PJ. Grading of
mushrooms using a machine vision system. Transactions of the ASAE. 1994;37:16711.
42. Vizhanyo T, Felfoldi J. Enhancing colour differences in images of diseased mushrooms.
Comput Electron Agr. 2000;26:18798.
43. Gowen AA, ODonnell CP, Taghizadeh M, Cullen PJ, Frias JM, Downey G. Hyperspectral
imaging combined with principal component analysis for bruise damage detection on white
mushrooms (agaricus bisporus). J Chemom. 2008;22:25967.
44. Gowen AA, Taghizadeh M, O'Donnell CP. Identification of mushrooms subjected to freeze
damage using hyperspectral imaging. J Food Eng. 2009;93:712.
45. Howarth MS, Searcy SW. In Inspection of fresh market carrots by machine vision, Proceed-
ings of the 1992 Conference on Food Processing Automation II, May 46 1992, Lexington,
KY, USA, 1992; Publ by ASAE: Lexington, KY, USA, p.106106.
46. Qiu W, Shearer SA. Maturity assessment of broccoli using the discrete fourier transform.
Trans Am Soc Agric Eng. 1992;35:205762.
47. Tollner EW, Shahin MA, Maw BW, Gitaitis RD, Summer DR. In 1999 asae annual interna-
tional meeting, vol.26, paper no. 993165. S t. Joseph, Michigan, USA: ASAE, 1999.
48. Tao Y, Wen Z. An adaptive spherical image transform for high-speed fruit defect detection.
Trans Am Soc Agric Eng. 1999;42:2416.
49. Xing J, Saeys W, De Baerdemaeker J. Combination of chemometric tools and image process-
ing for bruise detection on apples. Comput Electron Agr. 2007;56:113.
50. ElMasry G, Wang N, Vigneault C. Detecting chilling injury in red delicious apple using hy-
perspectral imaging and neural networks. Postharvest Biol Technol. 2009;52:18.
51. Kim S, Schatzki TF. Apple watercore sorting system using x-ray imagery: I. Algorithm devel-
opment. Trans Asae. 2000;43,16951702.
52. Leemans V, Magein H, Destain MF. On-line fruit grading according to their external quality
using machine vision. Biosyst Eng. 2002;83:397404.
53. Chauhan APS, Singh AP. Intelligent estimator for assessing apple fruit quality. Int J Comput
Appl. 2012;60:3641.
54. Mendoza F, Aguilera JM. Application of image analysis for classification of ripening ba-
nanas. J Food Sci. 2004;69:E471E7.
55. Mendoza F, Dejmek P, Aguilera JM. Calibrated color measurements of agricultural foods
using image analysis. Postharvest Biol Technol. 2006;41:28595.
References 53

56. Garrido-Novell C, Perez-Marin D, Amigo JM, Fernandez-Novales J, Guerrero JE, Garrido-


Varo A. Grading and color evolution of apples using rgb and hyperspectral imaging vision
cameras. J Food Eng. 2012;113:2818.
57. Ying Y, Jing H, Tao Y, Zhang N. Detecting stem and shape of pears using fourier transforma-
tion and an artificial neural network. Trans Asae. 2003;46:15762.
58. Han YJ, Bowers Iii SV, Dodd RB. Nondestructive detection of split-pit peaches. Trans Am
Soc Agric Eng. 1992;35,20637.
59. Esehaghbeygi A, Ardforoushan M, Monajemi SAH, Masoumi AA. Digital image processing
for quality ranking of saffron peach. Int Agrophys. 2010;24:11520.
60. Laykin S, Edan Y, Alchanatis V, Regev R, Gross F, Grinshpun J, Bar-Lev E, Fallik E, Alka-
lai S. Development of a quality sorting machine using machine vision and impact. ASAE.
1999;99:3144.
61. Tao Y, Heinemann PH, Varghese Z, Morrow CT, Sommer Iii HJ. Machine vision for color
inspection of potatoes and apples. Trans Am Soc Agric Eng. 1995;38:155561.
62. Kondo N, Ahmad U, Monta M, Murase H. In Machine vision based quality evaluation of
iyokan orange fruit using neural networks, 2000; Elsevier: p.135147.
63. Ruiz LA, Molt E, Juste F, Pl F, Valiente R. Location and characterization of the stemcalyx
area on oranges by computer vision. J Agric Eng Res. 1996;64:16572.
64. Aleixos N, Blasco J, Navarrn F, Molt E. Multispectral inspection of citrus in real-time us-
ing machine vision and digital signal processors. Comput Electron Agr. 2002;33:12137.
65. Vidal A, Talens P, Prats-Montalban JM, Cubero S, Albert F, Blasco J. In-line estimation of the
standard colour index of citrus fruits using a computer vision system developed for a mobile
platform. Food Bioprocess Tech. 2013;6:34129.
66. Zhao X, Burks TF, Qin J, Ritenour MA. Digital microscopic imaging for citrus peel disease
classification using color texture features. Appl Eng Agric. 2009;25:76976.
67. Pearson T, Toyofuku N. Automated sorting of pistachio nuts with closed shells. Appl Eng
Agric. 2000;16:914.
68. Nagata M, Cao Q, Bato PM, Shrestha BP, Kinoshita O. In. asae annual international meeting,
vol.43, paper no. 973095. St Joseph Michigan USA: ASAE. 1997;1997:1695702.
69. Bato PM, Nagata M, QiXin C, Hiyoshi K, Kitahara T. Study on sorting system for strawberry
using machine vision (part 2): development of sorting system with direction and judgement
functions for strawberry (akihime variety). J Jpn Soc Agric Mach. 2000;62,10110.(%@
02852543).
70. Liming X, Yanchao Z. Automated strawberry grading system based on image processing.
Comput Electron Agr. 2010;71:S32S9.
71. Nagata M, Tallada JG, Kobayashi T. Bruise detection using nir hyperspectral imaging for
strawberry. Fragaria ananassa. 2006;13342.
72. Scott A. Automated continuous online inspection, detection and rejection. Food Technol Eur.
1994;1:868.
73. Sapirstein HD. In food processing automation iv proceedings of the fpac conference, vol.26.
St. Joseph, Michigan, USA: ASAE, 1995, p.187198.
74. Davidson VJ, Ryks J, Chu T. Fuzzy models to predict consumer ratings for biscuits based on
digital image features. IEEE Trans Fuzzy Syst. 2001;9:627.
75. Abdullah MZ, Aziz SA, Dos Mohamed AM. Quality inspection of bakery products using a
color-based machine vision system. J Food Qual. 2000;23:3950.
76. Tan FJ, Morgan MT, Ludas LI, Forrest JC, Gerrard DE. Assessment of fresh pork color with
color machine vision. J Anim Sci. 2000;78:307885.
77. Lu J, Tan J, Shatadal P, Gerrard DE. Evaluation of pork color by using computer vision. Meat
Sci. 2000;56:5760.
78. Gerrard DE, Gao X, Tan J. Beef marbling and color score determination by image processing.
J Food Sci. 1996;61,14518. (%@ 17503841).
79. Tan J, Gao X, Gerrard DE. Application of fuzzy sets and neural networks in sensory analysis.
J Sens Stud. 1999;14:11938. (%@ 17451459X).
54 2 Machine Vision Online Measurements

80. Storbeck F, Daan B. Fish species recognition using computer vision and a neural network.
Fish Res. 2001;51:115.
81. Quevedo RA, Aguilera JM, Pedreschi F. Color of salmon fillets by computer vision and
sensory panel. Food Bioprocess Tech. 2010;3:63743.
82. Jamieson V. Physics raises food standards. Phys World. 2002;15,212.
83. Hayashi S, Kanuma T, Ganno K, Sakaue O. In Cabbage head recognition and size estima-
tion for development of a selective harvester, 1998.
84. Batchelor MM, Searcy SW. Computer vision determination of the stem/root joint on pro-
cessing carrots. J Agric Eng Res. 1989;43:25969.
85. Steinmetz V, Roger JM, Molt E, Blasco J. On-line fusion of colour camera and spectro-
photometer for sugar content prediction of apples. J Agric Eng Res. 1999;73:20716.
86. Kim S, Schatzki T. Detection of pinholes in almonds through x-ray imaging. Trans Asae.
2001;44:9971003.
87. Anon. Focus on container inspection. Int Bottler Packag. 1995;69,2231.
88. Li J, Tan J, Martz FA. In Predicting beef tenderness from image texture features, Proceed-
ings of the 1997 ASAE Annual International Meeting. Part 1 (of 3), August 10, 1997 Au-
gust 14, 1997, Minneapolis, MN, USA, 1997; ASAE: Minneapolis, MN, USA.
89. Ilea DE, Whelan PF. Image segmentation based on the integration of colourtexture de-
scriptorsa review. Pattern Recognit. 2011;44:2479501.
90. Tao Y, Wen Z. Adaptive spherical image transform for high-speed fruit defect detection.
Trans ASAE. 1999;42:2416.
91. Ying Y-B, Gui J-S, Rao X-Q. Fruit shape classification based on zernike moments. Ji-
angsu Daxue Xuebao (Ziran Kexue Ban)/J Jiangsu University (Natural Science Edition).
2007;28:13.
92. Paulus I, Schrevens E. Shape characterization of new apple cultivars by fourier expansion
of digitized images. J Agric Eng Res. 1999;72:1138.
93. Abdullah MZ, Mohamad-Saleh J, Fathinul-Syahir AS, Mohd-Azemi BMN. Discrimination
and classification of fresh-cut starfruits (averrhoa carambola l.) using automated machine
vision system. J Food Eng. 2006;76:50623.
94. Abdullah MZ, Fathinul-Syahir AS, Mohd-Azemi BMN. Automated inspection system for
colour and shape grading of starfruit (averrhoa carambola l.) using machine vision sensor.
Trans Inst Meas Control. 2005;27:6587.
95. Paulus I, De Busscher R, Schrevens E. Use of image analysis to investigate human quality
classification of apples. J Agric Eng Res. 1997;68:34153.
96. Leemans V, Magein H, Destain MF. Defects segmentation on golden delicious apples by
using colour machine vision. Comput Electron Agr. 1998;20:11730.
97. Zou XB, Zhao JW, Li YX, Shi JY, Yin XP Apples shape grading by fourier expansion and
genetic program algorithm. In Icnc 2008: Fourth international conference on natural com-
putation, vol4, proceedings, Guo, M.Z.; Zhao, L.; Wang, L.P., Eds. 2008; p.8590.
98. Weeks AR, Gallagher A, Eriksson J. Detection of oranges from a color image of an orange
tree. Proc SPIEInt Soc Opt Eng. 1999;3808:34657.
99. Pydipati R, Burks TF, Lee WS. Identification of citrus disease using color texture features
and discriminant analysis. Comput Electron Agric. 2006;52:4959.
100. Lee DJ, Archibald JK, Chang YC, Greco CR. Robust color space conversion and color
distribution analysis techniques for date maturity evaluation. J Food Eng. 2008;88:36472.
101. Lee D-J. In Color space conversion for linear color grading, Boston, USA, 2000; Soci-
ety of Photo-Optical Instrumentation Engineers, Bellingham, WA, USA: Boston, USA,
p.358366.
102. Abdullah MZ, Guan LC, Mohamed AMD, Noor MAM. Color vision system for ripeness
inspection of oil palm elaeis guineensis. J Food Process Preserv. 2002;26:21335.
103. Leemans V, Magein H, Destain MF. Defects segmentation on golden delicious apples by
using colour machine vision. Comput Electron Agric. 1998;20:11730.
104. Bulanon DM, Burks TF, Alchanatis V. In Study on fruit visibility for robotic harvesting,
Minneapolis, MN, United States, 2007; American Society of Agricultural and Biological
References 55

Engineers, St. Joseph, MI 49085 9659, United States: Minneapolis, MN, United States,
p.12.
105. Zou XB, Zhao JW. Apple quality assessment by fusion three sensors. IEEE Sensors. 2005;1
& 2,38992.
106. Zhu B, Jiang L, Tao Y. Three-dimensional shape enhanced transform for automatic apple
stem-end/calyx identification. Opt Eng. 2007;46.
107. Yoruk R, Yoruk S, Balaban MO, Marshall MR. Machine vision analysis of antibrown-
ing potency for oxalic acid: a comparative investigation on banana and apple. J Food Sci.
2004;69:E281E9.
108. Xiuqin R, Yibin Y, YiKe C, Haibo H. In Laser scatter feature of surface defect on apples,
Boston, MA, United States, 2006; International Society for Optical Engineering, Belling-
ham WA, WA 98227-0010, United States: Boston, MA, United States, p638113.
109. Xing J, Jancsok P, De Baerdemaeker J. Stem-end/calyx identification on apples using con-
tour analysis in multispectral images. Biosystems Eng. 2007;96:2317.
110. Wen Z, Tao Y. Dual-camera nir/mir imaging for stem-end/calyx identification in apple de-
fect sorting. Trans ASAE. 2000;43:44952.
111. Upchurch BL, Throop JA. Effects of storage duration on detecting watercore in apples us-
ing machine vision. Trans ASAE. 1994;37:4836.
112. Upchurch BL, Throop JA. In Considerations for implementing machine vision for detect-
ing watercore in apples, Boston, MA, USA, 1993; Publ by Int Soc for Optical Engineering,
Bellingham, WA, USA: Boston, MA, USA, p.291297.
113. Unay D, Gosselin B. Stem and calyx recognition on jonagold apples by pattern recogni-
tion. J Food Eng. 2007;78:597605.
114. Unay D, Gosselin B. Automatic defect segmentation of jonagold apples on multi-spectral
images: a comparative study. Postharvest Biol Technol. 2006;42:2719.
115. Unay D, Gosselin B. In Artificial neural network-based segmentation and apple grading
by machine vision, Genova, Italy, 2005; Institute of Electrical and Electronics Engineers
Computer Society, Piscataway, NJ 08855 1331, United States: Genova, Italy, p.630633.
116. Shahin MA, Tollner EW, McClendon RW, Arabnia HR. Apple classification based on sur-
face bruises using image processing and neural networks. Trans Asae. 2002;45:161927.
117. Safren O, Alchanatis V, Ostrovsky V, Levi O. Detection of green apples in hyperspectral
images of apple-tree foliage using machine vision. Trans ASABE. 2007;50:230313.
118. Rao XQ, Ying YB, Cen YK, Huang HB. Laser scatter feature of surface defect on apples
art. No. 638113. Opt Nat Resour Agric Foods. 2006;6381:381133.
119. Peirs A, Scheerlinck N, De Baerdemaeker J, Nicolai BM. Starch index determination of
apple fruit by means of a hyperspectral near infrared reflectance imaging system. J Infrared
Spectrosc. 2003;11:37989.
120. Narayanan P, Lefcourt AM, Tasen U, Rostamian R, Kim MS. In Tests of the ability to orient
apples using their inertial properties, Minneapolis, MN, United States, 2007; American So-
ciety of Agricultural and Biological Engineers, St. Joseph, MI 49085 9659, United States:
Minneapolis, MN, United States, p12.
121. Mehl PM, Chao K, Kim M, Chen YR. Detection of defects on selected apple cultivars using
hyperspectral and multispectral image analysis. Appl Eng Agric. 2002;18:21926.
122. Li QZ, Wang MH, Gu WK. Computer vision based system for apple surface defect detec-
tion. Comput Electron Agric. 2002;36:21523.
123. Lefcout AM, Kim MS, Chen Y-R, Kang S. Systematic approach for using hyperspectral
imaging data to develop multispectral imagining systems: Detection of feces on apples.
Comput Electron Agric. 2006;54:2235.
124. Lefcourt AM, Narayanan P, Tasch U, Rostamian R, Kim MS, Chen Y-R. Algorithms
for parameterization of dynamics of inertia-based apple orientation. Appl Eng Agric.
2008;24:1239.
125. Leemans V, Destain MF. A real-time grading method of apples based on features extracted
from defects. J Food Eng. 2004;61:839.
56 2 Machine Vision Online Measurements

126. Kleynen O, Leemans V, Destain MF. Development of a multi-spectral vision system for the
detection of defects on apples. J Food Eng. 2005;69:419.
127. Kavdir I, Guyer DE. Evaluation of different pattern recognition techniques for apple sort-
ing. Biosystems Eng. 2008;99:2119.
128. Kavdir I, Guyer DE. Bulanik mantik kullanarak elma siniflama apple grading using fuzzy
logic. Turk J Agric For. 2003;27,37582.
129. Kaewapichai W, Kaewtrakulpong P, Prateepasen A. A real-time automatic inspection sys-
tem for pattavia pineapples. Key Eng Mater. 2006;321323 II,118691.
130. Huang X-Y, Lin J-R, Zhao J-W. Detection on defects of apples based on support vector
machine. Jiangsu Daxue Xuebao (Ziran Kexue Ban)/J Jiangsu University (Natural Science
Edition). 2005;26:4657.
131. ElMasry G, Wang N, Vigneault C, Qiao J, ElSayed A. Early detection of apple bruises
on different background colors using hyperspectral imaging. Lwt-Food Sci Technol.
2008;41:33745.
132. Cheng X, Tao Y, Chen Y-R, Luo Y. Nir/mir dual-sensor machine vision system for online
apple stem-end/calyx recognition. Trans Am Soc Agric Eng. 2003;46:5518.
133. Bulanon DM, Kataoka T, Ota Y, Hiroma T. Segmentation algorithm for the automatic rec-
ognition of Fuji apples at harvest. Biosystems Eng. 2002;83:40512.
134. Bulanon DM, Kataoka T, Okamoto H, Hata S. In Development of a real-time machine vi-
sion system for the apple harvesting robot, Sapporo, Japan, 2004; Society of Instrument and
Control Engineers (SICE), Tokyo, 113, Japan: Sapporo, Japan, p.25312534.
135. Bennedsen BS, Peterson DL, Tabb A. Identifying defects in images of rotating apples.
Comput Electron Agric. 2005;48:92102.
136. Ariana D, Guyer DE, Shrestha B. Integrating multispectral reflectance and fluorescence
imaging for defect detection on apples. Comput Electron Agric. 2006;50:14861.
137. Guyer D, Yang X. Use of genetic artificial neural networks and spectral imaging for defect
detection on cherries. Comput Electron Agric. 2000;29:17994.
138. Kim G, Lee K, Choi K, Son J, Choi D, Kang S. In Defect and ripeness inspection of citrus
using nir transmission spectrum, Jeju Island, South Korea, 2004; Trans Tech Publications
Ltd, Zurich-Ueticon, CH-8707, Switzerland: Jeju Island, South Korea, p.10081013.
139. Bennedsen BS, Peterson DL. Performance of a system for apple surface defect identifica-
tion in near-infrared images. Biosystems Eng. 2005;90:41931.
140. Mathanker SK, Weckler PR, Bowser TJ, Wang N, Maness NO. Adaboost classifiers for
pecan defect classification. Comput Electron Agric. 2011;77:608.
141. Qingsheng Yang JAM. Accurate blemish detection with active contour models. Comput
Electron Agric. 1996;14:7789.
142. Yang Q. An approach to apple surface feature detection by machine vision. Comput Elec-
tron Agr. 1994;11:24964.
143. Leemans VD, Destain M-F. A real-time grading method of apples based on features ex-
tracted from defects. J Food Eng. 2004;61:839.
144. Blasco J, Aleixos N, Molto E. Computer vision detection of peel defects in citrus by means
of a region oriented segmentation algorithm. J Food Eng. 2007;81:53543.
145. Sonka M, Bosch JG, Lelieveldt BPF, Mitchell SC, Reiber JHC. Computer-aided diagnosis
via model-based shape analysis: cardiac MR and echo. Int Congr Ser. 2003;1256:10138.
146. Zhang Y, Yin X, Xu T, Zhao J. On-line sorting maturity of cherry tomato bymachine vision.
In: Li D, Zhao C, editors. Computer and computing technologies in agriculture ii. vol.3.
New York: Springer; 2009. p.22232229
http://www.springer.com/978-94-017-9675-0