
Trends in Food Science & Technology 29 (2013) 5–20

Review

Colour measurements by computer vision for food quality control – A review

Di Wu and Da-Wen Sun*

Food Refrigeration and Computerised Food Technology (FRCFT), School of Biosystems Engineering, University College Dublin, National University of Ireland, Agriculture & Food Science Centre, Belfield, Dublin 4, Ireland (Tel.: +353 1 7167342; fax: +353 1 7167493; e-mail: dawen.sun@ucd.ie; URLs: http://www.ucd.ie/refrig, http://www.ucd.ie/sun)

Colour is the first quality attribute of food evaluated by consumers, and is therefore an important component of food quality relevant to market acceptance. Rapid and objective measurement of food colour is required in quality control for the commercial grading of products. Computer vision is a promising technique currently being investigated for food colour measurement, especially for its ability to provide a detailed characterization of colour uniformity at the pixel level. This paper reviews the fundamentals and applications of computer vision for food colour measurement. An introduction to colour spaces and traditional colour measurements is also given. Finally, the advantages and disadvantages of computer vision for colour measurement are analyzed and its future trends are proposed.

Introduction

Colour is a mental perceptual response to the visible spectrum of light (the distribution of light power versus wavelength) reflected or emitted from an object. This response signal interacts with the retina in the eye and is then transmitted to the brain by the optic nerve, leading humans to assign colours to the signal. Therefore, colour is not an intrinsic property of the object: if the light source is changed, the colour of the object also changes (Melendez-Martinez, Vicario, & Heredia, 2005). The perception of colour is a very complex phenomenon that depends on the composition of the object in its illumination environment, the characteristics of the perceiving eye and brain, and the angles of illumination and viewing.

In foods, appearance is a primary criterion in making purchasing decisions (Kays, 1991). Appearance is utilized throughout the production–storage–marketing–utilization chain as the primary means of judging the quality of individual units of product (Kays, 1999). The appearance of units of product is evaluated by considering their size, shape, form, colour, freshness condition and, finally, the absence of visual defects (Costa et al., 2011). In particular, colour is an important sensorial attribute that provides basic quality information for human perception; it is closely associated with quality factors such as freshness, maturity, variety, desirability and food safety, and is therefore an important grading factor for most food products (McCaig, 2002). Colour is used as an indicator of quality in many applications (Blasco, Aleixos, & Molto, 2003; Cubero, Aleixos, Molto, Gomez-Sanchis, & Blasco, 2011; Quevedo, Aguilera, & Pedreschi, 2010; Rocha & Morais, 2003). Upon the first visual assessment of product quality, colour is critical (Kays, 1999). Consumers first judge a food from its colour and then from other attributes such as taste and aroma. The colour of food products affects their consumer acceptability, and therefore should be "right" when consumers are purchasing foods. Research on the objective assessment of food colours is an expanding field, and some studies show that colours are related to human responses (Iqbal, Valous, Mendoza, Sun, & Allen, 2010; Pallottino et al., 2010). With the increased quality requirements of consumers, the food industry has made considerable efforts to measure and control the colour of its products. It is therefore critical to develop effective colour inspection systems that measure the colour information of food products rapidly and objectively during processing operations and storage periods. For a modern food plant, as food throughput increases and quality tolerances tighten, the employment of automatic methods for colour measurement and control is quite necessary.

* Corresponding author.
0924-2244/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.tifs.2012.08.004

Colour spaces

The human eye distinguishes colours according to the varying sensitivity of different cone cells in the retina to light of different wavelengths. There are three types of colour photoreceptor cells (cones) in humans, with sensitivity peaks at short (bluish, 420–440 nm), middle (greenish, 530–540 nm), and long (reddish, 560–580 nm) wavelengths (Hunt, 1995). A colour sensation, no matter how complex, can thus be described by the eye using three colour components. These components, called tristimulus values, are yielded by the three types of cones according to the extent to which each is stimulated. A colour space is a mathematical representation for associating tristimulus values with each colour. Generally there are three types of colour spaces, namely hardware-orientated, human-orientated, and instrumental spaces. Some colour spaces are formulated to help humans select colours, and others are formulated to ease data processing in machines (Pascale, 2003). A 3D demonstration of some colour space images, generated by the free software RGBCube (http://www.couleur.org/index.php?page=rgbcube) except for the HSV space (Mathworks, 2012), is given in Fig. 1.

Hardware-orientated spaces
Hardware-orientated spaces are designed for hardware processing, such as image acquisition, storage, and display. They can sense even a very small amount of colour variation and are therefore popular for evaluating colour changes of food products during processing, such as the effects of temperature and time during storage on tomato colour (Lana, Tijskens, & van Kooten, 2005). As the most popular hardware-orientated space, RGB (red, green, blue) space is defined by coordinates on three axes, i.e., red, green, and blue. It is the way in which cameras sense natural scenes and display phosphors work (Russ, 1999). YIQ (luminance, in-phase, quadrature) and CMYK (cyan, magenta, yellow, black) are another two popular hardware-orientated spaces, which are mainly

Fig. 1. 3D demonstration of some colour space images. (a) RGB, (b) YIQ, (c) CMY, (d) HSV (Mathworks, 2012), (e) XYZ, (f) L*a*b*, (g) L*u*v*.
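The distinction between a hardware-orientated space such as RGB and the hue-based, human-orientated spaces shown in Fig. 1 can be made concrete with a small sketch using Python's standard colorsys module. This is an illustration only: values are normalized to the 0–1 range rather than the 0–255 range used by most cameras.

```python
import colorsys

# A hardware-orientated coordinate: pure red in RGB, normalized to [0, 1].
r, g, b = 1.0, 0.0, 0.0

# The same colour in a human-orientated space: HSV (hue, saturation, value).
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # hue 0.0 (red), fully saturated, full value

# The mapping is invertible: converting back recovers the RGB coordinates.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
print(r2, g2, b2)
```

The same tristimulus information is carried in both cases; only the coordinate system differs, which is why spaces can be chosen to suit either hardware processing or human interpretation.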

used for television transmission and in printing and copying output, respectively, and hence are not used for colour measurement in the food industry.

Human-orientated spaces
Human-orientated spaces correspond to the concepts of tint, shade, and tone, which are defined by artists based on intuitive colour characteristics. In general, human-orientated spaces are hue–saturation (HS) based spaces, such as HSI (hue, saturation, intensity), HSV (hue, saturation, value), HSL (hue, saturation, lightness), and HSB (hue, saturation, brightness). Hue is defined as the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colours red, yellow, green, and blue, or to a combination of two of them. Saturation is defined as the colourfulness of an area judged in proportion to its brightness. Brightness, in turn, is defined as the attribute of a visual sensation according to which an area appears to emit light, and lightness is defined as the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting (Fairchild, 2005). Unlike RGB space, which uses cuboidal coordinates to define colour, colour in HS-based spaces is defined using cylindrical coordinates (Fig. 1d). Because HS-based spaces are developed from the concept of visual perception in human eyes, their colour measurements are user-friendly and relate better to the visual significance of food surfaces. This has been demonstrated by a study in which HSV space performed better than RGB space in evaluating the acceptance of pizza toppings (Du & Sun, 2005). However, human-orientated spaces, like human vision, are not sensitive to small variations in colour, and are therefore not suitable for evaluating changes of product colour during processing.

Instrumental spaces
Instrumental spaces are used for colour instruments. Many instrumental spaces are standardized by the Commission Internationale d'Eclairage (CIE) under a series of standard conditions (illuminants, observers, and methodology spectra) (Rossel, Minasny, Roudier, & McBratney, 2006). Unlike hardware-orientated spaces, which have different coordinates for the same colour on various output media, colour coordinates from an instrumental space are the same on all output media. CIE XYZ colour space is an early mathematically defined colour space, created by the CIE in 1931 based on the physiological perception of light. In XYZ space, a set of three colour-matching functions, collectively called the Standard Observer, are related to the red, green and blue cones in the eye (The Science of Color, 1973). XYZ colour space was proposed to solve the problem that it is not possible to stimulate one type of cone only, and that no single component describes the perceived brightness (Hunt, 1998). In this space, Y represents the lightness, while X and Z are two virtual primary components that resemble the red and blue sensitivity curves of the cones. However, XYZ does not represent colour gradation in a uniform manner. For this reason, two colour spaces, CIE 1976 (L*a*b*), also called CIELAB, and CIE 1976 (L*u*v*), also called CIELUV, which are non-linear transformations of XYZ, were introduced and have been adopted in many colour measuring instruments. In the colour measurement of food, L*a*b* colour space is the most used one due to the uniform distribution of colours, and because it is perceptually uniform, i.e., the Euclidean distance between two different colours corresponds approximately to the colour difference perceived by the human eye (Leon, Mery, Pedreschi, & Leon, 2006).

Colour measurements
Colour is an important object measurement for image understanding and object description, which can be used for quality evaluation and inspection of food products. Colour measurements can be conducted by visual (human) inspection, traditional instruments such as colourimeters, or computer vision.

Visual measurements
Qualitative visual assessment is carried out for many operations in existing food colour inspection systems by trained inspectors in well-illuminated rooms, sometimes with the aid of colour atlases or dictionaries (Melendez-Martinez et al., 2005). As a result of visual measurement, a particular description of colour is obtained using a certain vocabulary (Melendez-Martinez et al., 2005). Although human inspection is quite robust even in the presence of changes in illumination, colour perception is subjective, variable, laborious, and tedious; it suffers from the poor colour memory of subjects, depends upon lighting and numerous other factors, and is not suitable for routine large-scale colour measurement (Hutchings, 1999; Leon et al., 2006; McCaig, 2002).

Traditional instrumental measurements
Traditional instruments, such as colourimeters and spectrophotometers, have been used extensively in the food industry for colour measurement (Balaban & Odabasi, 2006). Under a specified illumination environment, these instruments provide a quantitative measurement by simulating the manner in which the average human eye sees the colour of an object (McCaig, 2002).

Colourimeters, such as the Minolta chromameter, Hunter Lab colourimeter, and Dr. Lange colourimeters, are used to measure the colour of primary radiation sources that emit light and secondary radiation sources that reflect or transmit external light (Leon et al., 2006; Melendez-Martinez et al., 2005). Their tristimulus values are therefore obtained optically, not mathematically. The measurement is rapid and simple. The calibration of colourimeters is achieved using standard tiles at the beginning of the operation (Oliveira & Balaban, 2006).
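The CIELAB transformation referred to above is defined by the CIE in closed form. The following sketch implements the standard XYZ to L*a*b* conversion under the D65 reference white, together with the CIE 1976 colour difference (the Euclidean distance in L*a*b* space). It is a minimal illustration rather than production colour-science code; the white-point constants are the published D65 values for the 2° standard observer.

```python
import math

# CIE D65 reference white (2° standard observer), with Y normalized to 100.
XN, YN, ZN = 95.047, 100.0, 108.883

def _f(t):
    """Piecewise cube-root function from the CIE 1976 L*a*b* definition."""
    delta = 6.0 / 29.0
    if t > delta ** 3:
        return t ** (1.0 / 3.0)
    return t / (3.0 * delta ** 2) + 4.0 / 29.0

def xyz_to_lab(x, y, z):
    """Convert CIE XYZ tristimulus values to CIELAB (D65 white point)."""
    fx, fy, fz = _f(x / XN), _f(y / YN), _f(z / ZN)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b

def delta_e(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# The reference white itself maps to L* = 100, a* = b* = 0.
print(xyz_to_lab(XN, YN, ZN))
```

The non-linear cube-root compression is what makes equal Euclidean steps in L*a*b* correspond approximately to equal perceived colour differences, which XYZ alone does not provide.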

Spectrophotometers with an extended spectral range that includes the visible region (VNIR instruments) are also widely used for colour measurement throughout the food and agricultural industries (McCaig, 2002). Spectrophotometers output the spectral distribution of transmittance or reflectance of the sample. The X, Y, Z values are then calculated, depending on the illuminant, the measurement geometry and the observer (Hutchings, 1994).

Spectroradiometers are used for the measurement of radiometric quantities as a function of wavelength (Melendez-Martinez et al., 2005). Tristimulus values from both spectrophotometers and spectroradiometers are mathematically obtained in accordance with the CIE definitions. Spectroradiometers have the same components as a spectrophotometer; the difference is that spectroradiometers use an external light source. Nowadays, spectroradiometers are also widely used for the quality prediction of many food and agricultural products (Wu et al., 2009; Wu, He, & Feng, 2008; Wu, He, Nie, Cao, & Bao, 2010).

However, although simple colour measurements can be achieved, there are potential disadvantages in using traditional instrumental measurements (Balaban & Odabasi, 2006). One problem is that traditional instruments can only measure a surface that is uniform and rather small. The sampling location and the number of readings needed to obtain an accurate average colour are therefore important considerations (Oliveira & Balaban, 2006). When the surface of a sample has non-homogeneous colours, the measurement has to be repeated to cover the whole surface, and even so, it is still hard to obtain the distribution map of colour. In addition, such measurements are quite unrepresentative, making global analysis of the food's surface a difficult task. Another problem is the size and shape of the sample. If the sample is too small to fill the sample window (e.g., a grain of rice), or the measured area is not round (e.g., shrimp), traditional instrumental colour measurements may be inaccurate. Moreover, in order to obtain a detailed characterization of a food sample and thereby evaluate its quality more precisely, the colour value of each pixel within the sample surface needs to be acquired so that a colour distribution map can be generated (Leon et al., 2006). Such a requirement cannot be met by traditional instrumental measurements. This in turn has increased the need for developing automatic pixel-based colour measurement processes in the food industry to replace traditional human evaluation and instrumental measurements, for rapid and non-invasive measurement of colour distribution within food products.

Computer vision measurements
Computer vision is the science that develops the theoretical and algorithmic basis by which useful information about an object or scene can be automatically extracted and analyzed from an observed image, image set or image sequence (Gunasekaran, 1996; Sun, 2000; Sun & Brosnan, 2003; Zheng, Sun, & Zheng, 2006a, b; Du & Sun, 2006). As an inspection and evaluation technique that electronically perceives and evaluates an image, computer vision has the advantages of being rapid, consistent, objective, non-invasive, and economic. In computer vision, colour is elementary information stored in the pixels of a digital image. Computer vision extracts quantitative colour information from digital images by using image processing and analysis, achieving rapid and non-contact colour measurement. In recent years, computer vision has been investigated for objectively measuring the colour and other quality attributes of foods (Brosnan & Sun, 2004; Cubero et al., 2011; Du & Sun, 2004; Jackman, Sun, & Allen, 2011). A significant difference between computer vision and conventional colourimetry is the amount of spatial information provided. High spatial resolution enables computer vision to analyze each pixel of the entire surface, calculate the average and standard deviation of colour, isolate and specify appearance, measure non-uniform shapes and colours, select regions of interest flexibly, inspect more than one object at the same time, generate the distribution map of colour, and provide a permanent record by keeping the picture (Balaban & Odabasi, 2006; Leon et al., 2006).

A digital image is acquired by incident light in the visible spectrum falling on a partially reflective surface, with the scattered photons being gathered up by the camera lens, converted to electrical signals either by a vacuum tube or by a CCD (charge-coupled device), and saved to hard disk for further image display and image analysis. A digital monochrome image is a two-dimensional (2-D) light-intensity function I(x, y). The intensity I, generally known as the grey level, at spatial coordinates (x, y) is proportional to the radiant energy received by the sensor or detector in a small area around the point (x, y) (Gunasekaran, 1996). The interval of grey levels from low to high is called a grey scale, which in common practice is numerically represented by a value between 0 (pure black) and L (white) (Gunasekaran, 1996). Image acquisition and image analysis are two critical steps in the application of computer vision. Image acquisition requires scrupulous design of the image capturing system and careful operation to obtain digital images of high quality. Image analysis includes the numerous algorithms and methods available for classification and measurement (Krutz, Gibson, Cassens, & Zhang, 2000). Automatic colour measurement using computer vision has the advantages of superior speed, consistency, accuracy, and cost-effectiveness, and therefore can not only optimize quality inspection but also help reduce human inconsistency and subjectivity.

Computer vision system
The hardware configuration of a computer vision system generally consists of an illumination device, a solid-state CCD array camera, a frame-grabber, a personal computer, and a high-resolution colour monitor (Fig. 2).

Fig. 2. Schematic diagram of a typical computer vision system.
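As noted above, the pixel-level data delivered by such a system allows summary statistics, such as the average and standard deviation of each colour channel, to be computed over a whole surface or region of interest. The sketch below uses a tiny hard-coded RGB "image" in place of a real camera frame; the data and function name are illustrative only.

```python
import statistics

# Toy 2x3 RGB image (rows of (R, G, B) tuples, 0-255); a real system would
# acquire this frame from the camera or frame-grabber instead.
image = [
    [(200, 30, 40), (210, 35, 38), (205, 28, 45)],
    [(198, 32, 41), (215, 30, 39), (202, 33, 42)],
]

def channel_stats(img):
    """Mean and standard deviation of each colour channel over all pixels."""
    pixels = [px for row in img for px in row]
    stats = {}
    for i, name in enumerate(("R", "G", "B")):
        values = [px[i] for px in pixels]
        stats[name] = (statistics.mean(values), statistics.pstdev(values))
    return stats

for name, (mean, std) in channel_stats(image).items():
    print(f"{name}: mean={mean:.1f}, std={std:.1f}")
```

A low per-channel standard deviation indicates a uniformly coloured surface; restricting the pixel list to a segmented region would give the same statistics for a single object, something a colourimeter's single averaged reading cannot provide.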

Illumination
As an important prerequisite of image acquisition, illumination can greatly affect the quality of the captured image. Different illuminants may yield different stimuli with the same camera. A well-designed illumination system can improve accuracy, reduce the time and complexity of the subsequent image processing steps, lead to successful image analysis, and decrease the cost of an image processing system (Du & Sun, 2004; Gunasekaran, 1996). Fluorescent and incandescent bulbs are the two most widely used illuminants, although there are also other useful light sources, such as light-emitting diodes (LEDs) and electroluminescent sources. Because fluorescent light provides a more uniform dispersion of light from the emitting surface and is inherently more efficient at producing intense illumination at specific wavelengths, it is widely used by computer vision practitioners (Abdullah, 2008). Besides the type of illuminant, the position of the illuminant is also important. There are two commonly used geometries for the illuminators, namely the ring illuminator and the diffuse illuminator (Fig. 3). The ring illuminator has a simple geometry and is widely used for general purposes, especially for samples with flat surfaces. On the other hand, the diffuse illuminator is well suited to imaging food products of spherical shape, because it provides virtually 180° of diffuse illumination.

Fig. 3. Two possible lighting geometries: (a) the ring illuminator; (b) the diffuse illuminator.

Camera
The camera is used to convert photons to electrical signals. CCD and CMOS (complementary metal-oxide-semiconductor) are the two major types of camera, both of which are solid-state imaging devices. Because a lens is used for imaging, pixels in the central part of an image are much more sensitive than those in the peripheral part in both CCD and CMOS. A CCD camera consists of hundreds of thousands of photodiodes (known as pixels) that are made of light-sensitive materials and read out the light energy falling on them as electronic charges. The charges are proportional to the light intensity and are stored in a capacitor. The CCD operates in two modes, passive and active. The first mode transfers the charges to a bus line when the select signal is received. In the latter, charges are transferred to a bus line after being amplified to compensate for the limited fill factor of the photodiode. After shifting out of the detector, the electrical charges are digitalized to generate the images. Depending on the application, CCD cameras have different architectures. Interline and frame-transfer are two popularly used architectures associated with modern digital cameras, and both are competent for acquiring motion images. The interline CCD uses an additional horizontal shift register to collect and pass on the charge read out from a stack of vertical linear scanners, each comprising photodiodes and a corresponding vertical shift register. The downside of the interline CCD is that the opaque strips on the imaging area decrease the effective quantum efficiency. The frame-transfer design consists of integration and storage frames. The integration frame acquires an image and transfers the charge to the storage frame, so that the image can be read out slowly from the storage region while the next light signal can be integrated in the integration frame for

capturing a new image. The disadvantage of this architecture is its higher cost, due to the requirement of doubled cell area and more complex control electronics.

Although the CCD is the current dominant detector for image acquisition, it is anticipated that CCD technology will be superseded by CMOS technology in the consumer electronics market in the near future. The CMOS image sensor includes both a photodetector and a read-out amplifier in each pixel (called an active pixel), which is the major difference between CCD and CMOS (Litwiller, 2005). The CMOS sensor is therefore referred to as an 'Active Pixel Sensor', compared with the 'Passive Pixel Sensor' type contained in CCD arrays (Kazlauciunas, 2001). After using a photodiode to convert incident photons to electrons, the CMOS sensor immediately converts the integrated charge to a voltage signal inside each active pixel, using a set of optically insensitive transistors adjacent to the photodiode. The voltage signals are then read out over wires. A CMOS camera can transfer signals very fast because it has these wires built in, compared to the vertical and horizontal registers used by the CCD to shift the charges. Therefore, CMOS is especially suitable for the high-speed imaging required in online industrial inspection. Moreover, CMOS can access each particular pixel by an X-Y address, owing to the addressability of the wires arranged in rows and columns, so that a region of interest can be extracted from the image. Besides high speed and random addressing, CMOS has other advantages such as low cost, low power consumption, a single power supply, and small size for system integration, which make it prevail in the consumer electronics market (e.g., low-end camcorders and cell phones) (Qin, 2010). In addition, in CCD technology, signals from one pixel can be affected by another in the same row, which is termed 'blooming', and a poor pixel within a particular row can interfere with signals from other rows (Kazlauciunas, 2001). CMOS, however, is immune to blooming, because each pixel in a CMOS array is independent of the other pixels nearby. The main limit of current CMOS sensors is that they have higher noise and higher dark current than CCDs, giving rise to lower dynamic range and sensitivity (Qin, 2010).

Bayer sensors and three-CCD devices (3CCD) are the two main types of colour image sensor, differing in the way colour is separated. A Bayer sensor over the CCD is commonly used for capturing digital colour images. It uses a colour filter array that comprises many squares, each of which has four pixels with one red filter, one blue filter, and two green filters, because the human eye is more sensitive to the green portion of the visible spectrum and less sensitive to the red and blue. The missing colours can be interpolated using a demosaicing algorithm. The shortcoming of the Bayer sensor is that the colour resolution is lower than the luminance resolution, although luminance information is measured at every pixel. Better colour separation can be achieved by 3CCD, which has three discrete image sensors and a dichroic beam splitter prism that splits the light into red, green and blue components. Each sensor in a 3CCD responds to one of the three colours. 3CCD has higher quantum efficiency/light sensitivity, resulting in enhanced resolution and lower noise, because 3CCD captures most of the light entering the aperture, while only one-third of the light is detected by a Bayer mask.

Frame-grabber
Besides the illumination and camera, the frame grabber is another piece of hardware that should be considered for image acquisition. When only analogue cameras were available, frame grabbers provided the functions of digitization, synchronization, data formatting, local storage, and data transfer from the camera to the computer to generate a bitmap image. A typical frame-grabber card used for analogue cameras consists of signal-conditioning elements, an A/D converter, a look-up table, an image buffer and a PCI bus interface. Nowadays, digital cameras are generally used in higher-end applications. These cameras do not need a frame grabber for digitization, nor is a frame grabber necessary to transfer data from the camera to the host computer. Alternatively, cameras are available with CameraLink, USB, Ethernet and IEEE 1394 ("FireWire") interfaces that simplify connection to a PC. Nevertheless, frame grabbers are still alive and well, although they differ from what they used to be. Their role today has become much broader than just image capture and data transfer. Modern frame grabbers now include many special features, such as acquisition control (trigger inputs and strobe outputs), I/O control, tagging incoming images with unique time stamps, formatting data from multitap cameras into seamless image data, image correction and processing such as Bayer inversion filters, image authentication and filtering, and communications related to performance monitoring.

Colour space transformations
Three aspects determine colour, namely the type of emission source that irradiates an object, the physical properties of the object itself (which reflects the radiation consequently detected by the sensor), and the in-between medium (e.g., air or water) (Menesatti et al., 2012). In general, a computer vision system captures the colour of each pixel within the image of the object using three colour sensors (or one sensor with three alternating filters) per pixel (Forsyth & Ponce, 2003; Segnini, Dejmek, & Oste, 1999a). The RGB model is the most often used colour model, in which each sensor captures the intensity of the light in the red (R), green (G) or blue (B) spectrum, respectively (Leon et al., 2006). However, the RGB model is device-dependent and not identical to the intensities of the CIE system (Mendoza & Aguilera, 2004). Another problem of the RGB model is that it is not a perceptually uniform space: the differences between colours (i.e., Euclidean distances) in RGB space do not correspond to

colour differences as perceived by humans (Paschos, 2001). Standard RGB (sRGB) and L*a*b* are commonly applied in quantifying the standard colour of food (Menesatti et al., 2012). sRGB is a device-independent colour model whose tristimulus values (sR, sG, sB) reproduce the same colour on different devices and represent linear combinations of the CIE XYZ values. It is therefore used to define the mapping between the (non-linear) RGB signals from a computer vision system and a device-independent system such as CIE XYZ (Mendoza, Dejmek, & Aguilera, 2006). sRGB is calculated based on D65 illumination conditions, the RGB values measured by computer vision, and a power function with a gamma value of 2.4. The camera sensors (e.g., CCD or CMOS) generate output signals and the rendering is device-dependent, since display devices have different colour ranges. In order to overcome this problem, sRGB values are often transformed to other colour spaces such as L*a*b* (Menesatti et al., 2012). Moreover, even the result of such a transformation is device-dependent (Ford & Roberts, 1998). In many studies, a linear transform defining a mapping between the RGB signals from a computer vision camera and a device-independent system such as L*a*b* or L*u*v* was determined to ensure correct colour reproduction (Mendoza & Aguilera, 2004; Paschos, 2001; Segnini et al., 1999a). However, such a transform from RGB into L*a*b* units does not involve a calibration process, but only uses an absolute model with known parameters. Because RGB colour measurement depends on external factors (the sensitivity of the camera sensors, illumination, etc.), most cameras, even of the same type, do not exhibit consistent responses (Ilie & Welch, 2005), and the parameters of the absolute model vary from one case to another. Therefore, the conversion from RGB to L*a*b* cannot be done directly using a standard formula (Leon et al., 2006). For this reason, Leon et al. (2006) presented a methodology to transform device-dependent RGB colour units into device-independent L*a*b* colour units. Five models, namely direct, gamma, linear, quadratic and neural network, were used to carry out the transformation from RGB to L*a*b*, so that the values delivered by the model are as similar as possible to those delivered by a colourimeter over homogeneous surfaces. The best results, with small errors (close to 1%), were achieved with the quadratic and neural network models. However, although the methodology presented is general, i.e., it can be used in every computer vision system, it should be noted that the results obtained after the calibration of one system (e.g., system A) cannot be used for another system (e.g., system B); a new calibration procedure needs to be conducted
use (Van Poucke, Haeghen, Vissers, Meert, & Jorens, 2010). In general, a computer vision camera employs a single array of light-sensitive elements on a CCD chip, with a filter array that allows some elements to see red (R), some green (G) and some blue (B). 'White balance' is conducted to measure relative intensities manually or automatically (Mendoza et al., 2006). A digital colour image is then generated by combining three intensity images (R, G, and B) in the range 0–255. Being device-dependent, the RGB signals produced by different cameras differ for the same scene. These signals will also change over time, as they depend on the camera settings and scenes (Van Poucke et al., 2010). Therefore, measurements of colour and colour differences cannot be conducted on RGB images directly. On the other hand, different light sources present different emission spectra dominated by diverse wavelengths, which affect those reflected by the object under analysis (Costa et al., 2009). Therefore, in order to minimize the effects of illuminants and camera settings, colour calibration prior to photo/image interpretation is required in food processing, to quantitatively compare samples' colour during workflows involving many devices (Menesatti et al., 2012). sRGB is a device-independent colour space that is related to the CIE colourimetric colour spaces. Most of the variability introduced by the camera and illumination conditions could be eliminated by finding the relationship between the varying and unknown camera RGB and the sRGB colour space (Van Poucke et al., 2010). Different calibration algorithms defining the relationship between the input RGB colour space of the camera and the sRGB colour space have been published using various methods (Van Poucke et al., 2010). Several software packages are available to perform colour calibration using a colour profile assignable to the image that deals with different devices (e.g., ProfileMaker, Monaco Profiler, EZcolour, i1Extreme and many others), but they are often too imprecise for scientific purposes. Therefore, polynomial algorithms, multivariate statistics, neural networks, and their combinations have been proposed for colour calibration (Menesatti et al., 2012). Mendoza et al. (2006) transformed RGB into sRGB according to IEC 61966-2-1 (1999) for the colour measurement of agricultural foods. Costa et al. (2009) compared three calibration systems, namely partial least squares (PLS), second-order polynomial interpolation (POLY2), and ProfileMaker Pro 5.0 software (PROM), under eight different light conditions. Results show that PLS and POLY2 achieved better calibration than the conventional software (PROM). Van Poucke et al. (2010) used three 1D look-up tables and polynomial modelling to ensure repro-
a new computer vision system (Leon et al., 2006). ducible colour content of digital images. A ‘reference
chart’ called the MacBeth Colour Checker Chart Mini
Colour calibration methods [MBCCC] (GretagMacBeth AG, Regensdorf, Switzerland)
The quality of digital image is principally defined by its was used in the colour target-based calibration by trans-
reproducibility and accuracy (Prasad & Roy, 2003). forming the input RGB colour space into the sRGB colour
Without reproducibility and accuracy of images, any at- space. Gurbuz, Kawakita, and Ando (2010) proposed a col-
tempt to measure colour or geometric properties is of little our calibration method for multi-camera systems by
utilizing a set of robustly detected stereo correspondences between camera pairs, resulting in a 3 × 4 coefficient matrix multiplier that can be used for colour calibration. Costa et al. (2012) calibrated digital images of whole gilthead seabream using a PLS approach with a standard colour chart. Recently, Menesatti et al. (2012) applied the "3D Thin-Plate Spline" warping approach to calibrate colours in sRGB space. The performance of this method was compared with two other common approaches, namely a commercial calibration system (ProfileMaker) and partial least squares analysis, under two different cameras and four different light conditions. Compared to the commercial method (ProfileMaker) and the multivariate PLS approach, the Thin-Plate Spline approach significantly diminished both the distances from the reference and the inter-distances in the setup experiment, and was the most robust against lighting conditions and sensor typology (Menesatti et al., 2012).

Colour constancy and illumination estimation
Colour constancy is the phenomenon by which perceived object colour tends to stay constant under changes in illumination (Ling & Hurlbert, 2008). Colour constancy is not a property of objects; it is a perceptual phenomenon, the result of mechanisms in the eye and brain (Hurlbert, 2007). Colour constancy is important for object recognition, scene understanding, and image reproduction, as well as digital photography (Li, Xu, & Feng, 2009). There are three factors affecting the image recorded by a camera, namely the physical content of the scene, the illumination incident on the scene, and the characteristics of the camera (Barnard, Cardei, & Funt, 2002). An object can appear a different colour under changing illumination. The objective of computational colour constancy is to find a nontrivial illuminant-invariant description of a scene from an image taken under unknown lighting conditions, either by directly mapping the image to a standardized illuminant-invariant representation, or by determining a description of the illuminant which can be used for subsequent colour correction of the image (Barnard, Cardei et al., 2002). The procedure of computational colour constancy includes two steps: estimating illumination parameters, and using these parameters to get the objects' colour under a known canonical light source (Li et al., 2009). The first step, illumination estimation, is important in colour constancy computation (Li et al., 2009). So far, a number of leading colour constancy algorithms have been proposed that focus on illumination estimation (Li et al., 2009). These algorithms can be generally divided into two major groups: unsupervised and supervised approaches. The algorithms falling into the first category include Max-RGB, the grey world algorithm, Shades of Grey (SoG), and Grey Surface Identification (GSI). The other colour constancy category includes training-based solutions, such as Bayesian colour constancy, the neural network method, and support vector regression. Recently, Shi, Xiong, and Funt (2011) proposed a method called thin-plate spline interpolation to estimate the colour of the incident illumination. The resulting illumination estimate can be used to provide colour constancy under changing illumination conditions and automatic white balancing for digital cameras (Shi et al., 2011). A review of these algorithms and their comparison can be found elsewhere (Barnard, Cardei et al., 2002; Barnard, Martin, Coath, & Funt, 2002).

Applications
Nowadays, computer vision has attracted extraordinary interest as a key inspection method for non-destructive and rapid colour measurement of food and food products. If implemented in processing lines, computer vision systems will provide precise inspection and increase throughput in the production and packaging process. Table 1 summarizes applications of using computer vision for food colour evaluation.

Meat and fish
Beef
Freshness is an important factor for consumers when buying meat (Maguire, 1994). 'Red' and 'bright red' lean is associated by consumers with fresh beef, while a brownish colour is considered to be an indicator of stale or spoiled beef (Larrain, Schaefer, & Reed, 2008). Colourimeters have been intensively studied for determining colour differences of fresh meat using various CIE colour expressions, such as lightness (L*), redness (a*), yellowness (b*), hue angle, and chroma (Larrain et al., 2008). However, these works have the limitation of scanning only a small surface area. Computer vision is considered a promising method for predicting the colour of meat (Mancini & Hunt, 2005; Tan, 2004). Back in the 1980s, computer vision was used to detect colour changes during cooking of beef ribeye steaks (Unklesbay, Unklesbay, & Keller, 1986). The mean and standard deviation of the red, green and blue colours were found to be sufficient to differentiate between 8 of 10 classes of steak doneness. Later, Gerrard, Gao, and Tan (1996) determined muscle colour of beef ribeye steaks using computer vision. Means of red and green (μR and μG) were significant (coefficient of determination (R2) = 0.86) for the prediction of colour scores which were determined using the USDA lean colour guide. In order to improve the results, Tan, Gao, and Gerrard (1999) used fuzzy logic and artificial neural network techniques to analyze the colour scores and a 100% classification rate was achieved. In another work, Larrain et al. (2008) applied computer vision to estimate CIE colour coordinates of beef as compared to a colourimeter. In their work, CIE L*, a*, and b* were measured using a colourimeter (Minolta Chromameter CR-300, Osaka, Japan) with a 1 cm aperture, illuminant C and a 2° viewing angle. RGB values obtained from computer vision were transformed to the CIE L*a*b* colour space using the following
Table 1. Summary of computer vision applications for food colour evaluation.

Category | Application | Accuracy | References
Beef | Detection of colour changes during cooking | | Unklesbay et al., 1986
Beef | Prediction of colour scores | R2 = 0.86 | Gerrard et al., 1996
Beef | Prediction of sensory colour responses | 100% | Tan et al., 1999
Beef | Estimation of CIE colour coordinates as compared to a colourimeter | R2 = 0.58 for L*; R2 = 0.96 for a*; R2 = 0.56 for b*; R2 = 0.94 for hue angle; R2 = 0.93 for chroma | Larrain et al., 2008
Beef | Prediction of official colour scores | 86.8% using MLR; 94.7% using SVM | Sun et al., 2011
Pork | Evaluation of fresh pork loin colour | R = 0.52 using MLR; R = 0.75 using NN | Lu et al., 2000
Pork | Prediction of colour scores | 86% | Tan et al., 2000
Pork | Prediction of the sensory visual quality | | O'Sullivan et al., 2003
Fish | Detection of colour change | | Oliveira & Balaban, 2006
Fish | Colour measurement as compared to a colourimeter | | Yagiz et al., 2009
Fish | Prediction of colour score assigned by a sensory panel | R = 0.95 | Quevedo et al., 2010
Fish | Prediction of colour score as compared to the Roche cards and a colourimeter | Similar to Roche SalmoFan lineal ruler | Misimi et al., 2007
Orange juice | Colour evaluation | R = 0.96 for hue; R = 0.069 for chroma; R = 0.92 for lightness | Fernandez-Vazquez et al., 2011
Wine | Measurement of colour appearance | R2 = 0.84 (lightness), R2 = 0.89 (colourfulness) and R2 = 0.98 (hue) compared to visual estimates; R2 = 0.99 (lightness), R2 = 0.90 (colourfulness) and R2 = 0.99 (hue) compared to spectroradiometer | Martin et al., 2007
Beer | Determination of colour as compared to colourimetry | | Sun et al., 2004
Potato chip | Colour measurement as compared to two colourimeters | | Scanlon et al., 1994
Potato chip | Colour measurement as compared with sensory assessors | | Segnini et al., 1999a
Potato chip | Colour measurement as compared with sensory assessors | R > 0.79 between L* and most of the sensory colour attributes | Segnini et al., 1999b
Potato chip | Colour measurement as compared with sensory assessors | R = 0.9711 (linear model) and R = 0.9790 (quadratic model) for smooth potato chips; R = 0.7869 (linear model) and R = 0.8245 (quadratic model) for undulated potato chips | Pedreschi et al., 2011
Potato chip | Development of a computer vision system to measure the colour of potato chips | | Pedreschi et al., 2006
Wheat | Measurement of the colour of the seed coat as compared to the spectrophotometer | High linear correlations (p < 0.05) | Zapotoczny & Majewska, 2010
Banana | Measurement of the colour as compared to a colourimeter | R2 = 0.80 for L*; R2 = 0.97 for a*; R2 = 0.61 for b* | Mendoza & Aguilera, 2004

MLR: multiple linear regression. SVM: support vector machine. NN: neural network. R: correlation coefficient. R2: coefficient of determination.
steps. RGB was first converted to XYZ_D65 using the matrix transform (Pascale, 2003):

    [X_D65]   [0.4125  0.3576  0.1804]   [R]
    [Y_D65] = [0.2127  0.7152  0.0722] × [G]        (1)
    [Z_D65]   [0.0193  0.1192  0.9503]   [B]

The obtained XYZ_D65 values were then converted to XYZ_C using the Bradford matrix transform (Pascale, 2003):

    [X_C]   [1.0095  0.0070  0.0128]   [X_D65]
    [Y_C] = [0.0123  0.9847  0.0033] × [Y_D65]      (2)
    [Z_C]   [0.0038  0.0072  1.0892]   [Z_D65]

Finally, XYZ_C was converted into CIE L*a*b* (illuminant C) using the following equations (Konica Minolta, 1998):

    L* = 116 × (Y/Yn)^(1/3) − 16
    a* = 500 × [(X/Xn)^(1/3) − (Y/Yn)^(1/3)]        (3)
    b* = 200 × [(Y/Yn)^(1/3) − (Z/Zn)^(1/3)]

where Xn, Yn, and Zn are the values of X, Y, and Z for the illuminant used, in this case 0.973, 1.000, and 1.161, respectively. Also, (X/Xn)^(1/3) was replaced by [7.787 × (X/Xn) + 16/116] if X/Xn was below 0.008856; (Y/Yn)^(1/3) was replaced by [7.787 × (Y/Yn) + 16/116] if Y/Yn was below 0.008856; and (Z/Zn)^(1/3) was replaced by [7.787 × (Z/Zn) + 16/116] if Z/Zn was below 0.008856 (Konica Minolta, 1998). Once L*a*b* had been obtained, hue angle and chroma were calculated from the a* and b* values. Regressions of the colourimeter on computer vision for a*, hue angle and chroma had R2 values of 0.96, 0.94, and 0.93, while R2 was only 0.58 and 0.56 for L* and b*. Recently, Sun, Chen, Berg, and Magolski (2011) analyzed 21 colour features obtained from images of fresh lean beef for predicting official beef colour scores. Multiple linear regression (MLR) correctly predicted 86.8% of beef muscle colour scores, and a better performance of 94.7% was achieved using a support vector machine (SVM), showing that computer vision can provide an effective tool for predicting colour scores of beef muscle.

Pork
In addition to beef colour, fresh pork colour has also been evaluated using computer vision (Tan, 2004). Early work was carried out by Lu, Tan, Shatadal, and Gerrard (2000), who applied computer vision to evaluate fresh pork loin colour. Colour image features analyzed in this study included the mean (μR, μG and μB) and standard deviation (σR, σG and σB) of the red, green, and blue bands of the segmented muscle area. Both MLR and neural network (NN) models were established to determine the colour scores using the image features as inputs. The correlation coefficient between the predicted and the sensory colour scores was 0.52 for the MLR model, with 84.1% of the 44 pork loin samples having prediction errors lower than 0.6, which was considered negligible from a practical viewpoint. For the NN model, 93.2% of the samples had prediction errors of 0.6 or lower, with a correlation coefficient R of 0.75. Results showed that a computer vision system is an efficient tool for measuring the sensory colour of fresh pork. Later, Tan, Morgan, Ludas, Forrest, and Gerrard (2000) used computer vision to predict colour scores of fresh loin chops, which were visually assessed by an untrained panel in three separate studies. After training with pork images classified by the panel, the computer vision system was capable of classifying pork loin chops with up to 86% agreement with visually assessed colour scores. In another study, the effectiveness of computer vision and a colourimeter was compared in predicting the sensory visual quality of pork meat patties (M. longissimus dorsi) as determined by a trained and an untrained sensory panel (O'Sullivan et al., 2003). Compared to the colourimeter, computer vision had a higher correlation with the sensory terms determined by both trained and untrained sensory panelists. This was due to the fact that the entire surface of the sample was measured by computer vision, and therefore computer vision took a more representative measurement than the colourimeter.

Fish
Consumers commonly purchase fish based on visual appearance (colour). Gormley (1992) found that consumers associate the colour of fish products with the freshness of a product having better flavour and higher quality. Colour charts, such as the SalmonFan™ card (Hoffmann-La Roche, Basel, Switzerland), are generally used for colour assessment in the fish industry (Quevedo et al., 2010). However, such measurement is laborious, tedious, subjective, and time-consuming. Quevedo et al. (2010) developed a computer vision method to assign colour scores to salmon fillets according to the SalmonFan™ card. The computer vision system was calibrated in order to obtain L*a*b* from RGB using 30 colour charts and 20 SalmonFan cards. Calibration errors for L*, a*, and b* were 2.7%, 1%, and 1.7%, respectively, with a general error range of 1.83%. On the basis of the calibrated transformation matrix, a high correlation coefficient of 0.95 was obtained between the SalmonFan scores assigned by computer vision and by the sensory panel. These good results showed the potential of using computer vision to qualify salmon fillets based on colour. In another study, Misimi, Mathiassen, and Erikson (2007) compared the results of computer vision with the values determined manually using the Roche SalmonFan™ lineal ruler and Roche colour card. The results demonstrated that the computer vision method gave as good an evaluation of colour as the Roche SalmoFan lineal ruler. This study also found that the colour values generated by the chromameter had large deviations in mean value from those generated by computer vision. This was due to the brighter illumination used by the computer vision setup and the different algorithms used to convert RGB into L*a*b* for the two methods
(Misimi et al., 2007). The performance of a computer vision system and a colourimeter was also compared in measuring the colour of uncooked fillets from Gulf of Mexico sturgeons fed three different diets, during storage on ice for 15 days (Oliveira & Balaban, 2006). In order to do the comparison, ΔE values were calculated from the L*a*b* values measured using both the computer vision system and the colourimeter. The ΔE value was used to measure the "total" colour change, which was calculated by the following function:

    ΔE = [(Lo − Li)^2 + (ao − ai)^2 + (bo − bi)^2]^(1/2)        (4)

where the subscript o refers to the values at time 0, and i refers to the values at 5, 10, or 15 days. ΔE values determined using computer vision showed colour change over storage time, which was in accordance with the mild colour changes visually observed in the images of the centre slices of the sturgeon fillets. However, it was hard to find such colour change using the colourimeter. Moreover, there was a significant difference in ΔE values (p < 0.05) between the instruments, except for day 0. The difference could be due to the different average daylight illuminants used, namely D65 with a colour temperature of 6504 K for the colourimeter and D50 with a colour temperature of 5000 K for the machine vision system. Similarly, Yagiz, Balaban, Kristinsson, Welt, and Marshall (2009) compared a Minolta colourimeter and a machine vision system in measuring the colour of irradiated Atlantic salmon. Significantly higher readings were obtained by the computer vision system for L*, a*, b* values than by the Minolta colourimeter. Visual comparison was then conducted to illustrate the actual colours and evaluate the measurements of the two instruments. The colour represented by the computer vision system was much closer to the average real colour of Atlantic salmon fillets, while that measured using the colourimeter was purplish based on the average L*, a*, b* values (Fig. 4). The differences between colours measured by computer vision and colourimeter in this study (Yagiz et al., 2009) were similar to those of the study carried out by Oliveira and Balaban (2006). However, unlike Oliveira and Balaban (2006), who used different illuminants, Yagiz et al. (2009) used the same illuminant, i.e., D65 with a colour temperature of 6504 K, for both instruments. In addition, the standard red plates they used for the calibration of the two instruments had similar L*, a*, b* values. Hence, the authors (Yagiz et al., 2009) recommended caution in reporting colour values measured by any system, even when the 'reference' tiles were measured correctly. There are various factors that can affect the colour readings, such as surface roughness and texture, the amount of surface 'shine', and the geometry of the measuring instrument. It is recommended to visually compare the colour formed by the L*, a*, b* values read from any device with the observed colour of the sample.

Liquid food products
Orange juice
Some studies have revealed that the colour of orange juice is related to the consumer's perception of flavour, sweetness and other quality characteristics (Fernandez-Vazquez, Stinco, Melendez-Martinez, Heredia, & Vicario, 2011). Colour is found to influence sweetness in orange drinks and affects the intensity of typical flavour in most fruit drinks (Bayarri, Calvo, Costell, & Duran, 2001). Instead of subjective visual evaluation, traditional instruments such as the colourimeter have been used for the objective colour evaluation of orange juice (Melendez-Martinez et al., 2005). New advances in computer vision offer the possibility of evaluating colour in terms of millions of pixels at relatively low cost. Fernandez-Vazquez et al. (2011) explored the relationship between computer vision and sensory evaluation of the colour attributes (lightness, chroma and hue) in orange juices. Hue (R = 0.96) and lightness (R = 0.92) were well correlated between panelists' colour evaluation and the image values, but chroma was not (R = 0.069). The poor measurement of chroma was probably due to the fact that it is not an intuitive attribute.
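The conversion and comparison described by Eqs. (1)-(4) can be sketched in a few lines of code. The matrices and the illuminant-C white point are those printed with the equations; the function names and the [0, 1] scaling of the RGB input are our own illustrative choices, not part of the cited studies:

```python
import numpy as np

# Matrices and illuminant-C white point as printed in Eqs. (1)-(3)
M_RGB_TO_XYZ_D65 = np.array([[0.4125, 0.3576, 0.1804],
                             [0.2127, 0.7152, 0.0722],
                             [0.0193, 0.1192, 0.9503]])
M_BRADFORD_D65_TO_C = np.array([[1.0095, 0.0070, 0.0128],
                                [0.0123, 0.9847, 0.0033],
                                [0.0038, 0.0072, 1.0892]])
WHITE_C = np.array([0.973, 1.000, 1.161])  # Xn, Yn, Zn

def f(t):
    # Cube root above the CIE threshold, linear branch below it (Eq. (3) side condition)
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def rgb_to_lab(rgb):
    """rgb: channel values scaled to [0, 1]; returns (L*, a*, b*) under illuminant C."""
    xyz_d65 = M_RGB_TO_XYZ_D65 @ np.asarray(rgb, dtype=float)   # Eq. (1)
    xyz_c = M_BRADFORD_D65_TO_C @ xyz_d65                        # Eq. (2)
    fx, fy, fz = f(xyz_c / WHITE_C)                              # Eq. (3)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def delta_e(lab0, lab1):
    """'Total' colour difference between two L*a*b* readings, Eq. (4)."""
    return float(np.linalg.norm(np.asarray(lab0) - np.asarray(lab1)))

def hue_chroma(a, b):
    """Hue angle (degrees) and chroma derived from a* and b*."""
    return np.degrees(np.arctan2(b, a)) % 360.0, np.hypot(a, b)
```

For example, `rgb_to_lab([1.0, 1.0, 1.0])` yields a near-white reading with L* close to 100, and `delta_e((50, 0, 0), (50, 3, 4))` returns 5.0.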

Fig. 4. Colour representations of Minolta and machine vision reading results and actual pictures of differently treated salmon fillets and standard red plate (Yagiz et al., 2009).

Alcoholic beverage
Colour, which is one of the main parameters of wine quality, affects the determination of aroma, odour, variety, and the overall acceptability by consumers (Martin, Ji, Luo, Hutchings, & Heredia, 2007). Martin et al. (2007) measured the colour appearance of red wines using a calibrated computer vision camera for various wines with reference to the change of depth. The results from computer vision had good correlations with visual estimates for lightness (R2 = 0.84), colourfulness (R2 = 0.89), and hue (R2 = 0.98), and with a Minolta CS-1000 tele-spectroradiometer (R2 = 0.99 for lightness, R2 = 0.90 for colourfulness, and R2 = 0.99 for hue). In another study, Sun, Chang, Zhou, and Yu (2004) investigated computer vision for determining beer colour as compared
to the European Brewery Convention (EBC) colourimetry. A high positive correlation was found between colours measured by computer vision and those determined by spectrophotometry and colourimetry, demonstrating the feasibility of determining beer colour using computer vision. The computer vision measurement was highly repeatable, with a standard deviation of zero for measuring the colour of beer.

Other applications
The colour of potato chips is an important attribute in the definition of quality for the potato processing industry and is strictly related to consumer perception (Pedreschi, Leon, Mery, & Moyano, 2006; Segnini, Dejmek, & Oste, 1999b; Pedreschi, Bunger, Skurtys, Allen, & Rojas, 2012). In an early study, Scanlon, Roller, Mazza, and Pritchard (1994) used computer vision to characterise the colour of chips. On the basis of mean grey level values from specific regions of potato chips, it was feasible to distinguish differences in chip colour from potatoes stored at the two temperatures and to discriminate different frying times for potato chips that had been stored at 5 °C. Good relationships were obtained between colour assessed by mean grey level and colour measured by the Agtron M31A colour meter and the Hunter Lab D25L-2 colourimeter. Later, Segnini et al. (1999a) developed a new, easy and inexpensive procedure to quantify the colour of potato chips using computer vision. There was a clear relationship between the obtained L*, a*, or b* and the scale assessed by human eyes. The method was less influenced by the undulating surface of the chips and was not sensitive to light intensity. In another study, Segnini et al. (1999b) investigated the potential of using computer vision for measuring the colour of commercial potato chips as compared to sensory analysis. There was a good relationship (R > 0.79) between L* and most of the sensory colour attributes, which include "yellow colour", "burnt aspect" and "sugar coloured aspect". The a* attribute also showed a good relationship with "burnt aspect", while the b* attribute did not significantly correlate with any of the sensory parameters (p > 0.05). Recently, Pedreschi, Mery, Bunger, and Yanez (2011) established the relationships between colour measured by sensory assessors and colour determined objectively in L*, a*, b* units by a computer vision system. Good relationships were found for smooth potato chips using both linear (R = 0.9711) and quadratic (R = 0.9790) models, while undulated chips only had R values of 0.7869 and 0.8245 using linear and quadratic methods, respectively.

Zapotoczny and Majewska (2010) measured the colour of the seed coat of wheat kernels using computer vision. The colour of the seed coat was saved in RGB space after image acquisition, and was then transformed into XYZ and L*a*b*, which enabled the computation of the hue and saturation of colour. After image analysis, high linear correlations (p < 0.05) were found between colour measurements of the seed coat performed by computer vision and by spectrophotometer. The results of this study showed that the colour of the seed coat of wheat kernels can be determined by computer vision instead of spectrophotometry.

Mendoza and Aguilera (2004) implemented computer vision to measure the colour of bananas at different ripening stages. There was a good correlation (R2 = 0.97) between a* values obtained with the computer vision system and the Hunter Lab colourimeter, while smaller correlation coefficients were obtained for L* (R2 = 0.80) and b* (R2 = 0.61) values. This difference between the two methods was mainly due to the fact that measurements with the colourimeter did not extend over the whole surface of the bananas, which had nonhomogeneous colours during ripening, in particular at the ends of the bananas. On the other hand, the computer vision system makes it possible to assess the overall colour change during ripening, similar to human perception. Recently, Hashim et al. (in press) used computer vision to detect colour changes in bananas during the appearance of chilling injury symptoms. The raw RGB values obtained were transformed to normalized rgb and CIE L*a*b* space to remove the brightness from the colour and to distinguish colour in a way similar to human perception. Results show that the r and g values in normalized rgb colour space have a strong correlation with visual assessment.

Quantification of colour nonhomogeneity
Colour nonhomogeneity is an important appearance attribute and its quantitative measurement is required for most food products, which have nonuniform colours. However, colourimeters fail for nonuniform colours because only the "average" colour of food products can be measured by colourimeters. For this reason, Balaban (2008) applied computer vision to quantify uniform or nonuniform colours of food products. Several image analysis methods were applied, which included colour blocks, contours, and the "colour change index" (CCI). The calculation of colour blocks included three steps: firstly, the number of colours in the RGB colour space was reduced by dividing each colour axis into either 4 (4 × 4 × 4 = 64 colour blocks), 8 (8 × 8 × 8 = 512 colour blocks) or 16 (16 × 16 × 16 = 4096 colour blocks); secondly, the number of pixels that fall within a colour block was counted, and the percentage of that colour was calculated based on the total view area (total number of pixels) of the object; and finally, an appropriate threshold was set to consider only those colour blocks that have percent areas above that threshold. On the basis of the set threshold, the higher the number of colour blocks, the more nonhomogeneous the colour is.

The calculation of colour contours included two steps: firstly, colour attributes lower than, or higher than, a given threshold, or attributes between two thresholds, were identified; secondly, the percentage of pixels within contours based on the total view area of an object was
calculated. The colours of defective areas, such as dark spots, could be quantified based on the calculation of contours.

The calculation of CCI was based on colour primitives, which are continuous areas of an image where the "intensity" of any pixel is within a given threshold value. The more colour primitives in an image, the more nonhomogeneous the colour of that object is. The calculation function of CCI was proposed as follows:

    CCI = {[Σ(ΔI for all neighbouring pixels) / Σ(distances between equivalent circles)] / number of neighbours} / object area × 100        (5)

The results of the study by Balaban (2008) showed that the colour blocks method was competent for cases where the range of hue values is large, such as mangoes, while the CCI method did well if the hue range is narrow, as in the case of rabbit samples.

Furthermore, because it is not easy to quantify nonhomogeneous colours by sensory panels, most studies have been conducted on the comparison and correlation of homogeneous colour measurements between computer vision and instrumental or visual colour analysis. For this reason, Balaban, Aparicio, Zotarelli, and Sims (2008) proposed a method to quantify the perception of nonhomogeneous colours by sensory panelists and compared the differences in colour evaluation between a computer vision system and sensory panels for the perception of nonhomogeneous colours. Generally, the more nonuniform the colour of a sample, the higher the error of a panelist in quantifying the colour of that sample, which showed that panelists had more difficulty in evaluating more nonhomogeneous colours. Moreover, no significant difference in ΔE values was found between panelists' errors based on evaluating the real fruit and evaluating its image (Balaban et al., 2008). Therefore, images can be used to evaluate colour instead of the real samples, which may be significant, since visual evaluation of images eliminates temporal and geographical restrictions, especially for the evaluation of perishable foods. In addition, images can be transferred electronically to distant places and stored much longer than the food, which allows much more flexibility in the analysis of visual attributes of food products.

Development of computerized colour measurement system
Nowadays, computer vision has been used on production lines and in quality control laboratories, and several works have been carried out to develop computerized colour measurement systems. Kılıç, Onal-Ulusoy, Yildirim, and Boyaci (2007) designed a computerized inspection system to measure food colour in the L*a*b* colour space. The USA Federal Colour Standard printouts (SP), comprising 456 different colours, were used to train and test the artificial neural network (ANN) integrated in the system. High correlations were obtained between the results estimated from the computer vision system and those obtained from a spectrophotometer for the test image data set: R2 values were 0.991, 0.989, and 0.995 for L*, a*, and b*, respectively. When various food samples were used to evaluate the performance of the system, a good agreement was also found between colour measured using the system and the spectrophotometer (R2 values were 0.958, 0.938, and 0.962 for L*, a*, and b*, respectively). The mean errors of 0.60% and 2.34%, obtained respectively for the test set and the various food samples, showed the feasibility of using computer vision for the measurement of food colour instead of a spectrophotometer. In another work, Pedreschi et al. (2006) designed and implemented an inexpensive computer vision system to measure representatively and precisely the colour of potato chips in L*a*b* units from RGB images. The system had the functions of image acquisition, image storage, image pre-processing, object segmentation, feature extraction, and colour transformation from RGB to L*a*b* units. The system allowed the measurement of colour over the entire surface of a potato chip or over a small specific surface region of interest in an easy, precise, representative, objective and inexpensive way. There are also some other commercial systems available for food colour measurement, such as the QualiVision system (Dipix Technologies, Ottawa, Ontario, Canada), Lumetech Optiscan system (Koch Lumetech, Kansas City, Mo., USA), Model L-10 Vision Weigher (Marel, Reykjavik, Iceland), Parasensor system (Precarn, Ottawa, Canada), Prophecy 550 system (Imaging Technology, Bedford, Mass.), and SINTEF system (SINTEF, Oslo, Norway) (Balaban & Odabasi, 2006).

Advantages and disadvantages of using computer vision
Many reviews have summarized the advantages and disadvantages of computer vision (Brosnan & Sun, 2004; Du & Sun, 2004; Gumus, Balaban, & Unlusayin, 2011). Especially for food colour measurement, the main advantages of applying computer vision include:

- The rapidness, preciseness, objectiveness, efficiency, consistency, and non-destructiveness of the measurement of colour data, with low cost and no sample pretreatment;
- The ability to provide high spatial resolution, analyze each pixel of the surface of a food product, extract more colour features with spatial information, analyze the whole food even if it is of small or irregular shape and of nonuniform colour, select a region of interest, and generate the distribution map of colour;
tem that uses a flat-bed scanner, a computer, and an algo-  The automation of mass labour intensive operations and
rithm and graphical user interface coded and designed in reduction of tedious and subjective human visual in-
Matlab 7.0 to determine food colour based on CIE volvement; and
• The availability of rapid generation of reproducible results and permanent storage of colour data for further analysis by keeping the picture.

Although computer vision has the aforementioned advantages, it does have some disadvantages (Brosnan & Sun, 2004; Gumus et al., 2011):

• The difficulties encountered with objects that are difficult to separate from the background, with overlapping objects, or when both sides of a food need to be evaluated;
• The requirement of careful calibration and setting of the camera and of well-defined and consistent illumination (such as a light box, where the light intensity, spectrum and direction are all controlled); and
• The possible variation of the intensity and spectrum of light bulbs over time (Balaban & Odabasi, 2006).

Conclusions and future trends

This review covers fundamentals and typical applications of computer vision in food colour measurement. As a science-based automated food inspection technique, computer vision has proved to be efficient and reliable for colour measurement, with capabilities not possible with other methods, especially the ability to analyze food samples with nonhomogeneous colours, shapes, and surfaces. Colour measurement using computer vision is repeatable and flexible, permits plant application with high throughput and accuracy at a relatively low cost, and allows human visual inspectors to focus on more demanding and skilled jobs instead of undertaking tedious, laborious, time-consuming, and repetitive inspection tasks. Moreover, besides colour measurement, computer vision allows evaluation of other quality attributes, such as shape, size, orientation, defects, and nutrition. Based on the combination of these attributes, computer vision offers the possibility of designing inspection systems for the automatic grading and quality determination of food products. On the basis of computer vision, it is feasible to reduce industrial dependence on human graders, increase production throughput, decrease production cost, improve product consistency and wholesomeness, and enhance public confidence in the safety and quality of food products.

On the other hand, despite the above great research efforts on colour measurement of food products using computer vision, many challenges remain in designing a computer vision system with sufficient flexibility and adaptability to handle the biological variations in food products. Further in-depth research is required on system robustness, real-time capability, sample handling, and standardization, which also creates many future research opportunities. Some difficulties arise from the segmentation algorithms, as segmentation is a prerequisite to the success of all subsequent operations leading to successful computer-vision-based colour measurement without human intervention. Due to the complex nature of food images, no existing algorithm is totally effective for food-image segmentation. The development of efficient and robust calibration is also required to reduce the influence of changes in camera, illumination, and environment. Besides image processing algorithms, development in the hardware and software of computer vision systems is also critical to measuring the colour of food products rapidly and accurately. Faster, lighter, smaller, and less expensive hardware can decrease image acquisition and analysis time, improve the speed and capacity of storage, and increase the image resolution for detailed colour measurement.

Acknowledgements

The authors would like to acknowledge the financial support provided by the Irish Research Council for Science, Engineering and Technology under the Government of Ireland Postdoctoral Fellowship scheme.

References

Abdullah, M. Z. (2008). Image acquisition systems. In D.-W. Sun (Ed.), Computer vision technology for food quality evaluation. San Diego, California, USA: Academic Press/Elsevier.
Balaban, M. O. (2008). Quantifying nonhomogeneous colors in agricultural materials. Part I: method development. Journal of Food Science, 73, S431–S437.
Balaban, M. O., Aparicio, J., Zotarelli, M., & Sims, C. (2008). Quantifying nonhomogeneous colors in agricultural materials. Part II: comparison of machine vision and sensory panel evaluations. Journal of Food Science, 73, S438–S442.
Balaban, M. O., & Odabasi, A. Z. (2006). Measuring color with machine vision. Food Technology, 60, 32–36.
Barnard, K., Cardei, V., & Funt, B. (2002). A comparison of computational color constancy algorithms – part I: methodology and experiments with synthesized data. IEEE Transactions on Image Processing, 11, 972–984.
Barnard, K., Martin, L., Coath, A., & Funt, B. (2002). A comparison of computational color constancy algorithms – part II: experiments with image data. IEEE Transactions on Image Processing, 11, 985–996.
Bayarri, S., Calvo, C., Costell, E., & Duran, L. (2001). Influence of color on perception of sweetness and fruit flavor of fruit drinks. Food Science and Technology International, 7, 399–404.
Blasco, J., Aleixos, N., & Molto, E. (2003). Machine vision system for automatic quality grading of fruit. Biosystems Engineering, 85, 415–423.
Brosnan, T., & Sun, D.-W. (2004). Improving quality inspection of food products by computer vision – a review. Journal of Food Engineering, 61, 3–16.
Costa, C., Antonucci, F., Menesatti, P., Pallottino, F., Boglione, C., & Cataudella, S. (2012). An advanced colour calibration method for fish freshness assessment: a comparison between standard and passive refrigeration modalities. Food and Bioprocess Technology.
Costa, C., Antonucci, F., Pallottino, F., Aguzzi, J., Sun, D.-W., & Menesatti, P. (2011). Shape analysis of agricultural products: a review of recent research advances and potential application to computer vision. Food and Bioprocess Technology, 4, 673–692.
Costa, C., Pallottino, F., Angelini, C., Proietti, P., Capoccioni, F., Aguzzi, J., et al. (2009). Colour calibration for quantitative biological analysis: a novel automated multivariate approach. Instrumentation Viewpoint, 8, 70–71.
Cubero, S., Aleixos, N., Molto, E., Gomez-Sanchis, J., & Blasco, J. (2011). Advances in machine vision applications for automatic inspection and quality evaluation of fruits and vegetables. Food and Bioprocess Technology, 4, 487–504.
Du, C. J., & Sun, D.-W. (2004). Recent developments in the applications of image processing techniques for food quality evaluation. Trends in Food Science & Technology, 15, 230–249.
Du, C. J., & Sun, D.-W. (2005). Comparison of three methods for classification of pizza topping using different colour space transformations. Journal of Food Engineering, 68, 277–287.
Du, C. J., & Sun, D.-W. (2006). Learning techniques used in computer vision for food quality evaluation: a review. Journal of Food Engineering, 72(1), 39–55.
Fairchild, M. D. (2005). Color appearance models (2nd ed.). England: John Wiley & Sons Ltd.
Fernandez-Vazquez, R., Stinco, C. M., Melendez-Martinez, A. J., Heredia, F. J., & Vicario, I. M. (2011). Visual and instrumental evaluation of orange juice color: a consumers' preference study. Journal of Sensory Studies, 26, 436–444.
Ford, A., & Roberts, A. (1998). Colour space conversions. London, UK: Westminster University.
Forsyth, D., & Ponce, J. (2003). Computer vision: A modern approach. New Jersey: Prentice Hall.
Gerrard, D. E., Gao, X., & Tan, J. (1996). Beef marbling and color score determination by image processing. Journal of Food Science, 61, 145–148.
Gormley, T. R. (1992). A note on consumer preference of smoked salmon color. Irish Journal of Agricultural and Food Research, 31, 199–202.
Gumus, B., Balaban, M. O., & Unlusayin, M. (2011). Machine vision applications to aquatic foods: a review. Turkish Journal of Fisheries and Aquatic Sciences, 11, 167–176.
Gunasekaran, S. (1996). Computer vision technology for food quality assurance. Trends in Food Science & Technology, 7, 245–256.
Gurbuz, S., Kawakita, M., & Ando, H. (2010). Color calibration for multi-camera imaging systems. In Proceedings of the 4th International Universal Communication Symposium (IUCS 2010). Beijing, China.
Hashim, N., Janius, R., Baranyai, L., Rahman, R., Osman, A., & Zude, M. Kinetic model for colour changes in bananas during the appearance of chilling injury symptoms. Food and Bioprocess Technology, in press.
Hunt, R. W. G. (1995). The reproduction of colour (5th ed.). England: Fountain Press.
Hunt, R. W. G. (1998). Measuring colour. England: Fountain Press.
Hurlbert, A. (2007). Colour constancy. Current Biology, 17, R906–R907.
Hutchings, J. B. (1994). Food colour and appearance. Glasgow, UK: Blackie Academic & Professional.
Hutchings, J. B. (1999). Food color and appearance. Gaithersburg, Md: Aspen Publishers.
Ilie, A., & Welch, G. (2005). Ensuring color consistency across multiple cameras. In Proceedings of the tenth IEEE international conference on computer vision (ICCV-05).
Iqbal, A., Valous, N. A., Mendoza, F., Sun, D.-W., & Allen, P. (2010). Classification of pre-sliced pork and turkey ham qualities based on image colour and textural features and their relationships with consumer responses. Meat Science, 84, 455–465.
Jackman, P., Sun, D.-W., & Allen, P. (2011). Recent advances in the use of computer vision technology in the quality assessment of fresh meats. Trends in Food Science & Technology, 22, 185–197.
Kays, S. J. (1991). Postharvest physiology of perishable plant products. New York: Van Nostrand Reinholt.
Kays, S. J. (1999). Preharvest factors affecting appearance. Postharvest Biology and Technology, 15, 233–247.
Kazlauciunas, A. (2001). Digital imaging – theory and application part I: theory. Surface Coatings International Part B-Coatings Transactions, 84, 1–9.
Kılıç, K., Onal-Ulusoy, B., Yildirim, M., & Boyaci, I. H. (2007). Scanner-based color measurement in L*a*b* format with artificial neural networks (ANN). European Food Research and Technology, 226, 121–126.
Konica Minolta (1998). Precise color communication: Color control from perception to instrumentation. Osaka: Konica Minolta Sensing, Inc.
Krutz, G. W., Gibson, H. G., Cassens, D. L., & Zhang, M. (2000). Colour vision in forest and wood engineering. Landwards, 55, 2–9.
Lana, M. M., Tijskens, L. M. M., & van Kooten, O. (2005). Effects of storage temperature and fruit ripening on firmness of fresh cut tomatoes. Postharvest Biology and Technology, 35, 87–95.
Larrain, R. E., Schaefer, D. M., & Reed, J. D. (2008). Use of digital images to estimate CIE color coordinates of beef. Food Research International, 41, 380–385.
Leon, K., Mery, D., Pedreschi, F., & Leon, J. (2006). Color measurement in L*a*b* units from RGB digital images. Food Research International, 39, 1084–1091.
Ling, Y. Z., & Hurlbert, A. (2008). Role of color memory in successive color constancy. Journal of the Optical Society of America A-Optics Image Science and Vision, 25, 1215–1226.
Litwiller, D. (2005). CMOS vs. CCD: maturing technologies, maturing markets. Photonics Spectra, 39, 54–58.
Li, B., Xu, D., & Feng, S. H. (2009). Illumination estimation based on color invariant. Chinese Journal of Electronics, 18, 431–434.
Lu, J., Tan, J., Shatadal, P., & Gerrard, D. E. (2000). Evaluation of pork color by using computer vision. Meat Science, 56, 57–60.
Maguire, K. (1994). Perceptions of meat and food: some implications for health promotion strategies. British Food Journal, 96, 11–17.
Mancini, R. A., & Hunt, M. C. (2005). Current research in meat color. Meat Science, 71, 100–121.
Martin, M. L. G. M., Ji, W., Luo, R., Hutchings, J., & Heredia, F. J. (2007). Measuring colour appearance of red wines. Food Quality and Preference, 18, 862–871.
Mathworks (2012). Matlab user's guide. Natick, MA: The MathWorks, Inc.
McCaig, T. N. (2002). Extending the use of visible/near-infrared reflectance spectrophotometers to measure colour of food and agricultural products. Food Research International, 35, 731–736.
Melendez-Martinez, A. J., Vicario, I. M., & Heredia, F. J. (2005). Instrumental measurement of orange juice colour: a review. Journal of the Science of Food and Agriculture, 85, 894–901.
Mendoza, F., & Aguilera, J. M. (2004). Application of image analysis for classification of ripening bananas. Journal of Food Science, 69, E471–E477.
Mendoza, F., Dejmek, P., & Aguilera, J. M. (2006). Calibrated color measurements of agricultural foods using image analysis. Postharvest Biology and Technology, 41, 285–295.
Menesatti, P., Angelini, C., Pallottino, F., Antonucci, F., Aguzzi, J., & Costa, C. (2012). RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach. Sensors, 12, 7063–7079.
Misimi, E., Mathiassen, J. R., & Erikson, U. (2007). Computer vision-based sorting of Atlantic salmon (Salmo salar) fillets according to their color level. Journal of Food Science, 72, S30–S35.
O'Sullivan, M. G., Byrne, D. V., Martens, H., Gidskehaug, L. H., Andersen, H. J., & Martens, M. (2003). Evaluation of pork colour: prediction of visual sensory quality of meat from instrumental and computer vision methods of colour analysis. Meat Science, 65, 909–918.
Oliveira, A. C. M., & Balaban, M. O. (2006). Comparison of a colorimeter with a machine vision system in measuring color of Gulf of Mexico sturgeon fillets. Applied Engineering in Agriculture, 22, 583–587.
Pallottino, F., Menesatti, P., Costa, C., Paglia, G., De Salvador, F. R., & Lolletti, D. (2010). Image analysis techniques for automated
hazelnut peeling determination. Food and Bioprocess Technology, 3, 155–159.
Pascale, D. (2003). A review of RGB color spaces. Montreal: The BabelColor Company.
Paschos, G. (2001). Perceptually uniform color spaces for color texture analysis: an empirical evaluation. IEEE Transactions on Image Processing, 10, 932–937.
Pedreschi, F., Bunger, A., Skurtys, O., Allen, P., & Rojas, X. (2012). Grading of potato chips according to their sensory quality determined by color. Food and Bioprocess Technology, 5, 2401–2408.
Pedreschi, F., Leon, J., Mery, D., & Moyano, P. (2006). Development of a computer vision system to measure the color of potato chips. Food Research International, 39, 1092–1098.
Pedreschi, F., Mery, D., Bunger, A., & Yanez, V. (2011). Computer vision classification of potato chips by color. Journal of Food Process Engineering, 34, 1714–1728.
Prasad, S., & Roy, B. (2003). Digital photography in medicine. Journal of Postgraduate Medicine, 49, 332–336.
Qin, J. W. (2010). Hyperspectral imaging instruments. In D.-W. Sun (Ed.), Hyperspectral imaging for food quality analysis and control (1st ed.) (pp. 159–172). San Diego, California, USA: Academic Press/Elsevier.
Quevedo, R. A., Aguilera, J. M., & Pedreschi, F. (2010). Color of salmon fillets by computer vision and sensory panel. Food and Bioprocess Technology, 3, 637–643.
Rocha, A. M. C. N., & Morais, A. M. M. B. (2003). Shelf life of minimally processed apple (cv. Jonagored) determined by colour changes. Food Control, 14, 13–20.
Rossel, R. A. V., Minasny, B., Roudier, P., & McBratney, A. B. (2006). Colour space models for soil science. Geoderma, 133, 320–337.
Russ, J. C. (1999). Image processing handbook. Boca Raton: CRC Press.
Scanlon, M. G., Roller, R., Mazza, G., & Pritchard, M. K. (1994). Computerized video image-analysis to quantify color of potato chips. American Potato Journal, 71, 717–733.
Segnini, S., Dejmek, P., & Oste, R. (1999a). A low cost video technique for colour measurement of potato chips. Food Science and Technology – Lebensmittel-Wissenschaft & Technologie, 32, 216–222.
Segnini, S., Dejmek, P., & Oste, R. (1999b). Relationship between instrumental and sensory analysis of texture and color of potato chips. Journal of Texture Studies, 30, 677–690.
Shi, L. L., Xiong, W. H., & Funt, B. (2011). Illumination estimation via thin-plate spline interpolation. Journal of the Optical Society of America A-Optics Image Science and Vision, 28, 940–948.
Sun, D.-W. (2000). Inspecting pizza topping percentage and distribution by a computer vision method. Journal of Food Engineering, 44(4), 245–249.
Sun, D.-W., & Brosnan, T. (2003). Pizza quality evaluation using computer vision – part 1 – Pizza base and sauce spread. Journal of Food Engineering, 57(1), 81–89.
Sun, F. X., Chang, Y. W., Zhou, Z. M., & Yu, Y. F. (2004). Determination of beer color using image analysis. Journal of the American Society of Brewing Chemists, 62, 163–167.
Sun, X., Chen, K., Berg, E. P., & Magolski, J. D. (2011). Predicting fresh beef color grade using machine vision imaging and support vector machine (SVM) analysis. Journal of Animal and Veterinary Advances, 10, 1504–1511.
Tan, J. L. (2004). Meat quality evaluation by computer vision. Journal of Food Engineering, 61, 27–35.
Tan, J., Gao, X., & Gerrard, D. E. (1999). Application of fuzzy sets and neural networks in sensory analysis. Journal of Sensory Studies, 14, 119–138.
Tan, F. J., Morgan, M. T., Ludas, L. I., Forrest, J. C., & Gerrard, D. E. (2000). Assessment of fresh pork color with color machine vision. Journal of Animal Science, 78, 3078–3085.
The science of color. (1973). Washington: Committee on Colorimetry, Optical Society of America.
Unklesbay, K., Unklesbay, N., & Keller, J. (1986). Determination of internal color of beef ribeye steaks using digital image-analysis. Food Microstructure, 5, 227–231.
Van Poucke, S., Haeghen, Y. V., Vissers, K., Meert, T., & Jorens, P. (2010). Automatic colorimetric calibration of human wounds. BMC Medical Imaging, 10, 7.
Wu, D., Chen, X. J., Shi, P. Y., Wang, S. H., Feng, F. Q., & He, Y. (2009). Determination of alpha-linolenic acid and linoleic acid in edible oils using near-infrared spectroscopy improved by wavelet transform and uninformative variable elimination. Analytica Chimica Acta, 634, 166–171.
Wu, D., He, Y., & Feng, S. (2008). Short-wave near-infrared spectroscopy analysis of major compounds in milk powder and wavelength assignment. Analytica Chimica Acta, 610, 232–242.
Wu, D., He, Y., Nie, P. C., Cao, F., & Bao, Y. D. (2010). Hybrid variable selection in visible and near-infrared spectral analysis for non-invasive quality determination of grape juice. Analytica Chimica Acta, 659, 229–237.
Yagiz, Y., Balaban, M. O., Kristinsson, H. G., Welt, B. A., & Marshall, M. R. (2009). Comparison of Minolta colorimeter and machine vision system in measuring colour of irradiated Atlantic salmon. Journal of the Science of Food and Agriculture, 89, 728–730.
Zapotoczny, P., & Majewska, K. (2010). A comparative analysis of colour measurements of the seed coat and endosperm of wheat kernels performed by various techniques. International Journal of Food Properties, 13, 75–89.
Zheng, C. X., Sun, D.-W., & Zheng, L. Y. (2006a). Recent applications of image texture for evaluation of food qualities – a review. Trends in Food Science & Technology, 17(3), 113–128.
Zheng, C. X., Sun, D.-W., & Zheng, L. Y. (2006b). Recent developments and applications of image features for food quality evaluation and inspection – a review. Trends in Food Science & Technology, 17(12), 642–655.
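Appendix. The core operation of the colour measurement systems reviewed above, such as those of Leon et al. (2006) and Pedreschi et al. (2006), is the transformation of device RGB values into CIE L*a*b* units and the comparison of colours via ΔE. The following is an illustrative sketch only, assuming the standard sRGB encoding and the D65 illuminant; it is not the code of any cited study, which would additionally apply camera-specific calibration and controlled illumination as discussed in the review.

```python
# Illustrative sketch: standard sRGB -> CIE XYZ -> CIELAB path (D65 white
# point), plus the CIE76 colour difference (delta E). Assumes sRGB input;
# real imaging systems require camera calibration before this conversion.

def srgb_to_linear(c):
    """Undo the sRGB gamma encoding for one channel given in 0-255."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_xyz(r, g, b):
    """Linear RGB to CIE XYZ using the sRGB/D65 primaries."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def xyz_to_lab(x, y, z, white=(0.95047, 1.0, 1.08883)):
    """CIE XYZ to CIELAB relative to the D65 white point."""
    def f(t):
        d = 6.0 / 29.0
        # Cube root above the CIE threshold, linear segment below it
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0
    fx, fy, fz = (f(v / n) for v, n in zip((x, y, z), white))
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def rgb_to_lab(r, g, b):
    """Convenience wrapper: 8-bit sRGB pixel to (L*, a*, b*)."""
    return xyz_to_lab(*rgb_to_xyz(r, g, b))

def delta_e(lab1, lab2):
    """CIE76 colour difference between two L*a*b* triples."""
    return sum((u - v) ** 2 for u, v in zip(lab1, lab2)) ** 0.5
```

Applied per pixel over a segmented object region, this conversion yields the kind of whole-surface L*a*b* distribution described for the potato chip system; averaging the per-pixel values gives a single instrumental-style colour reading for the sample.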