Rods and cones have quite different properties: rods have the ability to see at night, under conditions of very low illumination; cones have the ability to deal with bright signals. But both photoreceptors are sensitive to light variations and play a crucial role as light adaptation filters. To exploit and mimic this property, an adaptive nonlinear function can be applied to the input signal. In [12], the nonlinear function takes inspiration from the Naka-Rushton equation:

Y = \frac{X}{X + X_0}    (1)

where X represents the input light intensity, X_0 is the adaptation factor, and Y is the adapted signal. Figure 1 illustrates the Naka-Rushton function for different values of X_0. If X_0 is small, the output has increased sensitivity. If X_0 is large, there is not much change in sensitivity. In [12], X_0 varies for each pixel: it is the average light intensity in the neighborhood of the current pixel. Applying the Naka-Rushton function to an image enhances the local dynamic range in dark regions while bright regions remain almost unchanged.

Fig. 1. Naka-Rushton function with different adaptation factors X_0.

2.2. OPL filter

Photoreceptors perform not only as a light adaptation filter but also as a low pass filter. Horizontal cells perform the second low pass filter. In the OPL, bipolar cells calculate the difference between photoreceptor and horizontal cell responses. Bipolar cells thus act as a band pass filter: they remove high frequency noise and low frequency illumination.

To model the processes of the OPL, two Gaussian low pass filters with different standard deviations, corresponding to the effects of photoreceptors and horizontal cells, are used [12]. Finally, bipolar cells act like a Difference of Gaussians (DoG) filter. Note that a DoG filter enhances the image edges [12, 13].

As mentioned above, a model with a nonlinear operation and a DoG filter can be used for removing illumination variation. In our model, two consecutive nonlinear operations are used for a more efficient light adaptation filter, and a truncation is used to enhance the global image contrast.

3.1. Two adaptive nonlinear operations

As verified in [14], duplex nonlinear operations act as an efficient light adaptation filter. Thus, we apply two consecutive adaptive nonlinear functions in this step.

The adaptation factor (X_0 in equation (1)) of the first nonlinear function is computed for each pixel by applying a low pass filter to the input image [14]:

F_1(p) = I_{in}(p) * G_1 + \frac{\overline{I_{in}}}{2}    (2)

where p is the current pixel; F_1(p) is the adaptation factor at pixel p; I_{in} is the intensity of the input image; * denotes the convolution operation; \overline{I_{in}} is the mean value of the input; and G_1 is a 2D Gaussian low pass filter with standard deviation \sigma_1:

G_1(x, y) = \frac{1}{2\pi\sigma_1^2} e^{-\frac{x^2 + y^2}{2\sigma_1^2}}    (3)

The input image is then processed according to the Naka-Rushton equation (1) using the adaptation factor F_1:

I_{la1}(p) = \frac{I_{in}(p)}{I_{in}(p) + F_1(p)} \left( I_{in}(max) + F_1(p) \right)    (4)

The term I_{in}(max) + F_1(p) is a normalization factor, where I_{in}(max) is the maximal value of the image intensity.

The second nonlinear function works similarly; the light adapted image I_{la2} is obtained by:

I_{la2}(p) = \frac{I_{la1}(p)}{I_{la1}(p) + F_2(p)} \left( I_{la1}(max) + F_2(p) \right)    (5)

with

F_2(p) = I_{la1}(p) * G_2 + \frac{\overline{I_{la1}}}{2},  \quad  G_2(x, y) = \frac{1}{2\pi\sigma_2^2} e^{-\frac{x^2 + y^2}{2\sigma_2^2}}    (6)

An advantage of this light adaptation filter is that the image I_{la2} does not change with different low pass filter sizes [14]. By default in this paper, \sigma_1 and \sigma_2 are set to 1 and 3 respectively.

3.2. DoG filter

The image I_{la2} is then transmitted to bipolar cells and processed by using a Difference of Gaussians (DoG) filter:

I_{bip} = DoG * I_{la2}    (7)
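As a concrete illustration, the two light adaptation stages (equations (2)–(6)) and the DoG step (equation (7)) can be sketched in Python with NumPy and SciPy. This is a minimal sketch, not the authors' implementation: the function names and the photoreceptor/horizontal-cell scales `sigma_ph` and `sigma_h` are illustrative assumptions; the paper only fixes σ1 = 1 and σ2 = 3.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def light_adapt(img, sigma):
    """One adaptive Naka-Rushton stage.

    Adaptation factor (eq. (2)/(6)): F(p) = (img * G_sigma)(p) + mean(img)/2.
    Adapted output (eq. (4)/(5)):    I(p) / (I(p) + F(p)) * (I.max() + F(p)).
    """
    F = gaussian_filter(img, sigma) + img.mean() / 2.0
    return img / (img + F) * (img.max() + F)

def retina_filter(img, sigma1=1.0, sigma2=3.0, sigma_ph=0.5, sigma_h=2.0):
    """Two consecutive light adaptation stages followed by a DoG filter.

    sigma1/sigma2 are the paper's defaults; sigma_ph/sigma_h (the two DoG
    scales standing in for photoreceptor and horizontal cell low pass
    filters) are assumed values for illustration only.
    """
    ila1 = light_adapt(img.astype(float), sigma1)   # first nonlinear stage
    ila2 = light_adapt(ila1, sigma2)                # second nonlinear stage
    # DoG as the difference of two Gaussian-blurred versions (eq. (7))
    return gaussian_filter(ila2, sigma_ph) - gaussian_filter(ila2, sigma_h)
```

Note that, as in equation (4), each stage normalizes by the current maximum intensity, so the output of `light_adapt` stays in roughly the input's range before the DoG step.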
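The Gaussian convolutions dominate the cost of this pipeline. As noted later in the paper, replacing each 2D Gaussian convolution by two independent 1D passes reduces the complexity to O(mnw) for an m × n image and a 1D kernel of width w = 6σ. A small sketch of this equivalence (the kernel construction and test sizes are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

def gaussian_kernel_1d(sigma):
    # Kernel width w = 6*sigma as in the paper, forced odd so it has a center
    w = int(6 * sigma) | 1
    x = np.arange(w) - w // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur_2d(img, sigma):
    # Direct 2D convolution with the outer-product kernel: O(m*n*w^2)
    k = gaussian_kernel_1d(sigma)
    return convolve(img, np.outer(k, k))

def blur_separable(img, sigma):
    # Two independent 1D passes (rows, then columns): O(m*n*w)
    k = gaussian_kernel_1d(sigma)
    tmp = convolve1d(img, k, axis=0)
    return convolve1d(tmp, k, axis=1)
```

Both functions use SciPy's default reflect boundary handling, so their outputs agree to numerical precision while the separable version does far fewer multiplications per pixel.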
paper mainly deals with the illumination problem, we only choose 64 frontal images captured under 64 different lighting conditions for each of 10 subjects. Images in the database are divided into 5 subsets according to the angle of the light source. The 5 subsets are subset 1 (0° to 12°), subset 2 (13° to 25°), subset 3 (26° to 50°), subset 4 (51° to 77°), and subset 5 (above 78°). The effect of the proposed illumination

Fig.: (a) I_{in} (b) I_{la2} (c) I_{Ph} (d) I_{H} (e) I_{bip} (f) I_{pp}
that authors of these methods did not test with subset 5. In comparison with other methods, our algorithm is more robust to illumination variations and is of lower complexity. Only one image per subject is required for the training set and the techniques used are simple. The most computationally expensive calculation in our algorithm is the convolution with a Gaussian kernel. Suppose that the size of a normalized image is m × n. By replacing one 2D Gaussian calculation by two independent 1D Gaussian ones, the computational complexity of the algorithm is O(mnw), where w is the size of the 1D Gaussian kernel and w = 6σ in this paper. As the method is of low complexity, it can be applied as a preprocessing technique in real-time applications such as video surveillance.

4.2. Results on the FERET illumination database

In the FERET database, one of the most famous databases for the evaluation of face recognition algorithms, all frontal face images are divided into five categories: fa, fb, fc, dup1 and dup2 (see [15] for details). The fa and fc images were taken under different illumination conditions. As we are only concerned with the illumination problem, 1196 fa images are used as the gallery and 194 fc images are used as the test set.

Recently, the Local Binary Pattern (LBP) algorithm has been reported to be a good representation for face images that is invariant to monotone changes in lighting conditions. The LBP based algorithm has also been successfully applied to face recognition without preprocessing [17, 19]. However, as shown in Table 2, the LBP based algorithm's recognition rate is significantly enhanced when the preprocessing method is used. The 98% recognition rate obtained is higher than that of existing variations of the LBP method: weighted LBP [17] and Local Gabor Binary Pattern Histogram Sequence (LGBPHS) [19]. Note that to achieve the rate of 97%, the LGBPHS method has to compute 40 images convolved with Gabor filters, which is of higher complexity than using our preprocessing method together with the LBP based recognition method.

Table 2. Recognition rates (%) for different methods on the FERET fc probe set

Methods                          fc
Best result of [15]              82
LBP [17]                         51
Weighted LBP [17]                79
LGBPHS [19]                      97
LBP + proposed preprocessing     98

5. CONCLUSION

In this paper, we present a novel method for removing illumination variation. The efficiency of the method is estimated with respect to the performance of different face recognition algorithms. The proposed model takes inspiration from the properties of the retina which enable eyes to see objects in different illumination conditions. The model has two principal components: two nonlinear functions and a Difference of Gaussians filter. In the experiments, very high recognition rates are achieved with different face recognition algorithms associated with the proposed preprocessing technique.

6. REFERENCES

[1] Y. Gao and M. K. H. Leung, "Face recognition using line edge map," IEEE Trans. PAMI, vol. 24, pp. 764–779, 2002.
[2] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Trans. on Image Processing, vol. 11, pp. 467–476, 2002.
[3] Y. Adini, Y. Moses, and S. Ullman, "Face recognition: The problem of compensating for changes in illumination directions," IEEE Trans. PAMI, vol. 19, pp. 721–732, 1997.
[4] A. U. Batur and M. H. Hayes III, "Linear subspaces for illumination robust face recognition," CVPR, vol. 2, pp. 296–301, 2001.
[5] A. S. Georghiades and P. N. Belhumeur, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Trans. PAMI, vol. 23, pp. 643–660, 2001.
[6] J. C. Lee, J. Ho, and D. Kriegman, "Nine points of light: acquiring subspaces for face recognition under variable lighting," CVPR, vol. 1, pp. 519–526, 2001.
[7] L. Zhang and D. Samaras, "Face recognition under variable lighting using harmonic image exemplars," CVPR, vol. 1, pp. 19–25, 2003.
[8] H. F. Chen, P. N. Belhumeur, and D. W. Jacobs, "In search of illumination invariants," CVPR, vol. 1, pp. 254–261, 2000.
[9] D. J. Jobson, Z. Rahman, and G. A. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. on Image Processing, vol. 6, pp. 965–976, 1997.
[10] H. Wang, S. Li, and Y. Wang, "Generalized quotient image," CVPR, vol. 2, pp. 498–505, 2004.
[11] A. Benoit and A. Caplier, "Head nods analysis: Interpretation of non verbal communication gestures," ICIP, vol. 3, pp. 425–428, 2005.
[12] A. Benoit, The human visual system as a complete solution for image processing, Ph.D. thesis, INPG, Grenoble, France, 2007.
[13] W. Davidson and M. Abramowitz, "Molecular expressions microscopy primer: Digital image processing - difference of gaussians edge enhancement algorithm," Olympus America Inc., and Florida State University.
[14] L. Meylan, D. Alleysson, and S. Susstrunk, "Model of retinal local adaptation for the tone mapping of color filter array images," J. Opt. Soc. Amer., vol. 24, pp. 2807–2816, 2007.
[15] J. Phillips, H. Moon, S. A. Rizvi, et al., "The FERET evaluation methodology for face-recognition algorithms," IEEE Trans. PAMI, vol. 22, pp. 1090–1104, 2000.
[16] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, pp. 71–86, 1991.
[17] T. Ahonen, A. Hadid, and M. Pietikainen, "Face recognition with local binary patterns," ECCV, pp. 469–481, 2004.
[18] X. Xie and K. M. Lam, "An efficient illumination normalization method for face recognition," Pattern Recognition Letters, vol. 27, pp. 609–617, 2006.
[19] W. Zhang, S. Shan, W. Gao, X. Chen, and H. Zhang, "Local Gabor binary pattern histogram sequence (LGBPHS): A novel non-statistical model for face representation and recognition," ICCV, vol. 1, pp. 786–791, 2005.