

Fast Image Enhancement Based on Color Space Fusion

Jinsheng Xiao,1,2 Hong Peng,1 Yongqin Zhang,3* Chaoping Tu,1 Qingquan Li2,4

1 School of Electronic Information, Wuhan University, Wuhan 430072, People's Republic of China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, People's Republic of China
3 School of Information Science and Technology, Northwest University, Xi'an 710127, People's Republic of China
4 President Office, Shenzhen University, Shenzhen 518060, People's Republic of China

Received 29 April 2014; revised 17 October 2014; accepted 18 October 2014

Abstract: The current Retinex algorithm processes the RGB channels separately for color image enhancement. However, it changes the ratios of the RGB components and causes some serious problems, such as color distortion, color noise, and halo artifacts. To solve these issues, we propose a novel algorithm based on color space fusion. The single scale Retinex with fast mean filtering is applied to the luminance component in hue-saturation-value (HSV) color space. An enhancement adjustment factor is introduced to avoid color distortion and noise amplification. Then, the surround function is replaced by a small-scale Gaussian filter in RGB color space to eliminate the halo artifact. A parameter is involved to keep the color natural when the reflection is estimated. Finally, the enhanced color image is constructed from the weighted averaging results of these two steps. Subjective and objective evaluations of many different backlit images captured by different cameras are implemented to verify the validity of the proposed algorithm in our experiments. The experimental results show that the proposed algorithm can not only significantly suppress the halo effect and noise amplification, but can also remove color distortion. Our proposed algorithm is superior to the multi-scale Retinex with color restoration approach and other state-of-the-art methods. © 2014 Wiley Periodicals, Inc. Col Res Appl, 00, 000–000, 2014; Published online 00 Month 2014 in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/col.21931

Key words: color image enhancement; Retinex theory; multi-scale Retinex with color restoration; halo artifact

*Correspondence to: Yongqin Zhang (e-mail: zhangyongqin@pku.edu.cn)
Contract grant sponsor: National Natural Science Foundation of China; contract grant numbers: 91120002, 61471272, 61201442.

INTRODUCTION

Color image enhancement has been widely used in the fields of medical imaging, industrial inspection, and geomorphology in recent years.1 Its task is to obtain finer details of an image and highlight the useful information.2 When an image is captured by a digital camera or a mobile terminal, the illumination tends to be irregular and uncontrolled under certain light sources, such as sunlight or street lamps in the open air, or fluorescent lamps in a room. These cases lead to image degradation in that parts of the captured images are excessively bright or dark.

According to the processing domain, image enhancement methods can be mainly divided into two groups: spatial domain processing techniques and transform domain processing techniques. In the first group, the Retinex theory simulates the human visual cortex and presents a simplified model. Land3 systematically put forward the Retinex model and applied it to image enhancement. In the 1990s, Jobson et al.4 proposed the single scale Retinex (SSR) algorithm, which uses the center/surround method to estimate the illumination component of a pixel from its neighbors. The SSR algorithm uses the weighted average to replace the center pixel within a scale.


But it is difficult to guarantee both the color fidelity and the local structures of an image. Subsequently, Jobson et al.5 proposed the flexible multi-scale Retinex with color restoration (MSRCR) algorithm. The multi-scale Retinex approach cures the inherent defect of the SSR algorithm to some extent: it not only keeps the image color fidelity, but also preserves the details of the images. They also introduced the concept of color restoration, which uses an adjustment factor derived from the proportions of the different wave bands of the input image to control the output image. But this method may bring about halo artifacts on high-contrast edges and color distortion in some cases. Yuan and Sun6 presented an automatic exposure correction method that could estimate the best image-specific non-linear tone curve (the S-curve in their case) for a given image. Their detail-preserving S-curve adjustment can maintain local details and avoid halo effects. Xiao et al.7 proposed a hierarchical tone mapping algorithm based on a color appearance model, which reduces the halo effect significantly and achieves natural color and rich details. The second group can be further divided into two approaches: the RGB-based approaches and the HSV-based approaches. One of them mainly concentrates on image filtering improvements8–12 and illumination analysis.13–16 Meylan and Susstrunk9 proposed an adaptive filter combined with the principal component analysis (PCA) method.17 Zhang et al.11 developed an image filtering approach using the low-rank technique for visual enhancement from a single grayscale or color image. Hanumantharaju et al.13 proposed the multi-scale Retinex with a modified color restoration technique to improve the subjective effect and the efficiency. Li et al.14 utilized the reflex lightness and the ambient illumination to maintain the naturalness of scenes. Liu and Sui15 proposed a new adaptive non-linear function to modify the illumination while keeping the hue constant. Wang et al.16 divided the input into different illumination parts and processed each part separately. Shao18 studied simultaneous coding artifact reduction and sharpness enhancement for block-based compressed images and videos. The other approaches include the discrete cosine transform (DCT) based method19 and the HSV-based infrared multi-scale Retinex (IMSR) method.20 Mukherjee and Mitra19 proposed a DCT-based algorithm that modifies the luminance component (the V value) and improves the saturation component. By enhancing the highlighted V component, Yu et al.20 adjusted the saturation to eliminate color distortion and the halo phenomenon. However, the methods mentioned above cannot eliminate color distortion completely, and sometimes produce a considerable amount of halo artifacts. They also cause excessive noise and over-enhanced effects in the low or high brightness regions of some images. Moreover, the other state-of-the-art methods achieve the goal of eliminating halo artifacts at the expense of efficiency, which seriously affects real-time on-demand processing services.

To solve these problems, we propose a novel optimized algorithm based on color space fusion for image enhancement on the basis of the Gray World assumption. The main contribution of this article is that the proposed algorithm adopts a color fusion strategy. Fast mean filtering and an enhancement adjustment factor are used to improve the visibility and eliminate color distortion in HSV space. A small-scale Gaussian filter with parameters in RGB color space is involved to eliminate the halo artifacts and keep the color natural. Experimental results show that the proposed algorithm can effectively remove the halo artifact phenomenon and color distortion, and suppress noise amplification.

RELATED WORKS

The idea of Retinex theory was first conceived by Land in the 1970s as a model of the lightness and color perception of human vision.3 It explains the mechanism by which the same object has constant color under different lighting conditions. Land's version of the Retinex3 eased the implementation and the manipulation of key variables. It did not have unnatural requirements for scene calibration.

Jobson et al.4 sought to define a practical implementation of the Retinex model without any particular concern for its validity as a model of human lightness and color perception. They analyzed the properties and performance of a center/surround Retinex. Under the assumption that the three lightness signals are independent,3 the color constancy tensor relation collapses to three independent equations. The observed image I is decomposed into two parts.4 One part is the spatial distribution of the source illumination, called the illumination image L, corresponding to the low frequency part of the image, which determines the dynamic range that it can reach. The other part is the distribution of scene reflectance, called the reflectance image R, corresponding to the high frequency part, which determines the internal properties of the image. Thus the model can be expressed as I = R · L. The purpose of Retinex theory is to obtain the reflection properties R from the observed image I.

The logarithmic function turns subtractive inhibition into shunting inhibition (i.e., arithmetic division). The log operation is used to produce a point-by-point ratio to a large regional mean value. The reorganized model in the logarithmic form is

log(I) = log(R) + log(L)    (1)

Note that the reflection part R cannot be directly derived from the observed image. It is found that the luminance term can be approximated by a Gaussian filter.4 According to the Retinex model (1), the reflection term log(R) is obtained for image enhancement as follows:

R_i(x, y) = log I_i(x, y) − log[F(x, y, c) * I_i(x, y)]    (2)

where i ∈ {R, G, B} represents the ith channel, R_i(x, y) is the reflectance term in the logarithmic form, I_i(x, y) is the input image, * is the convolution operation, F(x, y, c) = K · exp(−(x² + y²)/c²) is the Gaussian surround function, c is a parameter of the Gaussian function, and K is the normalization factor satisfying ∫∫ F(x, y, c) dx dy = 1.
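To make the center/surround operation of Eq. (2) concrete, the following minimal sketch (not the authors' code) computes the SSR reflectance of one channel; NumPy/SciPy are assumed, and the surround scale and the small stabilizing constant eps are illustrative choices rather than values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0, eps=1e-6):
    """SSR of Eq. (2): log(I) - log(F * I) for one channel.

    channel: 2-D float array in [0, 255]; sigma plays the role of the
    surround scale c; eps avoids log(0) and is an assumption, not part
    of the paper's formulation.
    """
    channel = channel.astype(np.float64)
    surround = gaussian_filter(channel, sigma)      # F(x, y, c) * I
    return np.log(channel + eps) - np.log(surround + eps)
```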


The multi-scale Retinex (MSR) is the weighted average of several reflection outputs of SSR with different Gaussian parameters c. The MSRCR combines the dynamic range compression of the small-scale Retinex and the tonal rendition of the large-scale Retinex for color restoration. As one way to approximate the human visual perception system, the MSRCR method is reasonable and has low computational complexity. The mathematical expression of the MSRCR method is given below5:

R_MSRCR,i = C_i · Σ_{n=1}^{N} w_n · R_{n,i}    (3)

where i ∈ {R, G, B} represents the ith channel, N is the number of scales, n is the index of the scale, R_{n,i} is the reflectance of SSR, and w_n refers to the weights. The color restoration factor C_i is introduced to adjust the color proportion of the three channels in this form:

C_i = log[α · I_i(x, y)] − log[ Σ_{j ∈ {R,G,B}} I_j(x, y) ]    (4)

To normalize R_MSRCR,i into the range 0–255, the mean value μ and the standard deviation δ of R_MSRCR,i are calculated; a is a constant coefficient, generally a ∈ [0.5, 2.0]. After we define Max = μ + a·δ, Min = μ − a·δ, and D = Max − Min, the normalized result of R_MSRCR,i is given as

R_i = 255 · (R_MSRCR,i − Min) / D    (5)

The physical meaning of the MSRCR algorithm is that it subtracts the convolution of the Gaussian function with the original image from the original image in logarithmic space. In fact, the smooth part is subtracted from the original image: the smaller the parameter c of the Gaussian function is, the more of the slowly varying components are subtracted from the image. What remains are the fast-changing components, so the details of the original image stand out. That is, for a very dark region in the input image, the details of the dark region stand out and are enhanced by the MSRCR algorithm, whereas for a sharp and bright region, the enhanced image shows somewhat weakened contrast and a color shift.14

PROPOSED ALGORITHM VIA COLOR SPACE FUSION

The Retinex is used as a platform for digital image enhancement by synthesizing local contrast improvement, color constancy, and lightness/color rendition. The visual characteristics of the recorded digital image are transformed so that the rendition of the transformed image approaches that of the direct observation of the scene. The local contrast in the dark zones of images, which contain both brightly lit and dark regions, matches our perception of those dark zones. An image of a scene formed using a linear representation does not usually provide a good visual representation compared with direct viewing of the scene. A non-linear transform appears to be essential to the realization of good visual image rendition. The MSRCR is a non-linear spatial and spectral transform that produces images with a high degree of visual fidelity to the observed scene.21 The MSRCR brings the perception of dark zones in recorded images up in local lightness and contrast to the degree needed to mimic direct scene viewing. Although the MSRCR method performs quite well at the synthesis of dynamic range compression, color constancy, image sharpening, and color rendition, the processed images still suffer from color distortion, noise amplification, and halo artifacts. The Retinex-based methods4,5 have the further drawback of relatively high computation cost and are not suitable for real-time applications. In order to overcome these defects, we propose a novel algorithm based on the Retinex theory through the fusion of different color spaces. Figure 1 shows an overview of the specific procedures of the proposed algorithm.

Fig. 1. A flow chart describing the proposed algorithm.

The proposed algorithm mainly consists of two procedures: the color enhancement in HSV space and the detail enhancement in RGB space. The final output is the fusion of the two parts. The Gaussian filter (GF) in SSR is replaced by the fast mean filter (FMF) to improve the efficiency of the enhancement of the V component in HSV space. An enhancement adjustment factor is introduced to avoid color distortion and noise amplification. The surround function is replaced by a small-scale Gaussian filter in RGB color space to eliminate the halo artifact. A parameter is involved to keep the color natural when the reflection is calculated. Finally, the enhanced color image is constructed from the weighted averaging results of the two methods. For the same color digital image, the display may depend on the specific output equipment. However, the changes in color brightness, noise, and the halo effect can still be distinguished.
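As a baseline for the fusion pipeline just outlined, the MSRCR formulation of Eqs. (3)–(5) can be sketched as follows. This is a minimal illustration rather than the reference MSRCR implementation; the scales, the uniform weights, the gain α, and the clipping coefficient a are assumed example values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, a=2.0, eps=1e-6):
    """Multi-scale Retinex with color restoration, Eqs. (3)-(5).

    img: H x W x 3 float array in [0, 255]. sigmas are the surround
    scales, alpha the color-restoration gain of Eq. (4), and a the
    clipping coefficient of Eq. (5); all values are illustrative.
    """
    img = img.astype(np.float64) + eps
    weights = np.full(len(sigmas), 1.0 / len(sigmas))   # w_n

    # Eq. (3): weighted sum of SSR outputs per channel.
    msr = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        for ch in range(3):
            blurred = gaussian_filter(img[..., ch], sigma)
            msr[..., ch] += w * (np.log(img[..., ch]) - np.log(blurred + eps))

    # Eq. (4): color restoration factor C_i.
    c = np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True))
    out = c * msr

    # Eq. (5): normalize each channel to [0, 255] via mean +/- a * std.
    for ch in range(3):
        mu, sd = out[..., ch].mean(), out[..., ch].std()
        lo, hi = mu - a * sd, mu + a * sd
        out[..., ch] = 255.0 * (out[..., ch] - lo) / (hi - lo + eps)
    return np.clip(out, 0, 255)
```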


HSV-Based Enhancement

For color image enhancement, if the enhancement amplitudes of the R, G, and B channels are inconsistent, color distortion is likely. The MSRCR method is based on the Gray World assumption, which means that the average value of each channel (R, G, B) should tend to be equal. But if an input image has a dominant chromatic distribution for a certain chromaticity, it violates the Gray World assumption and causes an undesirable color distortion.14 Although the MSRCR method5 considers the color correlation, the color restoration function based on the Gray World assumption still lacks the support of strict neurophysiological theories and does not keep the tones constant. So color deviation and distortion still exist in the enhanced results.

In HSV space, the three components of a vector [H, S, V] denote hue, saturation, and value, respectively.22 According to the color space transform, we set

I_max = max{R, G, B}
I_mm = I_max − min{R, G, B}    (6)

The luminance component V is extracted in this form:

V(x, y) = I_max    (7)

And the saturation S is extracted in this form:

S(x, y) = 255 · I_mm / I_max,  if I_max > 0
S(x, y) = 0,                   if I_max = 0    (8)

The hue H is

H(x, y) = 60 · (G − B) / I_mm,        if R = I_max
H(x, y) = 120 + 60 · (B − R) / I_mm,  if G = I_max    (9)
H(x, y) = 240 + 60 · (R − G) / I_mm,  if B = I_max

If the hue obtained from Eq. (9) satisfies H(x, y) < 0, then H(x, y) = H(x, y) + 360.

Assume that pixel [R1, G1, B1] is proportional to pixel [R2, G2, B2] in RGB space, satisfying the following formula15:

R2/R1 = G2/G1 = B2/B1 = K    (10)

Then, from the relationship between the RGB and HSV color spaces, we can infer that

V2/V1 = K,  S2 = S1,  H2 = H1    (11)

Thus, color consistency between the original image and the enhanced image with a brightness gain K can be achieved.23 These two points have the same components H and S in the HSV (hue, saturation, and value) space. The color at any point in an image is essentially independent of the ratio of the three fluxes on the three wave bands. In fact, the color only depends on the lightness in each wave band, whereas the lightness is independent of flux. The shape and size of the surrounding areas, the familiarity of objects, and the memory of color have a great effect on the appearance of color.24 If the output image and the original image satisfy the above conditions, then the overall brightness is enhanced while the color information of the output image remains. That is, the color is more vivid than in the original image, without color distortion. In order to enhance the whole image, the luminance gain curve K(x, y) of the image is calculated to get each pixel's luminance increment.

According to the Retinex theory, the reflection component V* is

V*(x, y) = log[V(x, y)] − log[F(x, y, c) * V(x, y)]    (12)

where F(x, y, c) is the surround function of Eq. (2).

To enhance the luminance component V in HSV space, the fast mean filter (FMF)25 can approximate the result of fast Gaussian convolution. It reduces the required number of additions and simultaneously eliminates the division operation. The computation speed of the FMF is accelerated by basic store-and-fetch operations, irrespective of the input image or the neighborhood size.

The number of additions can be further reduced by grouping the m pixels in each column of a window (around the central pixel) with their sums stored in an array. The window consists of m rows and n columns. The array is a circular one of size n. In this case, when the neighborhood is shifted one pixel to the right, only one new column sum (of the new window) needs to be added and one old column sum (of the old window) needs to be subtracted. So the number of additions is further reduced. The division operation is eliminated by storing the averages in an array of length 255 · m · n + 1, which is accessed using the sum of the neighborhood pixel gray levels as the array index. Since the number of divisions is reduced to zero by multiple additions, the FMF method is very fast and efficient.

Figure 2 shows the visual comparison of the results between the GF and the FMF method. The FMF filter approximates the effect of fast Gaussian convolution. The speed of the filter increases by nearly 50% with the same effect if the FMF algorithm is used instead of the Gaussian filter. The FMF filter is used in the enhancement of the luminance component V in HSV space.

Fig. 2. Visual comparisons of the results for test images. (a) Original images; (b) results obtained by GF; and (c) results obtained by FMF.
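The running column-sum scheme described above can be sketched as follows. This is a simplified illustration of a sliding-window mean (box) filter with incremental column sums, assuming NumPy, an m × n window, and edge padding at the borders; it omits the paper's lookup-table trick for removing the division.

```python
import numpy as np

def fast_mean_filter(img, m=9, n=9):
    """Sliding-window mean filter using incremental column sums.

    img: 2-D float array. m, n: window height and width (odd, assumed).
    Border handling by edge padding is an assumption for this sketch.
    """
    pad_y, pad_x = m // 2, n // 2
    padded = np.pad(img.astype(np.float64),
                    ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)

    for y in range(h):
        # Column sums of the m rows covering output row y.
        col_sums = padded[y:y + m, :].sum(axis=0)
        window_sum = col_sums[:n].sum()            # first window in the row
        out[y, 0] = window_sum / (m * n)
        for x in range(1, w):
            # Shift right: add the entering column, subtract the leaving one.
            window_sum += col_sums[x + n - 1] - col_sums[x - 1]
            out[y, x] = window_sum / (m * n)
    return out
```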


The V* obtained in the process above is greatly enhanced over the whole image, both in the dark regions and in the bright regions of the original image. But this induces noise in the dark areas of the color image and detail loss in the bright areas. The enhancement adjustment factor S*(x, y) is therefore proposed in a new formula to solve this problem:

S*(x, y) = β · sin[π · V(x, y) / 255]    (13)

where V(x, y) is the value to be enhanced and β ∈ [0.5, 1] is an adjusting parameter used to control the brightness. In general, the larger β is, the stronger the brightness enhancement; the smaller β is, the weaker the brightness enhancement.

The corrected V1(x, y) is obtained as the product of V* and the adjustment factor S*(x, y):

V1(x, y) = S*(x, y) · V*(x, y)    (14)

According to the characteristics of the sine function, the adjustment factor S*(x, y) is close to 0 when the original luminance V(x, y) is very small or close to 255. Hence, the dark regions are prevented from being over-enhanced, which would induce noise, and the bright regions are prevented from being over-enhanced, which would lose textures. The results of the proposed algorithm with and without the enhancement adjustment factor are shown in Fig. 3. As can be seen from Fig. 3, there is considerable noise in the dark area of the image because of over-enhancement, as shown in the red area of Fig. 3(b). After introducing the enhancement factor, the result is shown in the red region of Fig. 3(c). Through the corresponding adjustment of the enhancement factor for the different luminance regions of the image, the distortion phenomenon is eliminated.

Fig. 3. The compared results of the proposed algorithm with and without the enhancement adjustment factor. (a) Original image; (b) without the adjustment; (c) with the adjustment.

After adjusting the enhanced luminance component V1(x, y) to the range 0–255 by linear stretching, we obtain the luminance gain curve K(x, y) as the ratio of the values V1 and V after and before enhancement, respectively:

K(x, y) = V1(x, y) / V(x, y)    (15)

From Eqs. (13) and (14), we can see that the closer the pixel value of the original image is to 255, the closer the pixel value after enhancement is to 0, which makes the highlight areas tend to become dark sharply after the enhancement. In order to keep the luminance of the image from decreasing after the enhancement, we take the larger of the enhanced luminance component and the luminance component of the original image as the output; namely, if K(x, y) < 1, we set K(x, y) = 1. The luminance gain curve is thus obtained. In order to avoid color distortion, we restrain the saturation as follows:

S1(x, y) = S(x, y),            if K(x, y) < 1
S1(x, y) = S(x, y) / K(x, y),  otherwise    (16)

where S(x, y) is the saturation of the original image and S1(x, y) is the saturation of the output image after the color restraint.

In the HSV-based enhancement, an input RGB color image is transformed into an HSV color image. Under the assumption of white-light illumination, the H component image remains as it is and the S and V component images are enhanced.

Through the processing in HSV color space, the overall color contrast and vividness of the input image are improved significantly without color distortion. The next procedure then enhances the details of the input image to highlight the textures.
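Before turning to the detail branch, the HSV color-enhancement branch of Eqs. (12)–(16) can be summarized in the following sketch. It is illustrative only: the inputs are assumed to be V and S planes in [0, 255], a uniform (mean) filter stands in for the paper's fast mean filter as the surround estimate, and the window size and β are example choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hsv_color_branch(v, s, window=81, beta=0.8, eps=1e-6):
    """Color-enhancement branch of Eqs. (12)-(16).

    v, s: 2-D float arrays with V in [0, 255] and S in [0, 255].
    window and beta are illustrative; uniform_filter approximates the
    fast mean filter used as the surround in Eq. (12).
    """
    v = v.astype(np.float64)

    # Eq. (12): reflection component of V with a mean-filter surround.
    v_star = np.log(v + eps) - np.log(uniform_filter(v, size=window) + eps)

    # Eqs. (13)-(14): sine-shaped adjustment factor and corrected V1.
    s_star = beta * np.sin(np.pi * v / 255.0)
    v1 = s_star * v_star

    # Linear stretch of V1 to [0, 255].
    v1 = 255.0 * (v1 - v1.min()) / (v1.max() - v1.min() + eps)

    # Eq. (15): luminance gain, clamped so that K >= 1.
    k_raw = v1 / (v + eps)
    k = np.maximum(k_raw, 1.0)            # if K(x, y) < 1, set K(x, y) = 1

    # Eq. (16): saturation restraint.
    s1 = np.where(k_raw < 1.0, s, s / k_raw)
    return v1, s1
```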


Detail Enhancement

F(x, y, c) is the Gaussian surround function with the scale factor c in the MSRCR method. The details in the dark areas of the image are rich when the parameter c is smaller, so the dynamic range compression is better. On the contrary, the larger the parameter c is, the better the color fidelity of the output image is, at the cost of the dynamic range compression ability.21 The MSRCR method assumes that the luminance changes slowly in the illumination estimation step, which does not hold in practice. When the illumination component is estimated, the two sides (bright and dark) of a high-contrast edge affect each other, which causes halo artifacts around the edges. One example is shown in Fig. 4, where Figs. 4(a)–4(c) are the original image and the luminance components estimated by the Gaussian filter with the scale parameters 120 × 120 and 9 × 9, respectively. As shown in Fig. 4, the halo artifact appears. The primary reason for the halo artifact phenomenon is that, when the illumination component is estimated through the low-pass filter, the pixels near a high-contrast edge change much more than the pixels that are far away. In the window around the tower, there exist pixel values that differ greatly from the center pixel value. The mean-shift filter,26 the JND filter,8 and the bilateral filter27,28 have been proposed to solve this problem; different filters are used to eliminate the halo artifact phenomenon.

Fig. 4. The results of luminance estimation. (a) Original image; (b) 120 × 120 luminance estimation; and (c) 9 × 9 luminance estimation.

To balance the efficiency and the effect of the proposed algorithm, the illumination estimation uses a 9 × 9 Gaussian filter. The result is shown in Fig. 4(c). We select the junction of the white tower and the sky to compare the pixel value changes. For the three pictures in Fig. 4, the pixel value changes along the same row (x ∈ [1240, 1385], y = 627, about the position of the yellow arrow in Fig. 4(a)) are shown in Fig. 5. As shown in Fig. 5, when the Gaussian filter window size is c = 120, the pixel values on the sky side of the junction are larger than in the original, while those on the tower side are smaller than in the original. According to the formula I = R · L, the pixel distribution of the reflected component is exactly the opposite at the junction, so it produces the halo phenomenon. As can also be seen from Fig. 5, the luminance component estimated by the Gaussian filter with the scale 9 × 9 is consistent with the original, which shows, from another angle, why the halo is eliminated.

Fig. 5. For different scale filter templates, the pixel changes around the tower.

As shown before, the enhanced image can have a wide dynamic range with a small Gaussian template, but the color fidelity cannot be guaranteed. So we rewrite Eq. (2) with a parameter a, and the reflection component is calculated using this formula:

R_i(x, y) = a · log I_i(x, y) − log[F(x, y) * I_i(x, y)]    (17)

where i ∈ {R, G, B} is the index of the channels, R_i(x, y) is the reflection component, I_i(x, y) is the input image, a can take a value from 1.5 to 2, and F(x, y) * I_i(x, y) is the illumination component obtained by the convolution of the Gaussian filter with the input image. Our experimental tests show that the larger a is, the smoother the image color and the fewer the details; the smaller a is, the more details the image has, with some color distortion. The Gaussian filter is chosen to estimate the illumination component. If the reflection component is calculated as in Eq. (2), the dynamic range compression of the output image is poor. Therefore, the original component is introduced through Eq. (17) to get the reflection component.

After the reflection component is obtained, the color restoration factor is introduced based on Eq. (4). The final output is stretched to 0–255 according to Eq. (5), and a Gamma correction is also used to correct the results of this part. The contrasting effects of this enhanced part and NASA29 are shown in Fig. 6. From the original image in Fig. 6(a), we can see that there is an obvious contrast between the bright and dark regions at the edges of the white tower in the red box and the sky. Figures 6(b) and 6(c) are the result of the proposed algorithm with the improved Gaussian filter and the result of NASA, respectively. There is a distinct halo artifact around the white tower in Fig. 6(c), whereas the color transition is quite natural, without halo artifacts, in Fig. 6(b).

Fig. 6. The contrast of eliminating halo artifacts. (a) Original image; (b) the result of the proposed filter enhancement; (c) the result of NASA.

Final Output Result

From the analysis above, the fast mean filter is used to replace the Gaussian filter in the V channel of HSV space to get the color-enhanced image. The scale of the Gaussian template is reduced to 9 × 9 to get a detail-enhanced image without the halo effect. After the color-enhanced image and the detail-enhanced image are obtained, the final reconstructed image is the weighted average of the two parts using the following rules:

H_out(x, y) = H1(x, y)
S_out(x, y) = a · S1(x, y) + (1 − a) · S2(x, y)    (18)
V_out(x, y) = b · V1(x, y) + (1 − b) · V2(x, y)

where H1(x, y) is the hue of the original image, S1(x, y) and V1(x, y) represent the HSV values of the color-enhanced image, S2(x, y) and V2(x, y) represent the HSV values of the detail-enhanced image, and H_out(x, y), S_out(x, y), and V_out(x, y) represent the HSV values of the final output image.
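A compact sketch of the detail-enhancement branch of Eq. (17) and the fusion rule of Eq. (18) is given below. It is illustrative only: a small sigma stands in for the 9 × 9 Gaussian template, a and b follow the ranges stated in the text, the color restoration and stretch steps are indicated by comments, and the inputs to the fusion are assumed to be already-computed H, S, V planes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_branch(img, a=1.8, sigma=1.5, eps=1e-6):
    """Detail enhancement of Eq. (17) with a small-scale Gaussian surround.

    img: H x W x 3 float array in [0, 255]. a in [1.5, 2] and sigma
    (standing in for the 9 x 9 template) are illustrative choices.
    """
    img = img.astype(np.float64) + eps
    out = np.zeros_like(img)
    for ch in range(3):
        illum = gaussian_filter(img[..., ch], sigma)        # F * I_i
        out[..., ch] = a * np.log(img[..., ch]) - np.log(illum + eps)
    # Color restoration (Eq. (4)) and Gamma correction would follow here,
    # as described in the text; this sketch only stretches to [0, 255]
    # per Eq. (5).
    for ch in range(3):
        r = out[..., ch]
        out[..., ch] = 255.0 * (r - r.min()) / (r.max() - r.min() + eps)
    return out

def fuse_hsv(h1, s1, v1, s2, v2, a=0.5, b=0.5):
    """Fusion rule of Eq. (18): hue from the original image, S and V as
    weighted averages of the color-enhanced (S1, V1) and the
    detail-enhanced (S2, V2) images; 0 < a, b < 1 are user choices."""
    h_out = h1
    s_out = a * s1 + (1.0 - a) * s2
    v_out = b * v1 + (1.0 - b) * v2
    return h_out, s_out, v_out
```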


The value range of the parameter a is 0 < a < 1. The larger a is, the paler the color and the less color noise in the final result; the smaller a is, the more vivid the color and the more color noise. The value range of the parameter b is 0 < b < 1. The larger b is, the less over-enhancement there is in the final result; the smaller b is, the more details appear in the final result.

RESULTS AND ANALYSIS

The numerical experiments were carried out using Visual Studio 2005 on a platform with an Intel(R) Core(TM) i5-2300 CPU at 2.80 GHz, 4.00 GB of RAM, and the Windows XP operating system. The different methods are implemented in ANSI C with a single thread and no SSE acceleration.

To evaluate the performance of the proposed color image enhancement algorithm, all the images (more than 100 images) from the NASA website29 and other websites30,31 were tested for both subjective and objective evaluations. Some of these test images were captured by a NIKON D129 with a size of 2000 × 1312 or an iPhone 5/5s32 with a size of 3264 × 2448. The other images were captured by a Kodak camera,30 a Nokia Lumia 920, or a Canon EOS-1D X.31 The test images shown in Figs. 7–10, i.e., "Boy(indoor)," "Boy(board)," "Father & girl," and "Plane," are selected from the image databases mentioned above and provide a wide range of visual phenomena. The proposed algorithm was compared with the state-of-the-art methods to verify its enhancement performance. As input test images, "Boy(indoor)" and "Boy(board)" appear fuzzy and have poor clarity and low edge contrast with rich colors, which are likely to produce color distortion and lose image details. "Father & girl" and "Plane" are rich in details and also have low contrast, which is likely to produce halo artifacts. The size of the input test images is 1312 × 2000 with three channels in the RGB format.

The proposed algorithm was compared with MSRCR,5 DCT,19 HSV-IMSR,20 and NASA29 to verify its performance. The visual comparison of the results of these different algorithms is given in Figs. 7–10 for the four test images, respectively.

Fig. 7. The visual comparison of the results of different algorithms for the "Boy(indoor)" image. (a) Original image; (b) MSRCR5; (c) DCT19; (d) HSV-IMSR20; (e) NASA29; and (f) the proposed algorithm.


Fig. 8. The visual comparison of the results of different algorithms for the "Boy(board)" image. (a) Original image; (b) MSRCR5; (c) DCT19; (d) HSV-IMSR20; (e) NASA29; and (f) the proposed algorithm.

Fig. 9. The visual comparison of the results of different algorithms for the "Father & girl" image. (a) Original image; (b) MSRCR5; (c) DCT19; (d) HSV-IMSR20; (e) NASA29; and (f) the proposed algorithm.

Fig. 10. The visual comparison of the results of different algorithms for the "Plane" image. (a) Original image; (b) MSRCR5; (c) DCT19; (d) HSV-IMSR20; (e) NASA29; and (f) the proposed algorithm.

The original images are shown in Figs. 7(a)–10(a). In Figs. 7(b)–10(b), the results of the MSRCR method have serious color distortion with low image clarity. In Figs. 7(c)–10(c), the defects of the DCT results are that the color of some sections is over-enhanced, the image clarity is not high enough, and a blocking effect is introduced because of the use of 8 × 8 blocks. In Figs. 7(d)–10(d), the color of the HSV-IMSR results tends to be lighter than expected because of the saturation enhancement part, which is likely to produce splash noise in the dark regions. In Figs. 7(e)–10(e), the results of NASA perform quite well in dynamic range compression and in highlighting details in dark areas without color distortion, but NASA has the drawback of the halo effect, and the details of bright places cannot be recovered well either; for the "Boy(board)" image, there is a circle of halo around the head of the boy. From the results of the proposed algorithm shown in Figs. 7(f)–10(f), we can see that there is no color distortion and no halo artifact. Compared with the NASA results, the details in the dark regions of our results are clear and evident. Furthermore, the proposed algorithm does not introduce noise into the output image, and it outperforms NASA in the contrast of the output image.

Good visual representations seem to have high visual lightness and contrast. In order to further verify the validity of the proposed algorithm, objective indicators including the mean, the standard deviation (SD), the clarity, and the entropy, together with the algorithm's efficiency statistics, are used to compare the proposed algorithm with the baseline methods. The definitions of these indicators are given below.

The mean of an image is used to measure the overall lightness,21 and is computed as follows:

Mean = (1 / ((R − 1)(C − 1))) · Σ_{i=1}^{R−1} Σ_{j=1}^{C−1} I_{i,j}    (19)

where R and C are the numbers of rows and columns of the image, respectively, and I_{i,j} is the pixel value located at position (i, j). Assume the images are divided into 40 × 32 non-overlapping blocks with a size of 50 × 41. The standard deviation of each block is computed, and the overall contrast is measured by the mean of the block standard deviations (MBSD).

The clarity denotes the quality of the image details, which can be measured by the average gradient method. It is a modified version of the energy of image gradient (EOG) and the spatial frequency (SF).33 The definition of the image clarity is given as follows:

Clarity = (1 / ((R − 1)(C − 1))) · Σ_{i=1}^{R−1} Σ_{j=1}^{C−1} d_{i,j}    (20)

where d_{i,j} = √( ((I_{i,j} − I_{i+1,j})² + (I_{i,j} − I_{i,j+1})²) / 2 ).
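A minimal sketch of these three indicators, assuming NumPy and a grayscale (or per-channel) image array, is given below; the block grid for the MBSD follows the 40 × 32 partition described above.

```python
import numpy as np

def image_mean(img):
    """Eq. (19): average gray level over the (R-1) x (C-1) region."""
    return float(img[:-1, :-1].mean())

def mbsd(img, blocks_y=40, blocks_x=32):
    """Mean of block standard deviations over a blocks_y x blocks_x grid."""
    h, w = img.shape
    by, bx = h // blocks_y, w // blocks_x
    stds = [img[y:y + by, x:x + bx].std()
            for y in range(0, by * blocks_y, by)
            for x in range(0, bx * blocks_x, bx)]
    return float(np.mean(stds))

def clarity(img):
    """Eq. (20): average gradient computed from forward differences."""
    img = img.astype(np.float64)
    dy = img[:-1, :-1] - img[1:, :-1]          # I(i,j) - I(i+1,j)
    dx = img[:-1, :-1] - img[:-1, 1:]          # I(i,j) - I(i,j+1)
    return float(np.sqrt((dy ** 2 + dx ** 2) / 2.0).mean())
```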


Table I gives the objective evaluation results for the different algorithms.

TABLE I. The objective evaluation results for the different algorithms.

Method      Index     Boy (indoor)   Boy (board)   Father & girl   Plane
Original    Mean      89.381         77.594        88.534          83.675
            MBSD      13.465         14.650        15.108          20.286
            Clarity   1.660          2.666         2.802           3.294
Ref. 5      Mean      108.657        108.402       108.615         107.818
            MBSD      13.881         12.993        15.587          20.831
            Clarity   2.794          3.498         3.705           4.266
Ref. 19     Mean      119.086        111.549       109.174         109.506
            MBSD      17.831         18.720        20.082          27.781
            Clarity   4.219          5.526         5.228           6.191
Ref. 20     Mean      122.096        115.407       114.362         115.465
            MBSD      20.226         19.625        22.925          31.856
            Clarity   4.271          5.322         5.330           6.362
Ref. 29     Mean      115.049        109.350       112.432         117.829
            MBSD      18.858         21.611        23.062          30.040
            Clarity   5.756          7.664         7.159           7.689
Proposed    Mean      120.339        118.319       114.554         116.716
            MBSD      20.287         20.507        22.088          30.798
            Clarity   5.902          7.190         7.694           8.339

The chosen test images are "Boy (indoor)," "Boy (board)," "Father & girl," and "Plane."

For a fair comparison, all the experiments were conducted on the same machine platform, which is described at the beginning of this section. All the methods are implemented in the C language. The computation time does not include the time for reading and writing the image file; it contains only the runtime of the image enhancement algorithm. The comparison of the computation times of the different algorithms is shown in Table II.

TABLE II. The comparison of computation time for different algorithms (unit: ms).

Images          Ref. 5   Ref. 19   Ref. 20   Proposed
Boy (indoor)    1392     4656      688       303
Boy (board)     1438     4750      705       296
Father & girl   1402     4861      712       299
Plane           1423     4850      702       297

As can be seen from Tables I and II, the proposed algorithm is superior to the other state-of-the-art methods5,19,20,29 in terms of both the objective indicators and the efficiency. The statistical analysis of the mean, MBSD, and clarity of the original images and of the output images of the different algorithms, as well as of the calculation time, was carried out on numerous test images. From the statistical results, we can see that the proposed algorithm costs less computation time than the other current methods.5,19,20,29 Moreover, the objective indicators of the proposed algorithm are better than theirs. The subjective visual comparison of the results shows that the clarity and the mean of the proposed algorithm are also better than those of the state-of-the-art methods.5,19,20,29 Through the improvements on the MSRCR method,5 the proposed algorithm increases both the subjective and the objective quality of the input images. According to the experiments and the subjective and objective results, the proposed algorithm performs better than the current competing methods for different scenes and for images captured by different cameras.

DISCUSSIONS AND CONCLUSIONS

In this article, we addressed the contrast enhancement problem of images captured under different lighting conditions. To solve the inherent drawbacks (e.g., color distortion, noise amplification, and the halo artifact phenomenon) of the traditional image enhancement methods, we propose a novel, powerful image enhancement algorithm based on color space fusion. The proposed algorithm adopts the color fusion strategy, the improvements of the Gaussian filter, and the introduction of the adjustment factor for color image enhancement. Furthermore, our developed algorithm was compared with the current competing methods on numerous test images. The experimental results show that the proposed algorithm, with less computation time, can achieve better performance than the state-of-the-art methods for image enhancement in subjective and objective evaluations. Moreover, different cameras and different imaging parameters can produce different images of the same scene. How to enhance images captured with different imaging parameters and camera types is still a difficult problem that remains for our future work.

REFERENCES

1. Shao L, Caviedes JE, Ma KK, Bellers E. Video restoration and enhancement: Algorithms and applications. Signal Image Video Process 2011;5:269–270.
2. Gijsenij A, Gevers T, van de Weijer J. Computational color constancy: Survey and experiments. IEEE Trans Image Process 2011;20:2475–2489.
3. Land E. An alternative technique for the computation of the designator in the retinex theory of color vision. Proc Natl Acad Sci 1986;83:3078–3080.
4. Jobson DJ, Rahman Z, Woodell GA. Properties and performance of a center/surround Retinex. IEEE Trans Image Process 1997;6:451–462.


5. Jobson DJ, Rahman Z, Woodell GA. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans Image Process 1997;6:965–976.
6. Yuan L, Sun J. Automatic exposure correction of consumer photographs. Proceedings of the 12th European Conference on Computer Vision, Part IV. Berlin, Heidelberg: Springer-Verlag; 2012. p 771–785.
7. Xiao JS, Li WH, Liu GX, Shaw SL, Zhang YQ. Hierarchical tone mapping based on image color appearance model. IET Comput Vis 2014;8:358–364.
8. Doo HC, Ick HJ, Mi HK, Nam CK. Color image enhancement based on single-scale retinex with a JND-based nonlinear filter. IEEE International Symposium on Circuits and Systems, New Orleans, May 2007. p 3948–3951.
9. Meylan L, Susstrunk S. High dynamic range image rendering with a Retinex-based adaptive filter. IEEE Trans Image Process 2006;15:2820–2830.
10. Shao L, Zhang H, Wang L, Wang L. Repairing imperfect video enhancement algorithms using classification-based trained filters. Signal Image Video Process 2011;5:307–313.
11. Zhang YQ, Ding Y, Xiao JS, Liu JY, Guo ZM. Visibility enhancement using an image filtering approach. EURASIP J Adv Signal Process 2012;220:1–6.
12. Zhang YQ, Ding Y, Liu JY, Guo ZM. Guided image filtering using signal subspace projection. IET Image Process 2013;7:270–279.
13. Hanumantharaju MC, Ravishankar M, Rameshbabu DR, Ramachandran S. Color image enhancement using multiscale Retinex with modified color restoration technique. The Second International Conference on Emerging Applications of Information Technology, Kolkata, India, Feb. 2011. p 93–97.
14. Li B, Wang S, Geng Y. Image enhancement based on Retinex and lightness decomposition. IEEE International Conference on Image Processing (ICIP), Brussels, Sep. 2011. p 3417–3420.
15. Liu W, Sui Q. A hue preserving MSR algorithm of image enhancement. Acta Photon Sin 2011;40:642–646.
16. Wang R, Zhu J, Yang W, Shuai F, Zhang X. An improved local multi-scale Retinex algorithm based on illuminance image segmentation. Acta Electron Sin 2012;38:1181–1186.
17. Zhang YQ, Liu JY, Li MD, Guo ZM. Joint image denoising using adaptive principal component analysis and self-similarity. Inform Sci 2014;259:128–141.
18. Shao L. Simultaneous coding artifact reduction and sharpness enhancement for block-based compressed images and videos. Signal Process Image Commun 2008;23:463–470.
19. Mukherjee J, Mitra SK. Enhancement of color images by scaling the DCT coefficients. IEEE Trans Image Process 2008;17:1783–1794.
20. Yu JH, Kim YT, Lee NK, Tchoi YS, Hahn HS. Effective color correction method employing HSV color model. J Meas Sci Instrum 2012;3:39–45.
21. Rahman Z, Jobson DJ, Woodell GA. Retinex processing for automatic image enhancement. J Electron Imaging 2004;13:100–110.
22. Amma K, Yaguchi Y, Niitsuma Y, Matsuzaki T, Oka R. A comparative study of gesture recognition between RGB and HSV colors using time-space continuous dynamic programming. 2013 International Joint Conference on Awareness Science and Technology and Ubi-Media Computing (iCAST-UMEDIA), Aizuwakamatsu, 2–4 Nov. 2013. p 185–191.
23. Unaldi N, Sankaran P, Asari VK, Rahman Z. Image enhancement for improving face detection under non-uniform lighting conditions. 15th IEEE International Conference on Image Processing, San Diego, USA; 2008. p 1332–1335.
24. Land EH. The retinex theory of color vision. Sci Am 1977;237:108–128.
25. Rakshit S, Ghosh A, Uma Shankar B. Fast mean filtering technique (FMFT). Pattern Recognit 2007;40:890–897.
26. Comaniciu D, Meer P. Mean shift: A robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell 2002;24:603–619.
27. Tomasi C, Manduchi R. Bilateral filtering for gray and color images. Sixth International Conference on Computer Vision (ICCV), Bombay, Jan. 1998. p 839–846.
28. Banterle F, Corsini M, Cignoni P, Scopigno R. A low-memory, straightforward and fast bilateral filter through subsampling in spatial domain. Comput Graph Forum 2012;31:19–32.
29. Braukus M, Henry K. NASA technology helps weekend photographers look like pros. Available at: http://dragon.larc.nasa.gov/retinex/pao/news/. Aug 21, 2001.
30. Kodak lossless true color image suite. Available at: http://r0k.us/graphics/kodak/index.html. Last accessed Oct. 2013.
31. Windows Phone. Available at: http://www.windowsphone.com/zh-cn/how-to/wp8. Jun. 21, 2012.
32. iPhone. Available at: http://www.apple.com/cn/iphone/. Sep. 20, 2013.
33. Huang W, Jing ZL. Evaluation of focus measures in multi-focus image fusion. Pattern Recognit Lett 2007;28:493–500.
