
Vehicle Detection Using Normalized Color and Edge Map

Article in IEEE Transactions on Image Processing, April 2007. DOI: 10.1109/TIP.2007.891147. Source: PubMed.

Luo-Wei Tsai, Jun-Wei Hsieh*, and Kuo-Chin Fan

Department of Computer Engineering, National Central University,
Jung-Da Rd., Chung-Li 320, Taiwan

Department of Electrical Engineering, Yuan Ze University,
135 Yuan-Tung Road, Chung-Li 320, Taiwan
shieh@saturn.yzu.edu.tw

Abstract—This paper presents a novel approach for detecting vehicles from static images using color and edges. Unlike traditional methods, which rely on motion features, this method introduces a new color transform model that finds the important vehicle colors in an image so that possible vehicle candidates can be located quickly. Because vehicles take on different colors under different lighting conditions, few works have been proposed for detecting vehicles by color. This paper shows that the new color transform model is highly effective at separating vehicle pixels from the background, even under varied illumination. Each detected pixel corresponds to a possible vehicle candidate. Two important features, edge maps and wavelet-transform coefficients, are then used to construct a multi-channel classifier for verifying each candidate. With this classifier, an effective scan can detect all desired vehicles in a static image; since the color feature first filters out most background pixels, the scan is extremely fast. Experimental results show that the integrated scheme is very powerful in detecting vehicles from static images, with an average detection accuracy of 94.5%.

I. INTRODUCTION

Vehicle detection [1] is an important problem in many applications such as self-guided vehicles, driver assistance systems, and intelligent parking systems. One of the most common approaches to vehicle detection is to use vision-based techniques to analyze vehicles in images or videos. However, due to variations in vehicle colors, sizes, orientations, shapes, and poses, developing a robust and effective vision-based vehicle detection system is very challenging. To address these problems, many approaches using different features and learning algorithms have been investigated. For example, many techniques [1]-[3] use background subtraction to extract motion features for detecting moving vehicles in video sequences. However, this kind of motion feature is not available in still images. For static images, Wu et al. [4] used the wavelet transform to extract texture features for locating possible vehicle candidates on roads; each candidate was then verified with a PCA classifier. Z. Sun et al. [5] used Gabor filters to extract different textures and then verified each vehicle candidate with an SVM classifier. Furthermore, in [6], Bertozzi et al. used corner features to build four templates for detecting vehicles. In [7], Tzomakas and Seelen found that the shadow area underneath a vehicle is a good cue for detecting it. The major drawback of these methods, which search for vehicles with local features, is the need for a full search that scans every pixel of the image.

As for color as a global feature, since vehicles vary greatly in color, few color-based works have addressed vehicle detection. In [9], Guo et al. used several color balls to model road colors in L*a*b* color space; pixels classified as non-road were then identified as vehicle pixels. However, since these color models were not general enough to model vehicle colors, many false detections were produced, degrading the accuracy of vehicle detection.

In this paper, we propose a novel method for detecting vehicles in still images using colors and edges. Although the color of an object differs considerably under different lighting conditions, color still has very useful properties for describing objects. For example, the color of a vehicle is almost the same from different viewpoints (if observed at the same time) and can be extracted in real time. If a general color transform can be found that accurately captures the visual characteristics of vehicles, color becomes a very useful cue for narrowing down the search areas of possible vehicles. The contribution of this paper is a novel color model that makes vehicle colors more compact and concentrated in a small region of color space. The model is global and does not need to be re-estimated for new vehicles or new images. Since no prior knowledge of surface reflectance, weather conditions, or viewing geometry is used in the training phase, the model separates vehicle pixels from background pixels very well. The paper then uses two features, edge maps and wavelet-transform coefficients, to form a multi-channel classifier. The classifier is modeled as a Gaussian and is learned automatically from a set of training images. Since it records many variations of vehicle appearance, it has good discriminability for verifying each vehicle candidate. During scanning, because most background pixels have already been eliminated by color, verification can be performed extremely efficiently. Due to these filtering and discriminative abilities, all desired vehicles can be detected effectively and efficiently from static images. Experimental results demonstrate the feasibility and validity of the proposed approach.
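The two-stage design described above (a cheap color-based filter followed by classifier-based verification) can be sketched as follows. This is an illustrative outline, not the authors' implementation: `detect_vehicle_pixels` and `verify_hypothesis` are hypothetical stand-ins for the color classifier of Section III and the multi-channel classifier of Section IV.

```python
import numpy as np

def detect_vehicles(image, detect_vehicle_pixels, verify_hypothesis,
                    scales=(32, 48, 64)):
    """Two-stage detection: color filtering, then hypothesis verification.

    detect_vehicle_pixels: maps an HxWx3 image to an HxW boolean mask.
    verify_hypothesis: scores a sub-image, returning a response in [0, 1].
    """
    h, w = image.shape[:2]
    # Stage 1: keep only pixels whose color looks vehicle-like.
    candidates = np.argwhere(detect_vehicle_pixels(image))  # (row, col) pairs
    detections = []
    for y, x in candidates:
        # Stage 2: generate hypotheses of several sizes centered at the pixel
        # and keep the best verification response.
        best = 0.0
        for s in scales:
            y0, y1 = max(0, y - s // 2), min(h, y + s // 2)
            x0, x1 = max(0, x - s // 2), min(w, x + s // 2)
            best = max(best, verify_hypothesis(image[y0:y1, x0:x1]))
        if best > 0.5:  # illustrative acceptance threshold
            detections.append((x, y, best))
    return detections
```

Because stage 1 discards most background pixels, stage 2 runs on only a small fraction of the image, which is the source of the speedup claimed above.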

0-7803-9134-9/05/$20.00 © 2005 IEEE


II. SYSTEM OVERVIEW

This paper presents a novel system to detect vehicles from static images using edges and colors. The flowchart of the system is shown in Fig. 1. First, a color transformation projects the colors of all input pixels into a color space in which vehicle pixels concentrate in a small area. A Bayesian classifier then separates vehicle colors from the background. Since vehicles have different sizes and orientations, several vehicle hypotheses are generated from each detected pixel. To verify each hypothesis, we use two kinds of vehicle features, edges and wavelet-transform coefficients, to filter out impossible candidates. Desired vehicles can then be verified and detected very robustly and accurately from static images. In what follows, the vehicle color detector is described first.

Fig. 1: Details of the proposed vehicle detector.

III. VEHICLE COLOR DETECTOR

Like the skin color used for face representation, this section introduces a new transformation that maps all pixels with (R, G, B) colors into a new domain, in which a specific vehicle color can be found and defined for effective vehicle detection. Assume that N images are collected from various highways and parking places. Through a statistical analysis, we obtain the covariance matrix Σ of the color distributions of R, G, and B over these N images. Using the Karhunen-Loeve transform, the eigenvectors and eigenvalues of Σ can be obtained, denoted e_i and λ_i, respectively, for i = 1, 2, and 3. Three new color features C_i can then be defined as

    C_i = e_i^r R + e_i^g G + e_i^b B  for i = 1, 2, and 3,    (1)

where e_i = (e_i^r, e_i^g, e_i^b). From the analysis of Ohta et al. [10], the color feature C_1 with the largest eigenvalue is

    C_1 = (1/3) R + (1/3) G + (1/3) B.    (2)

In [8], Rojas et al. found that road colors concentrate around a small cylinder along the axis directed by Eq. (2). Based on this observation, this paper proposes a new color model that transforms all pixel colors onto a 2-D color plane such that all vehicle pixels concentrate in a small area. By modeling this area, a Bayesian classifier can then be developed to accurately identify vehicle pixels from the background.

At the beginning, several thousand training images are collected from different scenes, including roads, parking lots, buildings, and natural scenes. Fig. 2 shows part of our training samples. Using the KL transform, we found that the eigenvector with the largest eigenvalue of this data set is the same as in Eq. (2). In addition, the other two eigenvectors form a new color plane (u, v), perpendicular to the axis (1/3, 1/3, 1/3) and spanned by

    u_p = (2 Z_p - G_p - B_p) / Z_p,    v_p = max{ (Z_p - G_p) / Z_p, (Z_p - B_p) / Z_p },    (3)

where (R_p, G_p, B_p) is the color of a pixel p and Z_p = (R_p + G_p + B_p) / 3 is used for normalization. Fig. 3 shows the plotting results of Eq. (3). Clearly, the color transformation of Eq. (3) makes all vehicle pixels concentrate in a small area.

Fig. 2: Part of the training samples. (a) Vehicle images. (b) Non-vehicle training images.

Fig. 3: Plotting result of the color transformation of vehicle pixels using Eq. (3).

After the transformation, a Bayesian classifier is designed to accurately identify vehicle pixels from the background by color. Assume that m_v and m_n are the mean colors of the vehicle and non-vehicle pixels in the (u, v) color domain, calculated from our collected training images, and that Σ_v and Σ_n are their corresponding covariance matrices in the same domain. Then, given a pixel x, we define its probabilities of being a vehicle pixel and a non-vehicle pixel, respectively, as

    p(x | v) = |2π Σ_v|^(-1/2) exp(-d_v(x))    (4)
and
    p(x | nv) = |2π Σ_n|^(-1/2) exp(-d_n(x)),    (5)

where d_v(x) = (x - m_v)^t Σ_v^(-1) (x - m_v) / 2 and d_n(x) = (x - m_n)^t Σ_n^(-1) (x - m_n) / 2. According to the Bayesian classifier, we assign a pixel x to the class "vehicle" if

    p(x | v) P(v) > p(x | nv) P(nv),    (6)

where P(v) and P(nv) are the prior class probabilities of vehicle and non-vehicle pixels, respectively.
Plugging Eqs. (4) and (5) into Eq. (6), we obtain the rule:

    Assign a pixel x to "vehicle" if d_n(x) - d_v(x) > λ,    (7)

where λ = log[ |Σ_v|^(1/2) P(nv) / ( |Σ_n|^(1/2) P(v) ) ].
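The color detector of Eqs. (3)-(7) can be sketched in NumPy as follows. The class means, covariances, and priors are assumed to have been estimated beforehand from labeled training pixels; the small epsilon guard against division by zero is an implementation detail not in the paper.

```python
import numpy as np

def uv_transform(rgb):
    """Map an Nx3 array of (R, G, B) pixels to the (u, v) plane of Eq. (3)."""
    rgb = np.asarray(rgb, dtype=float)
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    Z = (R + G + B) / 3.0 + 1e-9          # normalization term, guarded against 0
    u = (2.0 * Z - G - B) / Z
    v = np.maximum((Z - G) / Z, (Z - B) / Z)
    return np.stack([u, v], axis=1)

def mahalanobis_half(x, mean, cov_inv):
    # The d_v / d_n terms of Eqs. (4)-(5): (x - m)^t S^{-1} (x - m) / 2 per row.
    d = x - mean
    return 0.5 * np.einsum('ni,ij,nj->n', d, cov_inv, d)

def classify_vehicle_pixels(rgb, m_v, S_v, m_n, S_n, P_v=0.5, P_nv=0.5):
    """Bayesian rule of Eq. (7): vehicle iff d_n(x) - d_v(x) > lambda."""
    x = uv_transform(rgb)
    d_v = mahalanobis_half(x, m_v, np.linalg.inv(S_v))
    d_n = mahalanobis_half(x, m_n, np.linalg.inv(S_n))
    lam = np.log(np.sqrt(np.linalg.det(S_v)) * P_nv /
                 (np.sqrt(np.linalg.det(S_n)) * P_v))
    return (d_n - d_v) > lam
```

For example, a pure gray pixel (100, 100, 100) maps to (u, v) = (0, 0), while a saturated red pixel (255, 0, 0) maps to (2, 1); the rule then labels each pixel by comparing its Mahalanobis distances to the two class models.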
IV. VEHICLE VERIFICATION

In the previous section, vehicle pixels were extracted from static images using colors. For each detected vehicle pixel X, we generate different vehicle hypotheses H_s^I(X) to decide whether a real vehicle exists at X. Here, a vehicle hypothesis H_s^I(X) is a sub-image extracted from an image I with size w_s × h_s and center X.

In order to verify the correctness of H_s^I(X), we build a set of classes C_i of vehicle templates for estimating its maximum vehicle response at different orientations. Here, C_i is a collection of vehicle templates whose orientations are all at the same angle θ_i. In this paper, two features, the vehicle contour and wavelet coefficients, are used to measure this vehicle response. Details of each feature are introduced below.

A. Contour Feature

The contour is a good feature for describing a vehicle's shape. This paper uses a distance map to record the contour feature of a vehicle. Assume that B_V is the set of boundary pixels extracted from a vehicle V. Then, the distance transform of a pixel p in V is

    DT_V(p) = min_{q ∈ B_V} d(p, q),    (8)

where d(p, q) is the Euclidean distance between p and q. In order to enhance the strength of distance changes, Eq. (8) is further modified as

    DT~_V(p) = min_{q ∈ B_V} d(p, q) exp(β d(p, q)),    (9)

where β = 0.1. Thus, according to Eq. (9), a set F_C(V) of contour features can be extracted from a vehicle V. If we scan all pixels of V in row-major order, F_C(V) can be represented as a vector, i.e.,

    F_C(V) = [DT~_V(p_0), ..., DT~_V(p_i), ...],    (10)

where all p_i belong to V and i is the scanning index.

B. Wavelet Feature

In this paper, a three-scale wavelet transform is used to process all vehicle images. Each wavelet coefficient is quantized to three levels, i.e., 1, 0, or -1, according to whether its value is larger than, equal to, or less than 0. All quantized coefficients are then recorded for further recognition. When recording, each wavelet coefficient is further classified into one of the bands LL, LH, HL, and HH. According to this classification, a pixel p is labeled 1, 2, 3, or 4 if it is located in the LL, LH, HL, or HH band, respectively. Let l(p) denote the label of p. Then, given a vehicle V, we can extract a set F_W(V) of WT features from its wavelet coefficients. If V is scanned in row-major order, F_W(V) can be represented as a vector, i.e.,

    F_W(V) = [l(p_0) Coeff_V^W(p_0), ..., l(p_i) Coeff_V^W(p_i), ...].    (11)

C. Integration and Verification

Given a vehicle V, based on Eqs. (10) and (11), we can extract two feature vectors F_C(V) and F_W(V) from its contour and its wavelet transform, respectively. For convenience, these two features are combined into a new feature vector F(V), i.e., F(V) = [F_C(V), F_W(V)]. For a vehicle class C_i with N_i templates, we can calculate the mean μ_i and covariance Σ_i of F(V) over all samples V in C_i. Then, given a vehicle hypothesis H, the similarity between H and C_i can be measured by

    S(H, C_i) = exp(-(F(H) - μ_i) Σ_i^(-1) (F(H) - μ_i)^t).    (12)

Therefore, given a position X, its vehicle response is defined as

    R(X) = max_{s, θ_i} S(H_s^I(X), C_i).    (13)

When calculating Eq. (13), the parameter θ_i can be eliminated if the direction of the hypothesis H_s^I(X) is known in advance. In [12], a good moment-based method is provided for estimating the orientation of the longest axis of a region. If θ_s^I(X) denotes the orientation of H_s^I(X), Eq. (13) can be rewritten as

    R(X) = max_s S(H_s^I(X), C_{θ_s^I(X)}).    (14)

Two thresholds are used to remove spurious responses and to declare whether a vehicle is detected at position X. Let R̄ be the average value of R(X) over all vehicle centers X in the training samples. For a vehicle pixel X, if its response R(X) is larger than 0.8 R̄, it is considered a vehicle candidate. In addition to R̄, another threshold τ_C (a threshold on the number of corners) is used to remove false detections: if X contains a real vehicle, the number of corners around X should be larger than τ_C.

V. EXPERIMENTAL RESULTS

In order to analyze the performance of the proposed method, several experiments were conducted on various static images captured under different lighting conditions. The first experiment evaluated the performance of our vehicle-color detection method. Fig. 4 shows the result of detecting vehicle colors using Eq. (7): (a) is the original image and (b) is the result obtained from (a). To evaluate our method for detecting vehicle colors, the precision and false-alarm rates are used in this paper. Precision is the ratio of the number of correctly detected vehicle pixels to the number of actually existing vehicle pixels. The false-alarm rate is the ratio of the number of background pixels misclassified as vehicle pixels to the number of all background pixels. In Fig. 4, the precision rate and false-alarm rate of vehicle pixel detection were 86.1% and 6.3%, respectively. The low false-alarm rate implies that most background pixels were filtered out and did not need further verification; the verification process could therefore be performed very efficiently, since many redundant searches were eliminated. Fig. 5 shows another result of vehicle color detection, with a precision rate of 89.9% and a false-alarm rate of 2.1%.

The proposed method also works well for detecting vehicles on highways and roads. Fig. 8 shows two results of vehicle detection in which the vehicles were driving on highways; all of them were correctly detected. The average accuracy of vehicle detection using the proposed algorithm is 94.5%.
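The two pixel-level scores used above can be computed from a predicted mask and a ground-truth mask as follows; this is a small sketch using the definitions exactly as stated in the text (note that "precision" here is defined against the actually existing vehicle pixels).

```python
import numpy as np

def pixel_scores(pred, truth):
    """Precision and false-alarm rate for binary vehicle-pixel masks.

    precision   = correctly detected vehicle pixels / actually existing vehicle pixels
    false_alarm = background pixels misclassified as vehicle / all background pixels
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    precision = (pred & truth).sum() / max(truth.sum(), 1)
    false_alarm = (pred & ~truth).sum() / max((~truth).sum(), 1)
    return precision, false_alarm
```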

Fig. 8: Results of detecting vehicles from roads.

Fig. 4: Result of vehicle color detection. (a) Original image. (b) Detection result.

Fig. 5: Result of vehicle color detection. (a) Original image. (b) Detection result.

Fig. 6: Result of vehicle detection in a parking lot.

Fig. 7: Results of vehicle detection in parking lots.

Another set of experiments was performed to examine the capability of the proposed algorithm to detect all possible vehicles in various static images. Fig. 6 shows the detection result obtained from a parking lot. Although these vehicles have different colors, all of them were correctly detected and located. In addition, a vehicle occluded by a tree was still correctly detected. Fig. 7 shows another two results of vehicle detection in parking lots.

ACKNOWLEDGEMENTS

This work was supported in part by the National Science Council of Taiwan under Grant NSC93-2213-E-150-026 and by the Department of Industrial Technology of Taiwan under Grant 93-EC-17-A-02-S1-032.

REFERENCES
[1] V. Kastrinaki, et al., "A survey of video processing techniques for traffic applications," Image and Vision Computing, vol. 21, no. 4, pp. 359-381, April 2003.
[2] R. Cucchiara, P. Mello, and M. Piccardi, "Image analysis and rule-based reasoning for a traffic monitoring system," IEEE Trans. on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37-47, March 2002.
[3] G. L. Foresti, V. Murino, and C. Regazzoni, "Vehicle recognition and tracking from road image sequences," IEEE Trans. on Vehicular Technology, vol. 48, no. 1, pp. 301-318, Jan. 1999.
[4] J. Wu, X. Zhang, and J. Zhou, "Vehicle detection in static road images with PCA-and-wavelet-based classifier," 2001 IEEE Intelligent Transportation Systems Conference, Oakland, CA, USA, pp. 740-744, Aug. 25-29, 2001.
[5] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection using Gabor filters and support vector machines," IEEE International Conference on Digital Signal Processing, Santorini, Greece, July 2002.
[6] M. Bertozzi, A. Broggi, and S. Castelluccio, "A real-time oriented system for vehicle detection," Journal of Systems Architecture, pp. 317-325, 1997.
[7] C. Tzomakas and W. von Seelen, "Vehicle detection in traffic scenes using shadows," Tech. Rep. 98-06, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany, 1998.
[8] J. C. Rojas and J. D. Crisman, "Vehicle detection in color images," IEEE Conference on Intelligent Transportation System, pp. 403-408, Nov. 9-11, 1997.
[9] D. Guo et al., "Color modeling by spherical influence field in sensing driving environment," 2000 IEEE Intelligent Vehicles Symposium, pp. 249-254, Oct. 3-5, 2000.
[10] Y. Ohta, T. Kanade, and T. Sakai, "Color information for region segmentation," Computer Graphics and Image Processing, vol. 13, pp. 222-241, 1980.
[11] G. Healey, "Segmenting images using normalized color," IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 1, pp. 64-73, 1992.
[12] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision. London, U.K.: Chapman & Hall, 1993.
