Vehicle Detection Using Normalized Color
and Edge Map
Luo-Wei Tsai, Jun-Wei Hsieh*, and Kao-Chin Fan
Department of Computer Engineering, National Central University,
Jung-Da Rd., Chung-Li 320, Taiwan
Department of Electrical Engineering, Yuan Ze University,
135 Yuan-Tung Road, Chung-Li 320, Taiwan
shieh@saturn.yzu.edu.tw
Abstract: This paper presents a novel approach for detecting vehicles from static images using color and edges. Different from traditional methods, which use motion features to detect vehicles, this method introduces a new color transform model to find important vehicle colors in images for quickly locating possible vehicle candidates. Since vehicles have different colors under different lighting conditions, few color-based works have been proposed for vehicle detection. This paper shows that the new color transform model has an excellent ability to separate vehicle pixels from the background even under various illumination conditions. Each detected pixel corresponds to a possible vehicle candidate. Then, two important features, edge maps and wavelet transform coefficients, are used to construct a multi-channel classifier to verify each candidate. With this classifier, we can perform an effective scan to detect all desired vehicles in static images. Since the color feature first filters out most background pixels, this scan can be performed extremely quickly. Experimental results show that the integrated scheme is very powerful in detecting vehicles from static images. The average accuracy of vehicle detection is 94.5%.

I. INTRODUCTION

Vehicle detection [1] is an important problem in many applications such as self-guided vehicles, driver assistance systems, and intelligent parking systems. One of the most common approaches to vehicle detection uses vision-based techniques to analyze vehicles in images or videos. However, due to variations in vehicle colors, sizes, orientations, shapes, and poses, developing a robust and effective vision-based vehicle detection system is very challenging. To address these problems, many approaches using different features and learning algorithms have been investigated. For example, many techniques [1]-[3] used background subtraction to extract motion features for detecting moving vehicles in video sequences. However, this kind of motion feature is not available in still images. For static images, Wu et al. [4] used the wavelet transform to extract texture features for locating possible vehicle candidates on roads; each candidate was then verified using a PCA classifier. In addition, Z. Sun et al. [5] used Gabor filters to extract different textures and then verified each vehicle candidate with an SVM classifier. Furthermore, in [6], Bertozzi et al. used corner features to build four templates for detecting vehicles. In [7], Tzomakas and Seelen found that the shadow area underneath a vehicle is a good cue for detecting vehicles. The major drawback of the above methods, which use local features to search for vehicles, is the need for a full search that scans all pixels of the whole image.

As for the global feature of color, since vehicles have very large variations in their colors, few color-based works have addressed vehicle detection. In [9], Guo et al. used several color balls to model road colors in the L*a*b* color space; vehicle pixels were then identified as those classified into non-road regions. However, since these color models were not general enough to model vehicle colors, many false detections were produced, degrading the accuracy of vehicle detection.

In this paper, we propose a novel method to detect vehicles in still images using colors and edges. Although the color of an object varies considerably under different lighting conditions, it still has very useful properties for describing objects. For example, the color of a vehicle looks almost the same from different viewing points (if observed at the same time) and can be extracted in real time. If a general color transform can be found that accurately captures vehicles' visual characteristics, color becomes a very useful cue to narrow down the search area for possible vehicles. The contribution of this paper is a novel color model that makes vehicle colors more compact and concentrated in a smaller region of the color space. The model is global and does not need to be re-estimated for new vehicles or new images. Since no prior knowledge of surface reflectance, weather conditions, or viewing geometry is used in the training phase, the model performs very well in separating vehicle pixels from background pixels. After that, this paper uses two features, edge maps and wavelet transform coefficients, to form a multi-channel classifier. The classifier is modeled with a Gaussian model and can be learned automatically from a set of training images. Since the classifier records many variations of vehicle appearance, it has good discriminability for verifying each vehicle candidate. When scanning, since most background pixels have already been eliminated using color, the verification process can be performed extremely efficiently. Due to the filtering and discriminative abilities of the proposed method, all desired vehicles can be effectively and efficiently detected in static images. Experimental results reveal the feasibility and validity of the proposed approach.
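To make the role of color normalization concrete: the sketch below is not the paper's transform (its Eq.(7) is derived later in the text) but a plain chromaticity normalization in the spirit of normalized color [11], illustrating why a normalizing transform makes a surface's color stable across illumination levels. The function name is of our own choosing.

```python
def chromaticity(r, g, b):
    """Map an (R, G, B) pixel to intensity-normalized chromaticity (r', g').
    Dividing by the total intensity cancels a uniform lighting scale factor."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0          # convention for pure black pixels
    return r / s, g / s          # b/s is redundant since the three sum to 1

# The same paint seen under bright and dim illumination (a 2x intensity
# scale) maps to the same point in chromaticity space:
bright = chromaticity(200, 40, 40)
dim = chromaticity(100, 20, 20)
```

In the paper's pipeline, pixels that fall into the compact "vehicle color" region of such a transformed space become candidates, so most background pixels can be rejected before the classifier is ever run.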
where d(p, q) is the Euclidean distance between p and q. In order to enhance the strength of distance changes, Eq.(8) is further modified as

    DT_V(p) = min_{q ∈ B_V} d(p, q) exp(λ d(p, q)),    (9)

where λ = 0.1. Thus, according to Eq.(9), a set F_C(V) of contour features can be extracted from a vehicle V. If we scan all pixels of V in row-major order, F_C(V) can then be represented as a vector, i.e.,

    F_C(V) = [DT_V(p_0), ..., DT_V(p_i), ...],    (10)

where all p_i belong to V and i is the scanning index.

B. Wavelet Feature

In this paper, a three-scale wavelet transform is used to process all vehicle images. Each wavelet coefficient is then quantized to three levels, 1, 0, and -1, according to whether its value is larger than 0, equal to 0, or less than 0, respectively. After that, all the quantized coefficients are recorded for further recognition. When recording, each wavelet coefficient is further classified into one of the bands LL, LH, HL, and HH. According to this classification, a pixel p is labeled 1, 2, 3, or 4 if it is located in the LL, LH, HL, or HH band, respectively. Let l(p) denote the label value of p. Then, given a vehicle V, from its wavelet coefficients we can extract a set F_W(V) of wavelet features.

Whether a vehicle is detected at a position X is declared as follows. Let R̄ be the average value of the response R(X) over all vehicle centers X in the training samples. For a vehicle pixel X, if its response R(X) is larger than 0.8 R̄, it is considered a vehicle candidate. In addition to R̄, another threshold T_C (a threshold on the number of corners) is used to remove false detections of vehicles. If X contains a real vehicle, the number of corners around X should be larger than T_C.

V. EXPERIMENTAL RESULTS

In order to analyze the performance of the proposed method, several experiments were conducted on various static images captured under different lighting conditions. The first experiment evaluated the performance of our vehicle-color detection method. Fig. 4 shows the result of detecting vehicle colors using Eq.(7): (a) is the original image and (b) is the result obtained from (a). To evaluate our method for detecting vehicle colors, the precision and false-alarm rates are used in this paper. Precision is the ratio of the number of correctly detected vehicle pixels to the number of actually existing vehicle pixels. The false-alarm rate is the ratio of the number of background pixels misclassified as vehicle pixels to the number of all background pixels.
In Fig. 4, the precision rate and false-alarm rate of vehicle-pixel detection were 86.1% and 6.3%, respectively. The low false-alarm rate implies that most background pixels were filtered out and did not need to be further verified; the verification process could therefore be performed very efficiently, owing to the elimination of many redundant searches. Fig. 5 shows another result of vehicle-color detection, with a precision rate of 89.9% and a false-alarm rate of 2.1%.

Fig. 4: Result of vehicle color detection. (a) Original image. (b) Detection result.

Fig. 5: Result of vehicle color detection. (a) Original image. (b) Detection result.

Another set of experiments examined the capability of the proposed algorithm to detect all possible vehicles in various static images. Fig. 6 shows the detection result obtained from a parking lot. Although the vehicles have different colors, all of them were correctly detected and located; even a vehicle occluded by a tree was correctly detected. Fig. 7 shows another two results of vehicle detection. In addition to parking lots, the proposed method also works well for detecting vehicles on highways and roads. Fig. 8 shows two results of vehicle detection on highways; all vehicles were correctly detected. The average accuracy of vehicle detection using the proposed algorithm is 94.5%.

Fig. 6: Result of vehicle detection in a parking lot.

Fig. 7: Result of vehicle detection in parking lots.

ACKNOWLEDGEMENTS

This work was supported in part by the National Science Council of Taiwan under Grant NSC93-2213-E-150-026 and by the Department of Industrial Technology of Taiwan under Grant 93-EC-17-A-02-S1-032.

REFERENCES

[1] V. Kastinaki et al., "A survey of video processing techniques for traffic applications," Image and Vision Computing, vol. 21, no. 4, pp. 359-381, April 2003.
[2] R. Cucchiara, P. Mello, and M. Piccardi, "Image analysis and rule-based reasoning for a traffic monitoring system," IEEE Trans. on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37-47, March 2002.
[3] G. L. Foresti, V. Murino, and C. Regazzoni, "Vehicle recognition and tracking from road image sequences," IEEE Trans. on Vehicular Technology, vol. 48, no. 1, pp. 301-318, Jan. 1999.
[4] J. Wu, X. Zhang, and J. Zhou, "Vehicle detection in static road images with PCA-and-wavelet-based classifier," 2001 IEEE Intelligent Transportation Systems Conference, pp. 740-744, Oakland, CA, USA, Aug. 25-29, 2001.
[5] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection using Gabor filters and support vector machines," IEEE International Conference on Digital Signal Processing, Santorini, Greece, July 2002.
[6] M. Bertozzi, A. Broggi, and S. Castelluccio, "A real-time oriented system for vehicle detection," Journal of Systems Architecture, pp. 317-325, 1997.
[7] C. Tzomakas and W. von Seelen, "Vehicle detection in traffic scenes using shadows," Tech. Rep. 98-06, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany, 1998.
[8] J. C. Rojas and J. D. Crisman, "Vehicle detection in color images," IEEE Conference on Intelligent Transportation System, pp. 403-408, Nov. 9-11, 1997.
[9] D. Guo et al., "Color modeling by spherical influence field in sensing driving environment," 2000 IEEE Intelligent Vehicles Symposium, pp. 249-254, Oct. 3-5, 2000.
[10] Y. Ohta, T. Kanade, and T. Sakai, "Color information for region segmentation," Computer Graphics and Image Processing, vol. 13, pp. 222-241, 1980.
[11] G. Healey, "Segmenting images using normalized color," IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 1, pp. 64-73, 1992.
[12] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision, London, U.K.: Chapman & Hall, 1993.