
2010 International Symposium on Computer, Communication, Control and Automation

Vision-based Vehicle Detection in the Nighttime

Ying-Che Kuo
Department of Electrical Engineering
National Chin-Yi University of Technology
Taichung, Taiwan
kuoyc@ncut.edu.tw

Hsuan-Wen Chen
Department of Electrical Engineering
National Chin-Yi University of Technology
Taichung, Taiwan
ddgreen850@gmail.com

Abstract—In this paper, we present a vision-based vehicle detection method for the collision warning function of a driver assistance system on highways in the nighttime. The major function of our work is to find preceding vehicles against a dynamic background. The system captures images of the road environment with a camera mounted on the windshield of the test car, uses multi-level image processing algorithms to extract the taillights of preceding vehicles, and identifies the preceding vehicles by taillight clustering. At the same time, it estimates the relative distance between the test car and the preceding vehicle for collision warning. In our experiments, the system is implemented on an embedded platform with the Linux operating system, open source code, and limited hardware resources. The experimental results show that the system can correctly verify preceding vehicles in the nighttime while meeting the real-time requirement.

Keywords—Vehicle Detection, Night Vision, Driver Assistance.

I. INTRODUCTION

In recent years, vision-based vehicle detection has become an emerging research field in advanced driver assistance systems (ADAS) and autonomous vehicles. Several studies based on vision and image processing have been presented [1]-[12]. In the nighttime, the characteristics of a vehicle are not apparent. Therefore, most nighttime vehicle detection methods use the features of the pair of taillights to verify vehicles [2]-[8][12]. In [2][3][6], the image is first separated into two regions by the horizon. This eliminates illuminant non-vehicle objects that appear in the sky, such as street lamps and traffic lights. The characteristics of the red-colored taillights are then used to distinguish taillights from headlights and other bright objects at night. Finally, the relationship between a pair of bright objects is calculated for clustering and labeling vehicles. The vehicle detection methods proposed in the literature can detect vehicles effectively but still have some drawbacks. For instance, vehicle characteristic extraction is sensitive to illuminant disturbances at night, and vehicles cannot be detected when a taillight is broken or turned off. In [8], the authors adopted active sensors (e.g., radar, laser, and sonar) together with computer vision for vehicle detection, which gives good performance in both detection and range measurement. However, active sensors are expensive, which makes such a system hard to realize.

In this paper, we implement the vehicle detection system on an embedded system. It can verify the preceding vehicles and estimate the distance between the preceding vehicle and the test car. The proposed system can be applied to the collision warning (CW) function of an intelligent transportation system (ITS). Our proposed algorithm and detection procedure are shown in Fig. 1. Under our observations, the taillight is the only reliable characteristic of the preceding vehicle in the nighttime road environment. Therefore, our work utilizes multi-level image processing to extract red-colored taillights for vehicle detection. To reduce computation, a tracking method is also applied once the preceding vehicle is verified.

Fig. 1 The flow diagram of vehicle detection in the nighttime.

II. VEHICLE DETECTION ALGORITHM

In our work, a CMOS web camera, mounted on the windshield of a test car, captures the RGB color image in front of the test car. The experimental environment is a highway at night. Considering the realization on an embedded system with limited hardware resources, the procedures of our proposed detection algorithm are shown in Fig. 1 and described as follows.

The region of interest (ROI), the region where vehicles may appear in the image, is defined first. The left and right boundaries of the ROI are the same as the left and right boundaries of the captured image. The top boundary of the ROI is located at the virtual horizon in the image. The bottom boundary of the ROI is set at five meters in front of the test car, based on our experience. Many unnecessary influences (e.g., lamps, traffic lights, and signs) outside the ROI are thus excluded.
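For illustration, a minimal Python/OpenCV sketch of the ROI definition described above could look as follows. The row indices for the virtual horizon and the five-meter line are placeholder values that would come from camera calibration; they are not values given in the paper.

```python
import cv2

# A rough sketch of the ROI definition, not the authors' code.
frame = cv2.imread("night_road.png")   # one 320x240 frame from the web camera (assumed file)

horizon_y, bottom_y = 80, 220          # hypothetical rows: virtual horizon and ~5 m line
roi = frame[horizon_y:bottom_y, :]     # ROI keeps the full image width, limited rows
```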




In the nighttime road environment, the taillights of vehicles are the main vehicle characteristics, composed of red color and luminant components. To reduce the computational complexity and the influence of other lights, the color image of the ROI (captured by the CMOS web camera) is transformed from the RGB color space into the YCbCr color space. In the subsequent image processing, the bright objects (taillights) are extracted according to the Y and Cr components of the YCbCr image. In order to select the threshold values of the Y and Cr components against a dynamic background, we use Otsu's method [10] to decide the Y threshold value. Furthermore, we observed that the Cr values of taillights are always greater than 136 in our experiments. Therefore, the threshold value of the Cr component is set to 136. The binarization procedure then filters the image with these two thresholds, and the result of bright object extraction is shown in Fig. 2 (b).
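A minimal sketch of this bright-object extraction step, assuming OpenCV is available, is given below: the ROI is converted to YCrCb, the Y threshold is chosen by Otsu's method, and the fixed Cr threshold of 136 from the text is applied. The input file name and variable names are illustrative only.

```python
import cv2

roi = cv2.imread("roi.png")                             # BGR crop of the ROI (assumed input)
ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)          # OpenCV orders the channels Y, Cr, Cb
y, cr, _ = cv2.split(ycrcb)

# Y threshold selected automatically by Otsu's method [10].
_, y_mask = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Fixed Cr threshold of 136 observed for taillights in the paper.
_, cr_mask = cv2.threshold(cr, 136, 255, cv2.THRESH_BINARY)

bright = cv2.bitwise_and(y_mask, cr_mask)               # binary map of candidate taillights
```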
In Fig. 2 (b), the extracted bright objects still include some non-vehicle objects (e.g., lane marks and the light of street lamps), and the bright characteristics of the vehicle taillights are only weakly intensified. Therefore, we apply the morphological erosion operation to filter out tiny bright noise and the morphological dilation operation to intensify the bright characteristics. This processing is the opening operation in morphology [9], and the result is shown in Fig. 3 (a).

Fig. 2 (a) Original captured image. (b) Bright object extraction in the ROI region.
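For illustration, a minimal OpenCV sketch of the opening operation described above could look as follows; the 3x3 structuring element and the input file name are assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

# Binary map of candidate taillights from the thresholding step (assumed input).
bright = cv2.imread("bright_mask.png", cv2.IMREAD_GRAYSCALE)

# Opening = erosion (removes tiny bright noise) followed by dilation
# (restores and intensifies the surviving taillight blobs) [9].
kernel = np.ones((3, 3), np.uint8)              # structuring element size is an assumption
opened = cv2.morphologyEx(bright, cv2.MORPH_OPEN, kernel)
```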
To obtain the information of each bright object in the image, each bright object is labeled and segmented by the connected-component labeling method. The results of connected-component labeling are shown in Fig. 3 (b). In addition, the center position and the left, right, top, and bottom positions of each bright object are found by using the peak finding algorithm [9].

Fig. 3 (a) Results of the opening operation. (b) Results of connected-component labeling.
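The sketch below shows the labeling step using OpenCV's connected-component routine in place of the peak finding algorithm cited from [9]; the bounding box and centroid of each labeled blob correspond to the per-object positions described above.

```python
import cv2

opened = cv2.imread("opened_mask.png", cv2.IMREAD_GRAYSCALE)   # result of the opening step

# Connected-component labeling: each stats row holds left, top, width, height
# and area of one bright object; centroids holds its center position.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(opened)
for i in range(1, num):                                         # label 0 is the background
    left, top, w, h, area = stats[i]
    print(f"object {i}: left={left} top={top} width={w} height={h} center={tuple(centroids[i])}")
```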
III. TAILLIGHT CLUSTERING ALGORITHM

According to our experiments and observations, the pair of taillights of a vehicle follows some regular rules; for example, the ratio of the width to the height of a vehicle lies within certain restrictions. The taillight clustering algorithm uses the information of each bright object to match two bright objects that form the pair of taillights of the same vehicle. The algorithm proceeds as follows:

step 1: Bi is one of the bright objects in the image.

step 2: The left, right, top, and bottom coordinates of the bright object are denoted l(Bi), r(Bi), t(Bi), and b(Bi), respectively. The width W(Bi) and height H(Bi) of the bright object Bi are then calculated from these coordinates.

step 3: Nh(Bi, Bj) and Nv(Bi, Bj) are the numbers of pixels between any two bright objects (Bi and Bj) in the horizontal and vertical directions; they are calculated by equations (1) and (2), respectively. Nh(Bi, Bj) is negative when the two bright objects overlap in the horizontal direction, and likewise Nv(Bi, Bj) is negative when the two bright objects do not overlap in the vertical direction.

Nh(Bi, Bj) = max[l(Bi), l(Bj)] − min[r(Bi), r(Bj)]    (1)
Nv(Bi, Bj) = min[b(Bi), b(Bj)] − max[t(Bi), t(Bj)]    (2)

step 4: Ov(Bi, Bj) is the ratio of overlap between the two bright objects in the vertical direction, computed by equation (3). A judgment threshold for Ov(Bi, Bj), denoted To, is used for the decision; the discriminant is given in equation (4).

Ov(Bi, Bj) = Nv(Bi, Bj) / min[H(Bi), H(Bj)]    (3)
Ov(Bi, Bj) ≥ To    (4)

step 5: Calculate H(Bs)/H(Bl), the ratio of the heights of the two bright objects, and determine whether this value is larger than the threshold Tsh. The discriminant is given in equation (5), where Bs is the bright object with the smaller height and Bl is the one with the larger height. If the discriminant holds, the pair of taillights have similar heights.

H(Bs) / H(Bl) ≥ Tsh    (5)

step 6: Calculate the vehicle width VW(Bi, Bj) and the widths of the pair of taillights, W(Bi) and W(Bj). The discriminant equation (6) is defined for this judgment, and the value of VW(Bi, Bj) is computed by equation (7).

VW(Bi, Bj) ≤ max[W(Bi), W(Bj)] × Tn    (6)
VW(Bi, Bj) = Nh(Bi, Bj) + W(Bi) + W(Bj)    (7)

step 7: A pair of bright objects is clustered together in the image if all the discriminant equations (4), (5), and (6) hold. The system then encloses the two bright objects with a bounding box. According to the results of our experiments and observations, the ratio of the width VW(Bi, Bj) to the height H(Bl) of the bounding box usually lies between 2.5 and 8, as expressed in equation (8). A candidate preceding vehicle enclosed by the bounding box is verified if equation (8) also holds.

2.5 ≤ VW(Bi, Bj) / H(Bl) ≤ 8    (8)

The thresholds To, Tsh, and Tn mentioned above are chosen as 0.7, 0.8, and 6, respectively. These choices are based on the specifications of taillight pairs in vehicles.
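To make the pairing rules concrete, the following Python sketch applies discriminants (1) through (8) with the thresholds given above. The box representation (left, top, right, bottom) and the example coordinates are illustrative assumptions, and the comparison directions follow the reconstruction of equations (4) to (6).

```python
# Sketch of the taillight clustering rules (1)-(8); not the authors' code.
T_O, T_SH, T_N = 0.7, 0.8, 6              # thresholds given in the paper

def width(box):  return box[2] - box[0]
def height(box): return box[3] - box[1]

def is_taillight_pair(a, b):
    la, ta, ra, ba = a
    lb, tb, rb, bb = b
    n_h = max(la, lb) - min(ra, rb)                   # eq. (1): horizontal gap
    n_v = min(ba, bb) - max(ta, tb)                   # eq. (2): vertical overlap
    o_v = n_v / min(height(a), height(b))             # eq. (3): vertical overlap ratio
    v_w = n_h + width(a) + width(b)                   # eq. (7): candidate vehicle width
    h_l = max(height(a), height(b))                   # height of the taller object (Bl)
    h_s = min(height(a), height(b))                   # height of the shorter object (Bs)
    return (o_v >= T_O and                            # eq. (4)
            h_s / h_l >= T_SH and                     # eq. (5)
            v_w <= max(width(a), width(b)) * T_N and  # eq. (6)
            2.5 <= v_w / h_l <= 8)                    # eq. (8)

# Example: two bright objects of similar height, separated horizontally.
print(is_taillight_pair((100, 120, 115, 132), (160, 121, 176, 133)))   # True
```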


IV. VEHICLE VERIFICATION AND TRACKING

To verify whether a candidate vehicle in the image is an actual vehicle, we use sequential images for verification. The left and right boundary positions of candidate vehicles in sequential image frames are recorded, and the vehicle positions in the current image are compared with the candidate vehicle positions in the previous image. If the left or right boundary of the candidate vehicle in the current image overlaps the right or left boundary of the candidate vehicle in the previous image, the system determines that the candidate vehicle is invalid. On the contrary, if the candidate vehicle positions match the detection results in the previous frame, the candidate vehicle is verified as valid.

So far, the proposed vehicle detection algorithm can accurately verify candidate vehicles. In order to implement the vehicle detection system on an embedded system with limited hardware resources, we propose a vehicle tracking algorithm that reduces the computation in the following image frames once the candidate vehicle in the current frame is verified.

In the tracking mode, the average gray-scale value in the region of the verified vehicle in the previous frame is computed first. Then a new average gray-scale value in the same region in the current frame is calculated. The MAE, defined in equation (9), is the mean absolute error between the two regions in two consecutive frames, where Ik and Ik+1 denote the gray-scale images of frame k and frame k+1. If the MAE value is less than a threshold, the vehicle tracking process continues; otherwise, the vehicle detection process is re-executed instead of the tracking process.

MAE = (1/KL) Σ_{i=0..K} Σ_{j=0..L} | Ik(x+i, y+j) − Ik+1(x+i, y+j) |    (9)
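A minimal sketch of this tracking-mode test is given below. It computes the MAE of equation (9) over the verified vehicle region in two consecutive gray-scale frames; the threshold value is an assumption, since the paper does not state it.

```python
import numpy as np

MAE_THRESHOLD = 20.0        # assumed value; the paper does not give the threshold

def mae(prev_gray, curr_gray, box):
    """box = (x, y, w, h): region of the verified vehicle in the previous frame."""
    x, y, w, h = box
    prev_region = prev_gray[y:y + h, x:x + w].astype(np.float32)
    curr_region = curr_gray[y:y + h, x:x + w].astype(np.float32)
    return float(np.mean(np.abs(prev_region - curr_region)))   # eq. (9)

def next_mode(prev_gray, curr_gray, box):
    # Stay in tracking mode while the region changes little; otherwise re-detect.
    return "track" if mae(prev_gray, curr_gray, box) < MAE_THRESHOLD else "detect"
```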
V. PRECEDING VEHICLE DISTANCE MEASUREMENT

In the above sections, we use multi-level image processing to locate the position of the preceding vehicle in the image; the distance between the preceding vehicle and the test car is then measured using the bottom location of the preceding vehicle and the parameters of the camera. We apply the estimation model presented in [11] to estimate the distance between the preceding vehicle and the test car in the captured images. First, the image coordinates are transformed into camera coordinates, and then the camera coordinates are transformed into world coordinates. In this paper, the vehicle coordinate system is defined to be the same as the world coordinate system. The transformation between the coordinate planes is shown in Fig. 4, and equations (10) and (11) are the perspective transformation and the inverse perspective transformation, respectively. f denotes the focal length of the camera, Su and Sv denote the width and height scaling factors, Yc is the height of the mounted camera above the ground, vi is the bottom position of the verified vehicle in the image, and Zc denotes the range between the preceding vehicle and the test car.

Fig. 4 The transformation from the camera plane to the world plane.

ui = f Su Xc / Zc ,  vi = f Sv Yc / Zc    (10)
Zc / Yc = f Sv / vi ,  so  Zc = f Sv Yc / vi    (11)
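As a worked illustration of the inverse perspective estimate in equation (11), the sketch below computes Zc from the bottom row of the verified vehicle. The focal length, scaling factor, camera height, and the convention that vi is measured from the principal point are all placeholder assumptions, not values from the paper.

```python
# Sketch of eq. (11): Zc = f * Sv * Yc / vi (assumed convention: vi measured
# downward from the principal point of the image).
F_PIXELS_SV = 350.0    # assumed product f * Sv, i.e. focal length in pixel units
CAMERA_HEIGHT = 1.2    # assumed Yc: camera height above the ground, in meters
PRINCIPAL_ROW = 120.0  # assumed image row of the principal point (240-row image)

def range_to_vehicle(bottom_row):
    """bottom_row: image row of the verified vehicle's bottom edge."""
    vi = bottom_row - PRINCIPAL_ROW           # offset below the principal point
    if vi <= 0:
        return None                           # bottom edge at or above the horizon
    return F_PIXELS_SV * CAMERA_HEIGHT / vi   # Zc in meters

print(range_to_vehicle(150.0))                # ~14 m for a bottom edge 30 rows below
```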


VI. EXPERIMENTS AND RESULTS

In this paper, the system is implemented on the Intel XScale PXA270 SoC-based embedded hardware platform with the Linux operating system, open source code, and limited hardware resources, and is installed in a test car. The camera is mounted on the windshield for capturing road images under various weather conditions in the nighttime. The resolution of the captured image is 320 by 240 pixels at 30 frames per second.

Some representative results of the experiments are shown in Fig. 5. Fig. 5 (a) shows the result of vehicle detection when two vehicles appear simultaneously in a bright road environment with many street lamps. Fig. 5 (b) shows two vehicles in a dark road environment. The system can correctly detect vehicles under different lighting conditions in the nighttime. Fig. 5 (c) shows that the system can also accurately detect the preceding vehicles under the influence of the headlights of passing cars.

Fig. 5 Results of vehicle detection.

Although the feature of the bottom of the preceding vehicle is not apparent, we use half of the width of the vehicle to predict the bottom of the vehicle in the field tests. Furthermore, the range can be approximately measured once the bottom of the vehicle is found.

To evaluate the performance of the proposed system, we measured the processing time of the vehicle detection mode and the vehicle tracking mode in each frame. Table I shows the statistics over 10,000 processing runs. The processing time depends on the variation of the road environment.

TABLE I. PROCESSING TIME OF VEHICLE DETECTION AND TRACKING

                      Maximum Time (ms)   Minimum Time (ms)   Average Time (ms)
Vehicle Detection          436.42              111.18              188.87
Vehicle Tracking             5.49                0.17                1.23

As can be seen in Table I, the processing time in vehicle detection mode is about 188.87 ms on average, while the processing time in vehicle tracking mode is about 1.23 ms. The processing time is therefore significantly reduced in vehicle tracking mode. Based on the experimental results, we successfully implemented a vehicle detection system on an embedded system that can correctly detect preceding vehicles during the nighttime.
VII. CONCLUSION

This paper proposed a vehicle detection algorithm implemented on an embedded system. The system can effectively detect preceding vehicles and measure the range between the preceding vehicle and the test car in the nighttime. In the future, we still need to consider more kinds of environmental factors to optimize the system performance and make the system more useful for collision warning (CW).

REFERENCES
[1] Z. Sun, G. Bebis, and R. Miller, "On-road Vehicle Detection: A Review," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 649-711, May 2006.
[2] Y. L. Chen, C. T. Lin, C. J. Fan, C. M. Hsieh, and B. F. Wu, "Vision-based Nighttime Vehicle Detection and Range Estimation for Driver Assistance," in Proc. IEEE Int'l Conf. on Systems, Man and Cybernetics, 2008, pp. 2988-2993.
[3] Y. L. Chen, Y. H. Chen, C. J. Chen, and B. F. Wu, "Nighttime Vehicle Detection for Driver Assistance and Autonomous Vehicles," in Proc. 18th Int'l Conf. on Pattern Recognition, 2006, pp. 687-690.
[4] M. C. Lu, C. P. Tsai, M. C. Chen, Y. Y. Lu, W. Y. Wang, and C. C. Hsu, "A Practical Nighttime Vehicle Distance Alarm System," in Proc. IEEE Int'l Conf. on Systems, Man and Cybernetics, 2008, pp. 3000-3005.
[5] Y. L. Chen, B. F. Wu, and C. J. Fan, "Real-time Vision-based Multiple Vehicle Detection and Tracking for Nighttime Traffic Surveillance," in Proc. IEEE Int'l Conf. on Systems, Man and Cybernetics, 2009, pp. 3352-3358.
[6] C. C. Wang, S. S. Huang, and L. C. Fu, "Driver Assistance System for Lane Detection and Vehicle Recognition with Night Vision," in Proc. IEEE Int'l Conf. on Intelligent Robots and Systems, Aug. 2005, pp. 3530-3535.
[7] S. Naqumo, H. Hasegawa, and N. Okamoto, "Extraction of Forward Vehicles by Front-mounted Camera using Brightness Information," in Proc. IEEE Int'l Conf. on Electrical and Computer Engineering, May 2003, pp. 1243-1246.
[8] S. Y. Kim, S. Y. Oh, J. K. Kang, Y. W. Ryu, K. Kim, S. C. Park, and K. H. Park, "Front and Rear Vehicle Detection and Tracking in the Day and Night Times using Vision and Sonar Sensor Fusion," in Proc. IEEE Int'l Conf. on Intelligent Robots and Systems, Aug. 2005, pp. 2173-2178.
[9] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice-Hall, Inc., 2002.
[10] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Trans. on Systems, Man, and Cybernetics, vol. SMC-9, no. 1, pp. 62-66, Jan. 1979.
[11] G. P. Stein, O. Mano, and A. Shashua, "Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy," in Proc. IEEE Intelligent Vehicles Symposium, Jun. 2003, pp. 120-125.
[12] M. Y. Chern and P. C. Hou, "The Lane Recognition and Vehicle Detection at Night for a Camera-assisted Car on Highway," in Proc. IEEE Int'l Conf. on Robotics and Automation, vol. 2, Sep. 2003, pp. 2110-2115.


