
Chapter 20

Early Fire Detection Using Multi-Stage Pattern Recognition Techniques in Video Sequences

Dongkoo Shon, Myeongsu Kang, Junsang Seo and Jong-Myon Kim

Abstract This paper proposes an efficient early fire detection approach using
multi-stage pattern recognition techniques, including background subtraction for
movement-containing region detection, statistical rule-based color segmentation in
the YCbCr color space, a single-level spatial wavelet decomposition for observing
the flicker of fire, and a support vector machine to discriminate between fire and
non-fire. The proposed approach is evaluated in terms of the percentages of true
positives and false negatives. Experimental results indicate that the average fire
detection and false non-fire detection rates are 99.67 and 3.69 %, respectively.

Keywords Background subtraction · Wavelet decomposition · Support vector machine · Surveillance system

20.1 Introduction

Early fire detection has been an increasingly important issue since it is closely
related to personal and property safety. Although sensor-based fire detection
systems first came into the spotlight by detecting either heat or smoke for
early identification of whether or not a fire is occurring, these systems have the

D. Shon · M. Kang · J. Seo · J.-M. Kim (corresponding author)
School of Electrical Engineering, University of Ulsan, Ulsan, South Korea
e-mail: jongmyon.kim@gmail.com
D. Shon
e-mail: dongkoo88@gmail.com
M. Kang
e-mail: ilmareboy@gmail.com
J. Seo
e-mail: siberiaj00@gmail.com

J. J. (Jong Hyuk) Park et al. (eds.), Frontier and Innovation in Future Computing
and Communications, Lecture Notes in Electrical Engineering 301,
DOI: 10.1007/978-94-017-8798-7_20, © Springer Science+Business Media Dordrecht 2014

drawback that sensors must be densely distributed over a wide area to achieve
highly precise fire detection [1]. Recent advances in video processing technologies
have led to a wave of research on computer vision-based fire detection systems,
whose advantages are summarized as follows [2]: (1) since light travels much
faster than heat and smoke, computer vision-based fire detection is appropriate
for early detection of fire, and (2) in general, images carry more scene
information, such as color and texture, which enables diverse approaches to fire
detection.
Several video-based fire detection algorithms have been introduced using
color pixel recognition, motion detection, or both [3–6]. For example, Toreyin
et al. [6] proposed a flame detection algorithm that not only detects fire- and
flame-colored moving regions in video but also analyzes the motion of such regions
in the wavelet domain for flicker estimation. This algorithm achieves considerable
success, but it lacks robustness. To improve the performance of early fire
detection in video sequences, this paper proposes an efficient fire detection
approach using multi-stage pattern recognition techniques.
The rest of this paper is organized as follows. Section 20.2 presents the pro-
posed fire detection approach using background subtraction, color segmentation,
wavelet decomposition, and support vector machine. Section 20.3 illustrates
experimental results and Sect. 20.4 finally concludes this paper.

20.2 The Proposed Fire Detection Method

20.2.1 Movement-Containing Region Detection Based on Background Subtraction

Since the boundaries of a fire tend to fluctuate continuously, movement-containing
region detection (MCRD) has been widely used as the first step of fire detection,
selecting candidate regions of fire. As mentioned earlier, background subtraction
is utilized for MCRD in this study. A pixel positioned at (i, j) is assumed to be
moving if the following condition is satisfied:
|In(i, j) − Bn(i, j)| > Th,   (20.1)
where In(i, j) represents the intensity value of the pixel at location (i, j) in the nth
gray-level input video frame, Bn(i, j) is the background intensity value at the same
pixel position, and Th is a threshold value (which was experimentally set to 3 in
this study). The background intensity value is iteratively updated using (20.2):
Bn+1(i, j) = { Bn(i, j) + 1,  if In(i, j) > Bn(i, j)
             { Bn(i, j) − 1,  if In(i, j) < Bn(i, j)   (20.2)
             { Bn(i, j),      if In(i, j) = Bn(i, j)

Fig. 20.1 Example of MCRD. a An original fire-containing frame and b a fire-containing frame
after MCRD

where Bn+1(i, j) is the estimated background intensity value of the pixel at location
(i, j) and Bn(i, j) is the previously estimated background intensity value at the same
pixel position. Initially, the background intensity value B1(i, j) is set to the
intensity value of the first video frame, I1(i, j). Figure 20.1 illustrates an example
of MCRD.
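The MCRD step can be sketched compactly; the following is a minimal NumPy
example, assuming grayscale frames stored as 8-bit arrays, with the threshold
Th = 3 and the per-pixel ±1 background update following (20.1) and (20.2). The
function name `update_background` is ours, not the authors'.

```python
import numpy as np

def update_background(frame, background, th=3):
    """One MCRD iteration: flag moving pixels (Eq. 20.1) and
    update the background estimate by +/-1 per pixel (Eq. 20.2)."""
    frame = frame.astype(np.int32)
    background = background.astype(np.int32)
    # Eq. (20.1): a pixel is moving if |I_n - B_n| > Th
    moving = np.abs(frame - background) > th
    # Eq. (20.2): nudge the background one gray level toward the frame
    new_bg = background + np.sign(frame - background)
    return moving, new_bg.astype(np.uint8)

# Toy example: the background is initialized with the first frame (B1 = I1)
frame1 = np.full((4, 4), 100, dtype=np.uint8)
frame2 = frame1.copy()
frame2[1, 1] = 150          # a single "moving" pixel
moving, bg = update_background(frame2, frame1)
```

Running this over a video, the moving-pixel mask from each frame would be
passed on to the color segmentation stage below.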

20.2.2 Color Segmentation

A number of moving objects (e.g., people, vehicles, and animals) besides
fire can still be included after MCRD. Thus, this paper uses further information
such as color variations. A set of rules has been developed over the past few
decades to classify fire pixels by utilizing raw red–green–blue (RGB) information
in color video sequences. However, the RGB color space has the disadvantage of
illumination dependence. The chrominance can be used to model the color of fire
rather than its intensity, which gives a more robust representation for fire pixels.
Thus, recently, many researchers have used color spaces, such as YCbCr, YUV, and
CIE Lab, in which the chrominance components (Cb, Cr, U, V, a, b) and lumi-
nance component (Y) of an image can be processed independently. In this study,
we use the YCbCr color space for detecting fire pixels. The conversion from RGB
to YCbCr color space is performed as follows:
[ Y  ]   [  0.2568   0.5041   0.0979 ] [ R ]   [  16 ]
[ Cb ] = [ -0.1482  -0.2910   0.4392 ] [ G ] + [ 128 ],   (20.3)
[ Cr ]   [  0.4392  -0.3678  -0.0714 ] [ B ]   [ 128 ]
where Y is the luminance and Cb and Cr are the chrominance components for blue-
difference and red-difference, respectively. To model fire pixels, the defined rules
for the RGB color space, i.e., R [ G [ B and R [ Rmean, can be translated into the
YCbCr space such as Y [ Cb and Cr [ Cb. In addition, since the fire-containing
regions are generally the brightest regions in the observed scene, the mean values

Fig. 20.2 Color segmentation result from a fire-containing image after MCRD. a An original
fire-containing image, b MCRD result, and c color segmentation result

of the three channels include important information, which can be expressed as
follows:
Fcandidate(i, j) = { 1, if Y(i, j) > Ymean, Cb(i, j) < Cbmean, Cr(i, j) > Crmean   (20.4)
                   { 0, otherwise
where Fcandidate(i, j) indicates that any pixel at the spatial location (i, j)
which satisfies the conditions given in (20.4) is labeled as a fire pixel.
Likewise, the mean values of the three channels in the YCbCr color space for an
M × N image are defined as follows:

1 XM X
N
1 XM X
N
Ymean ¼ Yði; jÞ; Cbmean ¼ Cbði; jÞ;
MN i j MN i j
ð20:5Þ
1 XM X
N
Crmean ¼ Crði; jÞ;
MN i j

where Y(i, j), Cb(i, j), and Cr(i, j) are the luminance, chrominance-blue, and
chrominance-red values at the spatial location (i, j) after MCRD, respectively.
Figure 20.2 shows the color segmentation result from a fire-containing image after
MCRD; the segmented candidate regions are noticeably more refined than the
candidate regions obtained right after MCRD.
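The conversion (20.3) and the rules (20.4)–(20.5) can be sketched together; the
following is a minimal NumPy example, assuming an RGB image stored as an
(M, N, 3) array with values in [0, 255]. The helper name `fire_candidates` is
ours, not the authors'.

```python
import numpy as np

def fire_candidates(rgb):
    """Rule-based color segmentation in YCbCr (Eqs. 20.3-20.5).
    rgb: array of shape (M, N, 3) with channel values in [0, 255]."""
    # Eq. (20.3): RGB -> YCbCr conversion matrix and offset
    m = np.array([[ 0.2568,  0.5041,  0.0979],
                  [-0.1482, -0.2910,  0.4392],
                  [ 0.4392, -0.3678, -0.0714]])
    offset = np.array([16.0, 128.0, 128.0])
    ycbcr = rgb.astype(np.float64) @ m.T + offset
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    # Eq. (20.4) with the channel means of Eq. (20.5):
    # brighter than average, below-average Cb, above-average Cr
    return (y > y.mean()) & (cb < cb.mean()) & (cr > cr.mean())

# Toy example: a dark frame with one small fire-colored (orange) patch
img = np.zeros((8, 8, 3))
img[2:4, 2:4] = [255.0, 80.0, 0.0]
mask = fire_candidates(img)
```

On this toy frame only the orange patch satisfies all three conditions, so the
mask singles out exactly those pixels.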

20.2.3 Color Variations Using Spatial Wavelet Analysis

Due to the nature of turbulent fire flicker, there are generally more color
variations in genuine fire-containing regions, whereas there are few color
variations in the candidate regions of fire, which may still include fire-colored
objects after color segmentation. Thus, this paper captures color variations in
pixel values by performing spatial wavelet analysis. Since high-frequency
information such as edges and texture around the fire is not sensitive to lighting
changes, and its more prominent signatures can discriminate irregular fire from
the regular movement of

Fig. 20.3 Normalized wavelet energies with fire/non-fire-containing movies

fire-colored objects, the wavelet energy of the high-frequency sub-images provides
a good representation of turbulent fire flicker, which is calculated as follows:
E(n) = (1 / (floor(M/2) × floor(N/2))) { |HLn|² + |LHn|² + |HHn|² },   (20.6)

where E(n) is the normalized wavelet energy of the nth video frame, and HLn, LHn,
and HHn contain the horizontal, vertical, and diagonal high-frequency components
of the nth floor(M/2) × floor(N/2) sub-images obtained by a single-level wavelet
transform, respectively. Figure 20.3 depicts the normalized wavelet energies for
two fire-containing videos and two non-fire-containing videos; each video consists
of 200 frames. E(n) is then used as the input of a classifier for detecting fire
in a video clip. Furthermore, we use a Daubechies 4-tap filter in this study,
which avoids poor localization.
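As an illustration of (20.6), the sketch below performs a single-level 2-D
decomposition with the Daubechies 4-tap (D4) analysis filters and sums the
detail-band energies. This is a simplified version assuming periodic boundary
handling, so coefficients near the image borders may differ from other wavelet
implementations.

```python
import numpy as np

# Daubechies 4-tap (D4) analysis filters: low-pass and its quadrature mirror
S3 = np.sqrt(3.0)
LO = np.array([1 + S3, 3 + S3, 3 - S3, 1 - S3]) / (4 * np.sqrt(2.0))
HI = LO[::-1] * np.array([1, -1, 1, -1])

def analyze_1d(x, filt):
    """Periodic filtering followed by dyadic downsampling."""
    n = len(x)
    out = np.zeros(n // 2)
    for k in range(n // 2):
        idx = (2 * k + np.arange(len(filt))) % n
        out[k] = np.dot(filt, x[idx])
    return out

def dwt2_energy(frame):
    """Normalized detail energy E(n) of Eq. (20.6): the summed
    HL, LH, HH energies divided by floor(M/2) * floor(N/2)."""
    rows_lo = np.apply_along_axis(analyze_1d, 1, frame, LO)
    rows_hi = np.apply_along_axis(analyze_1d, 1, frame, HI)
    hl = np.apply_along_axis(analyze_1d, 0, rows_hi, LO)  # horizontal detail
    lh = np.apply_along_axis(analyze_1d, 0, rows_lo, HI)  # vertical detail
    hh = np.apply_along_axis(analyze_1d, 0, rows_hi, HI)  # diagonal detail
    m, n = frame.shape
    return (np.sum(hl**2) + np.sum(lh**2) + np.sum(hh**2)) / ((m // 2) * (n // 2))
```

A perfectly flat frame yields zero detail energy, while a textured (flickering)
frame yields a positive value, which is the signature the classifier relies on.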

20.2.4 Classification

To classify candidate pixels as fire or non-fire pixels, this paper employs a support
vector machine (SVM), which offers high classification accuracy with limited
training data and does not require heuristic parameters for detecting fire pixels.
The SVM is a non-probabilistic binary classifier whose main goal is to find an
optimal hyper-plane that correctly separates the largest fraction of data points
while maximizing the margin, i.e., the distance of the two classes from the
hyper-plane. The SVM classification function is defined as:

f(x) = sign( Σ(i=0..l−1) wi · k(x, xi) + b ),   (20.7)

where wi are the weights for the outputs of each kernel, k(·) is a kernel
function, b is a bias term, l is the number of support vectors xi, and sign(·)
determines the class membership of x (i.e., the +1 class and the −1 class). The
classification function determined by the support vectors is then used to measure
how much a pixel belonging to the fire class (e.g., the +1 class) differs from
the non-fire class (e.g., the −1 class). In this study, we use a one-dimensional
feature vector including fire signatures in order to identify fire in the video
clip. However, since the two classes (e.g., fire and non-fire) are not linearly
separable with this non-linear feature vector, it is necessary to find an optimal
hyper-plane that can split the feature vector by mapping it to a high-dimensional
feature space. To deal with this problem, we use the radial basis function (RBF)
kernel as follows:
k(x, y) = exp( −‖x − y‖² / 2σ² )  for σ > 0,   (20.8)

where x and y are input feature vectors, and σ is a parameter that determines the
width of the effective basis function, which affects the classification accuracy.
In this study, we experimentally set the standard deviation σ to 0.1, yielding
high classification performance. The input test value x and the support vectors xi
obtained from a training data set are non-linearly mapped features using the RBF
kernel. A candidate fire pixel is finally classified as either a real fire pixel if the
result is 1 or a non-fire pixel if the result is -1 by using (20.7). To train the SVM,
we build a training dataset that includes 200 wavelet energies from training fire
pixels and 200 wavelet energies from fire-colored moving pixels, respectively.
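Equations (20.7) and (20.8) can be sketched directly; the weights, bias, and
support vectors below are hypothetical toy values for illustration only. In the
paper these would be obtained by training the SVM on the 200 fire and 200
fire-colored-object wavelet energies described above.

```python
import numpy as np

def rbf_kernel(x, y, sigma=0.1):
    # Eq. (20.8): k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, weights, bias, sigma=0.1):
    # Eq. (20.7): f(x) = sign(sum_i w_i * k(x, x_i) + b)
    s = sum(w * rbf_kernel(x, sv, sigma)
            for w, sv in zip(weights, support_vectors))
    return 1 if s + bias >= 0 else -1

# Hypothetical trained parameters: one "fire" support vector with a high
# wavelet energy (weight +1) and one "non-fire" with a low energy (weight -1)
SUPPORT_VECTORS = [np.array([0.8]), np.array([0.1])]
WEIGHTS = [1.0, -1.0]
BIAS = 0.0
```

With these toy parameters, a frame whose wavelet energy is close to 0.8 is
classified as fire (+1), and one close to 0.1 as non-fire (−1).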

20.3 Experimental Results

We implement the proposed fire detection algorithm in MATLAB 2012b on an Intel
Quad-Core 3.4 GHz PC platform. Five videos are used for evaluating the accuracy
of the fire detection algorithm, comprising 2,642 samples with dimensions of
256 × 256 (1,301 samples containing fire and 1,341 samples containing non-fire),
as illustrated in Fig. 20.4.
Table 20.1 presents the accuracy of the fire detection algorithm in terms of true
positives (TP) and false negatives (FN). TP is the number of frames in which a
real fire is correctly detected as fire, and the percentage of TP (PTP) is the
overall fire detection rate. FN is the number of frames in which a real fire is
detected as non-fire, and the percentage of FN (PFN) is the overall false non-fire
detection rate. As shown in Table 20.1, the results indicate that the average fire
detection and false non-fire detection rates are 99.67 and 3.69 %, respectively,
which are good

Fig. 20.4 Examples of test videos used in this study

Table 20.1 Results of the proposed fire detection algorithm

Movie 1 (500 frames): TP = 500, PTP = 100.00 %
Movie 2 (599 frames): TP = 598, PTP = 99.83 %
Movie 3 (199 frames): TP = 198, PTP = 99.50 %
Movie 4 (946 frames): FN = 0,   PFN = 0.00 %
Movie 5 (393 frames): FN = 29,  PFN = 7.38 %

enough for practical fire detection, since the method maintains high fire
detection accuracy while keeping the false non-fire detection error low across
all test videos.

20.4 Conclusion

This paper proposed an efficient fire detection approach using multi-stage pattern
recognition techniques: background subtraction for MCRD, rule-based color
segmentation in the YCbCr color space, wavelet decomposition for describing the
flicker behavior of fire, and an SVM for discriminating between fire and non-fire.
Experimental results showed that the proposed method achieves a low false alarm
rate and high reliability on the test videos. These results demonstrate that the
proposed method is a promising candidate for use in automatic fire-alarm systems.

Acknowledgments This work was supported by the Leading Industry Development for
Economic Region (LeadER) grant funded by the MOTIE (Ministry of Trade, Industry
and Energy), Korea in 2013 (No. R0001220) and by the National Research Foundation
of Korea (NRF) grant funded by the Korea government (MEST)
(No. NRF-2013R1A2A2A05004566).

References

1. Celik T, Demirel H (2009) Fire detection in video sequences using a generic color model. Fire
Saf J 44(2):147–158
2. Qiu T, Yan Y, Lu G (2012) An autoadaptive edge-detection algorithm for flame and fire image
processing. IEEE Trans Instrum Meas 61(5):1486–1493

3. Chen TH, Wu PH, Chiou YC (2004) An early fire-detection method based on image
processing. In: IEEE international conference on image processing. Singapore, pp 1707–1710
4. Toreyin BU, Cetin AE (2007) Online detection of fire in video. In: IEEE international
conference on computer vision and pattern recognition. Minneapolis, pp 1–5
5. Ko BC, Cheong KH, Nam JY (2009) Fire detection based on vision sensor and support vector
machine. Fire Saf J 44(3):322–329
6. Toreyin BU, Dedeoglu Y, Gudukbay U, Cetin AE (2006) Computer vision based method for
real-time fire and flame detection. Pattern Recogn Lett 27(1):49–58
