
Face Recognizability Evaluation for ATM Applications With Exceptional Occlusion Handling

Sungmin Eum, Jae Kyu Suhr, and Jaihie Kim
School of Electrical and Electronic Engineering, Yonsei University, Republic of Korea
{eumsungmin,lfisbf,jhkim}@yonsei.ac.kr

Abstract
Biometrics has been extensively utilized to reduce ATM-related crimes. One of the most widely used methods is to capture facial images of the users for follow-up criminal investigations. However, this method is vulnerable to attacks by criminals with heavy facial occlusions. To overcome this drawback, this paper proposes a novel method for face recognizability evaluation with exceptional occlusion handling (EOH). The proposed method conducts a recognizability evaluation based on local regions of the facial components. The resulting decisions are then reaffirmed by the EOH, which exploits the global aspect of frequently occurring facial occlusions. The EOH comprises two separate approaches: 1) accepting the falsely rejected cases, and 2) rejecting the falsely accepted cases. In this paper, two typical facial occlusions, eyeglasses and sunglasses, are chosen to prove the validity of the EOH. To evaluate the proposed method in a realistic environment, an ATM database was constructed using an off-the-shelf ATM while the users made withdrawals as they would in real situations. The proposed method was evaluated on this ATM database, which includes 480 video sequences of 20 subjects. The results showed the feasibility of face recognizability evaluation with the EOH in practical ATM environments.

1. Introduction
Automatic Teller Machine (ATM) use is now one of the standard methods of making financial transactions, and it continues to increase due to its convenience [1]. However, as ATM usage has increased, related crimes have also risen to become major threats to both customers and banks worldwide [2]. To suppress these crimes, considerable efforts have been made by introducing various biometric methods. These biometrics-driven efforts fall into two categories. The first approach requests biometrics such as the face, fingerprints, or finger veins as an essential part of on-site user authentication [3]-[5] before the user is allowed to make any financial transaction. The other approach captures images of the user at the ATM and uses them for criminal face matching in follow-up investigations [6]. The second approach is more commonly utilized by ATM systems because of its advantages: it provides a non-intrusive environment and consumes less transaction time. However, it suffers from difficulties in tracking down suspects whose faces are too heavily occluded to be recognized. It is reported that suspects tend to take advantage of this weakness by occluding their faces with typical objects such as sunglasses or masks [7]-[14]. Figure 1 shows images of suspects with heavy facial occlusions captured by actual ATMs, most of which are difficult to recognize.

To reduce this sort of fraud, extensive research has been conducted, which can be categorized into three approaches: specific attack detection, skin color-based occlusion detection, and frontal bare face detection. The first approach, specific attack detection, searches for specific occluding objects such as helmets, masks, or scarves that are commonly used by criminals [7]-[9]. Systems based on this approach reject users when those specific objects are detected. These methods proved their feasibility for each specific occluding object. However, this approach is constrained to specifically assigned occlusions, which makes it inadequate for handling the various occlusions that may occur in real ATM situations. The second approach is skin color-based occlusion detection. It determines the degree of facial occlusion using skin color analysis: the skin color ratio in the whole face area [10] or in specific facial regions [11]. Although the skin color-based methods can be applied to various facial poses, they tend to show unstable performance under the varying lighting conditions common in actual ATM environments [12]. The last category is the frontal bare face detection-based approach [13], [14]. This approach takes advantage of holistic face detectors to locate the regions of interest. Facial occlusion is then determined by partially analyzing the detected face regions:



Figure 1: Suspects with facial occlusions captured by the cameras on the ATMs.

upper and lower facial regions [13], or regions based on facial component detection [14]. Among the methods in the third approach, the method using the facial components found inside a frontal face region [14] possesses several advantages over the first and second approaches (specific attack detection and skin color-based occlusion detection). First of all, it can handle users with various partial occlusions, since it looks for the existence of facial components instead of detecting specific occluding objects. Moreover, it shows relatively superior performance to the second approach under various lighting conditions by employing a gray image-based detection and verification scheme. Although the method degrades on non-frontal faces, this problem can be overcome by applying a scenario that guarantees images of frontal faces. In spite of all these advantages, the above method bears the following problems when applied in actual ATM environments, because it uses only the local information of facial components such as the eyes or mouth. The first type of problem arises when a partial, yet acceptable, occlusion over the facial components causes the system to reject a face that is recognizable from the global perspective. Figure 2(a) shows a typical example where the eyes are slightly occluded by the frame of eyeglasses. The other type of problem is falsely accepting a user by wrongly locating a local region closely resembling an actual facial component. Figure 2(b) shows a frequently occurring situation where local patterns found on the reflecting surfaces of sunglasses can be mistaken for real eyes (details are discussed in later sections). To overcome these obstacles, this paper proposes a method that combines exceptional occlusion handling (EOH) with a facial component-driven recognizability evaluation.
After the system carries out a face recognizability evaluation based on verifying the facial components within the face area, the EOH reaffirms whether the user has been falsely rejected or falsely accepted, possibly due to misleading occlusions. The EOH process is carried out through two different approaches: 1) accepting the falsely rejected cases, and 2) rejecting the falsely accepted cases. In this paper, two typical facial occlusions (eyeglasses and sunglasses) which frequently disrupt the facial component-driven recognizability evaluation are chosen to represent the two approaches of the EOH process. For the experiments, we acquired a database consisting of 480 video sequences of 20 subjects. To build a more realistic database, we used an off-the-shelf ATM while the users made actual withdrawals. The evaluation results on the acquired ATM database clearly showed reasonable performance in practical ATM environments.

Figure 2: Faces with typical occlusions that are difficult to handle with the facial component-driven approach. (a) Eyes slightly occluded by the frame of eyeglasses. (b) Local patterns on the reflecting surface of sunglasses may be mistaken for real eyes.

2. Overview of the Proposed Method


Before discussing face recognizability, it is necessary to define what a recognizable facial image is. In this paper, a facial image containing a visible mouth and at least one visible eye is defined as recognizable. In general, a visible mouth and a visible eye guarantee that a longitudinal half of the face is visible, and longitudinal half faces have been proven to provide adequate information for criminal investigations [15] and automatic face recognition [16]. The nose is excluded from the definition under the assumption that almost no criminal occludes only the nose area while leaving the other facial components completely visible. The proposed method operates according to the flowchart shown in Figure 3. To begin with, image sequences are acquired during the 4 to 5 second period centered at the moment of card insertion. Images with a higher probability of containing frontal faces are selected for use in the following procedures. Recognizability evaluation is then performed by verifying the facial components (eyes and mouth) found in the face regions of the selected images. Finally, the exceptional occlusion handling (EOH) process is carried out to handle



Figure 3: Flowchart of the proposed method.

Figure 4: ATM environment. (a) Recognizability evaluation being conducted. (b) ATM used for database acquisition.

problematic, yet typical occlusions which frequently occur in real ATM situations. As shown in the flowchart, the EOH process consists of two different approaches. One is accepting the falsely rejected cases and the other is rejecting the falsely accepted cases. The former approach is activated when the recognizability evaluation procedure makes a decision that the face is non-recognizable. The decision is reaffirmed by checking the existence of typical acceptable occlusions (in this case, eyeglasses) near the facial components. The latter approach is activated when the face is evaluated as recognizable. It reaffirms the primary decision by searching for non-acceptable, yet commonly occurring occlusions (in this case, sunglasses) within the face regions. Since the objective of the system is to acquire a recognizable face of the user, it is designed to be terminated as soon as a recognizable facial image is acquired.
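The two-branch reaffirmation described above can be sketched as a short decision function. This is an illustrative sketch, not the authors' implementation: primary_eval, detect_eyeglasses, and detect_sunglasses are hypothetical callables standing in for the facial component-driven evaluator and the two exclusive occlusion detectors.

```python
def evaluate_user(frame, primary_eval, detect_eyeglasses, detect_sunglasses):
    """Sketch of the two-branch EOH reaffirmation.

    The branch taken depends on the primary decision: an accepted user is
    re-checked for sunglasses (rejecting falsely accepted cases), while a
    rejected user is re-checked for eyeglasses (accepting falsely rejected
    cases).
    """
    if primary_eval(frame):            # evaluated as recognizable
        if detect_sunglasses(frame):   # reject the falsely accepted cases
            return "reject"
        return "accept"
    else:                              # evaluated as non-recognizable
        if detect_eyeglasses(frame):   # accept the falsely rejected cases
            return "accept"
        return "reject"
```

Note that each detector only ever overturns the decision in one direction, matching the rule that each EOH approach concentrates on its sole purpose.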

3. Scenario and Image Selection

3.1. ATM Scenario

In the process of designing our recognizability evaluation scenario, we recorded and analyzed several video sequences in which a user walks in and performs a financial transaction on a real ATM, as shown in Figure 4(a). The ComNet-9000DM [17], one of the ATM models popular in major Korean banks, was used to acquire the video sequences. From the recorded videos, we observed a simple fact: the probability of acquiring frontal-face images is higher around the moment of card insertion than at any other moment of a transaction. The reason is that the camera on this ATM model is installed right above the card slot, as shown in Figure 4(b). Thus, almost all of the users were observed to glance at the card slot for one or two seconds while inserting their cards. Under the assumption that ATM users are likely to show their frontal faces at some point while inserting their cards, we set up our scenario to evaluate the recognizability of the facial images obtained during the 4 to 5 second period centered at the moment of card insertion. This scenario was used to acquire our ATM database containing 480 video sequences of 20 subjects, each sequence containing 61 frames. Details of the ATM database acquisition are discussed in Section 6.1.

3.2. Selecting Quality Images

Computational resources are often limited in real-time embedded systems such as ATMs. Moreover, the recognizability evaluation should finish as quickly as possible for the convenience of the users. Therefore, we need to select a small number of images that have a high probability of being recognizable. To select the quality images from each video sequence, we devised a simple measure called the frontal face response. The frontal face response of an image is the number of faces detected by the Viola-Jones frontal face detector [18]. This detector looks for frontal faces by sliding a window over various locations and scales. Thus, the detector produces a greater number of face responses on frontal faces, at slightly varying locations and scales, than on non-frontal faces, as shown in Figure 5. This figure shows the frontal face responses for three different facial postures. The frontal face response is used as an indirect measure of an image's degree of recognizability, under the assumption that frontal faces have a higher probability of being verified as recognizable. Figure 6(a) shows the histogram of the face responses of the 29,280 images from the 480 video sequences of the acquired ATM database (details are given in Section 6.1). Each bin of the histogram indicates a frame index within a sequence; a sequence is composed of 61 frames. Several sample frames of a typical video sequence are shown above the histogram, with the numbers above the images indicating the frame indices. It can be observed that the quality images for our scenario are found around the 32nd frame, which coincides with the moment of card insertion. Therefore, we decided to utilize several frames per sequence centered around the 32nd frame. Figure 6(b) presents two image selection methods: consecutive (top) and intermittent (bottom) selection. After a careful comparison of the two methods, the intermittent selection method was chosen because it provides the system with higher variations of frontal faces than the similar frontal faces in consecutive selection. Moreover, selecting the images intermittently avoids occasional cases where the set of consecutive frames is aggregated around a certain frame containing a non-frontal face of the user. Note that the number of frames and the set of frames to be used in the system should be chosen based on the performance of the camera and the hardware specifications of the ATM.

Figure 5: Frontal face response for three different facial postures. From left to right, frontal face responses for frontal, near-frontal, and non-frontal face postures are shown.

Figure 6: Selecting quality images. (a) Histogram of the face responses acquired from 480 sequences, each bin indicating the frame index within each sequence. Sample frames of a sequence are depicted above the histogram. (b) Consecutive (top) and intermittent (bottom) image selection.

4. Facial Component-driven Recognizability Evaluation

Before the system searches for facial components such as the eyes or mouth, holistic frontal face detection is applied to the whole image to define the region of interest (ROI). The proposed system utilizes the Viola-Jones general object detector [19] for detecting frontal faces and facial components. This detector is widely used because of its high detection rate [20] and computational efficiency [21]. After the ROIs are defined, the locations of the eyes and mouth are searched for inside each ROI. This procedure is shown in the 1st row of Figure 7. In this figure (along with Figure 8 and Figure 10), only the face regions were cropped from the large original images after the evaluation, to show the details of the detection results; the large original images are shown in Figure 11. After the facial component detection within the ROIs, face candidates are generated by selecting only the eye-mouth combinations that satisfy predefined geometric constraints, among every combination that can be made inside the ROI. The remaining facial component detections are discarded, as shown in the bottom row of Figure 7. Finally, a simple verification method is applied to the facial component regions to confirm the recognizability of the faces. For verifying the facial component regions, principal component analysis (PCA) feature extractors and support vector machine (SVM) classifiers are utilized. These procedures were conducted using the methods explained in [22].

Figure 7: Facial component detection. 1st row: detecting the facial components inside the face region. 2nd row: selecting the facial components that satisfy the predefined geometric constraints. Detected regions for faces, eyes, and mouths are shown by solid, dotted, and dashed lines, respectively.

Figure 8: Two types of misclassification caused by typical facial occlusions. 1st row: faces determined to be non-recognizable due to the adjacent frame of eyeglasses near the eye region; the images on the left and center are falsely rejected by the verification process, and the image on the right is rejected because the eyes fail to be detected. 2nd row: faces determined to be recognizable due to misclassifying the reflection on the sunglasses as an eye. Solid, dotted, and dashed lines indicate the face, eye, and mouth regions.

Figure 10: Detecting faces with typical facial occlusions. Each image corresponds to an image in Figure 8. 1st row: faces determined to be recognizable that were falsely rejected without the EOH. 2nd row: faces determined to be non-recognizable that were falsely accepted without the EOH.
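The face-candidate generation in Section 4 (enumerating eye-mouth combinations inside an ROI and keeping only geometrically plausible ones) can be sketched as below. The paper does not enumerate its exact geometric constraints, so the three used here (roughly level eyes, mouth below both eyes, mouth horizontally between the eyes) are illustrative assumptions.

```python
from itertools import combinations

def face_candidates(eye_boxes, mouth_boxes):
    """Enumerate (eye, eye, mouth) combinations inside a face ROI and keep
    those satisfying illustrative geometric constraints.

    Boxes are (x, y, w, h) tuples in image coordinates (y grows downward).
    """
    def center(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    candidates = []
    for (e1, e2) in combinations(eye_boxes, 2):
        for m in mouth_boxes:
            (x1, y1), (x2, y2) = center(e1), center(e2)
            mx, my = center(m)
            if x1 == x2:
                continue                                   # degenerate pair
            eyes_level = abs(y1 - y2) < 0.5 * abs(x1 - x2)  # roughly level
            mouth_below = my > max(y1, y2)                  # mouth below eyes
            mouth_between = min(x1, x2) < mx < max(x1, x2)  # centered mouth
            if eyes_level and mouth_below and mouth_between:
                candidates.append((e1, e2, m))
    return candidates
```

Detections that participate in no surviving combination are effectively the discarded ones shown in the bottom row of Figure 7.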

5. Exceptional Occlusion Handling


Facial component-driven face recognizability evaluation is weak at dealing with various facial occlusions that occur frequently in real situations. In other words, recognizability evaluation becomes challenging when typical facial occlusions lie close to the eye or mouth regions, directly interfering with the detection and verification processes. A typical facial wear such as eyeglasses may hinder a normal, innocent user from being approved to use the ATM. On the other hand, a suspicious user wearing sunglasses could be evaluated as recognizable because the reflective surfaces of sunglasses bear eye-like regions. While the first case could induce numerous customer complaints, the second case indirectly gives permission to ATM fraud and brings difficulties to the follow-up investigations. Figure 8 depicts several cases where the two types of misclassification occur due to typical facial occlusions. As can be seen in the figure (1st row), eye detection or verification may fail when the eyes lie close to the frame of the eyeglasses. In the case of sunglasses (2nd row of Figure 8), false eye approvals may occur due to falsely detected eye regions on the reflections of the sunglasses. To handle these problems, the proposed method employs the exceptional occlusion handling (EOH) process for both types of falsely evaluated cases. The EOH can be divided into two approaches: 1) accepting the falsely rejected cases, and 2) rejecting the falsely accepted cases. Both approaches ought to follow the rule that each should concentrate on its sole purpose while keeping adverse effects under control. That is, while an eyeglasses detector can be devised to accept the falsely rejected users wearing eyeglasses, this detector should avoid generating false detections of eyeglasses when given a face wearing sunglasses. In short, the EOH should be designed to detect its target objects exclusively, even at the cost of a slightly lower detection rate. The conceptual diagram in Figure 9 describes the need for the EOH to handle the two types of misclassification. As shown in Figure 3, the type of EOH process is chosen based on the outcome of the facial component-driven recognizability evaluation. When the facial component-driven phase accepts the user, the EOH proceeds by investigating whether the user was falsely accepted. In the opposite case, where the user is rejected, the facial image is reexamined to prevent false rejections. To demonstrate the feasibility of the proposed method, two typical facial occlusions, eyeglasses and sunglasses, were chosen as the specific targets to be handled by the two types of EOH process. The process begins with an attempt to detect eyeglasses or sunglasses inside the predefined face ROIs. If such an object is detected, it is paired with a mouth, if any, found within the same ROI to form a face candidate. The face candidate is then confirmed as a face wearing that object after assuring that the detected components satisfy the geometric constraints of a reasonable face. Finally, the decision on whether the user should be accepted is made based on the results of the EOH: the user is approved if proven to be wearing normal eyeglasses, and rejected if found wearing sunglasses. Figure 10 depicts how the EOH handles the misclassifications shown in Figure 8.

Figure 9: A conceptual diagram describing the exceptional occlusion handling (EOH). The four-pointed star and the red circle indicate the boundary of the recognizable faces and the acceptance boundary, respectively. Falsely accepted faces (fan-shaped regions) and falsely rejected faces (triangle-shaped regions) are handled by separate schemes in the EOH process.
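The pairing step above (a detected eyeglasses/sunglasses region combined with a mouth under a geometric check) can be sketched as follows. The single constraint used here, a mouth roughly centered below the detected object, is an illustrative assumption; the paper does not specify the exact constraints.

```python
def confirm_occluding_object(object_box, mouth_boxes):
    """Confirm a detected eyeglasses/sunglasses region as part of a face by
    pairing it with a mouth found in the same face ROI.

    Boxes are (x, y, w, h) tuples; returns True if any mouth lies below the
    object and is horizontally centered under it.
    """
    ox, oy, ow, oh = object_box
    for (mx, my, mw, mh) in mouth_boxes:
        mouth_below = my > oy + oh                  # mouth below the glasses
        centered = ox < mx + mw / 2.0 < ox + ow     # mouth centered under them
        if mouth_below and centered:
            return True
    return False
```

With this confirmation in place, a confirmed eyeglasses face flips a rejection to an acceptance, and a confirmed sunglasses face flips an acceptance to a rejection.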

5.1. Training the Detectors for EOH


Detectors for eyeglasses and sunglasses were trained using the Viola-Jones object detection framework [19]. Positive samples for training the eyeglasses detector were obtained from various sources: the Internet, the CAS-PEAL-R1 [23] face database, and our own facial occlusion database acquired with a web camera. From these sources, 950 eyeglasses images (600 from the Internet, 200 from CAS-PEAL-R1, and 150 from our facial occlusion database) were cropped for training. Several manipulations of these original samples were carried out to construct a larger set of 7,600 positive eyeglasses samples that better represents the variations of the target object; this sample manipulation procedure follows [24]. First, the original samples were manually rotated to be aligned in the horizontal direction and added to the training set. Next, rotated samples were added, obtained by rotating each aligned sample by five randomly chosen angles between -5 and +5 degrees. Last, the mirrored images of these newly generated samples were added to the training set. The negative samples were obtained from the general face-free images in [25], along with eyeglasses-free facial images from the Internet and from several publicly available face databases: AR [26], CAS-PEAL-R1 [23], GEORGIA TECH [27], and VALID [28]. Eyeglasses-free facial images were included in the negative set to minimize possible false detections inside the face region. To train the sunglasses detector, 700 original sunglasses images were cropped from AR [26], CAS-PEAL-R1 [23], our facial occlusion database, and images downloaded from the Internet. These images were manipulated to generate 8,000 sunglasses images. The manipulation of the positive samples and the construction of the negative samples were carried out in the same manner as in the eyeglasses case. To satisfy the sole purpose of the EOH, as mentioned previously, the eyeglasses and sunglasses detectors were tuned to detect their target objects exclusively, even at the cost of a slightly lower detection rate.

Figure 11: Example images of the acquired ATM database. The top two rows and bottom two rows show acceptable and non-acceptable cases, respectively.
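The augmentation recipe above (aligned original, a few random small rotations, plus mirrored copies) can be expressed as a plan of transform parameters to apply to each cropped sample. This is a sketch under stated assumptions: the exact bookkeeping that yields the paper's 7,600 samples from 950 originals is not fully specified, so the variant count here is illustrative.

```python
import random

def augmentation_plan(n_originals, n_rotations=5, max_deg=5.0, seed=0):
    """Build a list of (sample_index, angle_deg, mirrored) augmentation
    parameters: each aligned original (angle 0), plus n_rotations random
    rotations in [-max_deg, +max_deg], each also in a mirrored copy.
    """
    rng = random.Random(seed)
    plan = []
    for i in range(n_originals):
        angles = [0.0] + [rng.uniform(-max_deg, max_deg)
                          for _ in range(n_rotations)]
        for angle in angles:
            plan.append((i, angle, False))
            plan.append((i, angle, True))   # mirrored copy
    return plan
```

Each entry can then be applied to the corresponding cropped image with any rotation/flip routine; each original yields (1 + n_rotations) * 2 training samples under this scheme.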

6. Experimental Results
6.1. ATM Database Acquisition
To evaluate the proposed method in the most practical manner, we constructed an ATM database that reflects real ATM environments. To strengthen the realism, we used one of the popular ATM models, the ComNet-9000DM [17], extensively used by major Korean banks. The ATM has a built-in camera located just above the card slot, as shown in Figure 4(b). The camera's resolution is 640×480 pixels, and its approximate field of view is 90° and 70° in the horizontal and vertical directions, respectively. The subjects were given a cash card in a wallet and were asked to act as naturally as possible at the ATM. Accordingly, they were given no information regarding the scenario or the location of the camera. Although the ATM was located indoors, the lighting varied slightly according to the acquisition time, which was chosen at random between 9 A.M. and 9 P.M. The database consists of 480 video sequences, each comprising 61 single images (frames). Each subject was asked to wear three different typical occlusions: eyeglasses (acceptable), sunglasses (non-acceptable), and a mask (non-acceptable). To secure the diversity of the occluding objects, three different pairs of eyeglasses and three different pairs of sunglasses were given to each subject. Video sequences without any facial occlusions were also acquired. Example images of the acceptable and non-acceptable occlusions are shown in Figure 11, and the details of the database are described in Table 1.

Table 1: Description of the acquired ATM database.

Figure 12: ROC curves showing the optimized performance using the EOH.

6.2. Performance Evaluation


The performance of the proposed method was evaluated using the acquired ATM database described in Section 6.1. In the proposed method, a sequence is regarded as recognizable if at least one frame is evaluated as recognizable; that is, the sequence is regarded as non-recognizable if and only if all of its frames are evaluated as non-recognizable. Table 2 shows the evaluation results for four different implementations: recognizability evaluation without the EOH, with the EOH for eyeglasses (EG) only, with the EOH for sunglasses (SG) only, and with the EOH for both occlusions (eyeglasses and sunglasses). The table presents the accuracies for four different occlusion cases, obtained based on the acceptability of each facial occlusion.
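The sequence-level decision rule, combined with the early-termination behavior described in Section 2 (the system stops as soon as a recognizable frame is found), can be sketched as below; evaluate_frame is a hypothetical per-frame recognizability evaluator.

```python
def evaluate_sequence(frames, evaluate_frame):
    """Evaluate frames one by one and stop at the first recognizable one.

    Returns (True, index) for the first recognizable frame, or (False, None)
    if every frame is evaluated as non-recognizable.
    """
    for idx, frame in enumerate(frames):
        if evaluate_frame(frame):
            return True, idx        # sequence is recognizable
    return False, None              # all frames non-recognizable
```

This makes the sequence non-recognizable if and only if all frames are non-recognizable, matching the rule stated above.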

Table 2: Performance of the proposed system for typical facial occlusions. EG and SG indicate eyeglasses and sunglasses, respectively. The numbers in the table are expressed in percent.

Without the EOH, the recognizability evaluation performed clearly worse for users wearing eyeglasses or sunglasses than for the other occlusion cases. When the EOH for eyeglasses was added, the performance for users wearing eyeglasses improved by 5% while the other performances were kept under control. The implementation with the EOH for sunglasses only showed a similar trend, with a 6% increase in performance. Lastly, the recognizability evaluation supported by the EOH for both occluding objects outperformed the other implementations without any adverse effects on the bare-face or mask cases. These 5 to 6% performance enhancements are significant in actual ATM situations considering the enormous number of ATM users. For a detailed performance evaluation and comparison of the four implementations, the receiver operating characteristic (ROC) curves are depicted in Figure 12. In this figure, the false acceptance rate (FAR) is the percentage of falsely approved users wearing non-acceptable occlusions, while the true acceptance rate (TAR) is the percentage of correctly approved users wearing acceptable occlusions. The ROC curves were obtained by gradually incrementing the threshold values of the facial component verifiers. The black dotted line presents the performance of the recognizability evaluation without any EOH process. The green dash-dotted and blue dashed lines indicate the performances when the EOH for eyeglasses and the EOH for sunglasses were applied, respectively. Lastly, the red solid line shows the performance of the proposed method supported by the EOH for both occlusion cases (eyeglasses and sunglasses). The figure clearly
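The FAR/TAR threshold sweep used to trace the ROC curves can be sketched as follows; the scores in the usage example are illustrative, not the paper's measurements.

```python
def roc_points(accept_scores, reject_scores, thresholds):
    """Compute (FAR, TAR) pairs by sweeping a verifier threshold.

    accept_scores: verifier scores for users wearing acceptable occlusions.
    reject_scores: verifier scores for users wearing non-acceptable occlusions.
    A user is approved when the score exceeds the threshold.
    """
    points = []
    for t in thresholds:
        tar = sum(s > t for s in accept_scores) / len(accept_scores)
        far = sum(s > t for s in reject_scores) / len(reject_scores)
        points.append((far, tar))
    return points
```

For example, roc_points([0.9, 0.8], [0.1, 0.7], [0.5]) yields [(0.5, 1.0)]: both acceptable users are approved, and one of the two non-acceptable users is falsely approved.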


reveals the superiority of the recognizability evaluation with the EOH targeted at both of the typical facial occlusions.

7. Conclusions
This paper proposes a recognizability evaluation method that combines a facial component-driven approach with exceptional occlusion handling (EOH), taking eyeglasses and sunglasses into account. The feasibility of the proposed method was demonstrated using a practical database acquired with an off-the-shelf ATM under a realistic withdrawal scenario. In future research, we plan to devise a recognizability evaluation scheme robust to various unconstrained illumination conditions and facial postures. In addition, EOH that deals with other types of typical occlusions, such as a mustache near the mouth, will be considered.

8. References
[1] A. Hutchinson, "The top 50 inventions of the past 50 years," Popular Mechanics, Dec. 2005.
[2] "ATM Crime: Overview of the European situation and golden rules on how to avoid it," European Network and Information Security Agency (ENISA), Aug. 2009.
[3] H. Sako and T. Miyatake, "Image-recognition technologies towards advanced automated teller machines," in Proc. International Conference on Pattern Recognition, Aug. 2004, vol. 3, pp. 282-285.
[4] S. Prabhakar, S. Pankanti, and A. K. Jain, "Biometric recognition: security and privacy concerns," IEEE Security & Privacy, vol. 1, no. 2, pp. 33-42, Mar. 2003.
[5] G. Graevenitz, "Biometric authentication in relation to payment systems and ATMs," Datenschutz und Datensicherheit, vol. 31, no. 9, 2007.
[6] L. Duan, X. Yu, Q. Tian, and Q. Sun, "Face pose analysis from MPEG compressed video for surveillance applications," in Proc. International Conference on Information Technology: Research and Education, Aug. 2003, pp. 549-553.
[7] C. Wen, S. Chiu, J. Liaw, and C. Lu, "The safety helmet detection for ATM's surveillance system via the modified Hough transform," in Proc. Annual IEEE International Carnahan Conference on Security Technology, Oct. 2003, pp. 364-369.
[8] C. Wen, S. Chiu, Y. Tseng, and C. Lu, "The mask detection technology for occluded face analysis in the surveillance system," Journal of Forensic Science, vol. 50, no. 3, pp. 1-9, May 2005.
[9] R. Min, A. D'Angelo, and J.-L. Dugelay, "Efficient scarf detection prior to face recognition," in Proc. 18th European Signal Processing Conference, Aug. 2010, pp. 259-263.
[10] D. Lin and M. Liu, "Face occlusion detection for automated teller machine surveillance," Lecture Notes in Computer Science, vol. 4319, pp. 641-651, Dec. 2006.
[11] G. Kim, J. K. Suhr, H. G. Jung, and J. Kim, "Face occlusion detection by using B-spline active contour and skin color information," in Proc. International Conference on Control, Automation, Robotics and Vision, Dec. 2010, pp. 627-632.
[12] P. Kakumanu, S. Makrogiannis, and N. Bourbakis, "A survey of skin-color modeling and detection methods," Pattern Recognition, vol. 40, no. 3, pp. 1106-1122, 2007.
[13] J. Kim, Y. Sung, S. M. Yoon, and B. G. Park, "A new video surveillance system employing occluded face detection," Lecture Notes in Computer Science, vol. 3533, pp. 65-68, Jun. 2005.
[14] I. Choi and D. Kim, "Facial fraud discrimination using detection and classification," Lecture Notes in Computer Science, vol. 6455, pp. 199-208, Dec. 2010.
[15] A. Paterson, "Computerised facial construction and reconstruction," in Proc. Asia Pacific Police Technology Conference, Jan. 1991, pp. 135-144.
[16] N. Ramanathan and R. Chellappa, "Face verification across age progression," IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3349-3361, Nov. 2006.
[17] ComNet-9000DM, http://www.chunghocomnet.com/english/, 2011.
[18] P. Viola and M. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, May 2004.
[19] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, Dec. 2001, vol. 1, pp. 511-518.
[20] J. Wu, S. C. Brubaker, M. D. Mullin, and J. M. Rehg, "Fast asymmetric learning for cascade face detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 369-382, Mar. 2008.
[21] M. Hussein, F. Porikli, and L. Davis, "A comprehensive evaluation framework and a comparative study for human detectors," IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 3, pp. 417-427, Sep. 2009.
[22] J. K. Suhr, S. Eum, H. G. Jung, G. Li, G. Kim, and J. Kim, "Recognizability assessment of facial images for automated teller machine applications," submitted to Pattern Recognition, Mar. 2011.
[23] W. Gao, B. Cao, S. Shan, X. Chen, D. Zhou, X. Zhang, and D. Zhao, "The CAS-PEAL large-scale Chinese face database and baseline evaluations," IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 38, no. 1, pp. 149-161, Jan. 2008.
[24] P. Viola and M. J. Jones, "Fast multi-view face detection," Technical Report TR2003-96, Mitsubishi Electric Research Laboratories, Jul. 2003.
[25] N. Seo, "Tutorial: OpenCV haartraining," http://note.sonots.com/SciSoftware/haartraining.html, 2011.
[26] A. M. Martinez and R. Benavente, "The AR face database," CVC Technical Report #24, Jun. 1998.
[27] Georgia Tech face database, http://www.anefian.com/research/face_reco.htm, 2010.
[28] N. A. Fox, B. A. O'Mullane, and R. B. Reilly, "VALID: a new practical audio-visual database, and comparative results," Lecture Notes in Computer Science, vol. 3546, pp. 777-786, Jul. 2005.

