

Automation in Construction xxx (2011) xxx-xxx


Application of neural networks for detecting erroneous tax reports from construction companies
Jieh-Haur Chen a,*, Mu-Chun Su b, Chang-Yi Chen a, Fu-Hau Hsu b, Chin-Chao Wu b
a Institute of Construction Engineering and Management, National Central University, Jhongli, Taoyuan 32001, Taiwan
b Department of Computer Science and Information Engineering, National Central University, Jhongli, Taoyuan 32001, Taiwan

Abstract
In this study we develop an automatic detection model for discovering erroneous tax reports. The model uses a variety of neural network applications, including the Multi-Layer Perceptron (MLP), Learning Vector Quantization (LVQ), decision tree, and Hyper-Rectangular Composite Neural Network (HRCNN) methods. Detailed taxation information from construction companies registered in the northern Taiwan region is sampled, giving a total of 5769 tax reports from 3172 construction companies, which make up 35.98% of the top-three-class construction companies. The results confirm that the model yields a good recognition rate (close to 80%) for distinguishing erroneous tax reports from the others. The automatic model is thus proven feasible for detecting erroneous tax reports. In addition, we note that the HRCNN yields a correct classification rate of 78% and, furthermore, generates 248 valuable rules, providing construction practitioners with criteria for preventing the submission of erroneous tax reports.
© 2011 Elsevier B.V. All rights reserved.

Article history: Accepted 26 March 2011; available online xxxx.
Keywords: Neural networks; Pattern classification; Tax report; Construction company

1. Introduction

The problem of the submission of erroneous tax reports by construction companies to tax bureaus is critical. According to official information from the National Tax Administration of Northern Taiwan (NTANT), errors have been found in almost half of the total tax reports filed in recent years [1]. The government considers this behavior to be tax evasion, and it definitely affects government operations. It is also dangerous for the construction companies, since those found guilty of tax evasion are subject to penalties, which can be serious enough to force them out of business. Detecting erroneous tax reports in the initial phase can be regarded as a two-class pattern recognition problem, because there are only two possible outcomes: "yes" for an erroneous tax report and "no" for a correct one. In practice, however, such detection work normally depends on experienced personnel. In their examination of tax reports, professional auditors look for the appearance of any suspicious values in the accounting, followed by a comprehensive inspection to pinpoint the exact errors, if any. Suspicious values vary widely, making such inspection an exhaustive task. The detection process is tedious and costs both companies and the government a tremendous amount of money and manpower. In addition, auditing personnel may need the specification of solid upper and lower bounds for those accounts to facilitate the work. Developing an automatic model to

solve the aforementioned problems is the motivation for this work. The study objective is to develop an automatic model for construction practitioners that will (1) save the time and manpower spent on the wearisome task of manually reviewing massive numbers of tax reports, and (2) provide upper and lower bounds for each account to facilitate auditing. The research scope is limited to corporate tax information, which is usually confidential and related to corporate privacy. Thus, two compulsory criteria must be met for the protection of the taxpayers: the selection of past taxation data that is at least 10 years old, and the preservation of anonymity.

2. Applications of pattern classification in construction

One of the fundamental aspects of pattern recognition lies in determining which features should be employed for the best classification results. Next, an effective classifier is desirable. Construction practitioners and researchers have been able to solve realistic problems using the pattern classification concept for years. Chen utilized the k-Nearest Neighbor (kNN) approach to establish a knowledge-sharing model, which provides detailed resolutions as determined by the courts for construction disputes [2]. He also developed a hybrid model using methods based on Artificial Neural Networks (ANN) and Case Based Reasoning (CBR) to predict the likelihood of litigation in cases of severe disputes caused by change orders [3]. The AntMiner tool, integrated with rule-based classification extraction, has been introduced to predict the outcome of construction litigation [4]. Predicting performance for construction companies and projects is

* Corresponding author. Tel.: +886 3 4227151x34112; fax: +886 3 4257092. E-mail address: jhchen@ncu.edu.tw (J.-H. Chen).
0926-5805/$ - see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.autcon.2011.03.011



also one of the major applications of pattern classification. A change prediction system utilizing an activity-based dependency structure matrix was introduced to facilitate change management [5]. Construction agents and firms can predict and control budgets and schedules by the use of neuro-fuzzy models [6]. Scholars have recently started to apply pattern classification concepts to financial failure prediction [7,8] and risk-hedging [9,10]. Researchers have also used numerous techniques for the typical pattern classification application of selecting construction contractors. Wong et al. adopted logistic regression and multivariate discriminant analysis for classifying construction contractors [11,12]. Elazouni demonstrated the automatic selection of desired contractors using unsupervised-learning neural networks [13]. Although numerous methods are discussed in the literature, ANN applications in particular can be used to develop classifiers suitable for multiple purposes in construction [14-18]. Learning Vector Quantization (LVQ) and decision trees are also effective methods for solving classification problems [19-22]. As a result, we adopt four effective approaches to construct our classifier: the Multi-Layer Perceptron (MLP) [2], LVQ [23-25], decision trees [26-28], and Hyper-Rectangular Composite Neural Networks (HRCNNs) [29,30].

3. Data collection and analysis

Taxation information is confidential, and authorization must be granted before data collection can commence. With the help of the NTANT, we acquired access to detailed taxation information from construction companies registered in the northern Taiwan region. However, in this study we only considered tax reports filed for the fiscal years 1998 and 1999, which were available for release for research purposes in view of confidentiality, corporate privacy, and the NTANT internal auditing processes. The collected data thus included 5769 data sets from 3172 construction companies, comprising 35.98% of the top-three-class construction companies in Taiwan at that time [31]. Each tax report consists of 180 accounting accounts representing the annual tax status of the corresponding construction company. After consultation with auditing experts from the NTANT, the accounts in which errors most commonly appear were identified (31 out of the 180 accounts, all related to costs and expenses): operating expense, salary/wage expense, lease expense, travel expense, shipping expense, postage/telephone expense, maintenance expense, advertising expense, utility expense, insurance expense, allowance, bad debt loss, room and board expense, welfare expense, R/D expense, commission expense, accounts payable, notes payable, accrued payable, accrued expense, direct material cost, direct material less other cost, direct material plus other cost, indirect material cost, direct labor cost, manufacturing burden, manufacturing cost, operating cost, other operating cost, overhead, and total amount of manufacturing burden.

Selling expenses, a major category of the owner's equity that here includes the first 16 accounts and the accrued expense among the aforementioned 31 accounts, cover necessities for corporate operation. Their amounts may be questionable in cases where the accounting staff have entered erroneous amounts or mistakenly identified transactions. Accounts payable, notes payable and accrued payable are liabilities to a firm; most erroneous entries for these accounts arise from misjudgment of the amount or of the entry date.
The other accounts are associated with operating costs. Accounting staff may make mistakes because of unclear invoices or difficulty in classifying receipts to the correct accounts. In general, the computational complexity involved in classification is directly proportional to the number of features, so it is preferable to remove unnecessary features to improve computational efficiency. There are three popular methods of feature reduction: principal component analysis (PCA) [32], the Pearson correlation [33], and the Fisher ratio [34]. We first introduce the principles of PCA, the Pearson correlation, and the Fisher ratio, and then employ these methods to process the collected data in an attempt to reduce the dimensions of the data features.

3.1. Principal component analysis

PCA is a popular technique that reduces a large set of correlated variables to a small number of uncorrelated components [32]. This transformation is also commonly referred to as the eigenvector, Karhunen-Loeve, or Hotelling transform. Suppose we have an r-dimensional vector x and wish to use an n-dimensional vector f to represent it, where n < r. Consider a set of samples consisting of N r-dimensional vectors {x_1, ..., x_N}. The mean vector and covariance matrix of these samples are computed as follows:

m = \frac{1}{N} \sum_{i=1}^{N} x_i   (1)

C = \frac{1}{N} \sum_{i=1}^{N} (x_i - m)(x_i - m)^T   (2)

Let e_i and \lambda_i, i = 1, ..., r, be the eigenvectors and corresponding eigenvalues of C, arranged in descending order so that \lambda_j \geq \lambda_{j+1} for j = 1, ..., r-1. Let A be an n x r matrix whose rows are formed from the n eigenvectors corresponding to the n largest eigenvalues of C. Then the n-dimensional vector f can be computed by the following transformation:

f = A(x - m) = \begin{pmatrix} e_1^T \\ \vdots \\ e_n^T \end{pmatrix} (x - m)   (3)
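To make the transformation concrete, the following is a minimal sketch of Eqs. (1)-(3) in NumPy. It is our own illustration rather than the authors' implementation: the function name and the random placeholder data are assumptions, and the 99% variance criterion for choosing n mirrors the selection rule described in the next paragraph.

```python
import numpy as np

def pca_reduce(X, var_ratio=0.99):
    """Project an N x r data matrix onto the leading eigenvectors of its
    covariance matrix, keeping enough components to explain `var_ratio`
    of the total variance (Eqs. (1)-(3))."""
    m = X.mean(axis=0)                                 # mean vector, Eq. (1)
    C = np.cov(X - m, rowvar=False, bias=True)         # covariance matrix, Eq. (2)
    eigvals, eigvecs = np.linalg.eigh(C)               # symmetric C -> real eigenpairs
    order = np.argsort(eigvals)[::-1]                  # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    n = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_ratio)) + 1
    A = eigvecs[:, :n].T                               # n x r projection matrix
    F = (X - m) @ A.T                                  # Eq. (3) applied to every sample
    return F, A, m

# Random stand-ins for the 31 expense/cost accounts of the 5769 reports:
X = np.random.rand(5769, 31)
F, A, m = pca_reduce(X)
print(F.shape)   # (5769, n), with n chosen by the 99% variance criterion
```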

For this study, r = 31. By using PCA, 13 features are extracted from the original 31 variables. Their accounting accounts are operating expense, salary/wage expense, lease expense, travel expense, postage/telephone expense, maintenance expense, insurance expense, allowance, room and board expense, direct material cost, manufacturing cost, operating cost, and total amount of manufacturing burden. These 13 components are retained because the ratio of the sum of their eigenvalues to the sum of all eigenvalues is greater than 99.0%.

3.2. Pearson's correlation

The Pearson correlation can be used to measure the degree of correlation between two variables [33]. The Pearson correlation is defined as

R_j = \frac{\sum_{i=1}^{N} (x_{i,j} - m_j)(y_i - m_y)}{\sqrt{\sum_{i=1}^{N} (x_{i,j} - m_j)^2} \sqrt{\sum_{i=1}^{N} (y_i - m_y)^2}}   (4)

where x_{i,j} represents the jth component of the vector x_i; y_i represents the class label of the ith vector x_i; m_j = \frac{1}{N} \sum_{i=1}^{N} x_{i,j}; and m_y = \frac{1}{N} \sum_{i=1}^{N} y_i. Note that R_j \in [-1, +1]. A value of R_j close to either +1 or -1 signifies a high correlation; a value of R_j close to 0 signifies a low correlation. On this basis, the top 10 features with the highest absolute values of R_j are chosen: operating expense, salary/wage expense, lease expense, travel expense, maintenance expense, allowance, room and board expense, direct material cost, manufacturing cost, and operating cost. These are similar to the results obtained in the previous section.
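A corresponding sketch of the correlation-based ranking of Eq. (4) follows. Again this is our own NumPy illustration under assumed names and random placeholder data; y stands for the two-class report labels (1 = erroneous, 0 = correct).

```python
import numpy as np

def pearson_rank(X, y, top_k=10):
    """Rank features by |R_j| between each column of X and the class label y (Eq. (4))."""
    Xc = X - X.mean(axis=0)                 # center each account (subtract m_j)
    yc = y - y.mean()                       # center the labels (subtract m_y)
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))[:top_k], r

X = np.random.rand(5769, 31)                # placeholder account values
y = np.random.randint(0, 2, size=5769)      # placeholder two-class labels
idx, r = pearson_rank(X, y)
print(idx, r[idx])                          # indices of the 10 most correlated accounts
```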



3.3. Fisher ratios

The Fisher ratio is also an effective tool for feature selection [34]. Let the mean and variance of the samples belonging to class C1 and class C2 in the direction of the jth feature be denoted by m_{1,j}, m_{2,j}, \sigma_{1,j}^2 and \sigma_{2,j}^2, respectively. The Fisher ratio is defined as the ratio of the inter-class difference to the intra-class spread:

F_j = \frac{(m_{1,j} - m_{2,j})^2}{\sigma_{1,j}^2 + \sigma_{2,j}^2}   (5)

The value of F_j denotes the class separation between class C1 and class C2 in the direction of the jth feature. A higher Fisher ratio indicates a more effective feature. The features with significant values are: operating expense, salary/wage expense, shipping expense, postage/telephone expense, maintenance expense, utility expense, insurance expense, allowance, room and board expense, direct material cost, direct labor cost, manufacturing cost and operating cost. These are slightly different from the results obtained with the previous two methods.

4. Recognition methods

There are numerous recognition methods available for establishing the classifier for the proposed model. The MLP, LVQ and decision tree methods are popular and effective and have been cited in various research fields, as stated in the previous sections. The HRCNN is a hybrid network combining neural networks and rule-based concepts, and its feasibility has been demonstrated in prior studies [14,15,29,30]. Accordingly, these four methods are used to develop the classifier. Each has its own advantageous properties and limitations, and a comparison of the model development helps us to explore these characteristics.

4.1. Multi-layer perceptrons

The MLP is one of the most popular neural network methods, having been successfully applied to solve difficult and diverse problems using the well-known back-propagation algorithm [32]. An MLP typically consists of at least three layers of neurons: an input layer, one or more hidden layers and an output layer. Unlike the input layer, the hidden and output layers have a non-linear activation function (e.g., the sigmoid function). The back-propagation algorithm makes two passes through the network: a forward pass to calculate the network outputs on a layer-by-layer basis and a backward pass to update the network's weights. The performance of a trained MLP depends on its architecture, initial weights and the number of training epochs. In this study, the architecture is set to one hidden layer with 5 hidden neurons, each initial weight set at 1, the learning rate at 0.5, and 100,000 training epochs. Table 1 shows the results of the MLP classifier using the 3 different feature selection methods.

Table 1. Results among classifiers (correct recognition rates).

Feature reduction            | MLP (training) | MLP (test) | LVQ (training) | LVQ (test) | Decision tree (training) | Decision tree (test) | HRCNN (training) | HRCNN (test)
Principal component analysis | 81.36%         | 76.69%     | 77.86%         | 77.21%     | 77.67%                   | 78.06%               | 100%             | 76.60%
Pearson correlation          | 82.32%         | 78.59%     | 76.90%         | 78.10%     | 79.80%                   | 79.62%               | 100%             | 78.50%
Fisher ratio                 | 81.90%         | 79.09%     | 77.03%         | 77.94%     | 79.80%                   | 79.62%               | 100%             | 78.90%
Average                      | 81.86%         | 78.39%     | 77.26%         | 77.75%     | 79.09%                   | 79.10%               | 100%             | 78%

4.2. Learning vector quantization

LVQ is an adaptive data classification technique that uses class information to move the cluster centers, so as to improve the quality of the classifier's decision regions [23-25]. In general, LVQ learning involves two steps. First, an unsupervised clustering algorithm such as the k-means algorithm is used to find several cluster centers. Once the cluster centers are found, they are labeled using the so-called voting method. Next, the cluster centers are fine-tuned to minimize the misclassifications based on the class information. After training, an input pattern is assigned to the same class as the output unit whose weight vector is closest to the input pattern. The weight vector of the winner is therefore updated as

w_j(n+1) = \begin{cases} w_j(n) + \alpha (x - w_j(n)), & \text{if } label(x) = label(w_j) \\ w_j(n) - \alpha (x - w_j(n)), & \text{if } label(x) \neq label(w_j) \end{cases}   (6)

where w_j stands for the weight vector of the winner; n denotes the nth update step; \alpha indicates the learning rate; and x is a randomly selected data point input to the network. For the LVQ classifier in this study, we set the initial configuration to one hidden layer with 20 hidden neurons, a learning rate of 0.5, and 500 training epochs. Table 1 presents the results obtained by LVQ.

4.3. Decision tree

Decision trees can be used to solve classification problems. A decision tree is a tree structure consisting of internal and external nodes. Each terminal node carries a label that indicates the predicted class of a given feature vector. The most famous methods of decision-tree induction are the ID3 and C4.5 procedures proposed by Quinlan [26-28]. In this research, we grow a ternary tree based on the ID3 procedure. The results are summarized in Table 1.
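For readers who want to reproduce a classifier of this kind, the sketch below uses scikit-learn's entropy-based decision tree as a stand-in for the ID3-style ternary tree described above; this is our own illustration, not the authors' code. Note that scikit-learn grows binary splits, and its pre-pruning parameters only approximate the node-performance threshold mentioned in Section 5; the data here are random placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Random stand-ins for a reduced feature set and the two-class labels
X = np.random.rand(5769, 10)            # e.g. the 10 Pearson-selected accounts
y = np.random.randint(0, 2, size=5769)  # 1 = erroneous report, 0 = correct

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The entropy criterion mirrors the information-gain idea behind ID3; the
# pre-pruning parameters stand in for the rule of not splitting a node once
# its classification performance is already good enough.
tree = DecisionTreeClassifier(criterion="entropy",
                              min_samples_leaf=20,
                              min_impurity_decrease=1e-3,
                              random_state=0)
tree.fit(X_tr, y_tr)
print("training accuracy:", tree.score(X_tr, y_tr))
print("test accuracy:", tree.score(X_te, y_te))
```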

4.4. Hyper-rectangular composite neural networks

The class of HRCNNs, a type of hybrid network, integrates the paradigm of neural networks with the rule-based approach [29,30]. The mathematical description of a two-layer HRCNN with J hidden nodes is as follows:

Out(x) = f\left( \sum_{j=1}^{J} Out_j(x) \right)   (7)

Out_j(x) = f(net_j(x))   (8)

net_j(x) = \sum_{i=1}^{n} f\left( (M_{ji} - x_i)(x_i - m_{ji}) \right) - n + \varepsilon   (9)

and

f(y) = \begin{cases} 1 & \text{if } y \geq 0 \\ 0 & \text{if } y < 0 \end{cases}   (10)

where M_{ji} and m_{ji} \in R are the adjustable synaptic weights of the jth hidden node; x = (x_1, ..., x_n)^T is an input pattern; n is the dimensionality of the input variables; \varepsilon is a small positive real number; and Out(x): R^n \to \{0, 1\} is the output function of a two-layer HRCNN with J hidden nodes. The class of HRCNNs can be used to solve two-class problems. If the output is one, then the pattern is recognized as belonging to the positive class. The values of the synaptic weights of a trained HRCNN can be interpreted as a set of crisp If-Then rules.
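A compact sketch of this forward pass (Eqs. (7)-(10)) is given below; it is our own illustration, with toy placeholder weights rather than the trained network from this study. Each hidden node corresponds to one hyper-rectangle, i.e., one If-Then rule of the kind discussed next.

```python
import numpy as np

def step(y):
    """Hard-limit activation f(y) of Eq. (10)."""
    return (y >= 0).astype(float)

def hrcnn_out(x, m, M, eps=0.5):
    """Two-layer HRCNN output, Eqs. (7)-(9).

    m, M : (J, n) arrays of lower/upper synaptic weights; hidden node j
    fires only when every component x_i lies inside [m[j, i], M[j, i]]."""
    inside = step((M - x) * (x - m))             # 1 where x_i is within the bounds
    net = inside.sum(axis=1) - m.shape[1] + eps  # Eq. (9)
    out_j = step(net)                            # Eq. (8)
    return step(out_j.sum())                     # Eq. (7): 1 = positive class

# Toy example: two hidden nodes (hyper-rectangles) over three accounts
m = np.array([[0.0, 0.2, 0.1], [0.5, 0.5, 0.5]])
M = np.array([[0.4, 0.6, 0.3], [1.0, 1.0, 1.0]])
x = np.array([0.3, 0.4, 0.2])
print(hrcnn_out(x, m, M))   # 1.0 because x falls inside the first hyper-rectangle
```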



The If-Then classification rules extracted from a trained HRCNN with J hidden nodes can be represented as follows:

If x_1 \in [m_{11}, M_{11}] and ... and x_n \in [m_{1n}, M_{1n}] Then Out(x) = 1;
...
If x_1 \in [m_{J1}, M_{J1}] and ... and x_n \in [m_{Jn}, M_{Jn}] Then Out(x) = 1;
Else Out(x) = 0.

This group of rules can be interpreted as follows: if a data pattern x = (x_1, ..., x_n)^T falls inside at least one of the J n-dimensional hyper-rectangles defined by [m_{j1}, M_{j1}] x ... x [m_{jn}, M_{jn}] for j = 1, ..., J, then the output of the network is one (i.e., Out(x) = 1). The idea of using hyper-rectangles to represent rules can also be found in the nested generalized exemplar (NGE) algorithm [35], the fuzzy min-max neural network classifier (FMMC) [36], fuzzy ARTMAP [37], and the method proposed by Abe and Lan [38]. These algorithms differ in their corresponding learning algorithms. To improve the correct recognition performance of a trained HRCNN, we replace the function Out_j(x) by

m_j(x) = e^{-\left( Per_j(x) / Per_j \right)^2}   (11)

where

Per_j(x) = \sum_{i=1}^{n} Per_{ji}(x)   (12)

Per_j = \sum_{i=1}^{n} (M_{ji} - m_{ji})   (13)

Per_{ji}(x) = \max\left( M_{ji} - m_{ji}, M_{ji} - x_i, x_i - m_{ji} \right)   (14)

Then the final output of the network is computed by

Out(x) = \max_{j = 1, ..., J} m_j(x)   (15)

Finally, the pattern x is classified into the positive class if the value of Out(x) is larger than a pre-specified threshold (e.g., 0.5 in this study). The results are summarized in Table 1.

5. Results and discussion

Table 1 demonstrates the results obtained by the proposed classifier utilizing the MLP, LVQ, decision tree, and HRCNN approaches, respectively. It appears that any one of these approaches is feasible and effective enough to construct a model for detecting erroneous tax reports from construction companies. The MLP results in Table 1 are the best obtained from at least 20 trained MLPs, since the performance of an MLP greatly depends on its architecture, initial weights and the number of training epochs. The LVQ classifier performs similarly to the MLP, and both reach an upper-70% successful recognition rate. Analogously, the classification performance of the proposed decision tree depends on the number of branches and the quantization scheme. The entropy function is used to quantify the performance of a node, separating data from different intervals. To prevent the tree from growing exceedingly deep, a pre-pruning scheme is adopted to stop further branching. In other words, a threshold is pre-selected for each node so that the node is not split if its classification performance is already greater than the threshold (e.g., 80% in this study). Although the HRCNN classifier achieves 100% correct classification during training, it only reaches 78% correct classification on the test data (Table 1). However, the 248 rules extracted from the trained HRCNN provide valuable information to construction practitioners. For example, a typical rule states that:

If $11,681,054 \leq operating expense \leq $47,969,408 and $4,057,926 \leq wages \leq $157,619,385 and ... and $0 \leq manufacturing fee \leq $662,748,002 Then this tax report contains error(s).   (16)

Eq. (16) is the first rule out of a total of 248; it demonstrates the corresponding ranges for the 31 accounts set as input variables. Any tax report which contains account values within the ranges, for example, as shown in Eq. (16), is recognized as an erroneous report. Compared to the average erroneous tax report rate of nearly 50% every year, the results shown in the table are promising and encouraging. The HRCNN yields a correct rate of 78%, although this is not the highest among the four approaches. In spite of that, it generates 248 valuable rules, providing construction practitioners with criteria for preventing the submission of erroneous tax reports. These rules establish solid upper and lower bounds for those accounts, suiting the needs of auditing professionals. They are indicative of each account's status, which is valuable in verifying whether the tax reports under examination are erroneous.

6. Conclusion

Erroneous filing of tax reports to a governmental tax bureau might be considered deliberate tax evasion. The National Tax Administration of Northern Taiwan (NTANT) has spotted this problem, which has also plagued construction practitioners for years. A detection model is developed using applications of neural networks including the Multi-Layer Perceptron (MLP), Learning Vector Quantization (LVQ), decision tree, and Hyper-Rectangular Composite Neural Network (HRCNN) methods. The results confirm that, no matter which neural network approach is adopted, the correct recognition rate reaches almost 80% in distinguishing erroneous tax reports among the total of 5769 data sets. The HRCNN approach is particularly capable, extracting 248 rules for construction practitioners. The automated tool has been shown to be feasible for detecting erroneous tax reports. The major contribution of this study is the establishment of a detection model for the construction industry, as well as the extraction of rules that help practitioners or auditors improve review efficiency. The suggestions for future study are to improve the classifier by utilizing other approaches, to increase the efficiency of feature reduction, and to extend the database. Effective methods for dealing with two-class pattern recognition exist, and implementing new techniques is always desirable for future work. With improved feature reduction, it may be possible to raise computational efficiency and to yield better results. Finally, the data only represent 35.98% of construction companies, all located in the northern region of Taiwan. Various business types and special codes for different regions of Taiwan might affect the results. Further data collection is strongly suggested.

References

[1] National Tax Administration of Northern Taiwan, http://www.ntx.gov.tw/Index.aspx, 2009, Taiwan.
[2] J.-H. Chen, KNN based knowledge-sharing model for severe change order disputes in construction, Automation in Construction 17 (6) (2008) 773-779.
[3] J.-H. Chen, S.C. Hsu, Hybrid ANN-CBR model for disputed change orders in construction projects, Automation in Construction 17 (1) (2007) 56-64.
[4] T. Pulket, D. Arditi, Construction litigation prediction system using ant colony optimization, Construction Management and Economics 27 (3) (2009) 241-251.
[5] Z.Y. Zhao, Q.L. Lv, J. Zuo, G. Zillante, Prediction system for change management in construction project, Journal of Construction Engineering and Management 136 (6) (2010) 659-669.
[6] D.K.H. Chua, Y.C. Kog, P.K. Loh, A model for construction project budget and schedule performances using fuzzy data, Civil Engineering and Environmental Systems 18 (4) (2001) 303-329.
[7] S.-M. Huang, C.-F. Tsai, D.C. Ten, Y.-L. Cheng, A hybrid financial model for business failure prediction, Expert Systems with Applications 35 (3) (2008) 1034-1040.


[8] H.L. Chen, Model for predicting financial performance of development and construction corporations, Journal of Construction Engineering and Management 135 (11) (2009) 1190-1200.
[9] J.-H. Chen, J.-Z. Lin, Developing an SVM-based risk-hedging prediction model for construction material suppliers, Automation in Construction 19 (6) (2010) 702-708.
[10] J.-H. Chen, L.-R. Yang, M.-C. Su, J.-Z. Lin, A rule extraction based approach in predicting derivative use for financial risk hedging by construction companies, Expert Systems with Applications 37 (9) (2010) 6510-6514.
[11] C.H. Wong, J. Nicholas, G.D. Holt, Using multivariate techniques for developing contractor classification models, Engineering, Construction and Architectural Management 10 (2) (2003) 99-116.
[12] C.H. Wong, Contractor performance prediction model for the United Kingdom construction contractor: study of logistic regression approach, Journal of Construction Engineering and Management 130 (5) (2004) 691-698.
[13] A.M. Elazouni, Classifying construction contractors using unsupervised-learning neural networks, Journal of Construction Engineering and Management 132 (12) (2006) 1242-1253.
[14] M.-C. Su, A neural approach to knowledge acquisition, Ph.D. dissertation, University of Maryland, College Park, MD, 1993.
[15] M.-C. Su, Y.-X. Zhao, E. Lai, A neural-network-based approach to recognizing 3D arm movements, Biomedical Engineering: Applications, Basis and Communications 15 (1) (2003) 17-26.
[16] J.-H. Chen, KNN based knowledge sharing model for severe change order disputes in construction, Automation in Construction 17 (6) (2008) 773-779.
[17] J.-H. Chen, S.C. Hsu, Hybrid ANN-CBR model for disputed change orders in construction projects, Automation in Construction 17 (1) (2007) 56-64.
[18] S.K. Sinha, P.W. Fieguth, Neuro-fuzzy network for the classification of buried pipe defects, Automation in Construction 15 (1) (2006) 73-83.
[19] C. Wang, Z. Fu, J. Zhang, L. Hu, F. Wang, X. Zhang, On automatic classification for agricultural products web pages in Chinese based on LVQ network and ontology, Journal of Computational Information Systems 4 (3) (2008) 971-976.
[20] A. Asikainen, M. Kolehmainen, J. Ruuskanen, K. Tuppurainen, Structure-based classification of active and inactive estrogenic compounds by decision tree, LVQ and kNN methods, Chemosphere 62 (4) (2006) 658-673.
[21] M. Moussa, J. Ruwanpura, G. Jergeas, Decision tree modeling using integrated multilevel stochastic networks, Journal of Construction Engineering and Management 132 (12) (2006) 1254-1266.

[22] D. Arditi, T. Pulket, Predicting the outcome of construction litigation using boosted decision trees, Journal of Computing in Civil Engineering 19 (4) (2005) 387-393.
[23] T. Kohonen, Self-Organization and Associative Memory, 3rd Edition, Springer-Verlag, Berlin, Germany, 1989.
[24] T. Kohonen, Improved versions of learning vector quantization, International Joint Conference on Neural Networks, 1990, pp. 545-550.
[25] T. Kohonen, J. Kangas, J. Laaksonen, K. Torkkola, LVQ-PAK: The Learning Vector Quantization Program Package, Helsinki University of Technology, Finland, 1992.
[26] J.R. Quinlan, Induction of decision trees, Machine Learning 1 (1986) 81-106.
[27] J.R. Quinlan, Simplifying decision trees, International Journal of Man-Machine Studies 27 (1987) 221-234.
[28] J.R. Quinlan, Decision trees and decision making, IEEE Transactions on Systems, Man, and Cybernetics 20 (2) (1990) 339-346.
[29] M.-C. Su, Use of neural networks as medical diagnosis expert systems, Computers in Biology and Medicine 24 (6) (1994) 419-429.
[30] M.-C. Su, C.T. Hsieh, C.C. Chin, A neuro-fuzzy approach to speech recognition without time alignment, Fuzzy Sets and Systems 98 (1) (1998) 33-41.
[31] J.-H. Chen, L.-R. Yang, W.H. Chen, C.K. Chang, Case-based allocation for supervisory manpower in construction project sites, Construction Management and Economics 26 (8) (2008) 803-812.
[32] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice Hall, Upper Saddle River, NJ, 1999.
[33] D.R. Cooper, P.S. Schindler, Business Research Methods, 7th Edition, Irwin/McGraw-Hill, New York, NY, 2001.
[34] R.O. Duda, P.E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, NY, 1973.
[35] S.L. Salzberg, Learning with Nested Generalized Exemplars, Kluwer Academic, Hingham, MA, 1990.
[36] P.K. Simpson, Fuzzy min-max neural networks - Part 1: classification, IEEE Transactions on Neural Networks 3 (5) (1992) 776-786.
[37] G. Carpenter, S. Grossberg, N. Markuzon, J. Reynolds, D. Rosen, Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps, IEEE Transactions on Neural Networks 3 (5) (1992) 698-713.
[38] S. Abe, M.-S. Lan, A method for fuzzy rules extraction directly from numerical data and its application to pattern classification, IEEE Transactions on Neural Networks 3 (1) (1995) 18-28.

