
JOURNAL OF COMPUTING, VOLUME 3, ISSUE 12, DECEMBER 2011, ISSN 2151-9617, WWW.JOURNALOFCOMPUTING.ORG

Modeling of Two-dimensional Warranty Policy using Artificial Neural Network (ANN) Approach
Hairudin A. Majid, Jun C. Ang, and Azurah A. Samah
Abstract: Modeling of a two-dimensional warranty policy is an important but difficult task due to the uncertainty and instability of data collection. Moreover, conventional numerical methods for modeling a two-dimensional warranty policy involve complex distribution functions and cost analysis. Therefore, this paper presents an Artificial Intelligence (AI) technique, the Artificial Neural Network (ANN) approach, to improve the flexibility and effectiveness of the conventional method. The proposed ANN is trained on historical data using a multi-layer perceptron (MLP) with the feed-forward back-propagation (BP) learning algorithm. The Logarithmic (logsig) and Hyperbolic Tangent (tansig) sigmoid functions are chosen as transfer functions. Four popular training functions are adopted to obtain the best BP algorithm: Levenberg-Marquardt (trainlm), Gradient Descent (traingd), Gradient Descent with momentum (traingdm), and Gradient Descent with momentum and adaptive learning (traingdx) back-propagation. The ANN model demonstrated good statistical performance in terms of mean square error (MSE) for all four training functions, especially traingd. Finally, the adopted sensitivity analysis revealed that the proposed model was implemented successfully.

Index Terms: Artificial Intelligence, Artificial Neural Network, Two-dimensional Warranty.

1 INTRODUCTION
A two-dimensional warranty is either an implied or an express contract between the manufacturer and the consumer. Under this contract, the manufacturer agrees to provide satisfactory service, either repairing or replacing items that fail during the specified period or usage (whichever comes first). Nowadays, consumers compare product performance and the characteristics of comparable models of competing brands before purchasing a product. Warranty has therefore become a major new direction in the manufacturing industry, since it plays an important role in providing a guideline to customers.

In the automobile industry, accurate prediction of the optimal warranty period and warranty costs is often sought by the manufacturer. LeBlanc [1] mentioned that it is difficult to quantify the risks and rewards of offering a warranty: a warranty period that is too short, as well as one that is too long, may be unprofitable for the manufacturer [2]. A very short warranty period will interfere with sales, while a very long one will lead to losses from compensation of consumer claims. Hence, the application of Artificial Intelligence (AI) in the warranty market is an interesting requisite to affirm the rationality and

H.A. Majid is with the Faculty of Computer Science and Information System, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia.
J.C. Ang is with the Faculty of Computer Science and Information System, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia.
A.A. Samah is with the Faculty of Computer Science and Information System, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia.

accuracy of warranty policy prediction.

Vast research effort has been devoted to the use of ANN as a practical forecasting tool [3]. According to Khashei and Bijari [4], ANN is one of the most accurate and widely used forecasting models and has enjoyed fruitful applications in forecasting social, economic, engineering, foreign exchange and stock problems. Apart from that, the use of historical data in ANN for prediction or forecasting is very popular and its efficiency has been proven by many researchers, such as [5]-[8]. For example, a study conducted by Marzi et al. [5] used twenty years of data from S&P 500 European index call option prices to forecast the financial market, and Xu and Lim [8] used raw and historical data in their study to forecast the net flow of a car sharing system.

In this paper, we present the application of an ANN technique to predict the minimum warranty cost and optimal inspection interval during a warranty period. The paper starts with Section 1, which introduces two-dimensional warranty and the research problems. This is followed by Section 2, which describes the related work and motivation of this research. Section 3 presents the assumptions made to ease the implementation of the research. Section 4 briefly describes the ANN approach. The framework is thoroughly described in Section 5, followed by Section 6, which discusses the effects of ANN structures on the MSE values. The discussion ends with the conclusion presented in Section 7.



2 RELATED WORK AND MOTIVATION


A complete set of historical data is pivotal during the development of a warranty policy. However, it is always an exhausting and difficult job for an automobile workshop to maintain accurate and complete historical data. Historical warranty claim and service data usually contain partial information, as they may be recorded incorrectly and are subject to uncertainty and instability of data collection. Hence, prediction of an accurate warranty policy becomes a very difficult task. This view is supported by Yang [9], who stated that warranty modeling is complicated due to warranty censoring, especially for two-dimensional warranty. These vague real-world problems are typically too complex for a formal mathematical model [10].

For conventional mathematical modeling, the first step in predicting a warranty policy is to model the item failures and the costs of rectification actions over the warranty period [11]. Numerous failure probability models, such as the Weibull, Exponential and Lognormal, have been developed for automobile warranty claims data [12]-[14]. A probability distribution is a mathematical function used to model the frequencies and probabilities of occurrences over time. Such mathematical modeling involves several stages, so it may take a long time to become truly proficient. Thus, in this research, an attempt is made to apply an AI technique in warranty analysis to reduce the uncertainty problems and speed up the computing time.

AI approaches have been broadly adopted in many areas where conventional mathematical models were replaced with expert systems to improve the flexibility and effectiveness of the corresponding system. However, in warranty research, applications of AI techniques are few, for example [15]-[22]. Among the AI subject areas, soft computing was found to be the most popular technique integrated with the warranty area [23]-[27]. Applications of ANN in the warranty domain from previous work are discussed in various studies [28]-[30]. Hrycej and Grabert [28] used failure probability as a general functional; in particular, the authors used a multi-layer perceptron as a functional approximator, whose parameters were trained with the help of a minimum cross-entropy rule to forecast the warranty cost of alternative warranty condition scenarios. In other work, Lee et al. [29] designed an early claim warning system using neural network learning. Precisely, the system protects both manufacturers and consumers by giving prior warning about an abnormal increase of the claims rate at a certain point, based on trends and estimates obtained by monitoring various claim data. Lee et al. [30] also suggested a neural network learning model for determining the early warning grade of warranty claims data, which includes Analytic Hierarchy Process (AHP) analysis and knowledge of quality experts. In 2008, Lee et al. [23] proposed a different tool which is appropriate for modeling a two-dimensional warranty plan.

3 ASSUMPTIONS
A few assumptions have been made to simplify the implementation of the proposed algorithm. First, the usage conditions are assumed to be statistically similar and the warranty claims are reported immediately, with no delay. Second, the proposed ANN approach is considered to be suitable for any model and make of automobile. Third, the input and targeted output data of the ANN process are assumed to be completely known. Fourth, although the number of data points used in the development of the ANN is small, the data are assumed to be sufficient to achieve the performance goal in this research.

4 ARTIFICIAL NEURAL NETWORK (ANN): AN INTRODUCTION


An ANN, or simply neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation [31], [32]. According to Principe et al. [33], one of the most significant strengths of an ANN is its ability to learn from a limited set of examples. ANN has been successfully used to solve complicated problems in different domains such as pattern recognition, identification, classification, speech, vision, and control systems [34].

An ANN, which imitates the human brain in problem solving, is capable of modeling the complex relationship between input and output to find patterns in data. Typically, an ANN consists of a set of interconnected processing elements or nodes called perceptrons. The nodes are organized in different ways to form a network structure, where each ANN is composed of a collection of perceptrons grouped in layers. Each perceptron is designed to mimic its biological counterpart, the neuron, and to accept a weighted set of inputs and respond with an output [35].

A sophisticated ANN may have several hidden layers, feedback loops and time-delay elements, which are designed to make the network as effective as possible in discriminating relevant features or patterns [35]. The most well-known ANN is the feed-forward ANN, which consists of a set of nonlinear neurons connected together, in which information flows in the forward direction [31]. Among feed-forward networks, the multi-layer perceptron (MLP) is the most widely and commonly used model-free estimator. An MLP consists of at least three layers: the input, hidden and output layers. The input and output layers contain collections of neurons representing the input and output variables.

There are three learning types of ANN models: supervised, unsupervised and reinforcement learning. A network that modifies its weights based on a sequence of training vectors with associated target outputs is performing supervised training. On the other hand, unsupervised training refers to a network that modifies its weights


by assigning the most similar input vectors to an output unit. The third learning type is reinforcement training, which lies between supervised and unsupervised learning.

Among the various neural network models, back propagation is the best general-purpose model and is probably the best at generalization [36], [37]. Back propagation is the classical algorithm used for learning. It is an iterative gradient descent algorithm designed to minimize the mean squared error between the desired output and the generated output for each input pattern [38]. In this research, focus is given to the feed-forward, back-propagation model with a multi-layer perceptron.

5 ANN FRAMEWORK

Four main stages are included in the framework: data collection, data design and pre-processing, back-propagation network design, and network implementation. Figure 1 shows the main flow of this research.

Fig. 1. Main flow of this research

5.1 Data Collection

Seven hundred historical data sets were collected in this study from a Malaysian automobile company, known as Malaysian Truck and Bus (MTB). The data sets come from the same automobile product, known as the HICOM Perkasa. Each data set comprises the date when the claim was made, the date the claim was received, the engine number, the driving mileage and age when the claim was made, the production date, and the failure or defect data. The historical data cover a five-year period, from 1998 to 2002. Precisely, the historical data consist of information on 100 vehicles. For these 100 samples, once a maintenance service is done, the vehicle status and information are recorded as historical data.

According to Georgilakis et al. [39], the task of deciding which of the elements to select as input variables is an arduous one. The selected elements must correspond to parameters that directly or indirectly affect the prediction result. In this research, four main input patterns were produced: the mileage and the age of a vehicle during servicing, the defect or failure rate for forty components of a vehicle, and the recorded data on whether the visit was a service maintenance or an immediate repair. These historical data are passed to the next stage, data pre-processing.

5.2 Data design and pre-processing

Data pre-processing, or data normalization, has to be done before the data can be used to train the network [40]. According to Birbir et al. [41] and Wang [42], normalization of data refers to a process of scaling the numbers in a data set to improve the accuracy of the subsequent numeric computations. The authors also mention that normalization helps in shaping the activation function during the training process. Based on this statement, the elements with large differences among the data, such as mileage and age in the input and the optimal inspection interval and minimum cost in the target output, are normalized into [0, 1] by the expression:

X_i = (x_i - x_min) / (x_max - x_min)    (1)

where x_i is an observation value of factor i; x_max is the maximum value of factor i; x_min is the minimum value of factor i; and X_i is the normalized value of x_i. The normalized data were then divided into 70 groups (10 each) and the target outputs for each group were computed using the mathematical model. Finally, the data were divided into three parts for the training, validating and testing processes.
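As a small illustration of this pre-processing step, the Python sketch below applies the min-max scaling of Eq. (1) and its inverse (used later for denormalization). The mileage values are invented for demonstration only and are not data from the study; the paper itself performs these steps with the Matlab tools.

```python
import numpy as np

def normalize(x):
    """Scale the values of one factor into [0, 1] using Eq. (1)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def denormalize(x_norm, x_min, x_max):
    """Invert the scaling to recover values in the original units."""
    return x_norm * (x_max - x_min) + x_min

# Hypothetical mileage readings (km) taken from warranty claims
mileage = [12500, 48200, 30400, 75300, 9100]
scaled, lo, hi = normalize(mileage)
print(scaled)                       # all values now lie in [0, 1]
print(denormalize(scaled, lo, hi))  # recovers the original mileages
```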

5.3 Back Propagation (BP) Network Design


The structure of a BP network design includes the elements illustrated in Figure 2. Each of these elements is discussed thoroughly in this section.


Fig. 2. Structure of Network Design

5.3.1 Network Architecture Determination


In the stage of network architecture determination, the number of layers and the number of processing elements per layer are important considerations [41]. A multi-layer back-propagation neural network comprises input, hidden and output layers; its architecture is illustrated in Figure 3. In this research, the input variables are the age and mileage during servicing, the defective or faulty components, and the information on maintenance checking or immediate repair. The minimum warranty cost and optimal inspection interval were identified as the output data of the ANN process.

Fig. 3. Structure of back propagation ANN for two-dimensional warranty

In this stage, the data were grouped into two sets, i.e. input and output. The data were arranged in matrix form as x (input) and y (output) in columns. Figure 4 shows the generic form of the input and target output design. The data were keyed into Ms. Excel and saved as a text file so that they could be used in the Matlab tools.

Fig. 4. Input and Output design: (a) Input design; (b) Output design

5.3.2 Hidden Neuron Number and Transfer Function Optimization

The number of hidden layers and the number of nodes in each hidden layer affect the performance of an ANN. If the number of hidden nodes is too small, it is not enough to generalize the rules of the training sample; if it is too large, the ANN will take noisy data into memory. To date, there is no specific method to choose the optimal number of hidden layers and the number of nodes in a hidden layer [43]. Fausett [44] presented rules to determine the number of neuron nodes in the hidden layer, as listed in Table 1.

TABLE 1
NUMBER OF NEURONS OF HIDDEN LAYER

Formula    | Proposed by
h = n      | Tang and Fishwick
h = n/2    | Kang
h = 2n     | Wong
h = 2n+1   | Lippmann

n = number of neurons in the input layer
h = number of neurons in the hidden layer

Amongst the transfer functions used for the hidden and output layers are the Logarithmic Sigmoid function (logsig) and the Hyperbolic Tangent Sigmoid function (tansig). Both sigmoid functions are often used in the hidden layer due to their powerful nonlinear approximation capability [45]. For the output of the activation function, the range of tansig is (-1, 1) and that of logsig is (0, 1). The mathematical equations for both functions are:

logsig(x) = 1 / (1 + e^(-x))    (2)

tansig(x) = 2 / (1 + e^(-2x)) - 1    (3)
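To make Eqs. (2)-(3) and the heuristics of Table 1 concrete, the Python sketch below (an illustration only; the study itself uses the MATLAB toolbox functions logsig and tansig) evaluates both transfer functions and lists the candidate hidden-layer sizes for a network with 43 input neurons, the input size used later in this paper.

```python
import numpy as np

def logsig(x):
    """Logarithmic sigmoid, Eq. (2); output range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    """Hyperbolic tangent sigmoid, Eq. (3); output range (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(logsig(x))   # values between 0 and 1
print(tansig(x))   # values between -1 and 1

# Candidate hidden-layer sizes from Table 1 for n input neurons
n = 43
candidates = {"h = n": n, "h = n/2": n / 2, "h = 2n": 2 * n, "h = 2n+1": 2 * n + 1}
print(candidates)  # n/2 = 21.5 would be rounded to an integer in practice
```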

5.3.3 Training function optimization

The four popular training functions used in this study are trainlm, traingd, traingdm and traingdx [46]-[48].

The Levenberg-Marquardt back-propagation algorithm (trainlm) is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization. It was designed to approach second-order training speed without having to compute the Hessian matrix. Trainlm is the fastest back-propagation algorithm in the Matlab toolbox and is highly recommended as a first-choice supervised algorithm, although it requires more memory than the other algorithms. The training parameters for trainlm are epochs, show, goal, time, min_grad, max_fail, mu, mu_dec, mu_inc, mu_max and mem_reduc. The training status is displayed every show iterations of the algorithm. Training terminates under four conditions: if the number of iterations exceeds epochs, if the performance function drops below goal, if the training time is longer than time seconds, or if the magnitude of the gradient is less than min_grad. The parameter mu is the initial value for the damping factor. If the performance function is reduced by a step, the mu value is multiplied by mu_dec; on the other hand, if the performance is increased by a step, the mu value is multiplied by mu_inc. However, if the value of mu becomes bigger than mu_max, the algorithm is terminated.
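The mu adaptation described for trainlm reduces to a small rule. The sketch below is an illustrative Python rendering of that rule, not the MATLAB implementation; the numeric values in the call are placeholders only.

```python
def update_mu(mu, new_error, old_error, mu_dec=0.1, mu_inc=10.0, mu_max=1e10):
    """Adapt the Levenberg-Marquardt damping factor mu after one training step.

    A successful step (error reduced) shrinks mu, moving the update towards
    the Gauss-Newton direction; a failed step grows mu, moving it towards
    plain gradient descent. Training terminates once mu exceeds mu_max.
    """
    if new_error < old_error:
        mu *= mu_dec        # performance function reduced: multiply by mu_dec
    else:
        mu *= mu_inc        # performance increased: multiply by mu_inc
    return mu, mu > mu_max  # second value signals termination

mu, stop = update_mu(mu=0.006, new_error=0.012, old_error=0.020)
print(mu, stop)             # mu has been reduced by mu_dec; stop is False
```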


The gradient descent back-propagation algorithm (traingd) is a network training function that updates weight and bias values in the direction of the negative gradient of the performance function. The training parameters for traingd are epochs, show, goal, time, min_grad, max_fail and lr. The learning rate (lr) is multiplied by the negative of the gradient to determine the changes to the weights and biases; the larger the learning rate, the bigger the step. If the learning rate is too large, the algorithm becomes unstable; if it is too small, the algorithm takes a long time to converge.

The gradient descent with momentum back-propagation algorithm (traingdm) allows the network to respond not only to the local gradient, but also to recent trends in the error surface. The training parameters for traingdm are epochs, show, goal, time, min_grad, max_fail, lr and mc. The momentum constant, mc, acts like a low-pass filter, which allows the network to ignore small features in the error surface. A network without momentum can get stuck in a shallow local minimum, while a network with momentum can slide through such a minimum. The interaction of learning rate and momentum leads to accelerated learning [49].

The gradient descent with momentum and adaptive learning rate back-propagation algorithm (traingdx) is a network training function that updates weight and bias values according to gradient descent with momentum and an adaptive learning rate. The training parameters for traingdx are epochs, show, goal, time, min_grad, max_fail, max_perf_inc, lr, lr_inc, lr_dec and mc. The adaptive learning rate, governed by lr_inc and lr_dec, attempts to keep the learning step size as large as possible while keeping learning stable. If the new error exceeds the previous error by more than the predefined ratio max_perf_inc, the new weights and biases are abandoned in the network with momentum.
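To make the momentum and adaptive learning rate rules concrete, the following Python sketch implements one common form of these updates (previous weight change scaled by mc plus a step along the negative gradient scaled by lr, with lr decreased when a step is rejected and increased when the error improves). It is an illustration only, not the toolbox code, and the quadratic error function is a stand-in for the network MSE.

```python
import numpy as np

def gd_momentum_adaptive(err_grad, w, lr=0.1, mc=0.5, lr_inc=1.05, lr_dec=0.7,
                         max_perf_inc=1.04, epochs=500, goal=0.01):
    """Gradient descent with momentum and an adaptive learning rate (sketch)."""
    delta = np.zeros_like(w)
    err, grad = err_grad(w)
    for _ in range(epochs):
        cand = mc * delta - lr * grad          # momentum term + gradient step
        new_err, new_grad = err_grad(w + cand)
        if new_err > err * max_perf_inc:
            lr *= lr_dec                       # step rejected: reduce the learning rate
        else:
            if new_err < err:
                lr *= lr_inc                   # error improved: allow a larger step
            w, delta, err, grad = w + cand, cand, new_err, new_grad
        if err <= goal:
            break
    return w, err

def quadratic(w):
    """Toy error surface standing in for the network MSE."""
    return float(np.sum(w ** 2)), 2.0 * w

w_final, err_final = gd_momentum_adaptive(quadratic, np.array([2.0, -1.5]))
print(w_final, err_final)   # weights near zero, error at or below the goal
```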

5.4 ANN Implementation

This implementation stage involves three processes: network training, validating and testing. Figure 5 shows the flow used to determine the best ANN model.

Fig. 5. The flow to find the best ANN model

5.4.1 ANN Model Development

The ANN development process starts with network training, followed by network validation. The training process may consume a lot of time. At the beginning, the network models are trained with a set of input and target output data. The parameter settings of the training function are changed in an orderly manner to find the best ANN model. The network adjusts the weight coefficients, which usually begin as a random set, so that the next iteration produces a closer match between the target output and the actual output of the ANN. The training method tries to minimize the current errors of all processing elements, and the global error is calculated by a performance index.

If the performance index in the training process achieves the targeted goal, which is 0.01, the validating process is carried out with another set of input data and target output data. Similar to the training process, a global error is calculated by the performance index; the targeted goal in the validation stage must lie in the range of 0 to 0.07 to be accepted. A network model that has undergone the training and validating processes is called a developed system. The developed system, with its weights, is then passed to the next process, which is the testing process.
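The train-validate-test flow of Figure 5 can be written as a short selection loop. The Python sketch below is a schematic rendering of that flow under the thresholds stated in the text (training goal 0.01, validation error accepted up to 0.07, testing MSE below 0.03); the train and evaluate functions are hypothetical placeholders standing in for the actual network training and evaluation.

```python
def select_best_model(candidate_configs, train, evaluate,
                      train_goal=0.01, valid_limit=0.07, test_limit=0.03):
    """Pick the candidate network with the smallest testing MSE.

    candidate_configs: iterable of network/parameter settings to try.
    train(config)       -> (model, training_mse)   placeholder for training
    evaluate(model, s)  -> mse on data split s ('valid' or 'test')
    """
    best_model, best_mse = None, float("inf")
    for config in candidate_configs:
        model, train_mse = train(config)
        if train_mse > train_goal:            # training goal not met: discard
            continue
        if evaluate(model, "valid") > valid_limit:
            continue                          # validation error too large: discard
        test_mse = evaluate(model, "test")
        if test_mse <= test_limit and test_mse < best_mse:
            best_model, best_mse = model, test_mse
    return best_model, best_mse
```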



5.4.2 Model Performance Evaluation


The network performance was quantified by calculating the Mean Square Error (MSE) of the difference between the predicted and targeted output data sets. Hagan et al. [50] highlighted that the performance measure of the back-propagation algorithm for a multi-layer ANN is the MSE. The learning algorithm adjusts the network parameters in order to minimize the MSE. The expression for the MSE is:

MSE = (1/n) * sum_{k=1..n} (z_k - y_k)^2    (4)

where z_k is the predicted output, y_k is the actual target output, and (z_k - y_k) is the error of the k-th output, k = 1, 2, ..., n.
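As a small illustration, Eq. (4) can be computed directly; the arrays below are arbitrary example values, not outputs from the study.

```python
import numpy as np

def mse(z_pred, y_target):
    """Mean square error of Eq. (4) between predicted and target outputs."""
    z_pred, y_target = np.asarray(z_pred), np.asarray(y_target)
    return np.mean((z_pred - y_target) ** 2)

print(mse([0.67, 1.27, 1.04], [0.58, 1.42, 1.08]))  # small positive value
```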


5.4.3 Network Testing


The testing process can only start if the training and validating processes have been carried through by achieving their goals. At this step, the neural network with the smallest MSE value is defined as the best neural network model. In this research, an interval of (0, 0.03) for the MSE value of the output is defined as the smallest value.


6 EXPERIMENTAL RESULTS
An optimized ANN structure is used to illustrate the performance of the proposed model. Since the neural network is a nonlinear procedure and the network parameters affect each other, the adjustment of each parameter to optimize the whole network is not an easy task [51]. This section discusses the optimized ANN structure which produced the minimum MSE value for each training function. An analysis is presented to verify the accuracy of the results.

Tables 2 and 3 display the structure of the best ANN model and its corresponding parameter settings for each training function, i.e. the structure and settings which achieved the smallest MSE value in the testing process.

TABLE 2
STRUCTURE OF THE BEST ANN MODEL IN EACH TRAINING FUNCTION

Item                      | Traingd        | Traingdm       | Traingdx       | Trainlm
Number of hidden layers   | Three          | Three          | Three          | Two
Input node                | 43 nodes       | 43 nodes       | 43 nodes       | 43 nodes
Output node               | 2 nodes        | 2 nodes        | 2 nodes        | 2 nodes
Hidden node               | logsig 1 x 86  | logsig 1 x 86  | logsig 1 x 43  | logsig 1 x 22
                          | tansig 1 x 86  | tansig 1 x 43  | tansig 1 x 22  | tansig 1 x 22
                          | logsig 1 x 22  | logsig 1 x 43  | logsig 1 x 22  |
Output transfer function  | tansig         | tansig         | tansig         | logsig
MSE value (goal = 0.01)   |                |                |                |
  Training                | 0.0100         | 0.0100         | 0.0100         | 0.0096
  Validating              | 0.0424         | 0.0369         | 0.0404         | 0.0580
  Testing                 | 0.0098         | 0.0172         | 0.0194         | 0.0113

The best ANN model among the training functions: Traingd (smallest testing MSE).

TABLE 3
THE PARAMETERS USED IN EACH TRAINING FUNCTION

Parameter                    | Traingd                   | Traingdm                  | Traingdx                  | Trainlm
epochs                       | 300000 (goal met: 206128) | 300000 (goal met: 106643) | 300000 (goal met: 174933) | 300000 (goal met: 811)
goal                         | 0.01                      | 0.01                      | 0.01                      | 0.01
learning rate (lr)           | 0.3                       | 0.6                       | 0.9                       | -
learning dec (lr_dec)        | -                         | -                         | 0.7                       | -
learning inc (lr_inc)        | -                         | -                         | 1.05                      | -
max_fail                     | 5 (default)               | 5 (default)               | 5 (default)               | 5 (default)
max_perf_inc                 | -                         | -                         | 1.04                      | -
momentum constant (mc)       | -                         | 0.1                       | 0.9                       | -
Initial Mu (mu)              | -                         | -                         | -                         | 0.006
Mu decrease factor (mu_dec)  | -                         | -                         | -                         | 0.1
Mu increase factor (mu_inc)  | -                         | -                         | -                         | 10
Maximum Mu (mu_max)          | -                         | -                         | -                         | 1.0000e+010
min_grad                     | 1.0000e-010               | 1.0000e-010               | 1.0000e-010               | 1.0000e-010
show                         | 100                       | 100                       | 100                       | 100
time                         | Infinity                  | Infinity                  | Infinity                  | Infinity


Table 3 shows the best parameter settings used to obtain the best ANN model.

6.1 Accuracy Analysis upon Experimental Result

The derived output and the expected output must undergo denormalization before the analysis process. The outputs are denormalized by the expression:

X = x (x_max - x_min) + x_min

where X is the denormalized value of x; x is the derived output or expected output; x_max is the maximum value used in the normalization; and x_min is the minimum value used in the normalization.

The denormalized derived outputs, which were generated by the best ANN model, are compared with the denormalized expected outputs in terms of accuracy using the following equation:

Accuracy (%) = (1 - |z_f - z_e| / z_e) x 100

where z_f is the derived output and z_e is the expected output. Table 4 shows the denormalized output and the accuracy of the proposed ANN model in deriving the optimum warranty cost and inspection interval.

TABLE 4
THE ACCURACY OF THE PROPOSED MODEL

                            | Expected output | Derived output | Accuracy (%)
Inspection interval (Year)  | 0.5833          | 0.6723         | 84.74
                            | 1.4167          | 1.2688         | 89.56
                            | 1.0833          | 1.0419         | 96.17
Warranty Cost (RM)          | 178.09          | 169.18         | 95.00
                            | 75.64           | 89.10          | 82.21
                            | 101.50          | 107.98         | 93.62

From this accuracy analysis, it is revealed that an average accuracy of 90 percent is achieved by the proposed model. It can be summarized that the proposed ANN model is successfully applied in deriving the optimum warranty cost and optimum inspection interval.

6.2 Sensitivity Analysis: Statistic T-test

The T-test is a hypothesis test used to investigate the significance of two samples from a normally distributed population. The T-test is probably the best known technique and the most frequently used statistical data analysis method for hypothesis testing [52], [53].

In this study, the T-test is conducted to assess the accuracy of the results obtained by the proposed ANN model. The hypotheses used in this case study are:

H0: There is no significant difference between the derived output (x1) and the expected output (x2), that is x1 = x2.
H1: There is a significant difference between the derived output (x1) and the expected output (x2), that is either x1 ≠ x2, x1 < x2, or x1 > x2.

Tables 5 and 6 show the T-test process from steps 2 to 6 for the inspection interval and the warranty cost, respectively.

TABLE 5
T-TEST FOR THE INSPECTION INTERVAL

                                   | Derived output (x1) | Expected output (x2)
Replicate 1                        | 0.6723              | 0.5833
Replicate 2                        | 1.2688              | 1.4167
Replicate 3                        | 1.0419              | 1.0833
Σx                                 | 2.9830              | 3.0833
Observation (n)                    | 3                   | 3
Mean                               | 0.9943              | 1.0278
Σd^2 = Σx^2 - ((Σx)^2 / n)         | 0.1813              | 0.3519
Variance, σ^2 = Σd^2 / (n - 1)     | 0.0907              | 0.1760
Pooled standard deviation          | 0.3651
T-value                            | 0.1122

TABLE 6
T-TEST FOR WARRANTY COST

                                   | Derived output (x1) | Expected output (x2)
Replicate 1                        | 169.18              | 178.09
Replicate 2                        | 89.10               | 75.64
Replicate 3                        | 107.98              | 101.50
Σx                                 | 366.26              | 355.23
Observation (n)                    | 3                   | 3
Mean                               | 122.0867            | 118.4100
Σd^2 = Σx^2 - ((Σx)^2 / n)         | 3504.9003           | 5676.9234
Variance, σ^2 = Σd^2 / (n - 1)     | 1752.4501           | 2838.4617
Pooled standard deviation          | 47.9109
T-value                            | 0.0940

In this study, a 95 percent confidence interval is adopted. The significance level and critical region are stated below.

Significance level, α = 0.05
Critical region: Z < -1.96 or Z > 1.96

Based on the calculations in Tables 5 and 6, both T-values, for the inspection interval and the warranty cost, lie outside the critical region. Hence, the null hypothesis H0 is accepted at the 5 percent significance level. It can be concluded that there is no significant difference between the derived output and the expected output.
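The accuracy measure and the pooled T-test can be reproduced in a few lines. The Python sketch below recomputes the Table 4 and Table 5 quantities from the three inspection-interval replicates reported above; it is an illustrative check of the formulas, not code from the study.

```python
import numpy as np

def accuracy(z_f, z_e):
    """Accuracy (%) of a derived value z_f against an expected value z_e."""
    return (1 - abs(z_f - z_e) / z_e) * 100

def pooled_t(x1, x2):
    """Two-sample T-value using a pooled standard deviation (equal n)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n = len(x1)
    v1 = x1.var(ddof=1)              # Σd²/(n-1) for the derived output
    v2 = x2.var(ddof=1)              # Σd²/(n-1) for the expected output
    sp = np.sqrt((v1 + v2) / 2)      # pooled standard deviation
    return (x1.mean() - x2.mean()) / (sp * np.sqrt(2 / n))

derived  = [0.6723, 1.2688, 1.0419]  # inspection interval, ANN output (years)
expected = [0.5833, 1.4167, 1.0833]  # inspection interval, expected (years)

print([round(accuracy(d, e), 2) for d, e in zip(derived, expected)])
# [84.74, 89.56, 96.18] (Table 4 reports 96.17 for the last value due to rounding)
print(round(pooled_t(derived, expected), 3))
# about -0.112: |T| < 1.96, so H0 is not rejected (cf. Table 5)
```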

7 CONCLUSIONS
This paper has presented a faster and intelligent way to predict the minimum warranty cost and optimal inspection interval during a warranty period by using artificial neural networks. Different network structures were trained and validated against the analytical results of a mathematical model. The model was then tested with a series of historical data. It was found that the most efficient algorithm for modeling the two-dimensional warranty policy is the back-propagation learning algorithm with Gradient Descent. In this research, although the amount of experimental data is limited, the significant results show that the proposed algorithm is capable of predicting the two-dimensional warranty policy. For further research, it is recommended that other AI techniques be used in modeling the two-dimensional warranty policy in order to reduce the complexity and time consumption of conventional mathematical models.


ACKNOWLEDGEMENT
The authors honorably appreciate the Ministry of Science, Technology and Innovation (MOSTI) for the funding of the E-Science grant and the Research Management Center (RMC), Universiti Teknologi Malaysia (UTM), for the support in making this project a success.

REFERENCES
[1] LeBlanc B., "Analysis of decisions involved in offering a product warranty," Annual Reliability and Maintainability Symposium, 2008.
[2] Chukova S. S., Dimitrov B. N., and Rykov V. V., "Warranty Analysis (Review)," Journal of Mathematical Sciences, vol. 67, no. 6, pp. 3486-3508, 1993, doi:10.1007/BF01096273.
[3] Weigend A. S., Rumelhart D. E., and Huberman B. A., "Generalization by Weight-Elimination with Application to Forecasting," Advances in Neural Information Processing Systems 3 (NIPS*90), 1991.
[4] Khashei M. and Bijari M., "An artificial neural network (p, d, q) model for time series forecasting," Expert Systems with Applications, vol. 37, pp. 479-489, 2010.
[5] Marzi H. and Marzi E., "Use of Neural Networks in Forecasting Financial Market," IEEE Conference on Soft Computing in Industrial Applications, SMCia'08, pp. 240-245, 2008.
[6] Taylor J. W. and Buizza R., "Neural Network Load Forecasting With Weather Ensemble Predictions," IEEE Transactions on Power Systems, vol. 17, no. 3, 2002.
[7] Wang Q., Yu B. and Zhu J., "Extract Rules from Software Quality Prediction Model Based on Neural Network," Proceedings of the 16th IEEE International Conference on Tools with Artificial Intelligence, ICTAI'04, 2004.
[8] Xu J. X. and Lim J. S., "A new evolutionary neural network for forecasting net flow of a car sharing system," IEEE Congress on Evolutionary Computation, CEC 2007, pp. 1670-1676, 2007.
[9] Yang G. and Zaghati Z., "Two-Dimensional Reliability Modeling From Warranty Data," IEEE Proceedings Annual Reliability and Maintainability Symposium, pp. 272-278, 2002.
[10] Kastner J. K., "A review of expert systems," European Journal of Operational Research, vol. 18, pp. 285-292, 1984.
[11] Murthy D. N. P. and Djamaludin I., "New product warranty: A literature review," International Journal of Production Economics, vol. 79, pp. 231-260, 2002.
[12] Chattopadhyay G. and Rahman A., "Development of lifetime warranty policies and models for estimating costs," Reliability Engineering & System Safety, vol. 93, no. 4, pp. 522-529, 2008.
[13] Majeske K. D., "A mixture model for automobile warranty data," Proceedings of Reliability Engineering and System Safety, pp. 71-77, 2003.
[14] Askar K., Dougherty M., and Roche T., "Agent based system that support reliability transport engineering," Proceedings of the International Conference on Applications of Advanced Technologies in Transportation Engineering, Beijing, 2004.
[15] Deep R. et al., "Bit-Mapping Classifier Expert System In Warranty Selection," Proceedings of the IEEE Conference on National Aerospace and Electronics, Dayton, USA.
[16] Derr J. H. and Louch R. J., "Advanced methodology for projecting field repair rates and maintenance costs for vehicle electronic systems," SAE (Society of Automotive Engineers) Transactions, vol. 100 (Sect. 2), pp. 1-11, 1991.
[17] Hyman W. A., "Legal liability in the development and use of medical expert systems," Journal of Clinical Engineering, vol. 14, no. 2, pp. 157-163, 1989.
[18] Kaler Jr. G. M., "Expert system predicts service," HPAC Heating, Piping, Air Conditioning, vol. 60, no. 11, pp. 99-101, 1988.
[19] Kasravi K., "Improving the engineering processes with text mining," Proceedings of the ASME Design Engineering Technical Conference, Salt Lake City, UT, 2004.
[20] Lee S. and Chang L. M., "Digital image processing methods for assessing bridge painting rust defects and their limitations," Proceedings of the 2005 ASCE International Conference on Computing in Civil Engineering, Cancun, 2005.
[21] Lin P. C., Wang J., and Chin S. S., "Dynamic optimization of price, warranty length and production rate," International Journal of Systems Science, vol. 40, no. 4, pp. 411-420, 2009.
[22] Lee S. H., Lee J. H., et al., "A Fuzzy Reasoning Model of Two-dimensional Warranty System," 7th International Conference on Advanced Language Processing and Web Information Technology, Liaoning, China, IEEE Computer Society, 2008b.


[23] Lee S. H., Lee D. S., et al., "A Fuzzy Logic Based Approach to Two-Dimensional Warranty System," 4th International Conference on Intelligent Computing, Shanghai, China, Springer-Verlag Berlin, 2008c.
[24] Vujosevic M., Makajic-Nikolic D., and Strak M., "Fuzzy Petri net based reasoning for the diagnosis of bus condition," Seventh Seminar on Neural Network Applications in Electrical Engineering Proceedings, NEUREL 2004, Belgrade, 2004.
[25] Zhou G., Cao Z., Meng Z., and Xu Q., "A GA-based approach on a repair logistics network design with M/M/s model," Proceedings of the International Conference on Computational Intelligence and Security, CIS 2008, Suzhou, 2008.
[26] Hotz E., Nakhaeizadeh G., Petzsche B., and Spiegelberger H., "WAPS, a Data Mining Support Environment for the Planning of Warranty and Goodwill Costs in the Automobile Industry," Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, California, United States, pp. 417-419, 1999b.
[27] Hrycej T. and Grabert M., "Warranty Cost Forecast Based on Car Failure Data," Proceedings of the International Joint Conference on Neural Networks, Orlando, Florida, USA, pp. 108-113, 2007.
[28] Lee S. H., Seo S. C., et al., "A Study on Warning/Detection Degree of Warranty Claims Data Using Neural Network Learning," Sixth International Conference on Advanced Language Processing and Web Information Technology (ALPIT), 2007.
[29] Lee S. et al., "On determination of early warning grade based on AHP analysis in warranty database," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Shanghai, pp. 84-89, 2008a.
[30] Domingo M., Agell N. and Parra X., "Connectionist techniques to approach sustainability modeling," Revista Internacional De Tecnologia, Sostenibilidad y Humanismo, no. 1, pp. 61-73, December 2006.
[31] Zabiri H. and Mazuki N., "A Black-Box Approach in Modeling Valve Stiction," International Journal of Mathematical, Physical and Engineering Sciences, 4:1, 2010.
[32] Principe J., Neural Networks and Adaptive Systems, John Wiley and Sons: New York, NY, 1999.
[33] Sozen A. and Arcaklioglu E., "Solar potential in Turkey," Applied Energy, vol. 80, no. 1, pp. 35-45, 2004.
[34] Oladokun V. O., Adebanjo A. T. and Charles-Owaba O. E., "Predicting Students' Academic Performance using Artificial Neural Network: A Case Study of an Engineering Course," The Pacific Journal of Science and Technology, vol. 9, no. 1, 2008.
[35] Lawrence J., Introduction to Neural Network: Design, Theory and Application, 6th ed., Nevada City, CA: California Scientific Software, 1994.
[36] Mitchell T. M., Machine Learning, 1st edition, New York: McGraw-Hill Science/Engineering/Math, 1997.
[37] Rumelhart D. E. and McClelland J. L., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Cambridge, Mass.: MIT Press, 1986.
[38] Georgilakis P. S., Hatziargyriou N. D., Doulamis A. D., Doulamis N. D. and Kollias S. D., "A Neural Network Framework for Predicting Transformer Core Losses," Proceedings of the 21st 1999 IEEE International Conference on Power Industry Computer Applications, PICA'99, pp. 301-308, 1999.
[39] Nor Haizan Mohamed Radzi, Habibollah Haron, and Tuan Irdawati Tuan Johari, "Lot Sizing Using Neural Network Approach," Proceedings of the 2nd IMT-GT Regional Conference on Mathematics, Statistics and Applications, Universiti Sains Malaysia, Penang, pp. 13-15, June 2006.
[40] Birbir Y., Nogay H. S. and Topuz V., "Estimation of Total Harmonic Distortion in Short Chorded Induction Motors Using Artificial Neural Network," Proceedings of the 6th WSEAS International Conference on Applications of Electrical Engineering, Istanbul, Turkey, pp. 206-210, 2007.
[41] Wang S., "Neural Network Approach to Generating the Learning Curve," INFOR, vol. 31, no. 3, pp. 136-150, 1993.
[42] Gao M., Sun F., Zhou S., Shi Y., Zhao Y. and Wang N., "Performance prediction of wet cooling tower using artificial neural network under cross-wind conditions," International Journal of Thermal Sciences, vol. 48, pp. 583-589, 2009.
[43] Fausett L. V., Fundamentals of Neural Network: Architecture, Algorithms, and Applications, N.J.: Prentice Hall, 1994.
[44] Zhang Y., Li W., Zeng G. M., Tang L., Feng C. L., Huang D. L. and Li Y. P., "Novel Neural Network-Based Prediction Model for Quantifying Hydroquinone in Compost with Biosensor Measurements," Environmental Engineering Science, vol. 26, no. 6, pp. 1063-1070, 2009.
[45] Men H., Li X., Wang J., and Gao J., "Applies of Neural Network to identify gases based on electronic nose," IEEE International Conference on Control and Automation, FrC4-2, Guangzhou, China, 2007.
[46] Siraj Fadzilah, Yusoff Nooraini and Kee L. C., "Emotion classification using neural network," International Conference on Computing & Informatics, ICOCI'06, pp. 1-7, 6-8 June 2006.
[47] Zhou M., Zhang S., Wen J. and Wang X., "Research on CVT Fault Diagnosis System Based on Artificial Neural Network," IEEE Vehicle Power and Propulsion Conference (VPPC), Harbin, China, 2008.
[48] Ke J., Liu X., and Wang G., "Theoretical and Empirical Analysis of the Learning Rate and Momentum Factor in Neural Network Modeling for Stock Prediction," Advances in Computation and Intelligence, Lecture Notes in Computer Science, vol. 5370/2008, pp. 697-706, 2008, doi:10.1007/978-3-540-92137-0_76.
[49] Hagan M. T., Demuth H. B. and Beale M. H., Neural Network Design, PWS Publishing Company, 1996.
[50] Lee T. L., "Neural network prediction of a storm surge," Ocean Engineering, vol. 33, pp. 483-494, 2006.
[51] Marryanna L., Siti Aisah S. and Saiful Iskandar K., "Water quality response to clear felling trees for forest plantation establishment at Bukit Terek F. R., Selangor," Journal of Physical Science, vol. 18, no. 1, pp. 33-45, 2007.
[52] Neideen T. and Brasel K., "Understanding Statistical Tests," Journal of Surgical Education, pp. 93-96, 2007.


Hairudin Abdul Majid received his Diploma and Bachelor of Science in Computer Science, majoring in Industrial Computing, from Universiti Teknologi Malaysia in 1993 and 1995 respectively. In 1998, he obtained his M.Sc. in Operational Research and Applied Statistics from the University of Salford, UK. His Ph.D. thesis, in Warranty and Maintenance, has been submitted. Currently, he is a lecturer in the Faculty of Computer Science and Information System, Universiti Teknologi Malaysia. His research interests focus on Image Processing, Operations Management, and Warranty and Maintenance. Mr. Hairudin received the Excellent Service Award from Universiti Teknologi Malaysia in 2004 and the Excellent Staff Award from ISS Service in Manchester, UK, in 2006. Mr. Hairudin is the author of about 19 papers, one book chapter entitled Recent Operations Research Modelling and Applications (Warranty Modelling) (UTM, 2009) and one textbook entitled Permodelan Simulasi (UTM, 2000). He has been a member of the UK Operational Research Society and is an active member of the Operations and Business Intelligence (OBI) Research Group.

Ang Jun Chin obtained her M.Sc. in Computer Science from Universiti Teknologi Malaysia in 2011 and her Bachelor of Computer Science in 2009. She is currently working as a system analyst in Singapore.

Azurah A. Samah received her Diploma and Bachelor's degree from Universiti Teknologi Malaysia in 1991 and 1993 respectively. In 1996, she obtained her M.Sc. from the University of Southampton, UK, and in 2010 she received her Ph.D. from the University of Salford, UK. Currently she is a lecturer in the Faculty of Computer Science and Information System, Universiti Teknologi Malaysia. Her research interests encompass Image Processing, Soft Computing Techniques, and Operational and Simulation Modeling.
