
Expert Systems with Applications 37 (2010) 4358–4367


Applying moving back-propagation neural network and moving fuzzy neuron network to predict the requirement of critical spare parts

Fei-Long Chen a, Yun-Chin Chen a,*, Jun-Yuan Kuo b

a Department of Industrial Engineering and Engineering Management, National Tsing-Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan, ROC
b Department of International Business, Kainan University, No. 1, Kainan Road, Taoyuan 33857, Taiwan, ROC
* Corresponding author. Tel.: +886 922201151; fax: +886 3 341 2175. E-mail addresses: citysching@yahoo.com.tw, d947817@oz.nthu.edu.tw (Y.-C. Chen).

Article info

Keywords: Moving back-propagation neural network; Moving fuzzy neuron network; Critical spare part; Prediction

Abstract
Critical spare parts (CSP) are vital to machine operation; compared with non-critical spare parts, they are more expensive, show larger demand variation, and have longer purchasing lead times. Therefore, it is an urgent issue to devise a way to forecast the future required amount of CSP accurately.
This investigation proposes a moving back-propagation neural network (MBPN) and a moving fuzzy neuron network (MFNN) to effectively predict the CSP requirement and provide a reference for spare parts control. This investigation also compares their prediction accuracy with that of other forecasting methods, such as the grey prediction method, the back-propagation neural network (BPN), and the fuzzy neuron network (FNN). All of the prediction methods are evaluated on real data provided by well-known wafer testing factories in Taiwan, and the effectiveness of the proposed methods is demonstrated through a real case study.
© 2009 Elsevier Ltd. All rights reserved.

1. Introduction
Spare parts management plays an increasingly important role in factories. It not only directly influences the equipment's operation and yield rate, but also influences the slack risk and inventory level. Therefore, spare parts management has always been a focus for managers. Spare parts can be classified as critical or non-critical, depending on the criticality of the equipment for production.
Critical spare parts are considerably expensive, their demand variation is huge, their purchasing lead time is long, and they are necessary for machine operation. The prices of critical spare parts (CSP) range from tens to hundreds of thousands of dollars. As the equipment operates, some critical spare parts need to be replaced due to wear and tear. If an appropriate amount of critical spare parts is not prepared, machines may not be able to function, resulting in a waste of resources. However, predicting the demand of CSP accurately is a complicated issue: one not only has to consider the quantity of work orders, but also has to deliberate about other unpredictable factors, such as human factors or spare-part quality problems. Such circumstances are even more obvious in the semiconductor industry. For this reason, it is urgent to be able to accurately forecast the requirement of CSP in advance.
To solve this problem, this investigation proposes two forecasting methods to predict the future CSP requirement accurately: one is the moving back-propagation neural network (MBPN) and the other is the moving fuzzy neuron network (MFNN). This investigation also compares their prediction accuracy with that of other forecasting methods, such as the grey prediction method, the back-propagation neural network (BPN), the fuzzy neuron network (FNN), and the moving average method (MA). All of the prediction methods are evaluated on real data provided by well-known wafer testing factories in Taiwan; the effectiveness of the proposed methods is demonstrated through a real case study, and both the MBPN and the MFNN achieve better prediction accuracy than the other forecasting methods mentioned in this study.
This paper is organized as follows: Section 2 gives an overview of the related literature on spare parts demand forecasting and management. Section 3 illustrates the methodologies used in this study to solve the real forecasting problem. Section 4 presents a case study and demonstrates the workability of the proposed methods. The conclusions are provided in Section 5.

2. Literature review
Although the prediction of spare parts consumption is very important in industry, research on demand forecasting of spare parts is still under-developed; few investigations focus on CSP requirement prediction, and investigations in the semiconductor industry are even fewer. In general, there is no appropriate forecasting model for predicting the requirement of critical spare parts.
Prakash, Ganesh, and Rajendran (1994) applied the analytic hierarchy process (AHP) to evaluate the criticality of spare parts. Their approach was to categorize the parts using a variety of partitioning techniques, including three kinds of analysis: the ABC

analysis; the fast, slow and non-moving items (FSN) analysis; and the vital, essential and desirable (VED) analysis.
Kabir and Al-Olayan (1996) developed a simulation model to determine the optimal value of the decision variable by minimizing the total cost of replacement and inventory. Dekker, Kleijn, and Rooij (1998) indicated that spare parts demand can be classified into critical and non-critical, and proposed a stocking policy with a deterministic replenishment lead time, which is verified by simulation. Ghobbar and Friend (2003) experimented with 13 forecasting methods to predict spare parts demand for airline fleets; they also devised a predictive error-forecasting model which compares and evaluates forecasting methods based on their factor levels when faced with intermittent demand.
Aronis, Magou, Dekker, and Tagaras (2004) calculated the required stock levels for each type of part at several locations and determined the distributions of demand for spare parts with a Bayesian approach. Caglar, Li, and Simchi-Levi (2004) investigated a spare parts inventory problem and formulated a model to minimize the inventory cost subject to a response time constraint at each field depot.
Li and Kuo (2008) focused on the automobile spare parts inventory in a central warehouse; they proposed an enhanced fuzzy neural network (EFNN), which applied fuzzy AHP to determine the factor weights and generated and refined activation functions using a genetic algorithm. The results are then input to neural networks for training and analysis. Hua and Zhang (2006) applied support vector machines to forecast the lead time of spare parts using real data sets of 30 kinds of spare parts from a petrochemical enterprise in China.
Based on the above literature, most research focuses on inventory level policies and the criticality evaluation of spare parts; investigations focusing on the prediction of spare parts requirements are very rare. If the demand of critical spare parts can be accurately predicted, there will be no problem in controlling inventory levels and purchasing quantities. Hence, this investigation proposes a moving back-propagation neural network (MBPN) and a moving fuzzy neuron network (MFNN) to predict the demand of critical spare parts accurately, improving the efficiency of purchasing and inventory control.

3. Methodology
Several methods have been employed to forecast target values in many fields, and the grey prediction method and the back-propagation neural network (BPN) have shown good prediction performance in many of them. Sheu and Kuo (2006) applied a grey prediction model to accurately forecast the timing of preventive maintenance. Lin and Yang (2003) accurately forecast the output value of Taiwan's opto-electronics industry through a grey forecasting model. Ansuj, Camargo, Radharamanan, and Petry (1996) used time-series models and a BPN to predict sales behavior; the results indicated that the BPN had better prediction performance than the time-series models. Law (2000) utilized a BPN to forecast tourism demand; the results indicated that the BPN has higher forecasting accuracy than time-series models, feed-forward neural networks, and regression models. Thus, this paper applies the grey prediction model and the BPN to forecast the demand of CSP.
Yeh (1999) proposed the fuzzy neuron network (FNN) and tested its efficiency and accuracy in modeling a chaotic two-dimensional mapping; the experimental results demonstrated that the FNN performs better than the BPN. Yeh (2005) applied the FNN to model complex classification problems and function-mapping problems; the results indicated that the FNN is superior to the BPN in accuracy and speed of learning. Because the fuzzy neuron network performs well in function mapping and classification, this research also applies the FNN for prediction and examines its accuracy.
Data processing is an important procedure for neural networks and may greatly affect prediction accuracy. However, when forecasting the CSP requirement, it is possible to face a difficult situation in which the data have great variation, the influential factors are not easy to identify, and the values of the influential factors corresponding to the CSP requirement in the coming term may be unknown. It is difficult to precisely forecast the desired target using a single method; hence, this investigation integrates the moving average method, the back-propagation neural network and the fuzzy neuron network and proposes two new forecasting methods: the moving back-propagation neural network (MBPN) and the moving fuzzy neuron network (MFNN). The research framework of this investigation is shown in Fig. 1. At the beginning, the authors collect the raw data and apply the analytic hierarchy process (AHP) to sieve out the more influential factors; the data are then input into the different forecasting methods, including grey theory, MA, BPN, FNN, MBPN and MFNN, and the appropriate parameters of each forecasting method are found; afterwards, the authors derive the prediction results and compare the prediction accuracy of each forecasting method.
3.1. Analytic hierarchy process
The analytic hierarchy process (AHP) was proposed by Thomas L. Saaty in 1980 and is primarily applied to decision problems that involve uncertainty and many evaluation criteria. AHP makes use of pairwise comparisons, hierarchical structures, and 9-point ratio scaling to assign weights to attributes. In this study, the AHP is used to select the influential factors of the target and determine their relative importance.
The three main steps of AHP are illustrated as follows:

Step 1: Construction of the hierarchical structure.
AHP decomposes a problem into a hierarchy of a goal, attributes, and alternatives. The authors set up the goal, create criteria for assessing the alternatives, and structure the hierarchy, which breaks the complex problem down into a number of small constituent elements.

Step 2: Calculation of the weights between factors at each hierarchical level.
This step asks evaluators to make pairwise comparisons of the relative importance of the variables using the 9-point scale. Based on the results of the questionnaire, a pairwise comparison matrix is constructed to calculate the characteristic values (eigenvalues) and characteristic vectors (eigenvectors), and the consistency of the matrix is examined by deriving a consistency index (C.I.). For each comparison matrix, the consistency ratio (C.R.) is measured as the ratio of the consistency index to the random index (R.I.). Eq. (1) gives the C.I. and C.R.; the values of R.I. are listed in Table 1.

C.I. = (λ_max − n) / (n − 1),    C.R. = C.I. / R.I.    (1)

Generally, the C.R. should be less than 0.1 to guarantee consistency. If this requirement is not met, the judgments are inconsistent, and the researcher should review the problematic pairwise comparisons with the evaluator.

Step 3: Calculation of the overall hierarchical weights.
This step determines the weight of each decision element. This work employs eigenvalue computations to derive the weights of the factors. After the weight of every factor has been calculated, further analysis and choices are made according to the weights and the significance represented by each factor.
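As a minimal illustration of Steps 2 and 3, the following sketch computes the priority weights, C.I. and C.R. of a pairwise comparison matrix with the eigenvalue approach of Eq. (1) and the R.I. values of Table 1. The example matrix and the function name are hypothetical and are not taken from the case study.

```python
import numpy as np

# Random index values for matrix sizes 1-15 (Table 1).
RI = {1: 0, 2: 0, 3: 0.58, 4: 0.9, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41,
      9: 1.45, 10: 1.49, 11: 1.51, 12: 1.54, 13: 1.56, 14: 1.57, 15: 1.58}

def ahp_weights(matrix):
    """Return (weights, C.I., C.R.) for a pairwise comparison matrix."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalized priority weights
    ci = (lam_max - n) / (n - 1)             # Eq. (1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0    # consistency ratio
    return w, ci, cr

# Hypothetical 3 x 3 comparison matrix on Saaty's 1-9 scale.
M = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, ci, cr = ahp_weights(M)
print(w, ci, cr)   # accept the judgments if cr < 0.1
```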


Fig. 1. Research framework.

Table 1
Random index.

Matrix size (n):  1    2    3     4    5     6     7     8     9     10    11    12    13    14    15
R.I.:             0    0    0.58  0.9  1.12  1.24  1.32  1.41  1.45  1.49  1.51  1.54  1.56  1.57  1.58

3.2. Grey prediction method


Grey theory was proposed by Deng (1982) and focuses on the relational analysis and model construction of a system under circumstances of uncertainty and incomplete information. Grey theory has been successfully applied to many different fields, such as environmental engineering, agriculture, traffic, meteorology, engineering, transportation, economics, medicine, education, geology, management and physical education (Wen, 2003, Chapter 3).
The grey prediction method (GM) is based on grey theory. GM(1,1) is the most frequently used grey prediction model. It is formed by a first-order differential equation with a single variable, encompassing a group of differential equations adapted for parameter variance. It has been shown that GM(1,1) needs at least four data points to predict future results (Deng, 1982).
The following symbols are used in this method:

n: the number of samples used in the progressive predictive series.
a: the first parameter estimated by the least-squares method.
b: the second parameter estimated by the least-squares method.
x^(0)(k): the measurement value of the original predictive series, k = 1, 2, 3, ..., n; for example, x̂^(0)(1) represents the predicted value of the prediction target for the first period.
x^(1)(k): the measurement value in the kth period of the first predictive series based on the accumulated generating operation (AGO), k = 1, 2, 3, ..., n. AGO works on the previous predictive series to generate the next predictive series; its purpose is to transform the random raw sequence into a more robust progressive sequence, reducing the randomness of the raw sequence and increasing the smoothness of the progressive sequence.
Z^(1)(k): the mean value of adjacent measurements, Z^(1)(k) = 0.5[x^(1)(k) + x^(1)(k − 1)], k = 2, 3, ..., n; the Z^(1)(k) sequence is generated to smooth out the x^(1)(k) sequence.
B: the matrix formed from the Z^(1)(k) values.
Y_N: a vector of the incremental trigger measurements in a period.
x̂^(0)(k + 1): the forecasted increment of the measurement for the (k + 1)th period in the original (0th-order) predictive sequence.
x̂^(1)(k + 1): the forecasted increment of the measurement for the (k + 1)th period in the first-order predictive sequence obtained using AGO.


The construction steps of GM(1,1) are as follows:

1. Accumulated generating operation (AGO).
When the GM(1,1) model is applied to prediction, an original sequence of n measurements is expressed as

x^(0) = (x^(0)(1), x^(0)(2), x^(0)(3), ..., x^(0)(n)) = {x^(0)(k), k = 1, 2, 3, ..., n}.

The first-order accumulated generating sequence is

X^(1) = (x^(1)(1), x^(1)(2), ..., x^(1)(n)),

where

x^(1)(1) = x^(0)(1),
x^(1)(2) = x^(0)(1) + x^(0)(2),
x^(1)(3) = x^(0)(1) + x^(0)(2) + x^(0)(3) = x^(1)(2) + x^(0)(3),
x^(1)(k) = x^(1)(k − 1) + x^(0)(k).

The standard formula for the accumulated generating operation at the rth accumulation is

x^(r)(k) = Σ_{m=1}^{k} x^(r−1)(m),   k = 1, 2, 3, ..., n.

2. Find x^(0)(k) and Z^(1)(k).
Define the inverse accumulated generating operation (IAGO) as

x^(r−1)(k) = x^(r)(k) − x^(r)(k − 1).

When r = 1,

x^(0)(k) = x^(1)(k) − x^(1)(k − 1).

Since dx^(1)/dt ≈ x^(1)(t + Δt) − x^(1)(t) = x^(1)(k) − x^(1)(k − 1) = x^(0)(k), the GM(1,1) differential formula (Wen, 2003)

dx^(1)/dt + a·x^(1) = b

is discretized, with Z^(1)(k) = 0.5[x^(1)(k) + x^(1)(k − 1)], as

x^(0)(k) + a·Z^(1)(k) = b,   k = 2, ..., n.

3. Apply the least-squares method to find the vector Y_N and the matrix B.
Substituting k = 2, ..., n into x^(0)(k) + a·Z^(1)(k) = b and expressing the result in matrix form Y_N = B·â gives

Y_N = [x^(0)(2), x^(0)(3), ..., x^(0)(n)]^T,

B = [ −Z^(1)(2)  1
      −Z^(1)(3)  1
      ...        ...
      −Z^(1)(n)  1 ],

â = [a, b]^T = (B^T B)^(−1) B^T Y_N.

4. Find a and b:

a = [ Σ_{k=2}^{n} z^(1)(k) · Σ_{k=2}^{n} x^(0)(k) − (n − 1) · Σ_{k=2}^{n} z^(1)(k) x^(0)(k) ] / [ (n − 1) · Σ_{k=2}^{n} (z^(1)(k))^2 − ( Σ_{k=2}^{n} z^(1)(k) )^2 ],

b = [ Σ_{k=2}^{n} (z^(1)(k))^2 · Σ_{k=2}^{n} x^(0)(k) − Σ_{k=2}^{n} z^(1)(k) · Σ_{k=2}^{n} z^(1)(k) x^(0)(k) ] / [ (n − 1) · Σ_{k=2}^{n} (z^(1)(k))^2 − ( Σ_{k=2}^{n} z^(1)(k) )^2 ].

5. List the prediction equations. Combining the equations above, the predicted value of the accumulated sequence is

x̂^(1)(k + 1) = [x^(0)(1) − b/a] · e^(−ak) + b/a,   k = 1, ..., n,

and applying the IAGO gives the predicted value of the original sequence:

x̂^(0)(k + 1) = x̂^(1)(k + 1) − x̂^(1)(k) = (1 − e^a) · [x^(0)(1) − b/a] · e^(−ak),   k = 1, ..., n,

with x̂^(1)(1) = x^(0)(1).

6. Forecast the measurement value for the subsequent periods; the forecasting of subsequent periods proceeds in the same way as above.

The flow chart of GM(1,1) is shown in Fig. 2, from which the reader can easily follow the forecasting process of GM(1,1).
After the predicted values are derived, an error formula is needed to evaluate the accuracy of the predictions:

e(k) = |x^(0)(k) − x̂^(0)(k)| / x^(0)(k) × 100%,    (12)

where e(k) denotes the error between the true value x^(0)(k) and the predicted value x̂^(0)(k).

Fig. 2. The flow chart of GM(1,1).
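The construction steps above can be condensed into a short sketch. The implementation below (the function name and the example series are illustrative only) performs the AGO, estimates a and b by least squares, and returns forecasts for the periods beyond the fitted data.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Grey GM(1,1) forecast: fit on series x0 and predict `horizon` future values."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)                                   # GM(1,1) needs at least 4 points
    x1 = np.cumsum(x0)                            # step 1: AGO
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # step 2: background values Z1(k)
    B = np.column_stack((-z1, np.ones(n - 1)))    # step 3: matrix B and vector Y_N
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # steps 3-4: least-squares a, b
    # step 5: predicted AGO series, then IAGO back to the original series
    k = np.arange(1, n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_full = np.concatenate(([x1[0]], x1_hat))
    x0_hat = np.concatenate(([x0[0]], np.diff(x1_full)))
    return x0_hat[n:]                             # step 6: forecasts beyond the data

# Illustrative use: fit on the last 6 observations (the 6-entry GM(1,1) of Section 4.2).
history = [120, 95, 140, 110, 160, 130]           # hypothetical demand values
print(gm11_forecast(history, horizon=1))
```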


3.3. Moving back-propagation neural network (MBPN)


Neural networks have been widely applied to various problems. The back-propagation neural network (BPN) is the most representative artificial neural network. It is a kind of supervised learning network and is very powerful for assessment and prediction. The BPN has been successfully applied to many research fields, such as data compression, handwriting recognition, and stock price prediction. The BPN algorithm applies the gradient steepest descent method to minimize the error function: it compares the outputs of the processing units in the output layer with the desired outputs and adjusts the connecting weights accordingly. The BPN consists of three layers, shown in Fig. 3 and described as follows:

(1) Input layer: represents the input nodes of the variables. The input layer receives the features of the input data and distributes them to the hidden layer. The number of input nodes depends on the problem; the transfer function is f(x) = x.
(2) Hidden layer: captures the interactions between the input layer and the output layer. There is no standard rule for defining the number of hidden nodes; the usual way to find the best number of nodes is by experiment.
(3) Output layer: represents the output nodes of the variables. The number of output nodes depends on the problem to be solved.

Here X_i denotes the input vectors, i = 1, 2, ..., m; Y_j the output vectors, j = 1, 2, ..., n; and H_t the hidden nodes, t = 1, 2, ..., k.

Fig. 3. The structure of BPN.

Philip (1989) suggested that training the BPN requires the following steps:

(1) Select a training pair from the training set and apply the input vector of the pair to the network.
(2) Calculate the output of the network.
(3) Evaluate the error between the network output and the desired output.
(4) Adjust the weights of the network in a way that minimizes the error.
(5) Repeat steps 1-4 for the pairs in the training set until the error over the entire set is small enough.
After sufficient iterations, the error is reduced to a predefined small value, and the network is considered well trained. The mean absolute percentage error (MAPE) is used to judge the predictability:

MAPE = (1/N) Σ_{i=1}^{N} |T_i − A_i| / T_i,    (13)

where T_i is the actual value and A_i is the predicted value.
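A direct transcription of Eq. (13) is given below. The accompanying accuracy helper assumes that the "average accuracy" figures reported in Section 4 correspond to (1 − MAPE) × 100%; this matches the tabulated values but is an interpretation rather than an explicit statement in the text.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, Eq. (13)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs(actual - predicted) / actual)

def average_accuracy(actual, predicted):
    """Accuracy as reported in Tables 3-10, assumed to be (1 - MAPE) * 100%."""
    return (1.0 - mape(actual, predicted)) * 100.0
```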


When the BPN is used to predict the desired target in the testing process, it is very likely that the values of some input variables are unknown at the prediction time (the coming term). When this happens, this study applies the moving average (MA) method to derive predicted values of those input variables and then feeds these values into the BPN to derive the predicted value of the desired target. The structure of the MBPN is shown in Fig. 4.
The moving average is the mean of the previous n data points. The formula for the simple moving average is

F_t = MA(n) = (A_{t−1} + A_{t−2} + ... + A_{t−n}) / n,    (14)

where F_t is the forecast for the coming period, n the number of periods to be averaged, A_{t−1} the actual occurrence in the past period (for up to n periods), and MA(n) the n-period moving average.

Fig. 4. The structure of MBPN.
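The sketch below makes Eq. (14) and the MBPN idea concrete: the factors whose values are still unknown for the coming term are replaced by their n-period moving averages before the input vector is passed to a trained network. The feature layout, dictionary keys and the predict call are placeholders, not the authors' code.

```python
import numpy as np

def moving_average_forecast(history, n):
    """F_t = (A_{t-1} + ... + A_{t-n}) / n, Eq. (14)."""
    history = np.asarray(history, dtype=float)
    return history[-n:].mean()

def build_mbpn_input(known_features, unknown_histories, n=3):
    """Assemble the input vector for the coming term.

    known_features    : values already known at prediction time
    unknown_histories : {name: past values} for factors unknown in the coming term;
                        each is replaced by its n-period moving average.
    """
    filled = [moving_average_forecast(h, n) for h in unknown_histories.values()]
    return np.array(list(known_features) + filled)

# Hypothetical usage with the factor names of Section 4:
# x = build_mbpn_input([month_index], {"ICs tested": ics, "IC yield": yld,
#                                      "loss number": loss, "defective": dft}, n=3)
# y_hat = trained_bpn.predict(x)   # trained_bpn: a hypothetical trained network
```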
3.4. Moving fuzzy neuron network (MFNN)

Fuzzy set theory, proposed by Zadeh (1965), is now recognized as a highly effective tool in many real-world applications. Fuzzy set theory is a generalization of multivalued logic that allows intermediate values to be defined between conventional evaluations such as true or false, yes or no, or high or low. In addition to symbol manipulation, it uses numeric computation to facilitate approximate reasoning. The fuzzy neuron network (FNN) was presented by Yeh (1999). The architecture of the FNN is a standard back-propagation neural network to which a hidden layer, called the fuzzy layer, is added. The basic motivation behind the FNN is that it partitions the input space into localized regions, which makes it easy to build a local model of the system (Yeh, 2005). The architecture of the FNN is shown in Fig. 5.

Fig. 5. The architecture of FNN.

In this architecture, each input neuron has five corresponding fuzzy neurons. The fuzzy neurons receive a nonlinear transformation of the corresponding input neuron according to the following membership functions:


1. S-type fuzzy neuron:

F_ik = 1 / (1 + exp(−(X_i − m_k)/s_k))   when k = 1.    (15)

2. Bell-type fuzzy neuron:

F_ik = exp(−((X_i − m_k)/s_k)^2)   when k = 2, 3, 4.    (16)

3. Z-type fuzzy neuron:

F_ik = 1 − 1 / (1 + exp(−(X_i − m_k)/s_k))   when k = 5,    (17)

where X_i is the output value of the ith input neuron, F_ik the output value of the kth fuzzy neuron for the ith input neuron, m_k the parameter that controls the horizontal shift of the nonlinear transformation of the kth fuzzy neuron, and s_k the parameter that controls the slope of the nonlinear transformation of the kth fuzzy neuron.

Each fuzzy neuron is sensitive to a local region of the given input; each upper hidden neuron may synthesize several fuzzy-neuron outputs to act as a local expert, and the output-layer unit may synthesize the local experts to model the output values (Yeh, 2005).

In this approach,

(m_1, m_2, m_3, m_4, m_5) = (−2/3, −1/2, 0, 1/2, 2/3),
(s_1, s_2, s_3, s_4, s_5) = (1/3, 1/2, 1/2, 1/2, 1/3).

The connections between the input layer and the fuzzy layer are fixed, and the other connections are variable. The learning rule of the network is the same as that of the standard BPN; the General Delta Rule (Rumelhart, Hinton, & Williams, 1986) is employed to modify the network's connection weights.

Before training the neural networks, the input and output data must be normalized. The learning rate and momentum factor of the General Delta Rule are decayed with the following formulas:

η = r_η · η_0 ≥ η_min,    α = r_α · α_0 ≥ α_min,    (18)

where η_0 is the initial value of the learning rate, r_η the reduction factor of the learning rate (in this paper, 0.95), η_min the minimum bound of the learning rate (in this paper, 0.1), α_0 the initial value of the momentum factor, r_α the reduction factor of the momentum factor (in this paper, 0.95), and α_min the minimum bound of the momentum factor (in this paper, 0.1).

This paper also applies the FNN to predict the desired target. When the values of some input variables are unknown at the prediction time (the coming term), this study applies the moving average (MA) method to derive predicted values of those input variables and then feeds the predicted values into the FNN to forecast the desired target. The structure of the MFNN is shown in Fig. 6.

Fig. 6. The structure of MFNN.
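The fuzzification step of Eqs. (15)-(17) can be written directly from the definitions above. The sketch maps one normalized input value to its five fuzzy-neuron outputs using the m_k and s_k values given in the text; it illustrates only the fixed input-to-fuzzy-layer transformation, not the trainable part of the FNN.

```python
import numpy as np

M = np.array([-2/3, -1/2, 0.0, 1/2, 2/3])   # horizontal shifts m_1..m_5
S = np.array([1/3, 1/2, 1/2, 1/2, 1/3])     # slopes s_1..s_5

def fuzzify(x):
    """Return the five fuzzy-neuron outputs F_i1..F_i5 for a normalized input x."""
    s_type = 1.0 / (1.0 + np.exp(-(x - M[0]) / S[0]))            # Eq. (15), k = 1
    bell   = np.exp(-(((x - M[1:4]) / S[1:4]) ** 2))             # Eq. (16), k = 2, 3, 4
    z_type = 1.0 - 1.0 / (1.0 + np.exp(-(x - M[4]) / S[4]))      # Eq. (17), k = 5
    return np.concatenate(([s_type], bell, [z_type]))

print(fuzzify(0.2))   # five activations, each sensitive to a local region of the input
```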

4. Case study

This investigation is verified by comparing the predicted and actual demand of critical spare parts in a semiconductor factory. The company is one of the leading wafer testing factories in Taiwan, and the BGA socket is one of its critical spare parts.

Fig. 7. The demand chain of BGA sockets.

The BGA socket is expensive, its demand varies greatly, its purchasing lead time is long, and it is necessary for the operation of the machines. The price of a BGA socket can be thousands of US dollars, the requirement variation is huge, and the purchasing lead time depends on the country of origin (domestic or foreign), ranging from one week to two weeks. The demand chain of BGA sockets is shown in Fig. 7. These conditions make it difficult for managers to prepare the required number of BGA sockets. Therefore, this investigation applied the MBPN, MFNN, BPN, FNN, GM(1,1) and MA to predict the requirement of BGA sockets accurately.

As for data collection, the historical requirements of the BGA sockets and the relevant factors over 28 months, from September 2005 to December 2007, were collected; the requirement of BGA sockets in each term is shown in Fig. 8. One month is regarded as one term. The last ten months of BGA socket requirements are used to compare the prediction accuracy of each forecasting method.

Fig. 8. The monthly consumption of BGA sockets.

According to Fig. 8, the consumption of BGA sockets not only has huge variation but also shows no definite tendency. This situation makes it difficult for the purchasing and inventory managers to estimate the requirement of BGA sockets accurately, resulting in insufficient or excessive inventory. If a shortage of BGA sockets occurs, the testers cannot work regularly, which causes large cost and time waste. Thus, it is urgent to predict the requirement of BGA sockets in each term accurately. Hence, this investigation proposed the MBPN and MFNN to predict the requirement of BGA sockets in each term accurately, and also compares their prediction accuracy with that of other well-known forecasting methods, such as GM(1,1), MA, BPN and FNN.

To find the more influential factors of BGA socket consumption, the authors discussed with experts, and a questionnaire based on AHP was distributed to 40 managers and staff members; 33 effective questionnaires were collected. After the questionnaire investigation, data analysis and weight calculation according to the AHP method, the descriptions and weights of each influential factor are listed in Table 2.

According to Table 2, the weights of "The revenue of testing processing monthly" and "The historical requirement at the same month" indicate a lower impact than the other factors, at 0.07 and 0.09, respectively. In order to eliminate irrelevant noise and derive a more accurate forecasting result, the authors do not take these two factors into account. The other five influential factors are collected as input variables and fed into several prediction methods: GM(1,1), MA, BPN, FNN, MBPN and MFNN. In order to compare the different forecasting methods objectively, the authors compare the prediction accuracy of the last ten terms of BGA socket requirements for each forecasting method.

4.1. Moving average method prediction result

This paper used the moving average (MA) method to derive the predicted values of the BGA socket requirement; the prediction result of the MA can also be regarded as a baseline for comparing the prediction accuracy of the other forecasting methods. Because the last ten terms of BGA socket requirements are used to compare the difference between the predicted and actual requirements, the authors used 2- to 18-period moving averages to derive the forecasted values of the last ten terms and compared them with the actual requirements. The average prediction accuracy of the MA is shown in Table 3.
Table 3 shows that the 3-period MA has 66.59% prediction accuracy, which is better than the other periods of MA. The result also indicates that forecasting the BGA socket requirement is very difficult, not only because of the large data variation but also because the historical data might not be enough to predict future demand accurately.
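The comparison described above can be reproduced with a short rolling evaluation: each of the last ten terms is forecast from the history preceding it and the accuracies are averaged per window length. The series name and the helper are placeholders, and the accuracy measure is the (1 − MAPE) × 100% interpretation sketched in Section 3.3.

```python
import numpy as np

def rolling_accuracy(demand, forecast_fn, test_terms=10):
    """Average accuracy (1 - MAPE, in %) over the last `test_terms` terms.

    forecast_fn(history) must return the one-step-ahead forecast from `history`.
    """
    demand = np.asarray(demand, dtype=float)
    start = len(demand) - test_terms
    preds = np.array([forecast_fn(demand[:t]) for t in range(start, len(demand))])
    errors = np.abs(demand[start:] - preds) / demand[start:]
    return (1.0 - errors.mean()) * 100.0

# n-period moving average, Eq. (14): use only the last n observations of the history.
# acc = {n: rolling_accuracy(bga_demand, lambda h, n=n: h[-n:].mean()) for n in range(2, 19)}
# best_n = max(acc, key=acc.get)   # 3 periods in the reported case
```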
4.2. Grey prediction result

This investigation utilizes GM(1,1) with 4 to 10 entries (n = 4-10) to predict the consumption of BGA sockets. The reason for using 4 to 10 entries is that, given the 28 terms of data, GM(1,1) needs at least four data points to predict the future situation, and using more entries does not necessarily yield better prediction performance than 4 to 10 entries. The average prediction accuracy of GM(1,1) is shown in Table 4.
Table 3
The average prediction accuracy of the MA.

n             2      3       4      5      6      7      8      9      10
Accuracy (%)  66.29  66.59*  62.95  61.23  62.28  60.85  60.5   61.92  62.86

n             11     12      13     14     15     16     17     18
Accuracy (%)  64.34  65.84   66.44  66.5   66.56  66.15  66.03  65.45

n: the number of periods in the moving average.

Table 2
The descriptions and weights of each influential factor.

No.  Title of influential factor                    Description of the influential factor                                           Weight
1    Quantity of ICs tested on tester               The quantity of ICs that have been tested on the tester                         0.25
2    IC yield rate                                  The monthly IC yield rate                                                       0.24
3    The loss number of misusage and accident       The number of BGA sockets lost through misuse by operators or other accidents   0.11
4    The number of defective goods                  The number of defective BGA sockets caused by quality problems                  0.1
5    Month                                          The slack or boom month, which greatly affects the quantity of work orders      0.14
6    The historical requirement at the same month   The requirement of BGA sockets in the same month of the previous year           0.09
7    The revenue of testing processing monthly      The revenue earned from the testing process                                     0.07



Table 4
The average prediction accuracy of GM(1,1).

n             4      5      6       7     8      9      10
Accuracy (%)  55.76  65.23  67.42*  62.9  63.67  61.13  56.36

n: the number of entries in GM(1,1).

Table 4 shows that the 6-entry GM(1,1), with an average accuracy of 67.42%, is more accurate than GM(1,1) with other numbers of entries. In this case, the most suitable number of entries for GM(1,1) is 6. This might imply that when managers decide the demand quantities of BGA sockets, they should consider at least six months of historical data.
It can also be seen that the appropriate number of entries of GM(1,1) for different data patterns should be derived by experiment; there is no fixed number of entries that suits every kind of data pattern.
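The same rolling evaluation can be used to choose the number of entries by experiment. The snippet below is a sketch that assumes the rolling_accuracy helper from Section 4.1 and the gm11_forecast sketch from Section 3.2.

```python
# Assumes rolling_accuracy (Section 4.1 sketch) and gm11_forecast (Section 3.2 sketch).
def evaluate_gm_entries(demand, entries=range(4, 11), test_terms=10):
    """Average accuracy of n-entry GM(1,1) over the last `test_terms` terms, per entry count."""
    return {n: rolling_accuracy(demand,
                                lambda history, n=n: gm11_forecast(history[-n:], horizon=1)[0],
                                test_terms=test_terms)
            for n in entries}

# acc = evaluate_gm_entries(bga_demand)   # 6 entries gave the highest accuracy in this case
```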

4.3. BPN and FNN prediction result

This paper also applied the BPN and FNN to predict the requirement of BGA sockets for the last ten terms. However, when forecasting the BGA socket requirement of the coming term, the values of some influential factors, namely "Quantity of ICs tested on tester", "IC yield rate", "The loss number of misusage and accident" and "The number of defective goods", are unknown at the prediction time and only become known at the end of the term. Thus, this paper uses the influential-factor data of the last term to predict the BGA socket requirement of the coming term; for example, the influential-factor data of the 18th term are used to predict the BGA socket requirement of the 19th term.

In the training and testing process of the BPN and FNN, the first 18 data sets are used as training samples and the last 10 data sets as testing samples. Fig. 9 shows the BPN and FNN forecasting structure of the BGA socket requirement. The appropriate parameter settings of the BPN and FNN, derived by experiment, are shown in Tables 5 and 6, and the prediction results of the BPN and FNN are listed in Table 7.

The average accuracies of the BPN and FNN are 66.02% and 60.61%, respectively; the BPN has better prediction accuracy than the FNN, and the MAPE of the BPN is also lower than that of the FNN. This implies that when the input data do not correspond closely to the output target (the last-term input data are used to predict the coming-term target), the BPN has better forecasting performance than the FNN. During training and testing, this investigation also found that the FNN may be over-trained, so the number of training cycles should be watched.

Fig. 9. BPN and FNN forecasting structure of BGA sockets requirement.

4.4. MBPN and MFNN prediction result


In the MBPN and MFNN process, this paper uses 3-, 4- and 5-period moving averages to derive the coming-term predicted values of the influential factors whose true values are unknown in the coming term, namely "Quantity of ICs tested on tester", "IC yield rate", "The loss number of misusage and accident" and "The number of defective goods". The predicted values are then used as input data to the MBPN and MFNN in the testing process. The first 18 data sets are used for training, and the last 10 data sets are used for testing. Fig. 10 shows the MBPN and MFNN forecasting structure of the BGA socket requirement. The appropriate parameter settings and prediction results of the MBPN and MFNN are shown in Tables 8 and 9, respectively; the appropriate parameters are derived by experiment. The highest average accuracy of the MBPN is 71.76%, obtained with the 3-period moving average, and the highest average accuracy of the MFNN is 76.34%, also obtained with the 3-period moving average.
According to Table 8, the 3-period MBPN has 71.76% average accuracy, which is better than the other periods of MBPN. Table 9 shows that the 3-period MFNN has 76.34% average accuracy, which is better than the other periods of MFNN. The prediction results of the MBPN and MFNN correspond to the prediction result of the MA, because the 3-period MA is also more accurate than the other periods of MA; this indicates that the more closely the input data correspond to the target, the more accurate the prediction result that can be derived. The results also point out that when the influential-factor information is known or close to the true values, the FNN has better prediction accuracy than the BPN; otherwise, the BPN has better prediction accuracy than the FNN.

Table 5
Parameter setting of the back-propagation neural network.

Learning rule: Delta Rule. Transformation function: Sigmoid function.
Input nodes: 5. Output nodes: 1. Number of hidden layers: 1. Hidden nodes: 2. Number of training cycles: 1000.
Initial value of learning rate: 1.0. Decline rate of learning rate: 0.99.
Initial value of momentum rate: 0.99. Decline rate of momentum rate: 0.5.


Table 6
Parameter setting of the fuzzy neuron network.

Learning rule: Delta Rule. Transformation function: Sigmoid function.
Input nodes: 5. Output nodes: 1. Number of hidden layers: 1. Hidden nodes: 2. Number of training cycles: 500.
Initial value of learning rate: 1.0. Decline rate of learning rate: 0.3.
Initial value of momentum rate: 0.99. Decline rate of momentum rate: 0.5.

Table 7
Prediction result of the BPN and FNN.

Method   Average accuracy (%)   MAPE
BPN      66.02                  0.3398
FNN      60.61                  0.3938

Table 10
The highest prediction accuracy of each forecasting method.

Forecasting method (n)   Average prediction accuracy (%)
MA (3-period)            66.59
GM(1,1) (6-entry)        67.42
BPN                      66.02
FNN                      60.61
MBPN (3-period)          71.76
MFNN (3-period)          76.34*

Table 10 presents the highest prediction accuracy of each forecasting method mentioned in this paper.
According to Table 10, the MFNN (3-period) has the best prediction accuracy, 76.34%, among the forecasting methods mentioned in this investigation; ordered from high to low average prediction accuracy, the methods are MFNN (3-period), MBPN (3-period), GM(1,1) (6-entry), MA (3-period), BPN and FNN. If a manager wants to choose one forecasting method to predict the requirement of BGA sockets, or to predict any data set with large variation in which some corresponding information is unknown, it is suggested to apply the MFNN proposed in this paper to forecast the future situation accurately.

Fig. 10. The MBPN and MFNN forecasting structure of BGA sockets requirement.

Table 8
Parameter setting and prediction result of MBPN.

Common settings: learning rule: Delta Rule; transformation function: Sigmoid function; input nodes: 5; output nodes: 1; number of hidden layers: 1; number of training cycles: 1000.

Period of MA   Hidden nodes   Initial learning rate   Decline rate of learning rate   Initial momentum rate   Decline rate of momentum rate   Average accuracy (%)   MAPE
3-Period       2              1                       0.6                             0.5                     0.99                            71.76*                 0.2824
4-Period       3              1                       0.7                             0.5                     0.99                            71.29                  0.287
5-Period       1              1                       0.6                             0.1                     0.5                             70.21                  0.2979

Table 9
Parameter setting and prediction result of MFNN.

Common settings: learning rule: Delta Rule; transformation function: Sigmoid function; input nodes: 5; output nodes: 1; number of hidden layers: 1; number of training cycles: 1000.

Period of MA   Hidden nodes   Initial learning rate   Decline rate of learning rate   Initial momentum rate   Decline rate of momentum rate   Average accuracy (%)   MAPE
3-Period       2              1                       0.7                             0.5                     0.99                            76.34*                 0.2366
4-Period       3              1                       0.7                             0.2                     0.9                             74.19                  0.258
5-Period       2              1                       0.7                             0.2                     0.9                             75.18                  0.2481

5. Conclusions

Spare parts management has always been a very important part of factory operations. It not only directly affects the operation of machines and the yield rate of the production line, but also affects the inventory level and slack risk. When equipment is operating, spare parts need to be changed due to abrasion and attrition; excessive spare parts cause inventory accumulation, while insufficient spare parts cause termination of machine operation, thereby leading to losses. Spare parts can be classified into critical and non-critical spare parts, depending on the criticality of the equipment for production. Critical spare parts (CSP) are more expensive and more indispensable than non-critical spare parts; the purchasing lead time of CSP is long, the requirement often varies vigorously, and the influential factors which directly affect the CSP requirement may be unknown or not easy to identify. Hence, it is an urgent issue to devise a way to forecast the future requirement of CSP effectively.
This investigation proposed the moving back-propagation neural network (MBPN) and the moving fuzzy neuron network (MFNN) to effectively predict the CSP requirement of testing machines in wafer testing factories, which can serve as a foundation for purchasing strategy. This investigation also compared the prediction accuracy with that of other forecasting methods, such as GM(1,1), MA, BPN and FNN. All of the forecasting methods were evaluated on real data provided by a well-known wafer testing factory in Taiwan, and the effectiveness of the proposed methods is demonstrated through a real-world scenario. The main contributions of this paper are as follows:
(1) This investigation proposed two new forecasting methods, the MBPN and the MFNN, and the empirical results demonstrated that the MFNN (3-period) has better prediction accuracy than the other forecasting methods mentioned in this paper.
(2) This paper addressed the problem of how to predict the target more accurately when the values of some influential factors are unknown at the prediction time and the data variation is large. The MBPN and MFNN can be applied in this situation, and the empirical results showed that their prediction accuracy is good.
(3) This investigation compared the prediction accuracy of several different forecasting methods on data with large variation in which the values of some influential factors are unknown at the prediction time. All of the forecasting methods were evaluated on real data; ordered from high to low average prediction accuracy, they are MFNN, MBPN, GM(1,1), MA (3-period), BPN and FNN.
(4) This paper examined the prediction performance of the FNN on data with large variation. Before this investigation, the FNN had mainly been applied to classification and function mapping and had not been applied to prediction on data with large variation.
(5) This paper identified the more influential factors corresponding to the requirement of BGA sockets, which can help managers understand and control BGA socket consumption. The forecasting results of this paper can be used as a reference for critical spare parts management in the case company; the material managers can use the forecasted requirement of critical spare parts to make plans and reduce risks and costs across various operations.

References
Ansuj, A. P., Camargo, M. E., Radharamanan, R., & Petry, D. G. (1996). Sales forecasting using time series and neural networks. Computers and Industrial Engineering, 31, 421–424.
Aronis, K. P., Magou, I., Dekker, R., & Tagaras, G. (2004). Inventory control of spare parts using a Bayesian approach: A case study. European Journal of Operational Research, 154, 730–739.
Caglar, D., Li, C. L., & Simchi-Levi, D. (2004). Two-echelon spare parts inventory system subject to a service constraint. Institute of Industrial Engineers Transactions, 36, 655–666.
Dekker, R., Kleijn, M. J., & Rooij, P. J. (1998). A spare parts stocking policy based on equipment criticality. International Journal of Production Economics, 56, 69–77.
Deng, J. L. (1982). Introduction to grey systems theory. Journal of Grey Systems, 1, 1–24.
Ghobbar, A. A., & Friend, C. H. (2003). Evaluation of forecasting methods for intermittent parts demand in the field of aviation: A predictive model. Computers and Operations Research, 30, 2097–2114.
Hua, Z., & Zhang, B. (2006). A hybrid support vector machines and logistic regression approach for forecasting intermittent demand of spare parts. Applied Mathematics and Computation, 181, 1035–1048.
Kabir, A. B. M. Z., & Al-Olayan, A. S. (1996). A stocking policy for spare part provisioning under age based preventive replacement. European Journal of Operational Research, 90, 171–181.
Law, R. (2000). Back-propagation learning in improving the accuracy of neural network-based tourism demand forecasting. Tourism Management, 21, 331–340.
Li, S. G., & Kuo, X. (2008). The inventory management system for automobile spare parts in a central warehouse. Expert Systems with Applications, 34, 1144–1153.
Lin, C. T., & Yang, S. Y. (2003). Forecast of the output value of Taiwan's opto-electronics industry using the grey forecasting model. Technological Forecasting and Social Change, 70, 177–186.
Philip, D. W. (1989). Neural computing: Theory and practice. New York: Van Nostrand Reinhold.
Prakash, G. P., Ganesh, L. S., & Rajendran, C. (1994). Criticality analysis of spare parts using the analytic hierarchy process. International Journal of Production Economics, 35, 293–297.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representation by error propagation. Parallel Distributed Processing, 1, 318–362.
Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Sheu, D. D., & Kuo, J. Y. (2006). A model for preventive maintenance operations and forecasting. Journal of Intelligent Manufacturing, 17, 441–451.
Wen, K. L. (2003). Principle and application of grey forecast. Taiwan: Chwa Inc.
Yeh, I. C. (1999). Modeling chaotic two-dimensional mapping with fuzzy-neuron networks. Fuzzy Sets and Systems, 105, 421–427.
Yeh, I. C. (2005). Classification and function mapping with fuzzy-neuron networks. Journal of Science and Technology, 14, 153–159.
Zadeh, L. (1965). Fuzzy sets. Information and Control, 8, 338–353.
