Food Control 18 (2007) 928–933

www.elsevier.com/locate/foodcont

A neural network for predicting moisture content of grain drying process using genetic algorithm
Xueqiang Liu, Xiaoguang Chen, Wenfu Wu *, Guilan Peng
Biological and Agricultural Engineering School, Jilin University, Changchun 130025, China
* Corresponding author. Tel.: +86 431 5691908. E-mail address: wwf@email.jlu.edu.cn (W. Wu).

Received 13 January 2006; received in revised form 18 May 2006; accepted 22 May 2006

Abstract

This paper is concerned with optimizing the neural network topology for predicting the moisture content of the grain drying process using a genetic algorithm. A structural modular neural network (SMNN), combining BP neurons and RBF neurons in the hidden layer, is proposed to predict the moisture content of the grain drying process. Inlet air temperature, grain temperature and initial moisture content are the input variables of the network. The genetic algorithm is used to select the appropriate network architecture by determining the optimal number of nodes in the hidden layer. The optimized hidden layer contains 6 BP neurons and 10 RBF neurons. A simulation test on moisture content prediction for the grain drying process showed that the SMNN optimized by the genetic algorithm performed well, and the accuracy of the predicted values is excellent.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Grain drying; Predicting; Neural network; Genetic algorithm; Moisture content

1. Introduction

Grain drying is a non-linear process with a long delay. Its main objective is to achieve the desired final moisture content. Over-drying requires excessive energy and can even damage the quality of the dried material, especially in the case of seed. On the other hand, the grain will be vulnerable to mildew if the moisture content remains high. One option is to determine the moisture content in the drying process by measurement, but the accuracy of this approach is not satisfactory due to the technical limitations of the moisture sensors available for on-line observation. In the case of farm dryers, weather conditions and dust also have a great effect on the accuracy. Another way to predict the moisture content is to calculate it from the drying-air parameters using physically based models, but an accurate model is difficult to establish because the drying process is non-linear with a long delay. Recently, artificial intelligence methods such as neural networks have been proposed for the process control of grain drying.

The artificial neural network (NN) is a well-known tool for solving complex, non-linear biological systems (De Baerdemaeker & Hashimoto, 1994), and it can give reasonable solutions even in extreme cases or in the event of technological faults (Lin & Lee, 1995). Huang and Mujumdar (1993) created a NN to predict the performance of an industrial paper dryer. The NN model by Jay and Oliver (1996) was used for predictive control of drying processes. Trelea, Courtois, and Trystram (1997) used explicit-time and recurrent NNs for modelling the moisture content of thin-layer (5 cm) corn during the drying process, and its wet-milling quality, at constant air flow rate and absolute humidity and variable temperature. Thyagarajan, Panda, Shanmugam, Rao, and Ponnavaikko (1997) modelled an air heater plant for a dryer using a NN. Sreekanth, Ramaswamy, and Sablani (1998) predicted psychrometric parameters using various NN models.


Kaminski, Strumillo, and Tomczak (1998) used a NN for data smoothing and for modelling material moisture content and temperature. Farkas, Remenyi, and Biro (2000a, 2000b) set up a NN to model moisture distribution in agricultural fixed-bed dryers. It is clear from the past literature that NNs are well suited to modelling the drying process.

The selection of an appropriate NN topology to predict the drying process is important in terms of model accuracy and model simplicity. The architecture of a NN greatly influences its performance. Many algorithms for finding the optimized NN structure are derived from specific data in a specific area of application (Blanco, Delgado, & Pegalajar, 2000; Boozarjomehry & Svrcek, 2001), but predicting the optimal NN topology is a difficult task, since choosing the neural architecture requires some a priori knowledge of grain drying and/or supposes many trial-and-error runs.

In this paper, we present a genetic algorithm capable of obtaining not only the trained optimal topology of a neural network but also the least number of connections necessary for solving the problem. In the following sections, the techniques used in this paper are briefly reviewed, and the design of the NN system for predicting the grain drying process is discussed in detail. A grain drying process is used to demonstrate the effectiveness of the neural network. The final section draws conclusions regarding this study.

2. Materials and methods

2.1. Neural network system

The back-propagation neural network (BPNN) is a multilayer feed-forward network with a back-propagation learning algorithm. The BPNN is characterized by hidden neurons that have a global response. The commonly used transfer function in the BPNN is the sigmoid function

f(s_j) = 1 / (1 + exp(−s_j))    (1)

where s_j is the weighted sum of the inputs coming to the jth node. Usually, there is only one hidden layer in a BPNN, as such a layer is sufficient to produce the set of desired output patterns for all of the training vector pairs.

The radial basis function neural network (RBFNN) belongs to the group of kernel-function nets that use simple kernel functions as the hidden neurons, distributed in different neighborhoods of the input space, whose responses are essentially local in nature. An RBF unit produces a significant nonzero response only when the input falls within a small localized region of the input space. The most common transfer function in an RBFNN is the Gaussian activation function

φ_k = exp( −Σ_{i=1}^{n} (x_i − C_ki)² / b_k² ),   k = 1, 2, …, q    (2)

where x_i is the ith input variable; C_ki is the center of the kth RBF unit for input variable i; and b_k² is the width of the kth RBF unit.

Because of the different response characteristics of the hidden neurons in these two kinds of neural networks, interpolation problems can be solved more efficiently with a BPNN, while extrapolation problems are better dealt with by an RBFNN.

Since the different properties of the BPNN and the RBFNN are complementary, Jiang, Zhao, and Ren (2002) designed a structural modular neural network (SMNN) with a genetic algorithm and showed that the SMNN constructs a better input–output mapping both locally and globally. The SMNN combines the generalization capabilities of the BPNN and the computational efficiency of the RBFNN in one network structure. Its architecture is shown in Fig. 1, which has three layers: the input layer, which takes in the input data; the hidden layer, which comprises both the sigmoid neurons and the Gaussian neurons; and the output layer, where a linear function is used to combine the BP part and the RBF part.

Fig. 1. Structural modular neural network architecture.

In this research, we adapt their SMNN for predicting the moisture content of the grain drying process. The numbers of neurons in the input and output layers are given by the numbers of input and output variables in the process. The inputs of the structure can be variables such as inlet moisture content, grain temperatures, and air temperatures, which are easily measurable. The output of the system is the moisture content of the grain.
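The paper gives no reference implementation; the following Python sketch (our own, with random weights and invented names, purely for illustration) shows how the sigmoid units of Eq. (1) and the Gaussian units of Eq. (2) sit side by side in one hidden layer and are combined linearly at the output, as in Fig. 1:

import numpy as np

def sigmoid(s):
    # Eq. (1): logistic transfer function of the BP hidden neurons.
    return 1.0 / (1.0 + np.exp(-s))

def gaussian_rbf(x, centers, width):
    # Eq. (2): Gaussian activation, one output per RBF center.
    # x: (n,) input vector; centers: (q, n); width: the b_k of the paper,
    # taken as a single shared scalar here for simplicity.
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-sq_dist / width ** 2)

def smnn_forward(x, W_bp, b_bp, centers, width, w_out, b_out):
    # One forward pass through the SMNN of Fig. 1: sigmoid (BP) units and
    # Gaussian (RBF) units form one hidden layer; the output neuron is a
    # linear combination of both parts.
    h_bp = sigmoid(W_bp @ x + b_bp)          # BP part of the hidden layer
    h_rbf = gaussian_rbf(x, centers, width)  # RBF part of the hidden layer
    h = np.concatenate([h_bp, h_rbf])
    return w_out @ h + b_out                 # linear output layer

# Example with the paper's final topology: 12 inputs, 6 BP + 10 RBF units.
rng = np.random.default_rng(0)
x = rng.random(12)                           # T1..T8, TU, TM, TL, Min (scaled)
W_bp, b_bp = rng.standard_normal((6, 12)), rng.standard_normal(6)
centers = rng.random((10, 12))               # centers picked from training samples
w_out, b_out = rng.standard_normal(16), 0.0
print(smnn_forward(x, W_bp, b_bp, centers, width=1.0, w_out=w_out, b_out=b_out))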
2.2. Designing the structural modular neural network using GA

The network configuration of the SMNN can be transformed into two subset selection problems: one is the number of BP hidden neurons; the other is the set of distinct terms n_c selected from the N data samples as the centers of the RBF hidden neurons.

A few types of representation schemes are available for encoding the neural network architecture, such as binary coding and Gray coding. In the present work, the chromosome in the GA population is divided into two parts. One part is a fixed-length chromosome that contains the number of BP hidden neurons in binary form. The other part is a variable-length chromosome (i.e. real coding) that represents the number and positions of the RBF hidden neurons. The centers of the RBF part are randomly selected data points from the training data set, and the center locations proposed here are also restricted to be data samples. The data sample x_i is labeled with index i (i = 1, 2, …, N); the RBF neurons can then be coded as a chromosome with integer values ranging from 1 to N.
A range is given for the string length, and the string should contain only distinct terms. So the chromosome is Ci = Bi ∪ Ri. For example

B1 = 0 1 1 0 1
B2 = 0 1 0 1 0
R1 = 13 15 10 16 17 11
R2 = 17 11 12 15 19 14 13 20
C1 = 0 1 1 0 1 | 13 15 10 16 17 11
C2 = 0 1 0 1 0 | 17 11 12 15 19 14 13 20

where Bi represents the binary-coded BP part of the chromosome; Ri represents the real-coded RBF part; and Ci is the full-length chromosome (the BP part is separated from the RBF part by "|" here for readability).
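As an illustration, here is a minimal Python sketch of this two-part encoding. It is our own construction: the paper states only that the BP part is "in binary form", so reading it as an unsigned integer is our assumption, and the length limits are taken from the experimental setup described in Section 3.

import random

N = 40          # training samples available as candidate RBF centers
BP_BITS = 5     # fixed-length binary part coding the number of BP neurons

def random_chromosome(min_rbf=2, max_rbf=20):
    # Ci = Bi ∪ Ri: a fixed-length binary string plus a variable-length
    # list of distinct sample indices (1..N) that serve as RBF centers.
    Bi = [random.randint(0, 1) for _ in range(BP_BITS)]
    Ri = random.sample(range(1, N + 1), random.randint(min_rbf, max_rbf))
    return Bi, Ri

def decode(Bi, Ri):
    # Assumption: the binary part is read as an integer giving the number
    # of BP hidden neurons; the RBF part lists which samples are centers.
    n_bp = int("".join(map(str, Bi)), 2)
    return n_bp, list(Ri)

Bi, Ri = random_chromosome()
print("Bi =", Bi, "-> BP neurons:", decode(Bi, Ri)[0], "| Ri =", Ri)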
According to the validation method, the objective function used in this paper is the root mean squared error (RMSE) on test data that have not been used in training the neural network. The genetic operators used in this paper are as follows.

2.2.1. Crossover
For two chromosomes (parents) selected from the population, crossover is done in two steps: (1) the binary part string representing the BP neurons (Bi) and the real-number-encoded part string representing the RBF neurons (Ri) undergo crossover separately; (2) the whole chromosome undergoes crossover, in which the BP string and the whole RBF string can be switched according to a probability distribution.

In the first step, the Bi part uses the traditional single-point crossover: a point is selected between 1 and L − 1, where L is the string length of the BP part. Both Bi strings are severed at this point and the segments to the right of it are switched. The crossover point is chosen with a uniform probability distribution. For example, if the crossover site is 3, then after the first step crossover the above B1 and B2 become

B1 = 0 1 1 1 0;  B2 = 0 1 0 0 1

The Ri part uses a variable-length crossover which resembles uniform crossover. The common terms in both RBF parents are found, and two binary template strings are created to mark them: if the corresponding term is a common term, the bit in the template string is set to 1, otherwise to 0. Then a random number of distinct terms are selected from each RBF parent and exchanged with each other. For example, the above R1 and R2 undergo the first step crossover as follows. First create two template strings to mark the parents:

T1 = 1 1 0 0 1 1
T2 = 1 1 0 1 0 0 1 0

Then exchange two distinct terms from parent 1 with two distinct terms from the end of parent 2, keeping the common terms unchanged:

R1 = 13 15 14 20 17 11
R2 = 17 11 12 15 19 10 13 16

After the first step crossover, the two parent chromosomes are changed to

C1 = 0 1 1 1 0 | 13 15 14 20 17 11
C2 = 0 1 0 0 1 | 17 11 12 15 19 10 13 16

To increase diversity, the second step crossover is finally done. The above two chromosomes then become

C1 = 0 1 1 1 0 | 17 11 12 15 19 10 13 16
C2 = 0 1 0 0 1 | 13 15 14 20 17 11
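A Python sketch of the two-step crossover follows. This is our reading of the text: the number of distinct terms to swap and the switch probability in step 2 are not pinned down in the paper, so they are placeholders here. With the inputs from the worked example above, site = 3 and k = 2 reproduce the strings shown.

import random

def crossover_bp(b1, b2, site=None):
    # Traditional single-point crossover on the binary BP strings; the cut
    # point is uniform on 1..L-1 and the right-hand segments are swapped.
    site = site or random.randint(1, len(b1) - 1)
    return b1[:site] + b2[site:], b2[:site] + b1[site:]

def crossover_rbf(r1, r2, k=None):
    # Variable-length crossover on the RBF index strings: common terms stay
    # in place; k distinct terms of parent 1 are exchanged with k distinct
    # terms taken from the end of parent 2 (as in the paper's example).
    common = set(r1) & set(r2)
    d1 = [t for t in r1 if t not in common]
    d2 = [t for t in r2 if t not in common]
    k = k if k is not None else random.randint(1, min(len(d1), len(d2)))
    swap = dict(zip(d1[:k], d2[-k:])) | dict(zip(d2[-k:], d1[:k]))
    return [swap.get(t, t) for t in r1], [swap.get(t, t) for t in r2]

def crossover_whole(c1, c2, p_switch=0.5):
    # Second step: with some probability the whole RBF strings of the two
    # offspring are switched; p_switch is a placeholder, not from the paper.
    (b1, r1), (b2, r2) = c1, c2
    if random.random() < p_switch:
        r1, r2 = r2, r1
    return (b1, r1), (b2, r2)

print(crossover_bp([0,1,1,0,1], [0,1,0,1,0], site=3))
# -> ([0, 1, 1, 1, 0], [0, 1, 0, 0, 1])
print(crossover_rbf([13,15,10,16,17,11], [17,11,12,15,19,14,13,20], k=2))
# -> ([13, 15, 14, 20, 17, 11], [17, 11, 12, 15, 19, 10, 13, 16])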
2.2.2. Mutation, deletion and addition
As with crossover, for each chromosome selected from the population, the string of BP neurons and the string of RBF neurons mutate separately. A simple point mutation is used in the Bi part, and in the Ri part the operator exchanges, with a given probability, each term with a randomly selected term from the corresponding complementary subset of the string. For example, the above C1 becomes the following after mutation:

C1 = 0 1 0 1 0 | 17 11 12 15 19 48 13 16

Deletion and addition apply only to the RBF part, to alleviate the premature loss of allele diversity caused by the variable-length crossover in the RBF string. The deletion and addition operators are applied to each selected string with equal probability. For deletion, a random number of terms are removed from the RBF string, beginning at a randomly selected position. For addition, a random number of terms are added to the end of the RBF string; the newly added terms are randomly chosen from the complementary subset of the selected string.
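A Python sketch of these three operators (our own; N, the string-length limits, and the default probabilities are taken from the experimental setup reported in Section 3, and the pool of candidate indices is assumed to be the training-sample indices):

import random

N = 40  # candidate RBF-center indices: the training samples (Section 3)

def mutate(bi, ri, p_bit=0.02, p_term=0.02):
    # Point mutation flips bits of the binary BP part; each RBF term is
    # exchanged, with a given probability, for a random index from the
    # complementary subset (indices not already in the string).
    bi = [b ^ 1 if random.random() < p_bit else b for b in bi]
    pool = [i for i in range(1, N + 1) if i not in ri]
    ri = list(ri)
    for j in range(len(ri)):
        if random.random() < p_term and pool:
            ri[j] = pool.pop(random.randrange(len(pool)))
    return bi, ri

def delete_terms(ri, min_len=2):
    # Remove a random number of terms starting at a random position,
    # respecting the minimum RBF string length.
    if len(ri) <= min_len:
        return list(ri)
    start = random.randrange(len(ri))
    count = random.randint(1, len(ri) - min_len)
    return ri[:start] + ri[start + count:]

def add_terms(ri, max_len=20):
    # Append a random number of new indices, drawn from the complementary
    # subset, to the end of the RBF string.
    pool = [i for i in range(1, N + 1) if i not in ri]
    room = min(max_len - len(ri), len(pool))
    if room <= 0:
        return list(ri)
    return list(ri) + random.sample(pool, random.randint(1, room))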
The GA used to evolve the SMNN structure is as follows:

(1) Randomly choose an initial population of p individual chromosomes Ci (i = 1, 2, …, p). Each chromosome defines a network with b BP hidden neurons and r RBF hidden neurons with their associated RBF center locations.
(2) Decode each chromosome; each one represents one network architecture. Use the Levenberg–Marquardt algorithm to train the network and compute the RMSE values of the training data and the testing data for each chromosome Ci. Set the number of generations Ng for the evolution. Set the counter g = 0.
(3) Take the RMSE of the testing data as the fitness value f(Ci) (i = 1, 2, …, p) of each individual chromosome. Rank the fitness values of the individuals in the population.
(4) Set the counter g = 1 and apply the genetic operators to create offspring:
(a) Use roulette-wheel selection to produce the reproduction pool.
(b) Apply the two-step crossover with a given probability to two parent chromosomes in the reproduction pool to create two offspring.
(c) Apply mutation with a given probability to every bit of the offspring.
(d) Apply deletion and addition with a given probability to the RBF part strings of the offspring to produce the new generation.
(e) Decode each chromosome in the new generation. Train each network and compute the new RMSE values of the training data and the testing data for each new chromosome.
(f) Set g = g + 1; if g > Ng, stop. Otherwise, go to step (a).
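Putting steps (1)–(4) together, a skeleton of the search loop might look as follows. This is a sketch under our assumptions: it reuses the operator sketches above, and `fitness` stands in for decoding a chromosome, training the SMNN with Levenberg–Marquardt, and returning the test-set RMSE, which is the expensive part the paper performs at each generation.

import random

def evolve_smnn(fitness, pop_size=20, n_generations=50,
                p_crossover=0.5, p_mutation=0.02, p_del_add=0.04):
    # Step (1): random initial population (random_chromosome is defined in
    # the encoding sketch above).
    population = [random_chromosome() for _ in range(pop_size)]
    for g in range(n_generations):
        # Steps (2)-(3): evaluate every chromosome; lower test RMSE is fitter.
        scores = [fitness(c) for c in population]
        # Step (4a): roulette-wheel selection, weighting by inverse RMSE.
        weights = [1.0 / (s + 1e-9) for s in scores]
        pool = random.choices(population, weights=weights, k=pop_size)
        # Steps (4b)-(4d): crossover, mutation, deletion and addition.
        next_gen = []
        for i in range(0, pop_size, 2):
            (b1, r1), (b2, r2) = pool[i], pool[i + 1]
            if random.random() < p_crossover:
                b1, b2 = crossover_bp(b1, b2)
                r1, r2 = crossover_rbf(r1, r2)
                (b1, r1), (b2, r2) = crossover_whole((b1, r1), (b2, r2))
            for bi, ri in ((b1, r1), (b2, r2)):
                bi, ri = mutate(bi, ri, p_bit=p_mutation, p_term=p_mutation)
                if random.random() < p_del_add:
                    ri = add_terms(delete_terms(ri))
                next_gen.append((bi, ri))
        population = next_gen
    # Return the fittest network of the final generation.
    return min(population, key=fitness)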
2.3. Database preparation

The experiment was carried out on a tower-type mixed-flow grain dryer with a height of 26 m, a cross-sectional area of 16 m² and a solids flow rate from 2.4 to 4.0 m/h (see Fig. 2). The dryer is quadrate in shape; the air in the drying section flows through the grain column from the air plenum to the ambient, and in the reverse direction in the cooling section. A grain turn-flow is located midway in the drying column.

Fig. 2. Schematic of the tower-type mixed-flow grain dryer (T1–T8 are the grain temperatures).

The controller of the dryer consists of the temperature sensors, the data acquisition system, and a personal computer. The PC communicates with the sensors and the grain-discharge motor through a data acquisition card. The rpm of the grain-discharge motor is proportional to a 0–5 V input to its driver.

In order to study the dynamics of grain drying, about 60 h of data were collected while the dryer operated under manual control, with an air flow rate from 0.27 to 0.42 m/s, an ambient temperature from −27 to −10 °C, and a drying-air temperature from 80 to 125 °C. One data set per hour was taken, giving 60 data sets for training and testing the neural network. Figs. 3 and 4 show all the inputs used for training the NN.

Fig. 3. The grain temperatures (T1–T8) and drying-air temperatures (TU, TM, TL).

Fig. 4. The inlet and outlet moisture contents for training and testing the neural network.
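For concreteness, data preparation for the experiments might look like this in Python. This is entirely illustrative: the file name and column order are invented, since the paper does not publish the data; it only states that 60 hourly records were collected and, as noted in Section 3, split 40/20 for training and testing.

import numpy as np

# Hypothetical log of the 60 hourly records described above, one row per
# hour: columns T1..T8, TU, TM, TL, inlet moisture Min, outlet moisture Mout.
data = np.loadtxt("dryer_log.csv", delimiter=",")   # shape (60, 13), assumed
X, y = data[:, :12], data[:, 12]

# Min-max scale the inputs so sigmoid and Gaussian units see comparable
# ranges (a common practice; the paper does not state its scaling).
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 40 sets for training and 20 for testing, as in Section 3.
X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]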

3. Results and discussion

The experiments were carried out with the SMNN algorithm proposed in this paper. For comparison, results using an evolved BPNN alone and an evolved RBFNN alone were also calculated.

The inlet moisture content (Min), the grain temperatures (T1–T8) and the drying-air temperatures (TU, TM, TL) are taken as the input parameters, and the outlet moisture content (Mout) as the output parameter. Thus the SMNN aims to find a mapping f such that Mout = f(T1, T2, T3, T4, T5, T6, T7, T8, TU, TM, TL, Min). The SMNN used here has 12 neurons in the input layer and one neuron in the output layer. The number of hidden-layer neurons is decided by the GA proposed in this paper.

The numbers of data sets for neural network training and testing are 40 and 20, respectively. The initial chromosome length of the BP part is 5. For the RBF part, the minimum string length is defined as 2 and the maximum string length is 20.
The population size is chosen as 20. The probabilities for crossover and mutation are 0.5 and 0.02, respectively, and the probability of deletion and addition is taken as 0.04. These GA parameters were selected after a series of trial-and-error runs. Since the training set contains 40 distinct terms, the search space contains 2⁵ · Σ_{i=2}^{20} C(40, i) ≈ 1.98 × 10¹³ different networks. The generation number is set to 50.
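The size of that search space is easy to verify: 2⁵ settings of the 5-bit BP part times the number of RBF center subsets of size 2 to 20 drawn from the 40 training samples.

from math import comb

size = 2**5 * sum(comb(40, i) for i in range(2, 21))
print(f"{size:.3e}")   # 1.980e+13, matching the 1.98 x 10^13 quoted above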
The evolution of the average and minimum MSE of the testing data is shown in Fig. 5, where the average MSE is the MSE averaged over all chromosomes in each generation and the minimum MSE is the smallest in the whole population. The best result is the running record of the minimum value for one particular GA run. Fig. 5 indicates that the average MSE decreases through the evolution and that the network with the best generalization emerged at the 32nd generation. For the best-performing SMNN, the MSE values on the training data and the testing data are 0.0298 and 0.0312, respectively, as shown in Fig. 6. The algorithm automatically searches for the appropriate network size according to the given objective. The best SMNN, found at the 32nd generation, has the fewest neurons (6 BP neurons and 10 RBF neurons).

Fig. 5. The average and minimum MSE for the testing data in each generation.

Fig. 6. The MSE for the training and testing data.

The comparison between the evolved SMNN and the evolved BPNN and RBFNN is given in Table 1. Even though the generalization errors of the evolved BPNN and RBFNN are similar to that of the evolved SMNN, the evolved SMNN performs slightly better on the testing data, and its complexity is significantly reduced compared with the other two networks.

Table 1
MSE of grain drying process prediction

         Number of hidden neurons   MSE of training data   MSE of testing data
SMNN     6 BP, 10 RBF               0.0298                 0.0312
BPNN     22                         0.0304                 0.0368
RBFNN    42                         0.0309                 0.0336

The result of the simulation test on moisture content prediction for the grain drying process with the SMNN is shown in Fig. 7. The figure shows that the accuracy of the predicted values is excellent.

Fig. 7. The predicted outlet moisture contents by SMNN. Solid line: predicted data; dashed line: measured data.

4. Conclusions

As would be expected, there was a fairly strong influence of the NN topology on the accuracy of the estimation; therefore, the selection of the most appropriate NN topology was the main issue. In this paper, an SMNN has been proposed which comprises sigmoid and Gaussian neurons in the hidden layer of a feed-forward neural network. The GA is used to select the appropriate network architecture by determining the optimal number of nodes in the hidden layer of the SMNN. Since the GA is a global search method, it has a lower probability of being trapped in local minima. It has been demonstrated that the proposed SMNN algorithm can automatically determine the appropriate network structure, and the experimental results show that the SMNN performs better than the BPNN and the RBFNN.

This study also shows that neural network modeling can be used to obtain accurate moisture content predictions during the grain drying process over a wide experimental range.
The technological interest of this kind of modeling lies in the fact that it is elaborated without any preliminary assumptions about the underlying mechanisms. Neural networks and genetic algorithms can therefore be applied to the on-line prediction and control of the drying process.

Acknowledgements

This work was carried out within the project Precise Drying System of Maize, No. 05EFN217100439, funded by the Ministry of Science and Technology of the People's Republic of China.

References

Blanco, A., Delgado, M., & Pegalajar, M. C. (2000). A genetic algorithm to obtain the optimal recurrent neural network. International Journal of Approximate Reasoning, 23, 67–83.
Boozarjomehry, R. B., & Svrcek, W. Y. (2001). Automatic design of neural network structures. Computers and Chemical Engineering, 25, 1075–1088.
De Baerdemaeker, J., & Hashimoto, Y. (1994). Speaking fruit approach to the intelligent control of the storage system. In Proceedings of the 12th CIGR world congress on agricultural engineering, Vol. 2, Milan, Italy, 29 August–1 September 1994 (pp. 1493–1500).
Farkas, I., Remenyi, P., & Biro, A. (2000a). A neural network topology for modelling grain drying. Computers and Electronics in Agriculture, 26, 147–158.
Farkas, I., Remenyi, P., & Biro, A. (2000b). Modelling aspects of grain drying with a neural network. Computers and Electronics in Agriculture, 29, 99–113.
Huang, B., & Mujumdar, A. S. (1993). Use of neural network to predict industrial dryer performance. Drying Technology, 11(3), 525–541.
Jay, S., & Oliver, T. N. (1996). Modelling and control of drying processes using neural networks. In Proceedings of the tenth international drying symposium (IDS'96), Krakow, Poland, 30 July–2 August, Vol. B (pp. 1393–1400).
Jiang, N., Zhao, Z., & Ren, L. (2002). Design of structural modular neural networks with genetic algorithm. Advances in Engineering Software, 34, 17–24.
Kaminski, W., Strumillo, P., & Tomczak, E. (1998). Neurocomputing approaches to modelling of drying process dynamics. Drying Technology, 16(6), 967–992.
Lin, C. T., & Lee, C. S. G. (1995). Neural fuzzy systems. Englewood Cliffs, NJ: Prentice Hall.
Sreekanth, S., Ramaswamy, H. S., & Sablani, S. (1998). Prediction of psychrometric parameters using neural networks. Drying Technology, 16(3–5), 825–837.
Thyagarajan, T., Panda, R. C., Shanmugam, J., Rao, P. G., & Ponnavaikko, M. (1997). Development of ANN model for non-linear drying process. Drying Technology, 15(10), 2527–2540.
Trelea, I. C., Courtois, F., & Trystram, G. (1997). Dynamic models for drying and wet-milling quality degradation of corn using neural networks. Drying Technology, 15(3–4), 1095–1102.
