
Advances in Engineering Software 40 (2009) 350–355

Contents lists available at ScienceDirect

Advances in Engineering Software


journal homepage: www.elsevier.com/locate/advengsoft

Prediction of compressive strength of concretes containing metakaolin
and silica fume by artificial neural networks

Mustafa Sarıdemir *

Department of Civil Engineering, Nigde University, 51200 Nigde, Turkey

Article info

Article history:
Received 25 March 2008
Received in revised form 8 May 2008
Accepted 12 May 2008
Available online 24 June 2008
Keywords:
Metakaolin
Silica fume
Compressive strength
Neural networks

Abstract
Neural networks have recently been widely used to model some human activities in many areas of civil engineering applications. In the present paper, artificial neural network (ANN) models for predicting the compressive strength of concretes containing metakaolin and silica fume have been developed for the ages of 1, 3, 7, 28, 56, 90 and 180 days. To build these models, training and testing data covering 195 specimens produced with 33 different mixture proportions were gathered from the technical literature. The data used in the multilayer feed forward neural network models are arranged in a format of eight input parameters: the age of specimen, cement, metakaolin (MK), silica fume (SF), water, sand, aggregate and superplasticizer. From these input parameters, the multilayer feed forward neural network models predict the compressive strength values of concretes containing metakaolin and silica fume. The training and testing results of the neural network models have shown that neural networks have strong potential for predicting the 1, 3, 7, 28, 56, 90 and 180 days compressive strength values of concretes containing metakaolin and silica fume.
© 2008 Elsevier Ltd. All rights reserved.

1. Introduction
In recent years, kaolin and clay have been used in many countries to produce active pozzolanic materials. These pozzolanic admixtures are used to reduce the cement content in mortar and concrete production [1]. The use of pozzolanic materials such as silica fume (SF), fly ash and metakaolin (MK) is also necessary for producing high performance concrete. These materials, when used as mineral admixtures in high performance concrete, can improve either or both the strength and durability properties of the concrete [2,3]. MK is a thermally activated alumino-silicate material obtained by calcining kaolin clay within the temperature range 650–800 °C [2]. SF is a by-product of the manufacture of silicon and ferrosilicon alloys [4]. MK has a high pozzolanic activity and micro-filler properties very similar to those of silica fume. Both MK and SF increase the water demand of the mixes [5]. MK and SF are expected to produce dense and impermeable concrete. The use of MK and SF in combination with a superplasticizer is now a usual way to obtain high-strength concretes. Several studies indicate that the presence of MK and SF in concrete increases the compressive strength compared to that of conventional concrete. This increase accounts for the growing consumption of these admixtures in concrete. In the study of Poon et al. [2], two groups
* Tel.: +90 388 225 2302.
E-mail address: msdemir@nigde.edu.tr
0965-9978/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.
doi:10.1016/j.advengsoft.2008.05.002

of specimen series with 12 different mixes were obtained. The water-binder ratios were designed as 0.3 and 0.5 for the first and second groups of mixes, respectively. Each group included mixes with 5%, 10% and 20% replacement by weight of cement with MK, mixes with 5% and 10% replacement by weight of cement with SF, and a control mix without any mineral admixture. The compressive strength of concretes produced with 10% MK was significantly higher than the strength of concretes produced with 10% SF. It was concluded that MK and SF content in concrete has a positive effect on increasing the compressive strength. Wong and Abdul Razak [6] obtained three groups of specimen series with 21 different mixes with water-binder ratios of 0.27, 0.30 and 0.33. Each group included mixtures with 0%, 5%, 10% and 15% replacement by weight of cement with MK and SF. These researchers observed that the compressive strength of concretes increased with decreasing water-binder ratios. In their study, it was concluded that the compressive strength values of concrete specimens prepared with 10% MK and SF were higher than those of concrete specimens prepared with the other mixture ratios.
In recent years, artificial neural network (ANN) technology, a sub-field of artificial intelligence, has been used to solve a wide variety of problems in civil engineering applications [7–14]. The most important property of ANN in civil engineering problems is their capability of learning directly from examples. Other important properties of ANN are their correct or nearly correct response to incomplete tasks, their extraction of information from noisy or poor data, and their production of generalized results from


novel cases. The above-mentioned capabilities make ANN a very powerful tool for solving many civil engineering problems, particularly problems where the data may be complex or insufficient in amount [13]. The basic strategy for developing ANN-based models of material behavior is to train an ANN system on the results of a series of experiments using that material [8–12]. If the experimental results contain the relevant information about the material behavior, then the trained ANN system will contain enough information about the material behavior to qualify as a material model [9–12]. Such a trained ANN system would not only be able to reproduce the experimental results, but would also be able to approximate the results of other experiments through its generalization capability [8–12].
The aim of this study is to build models with two different architectures in an ANN system to evaluate the effect of MK and SF on the compressive strength of concrete. To construct these models, 33 different mixtures with 195 specimens, with 1, 3, 7, 28, 56, 90 and 180 days compressive strength results for concretes containing MK and SF, were gathered from the technical literature [2,6] for use in training and testing the ANN system. In training and testing the models constituted with the two different architectures, the age of specimen (AS), cement (C), metakaolin (MK), silica fume (SF), water (W), sand (S), aggregate (A) and superplasticizer (SP) were entered as inputs, while the compressive strength (fc) values were used as output. The models were trained with 130 experimental data records; the remainder were then used only as experimental input values for testing, and values similar to the experimental results were obtained.
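The data arrangement described above (eight inputs, one output, 130 training and 65 testing records) can be sketched as follows. This is an illustrative sketch only: the record shown is a hypothetical placeholder, not a row from the actual data set of [2,6].

```python
# Each record holds the 8 input parameters (AS, C, MK, SF, W, S, A, SP)
# and the measured output (fc). The row below is a hypothetical placeholder,
# not an actual record from the literature.
records = [
    {"AS": 28, "C": 475, "MK": 25, "SF": 0, "W": 135,
     "S": 720, "A": 1050, "SP": 43, "fc": 89.0},
    # ... 194 more records gathered from the literature ...
]

def split(data, n_train=130):
    """Split the 195 records into 130 training and 65 testing records,
    as described for the ANN models in this study."""
    return data[:n_train], data[n_train:]

train_set, test_set = split(records)
```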
2. Artificial neural networks
Artificial neural networks (ANN) are computing systems made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs [15]. The fundamental concept of neural networks is the structure of the information processing system [12]. Generally, an ANN is made of an input layer of neurons, sometimes referred to as nodes or processing units, one or several hidden layers of neurons, and an output layer of neurons. The neighboring layers are fully interconnected by weights. The input layer neurons receive information from the outside environment and transmit it to the neurons of the hidden layer without performing any calculation [16,17]. Layers between the input and output layers are called hidden layers and may contain a large number of hidden processing units [14]. All problems which can be solved by a perceptron can be solved with only one hidden layer, but it is sometimes more efficient to use two or three hidden layers. Finally, the output layer neurons produce the network predictions to the outside world [16,17]. Each neuron of a layer other than the input layer first computes a linear combination of the outputs of the neurons of the previous layer, plus a bias. The coefficients of the linear combinations plus the biases are called weights. Neurons in the hidden layer then compute a nonlinear function of their input. Generally, the nonlinear function is the sigmoid function [12].
According to the information mentioned above, an artificial neuron is composed of five main parts: inputs, weights, a sum function, an activation function and outputs. Fig. 1 shows a typical neuron with input, sum function, sigmoid activation function and output. The input to a neuron from another neuron is obtained by multiplying the output of the connected neuron by the synaptic strength of the connection between them [18]. The weighted sum of the input components (net)_j is calculated by Eq. (1):

(net)_j = \sum_{i=1}^{n} w_{ij} o_i + b.    (1)

Fig. 1. Artificial neuron model (inputs x1, ..., xn; weights w_i1, ..., w_in; sum function; sigmoid activation function; output o_j).

Here (net)_j is the weighted sum of the jth neuron for the input received from the preceding layer with n neurons, w_ij is the weight between the ith neuron in the preceding layer and the jth neuron, and o_i is the output of the ith neuron in the preceding layer [9–11]. The quantity b is called the bias and is used to model the threshold.
The output signal of the neuron, denoted by o_j in Fig. 1, is related to the network input (net)_j via a transformation function called the activation function [18]. The most common activation functions are the ramp, sigmoid and Gaussian functions. In general, the sigmoid function is used as the activation function f((net)_j) in multilayer perceptron models. The output o_j of the jth neuron is calculated by Eq. (2) with a sigmoid function as follows [9–11]:

o_j = f((net)_j) = \frac{1}{1 + e^{-\alpha (net)_j}}.    (2)

Here o_j is the output of the neuron and \alpha is a constant used to control the slope of the semi-linear region. The sigmoid nonlinearity is active in every layer except the input layer [10,11,18]. The sigmoid function represented by Eq. (2) gives outputs in (0, 1) [9–11].
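The computations in Eqs. (1) and (2) can be sketched as follows; this is an illustrative example only, and the input, weight and bias values are arbitrary assumptions rather than values from the study:

```python
import math

def neuron_output(inputs, weights, bias, alpha=1.0):
    """One artificial neuron: the weighted sum of Eq. (1) followed by the
    sigmoid activation of Eq. (2)."""
    net = sum(w * o for w, o in zip(weights, inputs)) + bias   # (net)_j
    return 1.0 / (1.0 + math.exp(-alpha * net))                # o_j

# Arbitrary illustrative values for inputs, weights and bias
o = neuron_output(inputs=[0.5, 0.2, 0.8], weights=[0.4, -0.6, 0.3], bias=0.1)
```

Because the sigmoid output always lies in (0, 1), the input and output data must be normalized before training, as described in Section 2.1.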
In recent years, ANN have been applied to many civil engineering problems with some degree of success. In civil engineering, neural networks have been applied to the detection of structural damage, structural system identification, modeling of material behavior, structural optimization, structural control, ground water monitoring, prediction of experimental studies, and concrete mix proportions [14].
The neural network based modeling process involves: (a) data acquisition, analysis and problem representation; (b) architecture determination; (c) learning process determination; (d) training of the networks; and (e) testing of the trained network for generalization evaluation [11,19]. After these processes are carried out, ANN can supply meaningful answers even when the data to be processed include errors or are incomplete, and can process information extremely rapidly when applied to solve engineering problems [11,20].

Fig. 2. A typical architecture of a multilayer feed forward neural network (input layer, two hidden layers and an output layer; information flows feed forward from input to output, while the error is back propagated from output to input).



2.1. Feed forward networks

In a feed forward neural network, the artificial neurons are arranged in layers, and all the neurons in each layer have connections to all the neurons in the next layer [12]. However, there is no connection between neurons of the same layer or neurons which are not in successive layers. The feed forward network consists of one input layer, one or two hidden layers and one output layer of neurons [18]. Associated with each connection between these artificial neurons, a weight value is defined to represent the connection weight [12]. Fig. 2 shows a typical architecture of a multilayer feed forward neural network with an input layer, two hidden layers, and an output layer. The input layer receives the input information and passes it onto the neurons of the hidden layer(s), which in turn pass the information to the output layer. The output from the output layer is the prediction of the net for the corresponding input supplied at the input nodes. Each neuron in the network behaves in the same way as discussed in Eqs. (1) and (2). There is no reliable method for deciding the number of neural units required for a particular problem. This is decided based on experience, and a few trials are required to determine the best configuration of the network [18]. In this study, the multilayer feed forward type of ANN, as shown in Fig. 2, is considered. In the feed forward networks, the input and output variables are normalized within the range of 0–1.

Table 1
The input and output quantities used in ANN models

Data used in training and testing the models    Minimum   Maximum
Input variables
  Age of specimen (day)                         1         180
  Cement (kg/m3)                                328       500
  Metakaolin (kg/m3)                            0         100
  Silica fume (kg/m3)                           0         75
  Water (kg/m3)                                 135       205
  Sand (kg/m3)                                  648       725
  Aggregate (kg/m3)                             1050      1087
  Superplasticizer (l/m3)                       0         43
Output variable
  Compressive strength (MPa)                    24.50     120.30

Table 2
The values of parameters used in models

Parameters                                ANN-I      ANN-II
Number of input layer neurons             8          8
Number of hidden layers                   1          2
Number of first hidden layer neurons      10         9
Number of second hidden layer neurons     -          8
Number of output layer neurons            1          1
Momentum rate                             0.7        0.7
Learning rate                             0.3        0.3
Error after learning                      0.000195   0.000069
Learning cycles                           50,000     50,000

Fig. 3. The system used in the ANN-I model (eight input layer neurons for AS, C, MK, SF, W, S, A and SP; one hidden layer of ten neurons N1–N10; one output layer neuron for fc).

Fig. 4. The system used in the ANN-II model (eight input layer neurons for AS, C, MK, SF, W, S, A and SP; first hidden layer neurons N11–N19; second hidden layer neurons N21–N28; one output layer neuron for fc).
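A minimal sketch of the 0–1 min-max normalization and the fully connected forward pass described above; the layer sizes follow ANN-I (8 inputs, 10 hidden neurons, 1 output), while the weights here are random placeholders rather than the trained values of the study, and the example input values are arbitrary:

```python
import math
import random

def normalize(x, lo, hi):
    """Min-max scale a raw input into the 0-1 range (limits as in Table 1)."""
    return (x - lo) / (hi - lo)

def layer(inputs, weights, biases):
    """One fully connected layer: Eq. (1) then the sigmoid of Eq. (2)
    for every neuron in the layer."""
    return [1.0 / (1.0 + math.exp(-(sum(w * o for w, o in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

random.seed(0)
n_in, n_hidden, n_out = 8, 10, 1   # ANN-I: 8 inputs, 10 hidden neurons, 1 output
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [random.uniform(-1, 1) for _ in range(n_hidden)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [random.uniform(-1, 1) for _ in range(n_out)]

# Normalize one input (age of specimen: 1-180 days per Table 1); the other
# seven inputs are placeholders assumed already scaled into the 0-1 range.
x = [normalize(28, 1, 180)] + [0.5] * 7
prediction = layer(layer(x, w1, b1), w2, b2)[0]   # normalized fc estimate
```

Because the output is also normalized, a prediction would be rescaled back to MPa with the inverse of `normalize` using the fc limits in Table 1.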

Fig. 5. Comparison of fc experimental results with training and testing results of ANN-I (predicted vs. experimental compressive strength, MPa; training: y = 0.9474x + 4.5591, R2 = 0.9875; testing: y = 0.9497x + 3.7759, R2 = 0.9833).

Fig. 6. Comparison of fc experimental results with training and testing results of ANN-II (predicted vs. experimental compressive strength, MPa; training: y = 0.9712x + 2.5368, R2 = 0.9947; testing: y = 0.9963x + 0.4861, R2 = 0.9832).


2.2. The back propagation algorithm

The back propagation algorithm, one of the most well-known training algorithms for the multilayer perceptron, is a gradient descent technique that minimizes the error for a particular training pattern by adjusting the weights by a small amount at a time [9–11]. The network error is passed backwards from the output layer to the input layer, and the weights are adjusted based on some learning strategy so as to reduce the network error to an

Table 3
Testing data sets for comparison of experimental results with testing results predicted from models

AS (day)  C (kg/m3)  MK (kg/m3)  SF (kg/m3)  W (kg/m3)  A (kg/m3)  S (kg/m3)  SP (l/m3)  Experimental (MPa)  ANN-I (MPa)  ANN-II (MPa)  Reference
1         475        25          0           135        1050       720        43         35.00               39.49        37.77         [6]
1         475        0           25          135        1050       725        43         35.00               40.86        31.87         [6]
1         500        0           0           150        1050       695        19         48.00               53.95        50.60         [6]
1         425        75          0           150        1050       680        19         38.00               43.70        47.01         [6]
1         425        0           75          150        1050       680        19         38.00               40.98        38.33         [6]
1         450        50          0           165        1050       690        12         34.00               40.75        37.59         [6]
1         450        0           50          165        1050       685        12         32.00               36.97        35.52         [6]
3         475        25          0           135        1050       720        43         67.00               58.45        63.73         [6]
3         475        0           25          135        1050       725        43         63.00               59.78        60.25         [6]
3         500        0           0           150        1050       695        19         63.50               65.29        60.06         [6]
3         425        75          0           150        1050       680        19         60.50               59.98        66.58         [6]
3         425        0           75          150        1050       680        19         57.50               58.05        56.13         [6]
3         450        50          0           165        1050       690        12         59.00               54.99        53.55         [6]
3         450        0           50          165        1050       685        12         53.00               51.53        53.28         [6]
3         475        25          0           150        1087       721        0.6        73.00               71.52        79.96         [2]
3         475        0           25          150        1087       716        0.6        67.00               65.80        67.43         [2]
3         390        20.5        0           205        1081       659        0          32.60               32.39        30.35         [2]
3         390        0           20.5        205        1081       655        0          27.40               29.52        27.54         [2]
7         475        25          0           135        1050       720        43         76.50               77.89        79.18         [6]
7         475        0           25          135        1050       725        43         75.50               79.01        78.51         [6]
7         500        0           0           150        1050       695        19         72.00               76.44        72.25         [6]
7         425        75          0           150        1050       680        19         80.00               76.85        80.88         [6]
7         425        0           75          150        1050       680        19         74.50               77.64        75.46         [6]
7         450        50          0           165        1050       690        12         74.00               70.53        68.73         [6]
7         450        0           50          165        1050       685        12         70.50               69.04        70.21         [6]
7         475        25          0           150        1087       721        0.6        88.20               84.26        93.64         [2]
7         475        0           25          150        1087       716        0.6        79.30               79.87        81.69         [2]
7         390        20.5        0           205        1081       659        0          45.90               44.96        44.25         [2]
7         390        0           20.5        205        1081       655        0          47.00               41.99        42.59         [2]
28        475        25          0           135        1050       720        43         89.00               89.25        90.05         [6]
28        475        0           25          135        1050       725        43         88.50               90.55        89.24         [6]
28        500        0           0           150        1050       695        19         83.50               84.18        84.33         [6]
28        425        75          0           150        1050       680        19         94.50               89.58        90.89         [6]
28        425        0           75          150        1050       680        19         98.50               96.37        95.83         [6]
28        450        50          0           165        1050       690        12         84.50               83.97        82.01         [6]
28        450        0           50          165        1050       685        12         89.50               87.69        88.49         [6]
28        475        25          0           150        1087       721        0.6        103.60              103.54       110.39        [2]
28        475        0           25          150        1087       716        0.6        106.50              103.46       105.33        [2]
28        390        20.5        0           205        1081       659        0          57.10               57.96        55.93         [2]
28        390        0           20.5        205        1081       655        0          54.30               56.37        55.28         [2]
56        475        25          0           135        1050       720        43         95.00               92.66        93.61         [6]
56        475        0           25          135        1050       725        43         93.00               93.91        92.85         [6]
56        500        0           0           150        1050       695        19         84.50               85.26        86.25         [6]
56        425        75          0           150        1050       680        19         96.50               92.79        94.36         [6]
56        425        0           75          150        1050       680        19         101.50              100.16       98.35         [6]
56        450        50          0           165        1050       690        12         87.00               86.62        85.02         [6]
56        450        0           50          165        1050       685        12         90.50               91.63        90.41         [6]
90        475        25          0           135        1050       720        43         98.00               96.06        96.30         [6]
90        475        0           25          135        1050       725        43         96.50               97.28        96.05         [6]
90        500        0           0           150        1050       695        19         85.50               87.08        88.06         [6]
90        425        75          0           150        1050       680        19         97.50               95.28        97.29         [6]
90        425        0           75          150        1050       680        19         104.00              103.21       101.17        [6]
90        450        50          0           165        1050       690        12         89.00               87.13        88.57         [6]
90        450        0           50          165        1050       685        12         92.00               93.13        92.33         [6]
90        475        25          0           150        1087       721        0.6        112.90              112.29       117.40        [2]
90        475        0           25          150        1087       716        0.6        110.20              110.96       111.79        [2]
90        390        20.5        0           205        1081       659        0          66.50               65.75        64.14         [2]
90        390        0           20.5        205        1081       655        0          67.50               65.44        63.97         [2]
180       475        25          0           135        1050       720        43         99.00               97.83        99.46         [6]
180       475        0           25          135        1050       725        43         97.50               98.52        100.40        [6]
180       500        0           0           150        1050       695        19         87.50               90.09        90.10         [6]
180       425        75          0           150        1050       680        19         99.50               97.75        98.47         [6]
180       425        0           75          150        1050       680        19         106.50              109.45       107.20        [6]
180       450        50          0           165        1050       690        12         92.50               85.25        91.44         [6]
180       450        0           50          165        1050       685        12         93.50               95.08        95.48         [6]


acceptable level [16]. The error for the rth example is calculated by Eq. (3):

E_r = \frac{1}{2} \sum_j (t_j - o_j)^2.    (3)

Here t_j is the output desired at neuron j and o_j is the output predicted at neuron j. As presented in Eqs. (1) and (2), the output o_j is a function of the synaptic strengths and the outputs of the previous layer [18].
Learning consists of changing the weights in order to minimize this error function by a gradient descent technique. In the back propagation phase, the error between the network output and the desired output values is calculated using the so-called generalized delta rule [21], and the weights between neurons are updated from the output layer to the input layer by Eq. (4) [13]:

w_{ij}(m+1) = w_{ij}(m) + \eta \delta_j o_i + \beta \Delta w_{ij}(m).    (4)

Here, \delta_j is the error signal at neuron j, o_i is the output of neuron i in the previous layer, m is the iteration number, and \eta and \beta are called the learning rate and the momentum rate, respectively. \delta_j in Eq. (4) can be calculated using the partial derivative of the error function E_r, for the output layer and the other layers, respectively, by Eqs. (5) and (6) [13,18]:

\delta_j = o_j (t_j - o_j)(1 - o_j),    (5)

\delta_j = o_j (1 - o_j) \sum_k \delta_k w_{kj}.    (6)

Here, the kth layer means the upper layer of the jth layer [13]. The above operations are repeated for each example and for all the neurons until a satisfactory convergence is achieved for all the examples present in the training set [18]. The training process is successfully completed when the iterative process has converged. The connection weights are then captured from the trained network in order to use them in the recall phase [13]. For the present study, a multilayer feed forward network is adopted for training, and the error is reduced using a back propagation algorithm.

2.3. Neural network model

In this study, a multilayered feed forward neural network with a back propagation algorithm was adopted. The nonlinear sigmoid function was used for the cell outputs in the hidden layer(s) and the output layer. As seen in Figs. 3 and 4, two different multilayer artificial neural network architectures, namely ANN-I and ANN-II, were built. In training and testing the ANN-I and ANN-II models constituted with these two different architectures, AS, C, MK, SF, W, S, A and SP were entered as inputs, while the fc value was used as output. In ANN-I and ANN-II, 130 experimental data records were used for training, whereas 65 records were employed for testing. In the ANN-I model, as seen in Fig. 3, one hidden layer was selected; 10 neurons were chosen for the hidden layer because they gave the minimum absolute percentage error values for the training and testing sets. In the ANN-II model, as seen in Fig. 4, two hidden layers were selected; nine neurons in the first hidden layer and eight neurons in the second hidden layer were chosen because they gave the minimum absolute percentage error values for the training and testing sets. The limit values of the input and output variables used in the ANN-I and ANN-II models are listed in Table 1. More detail regarding the input and output variables can be obtained from the literature [2,6]. In the ANN-I and ANN-II models, the neurons of neighboring layers are fully interconnected by weights. Finally, the output layer neuron produces the network prediction as a result. Momentum rate and learning rate values were determined for both models, and the models were trained through iterations. The values of the parameters used in ANN-I and ANN-II are given in Table 2. The trained models were then tested with only the input values, and the results found were close to the experimental results.

3. Results and discussion

In this study, the error that arose during training and testing of the ANN-I and ANN-II models can be expressed as a root-mean-squared (RMS) error, calculated by Eq. (7) [10,11]:

RMS = \sqrt{\frac{1}{p} \sum_i |t_i - o_i|^2}.    (7)

In addition, the absolute fraction of variance (R^2) and the mean absolute percentage error (MAPE) are calculated by Eqs. (8) and (9), respectively [10,11,22,23]:

R^2 = 1 - \left( \frac{\sum_i (t_i - o_i)^2}{\sum_i o_i^2} \right),    (8)

MAPE = \frac{1}{p} \sum_i \left| \frac{t_i - o_i}{o_i} \right| \times 100.    (9)

Here t is the target value, o is the output value, and p is the number of patterns.
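The delta-rule update of Eqs. (4)–(6) can be sketched for a single output neuron as follows. This is an illustrative gradient step, not the trainer used in the study; the learning and momentum rates are the values reported in Table 2, while the inputs, weights and target are arbitrary assumptions:

```python
def backprop_step(inputs, weights, prev_delta_w, target, output,
                  eta=0.3, beta=0.7):
    """One delta-rule update for an output neuron, following Eqs. (4)-(5):
    the error signal delta_j = o_j (t_j - o_j)(1 - o_j), and each weight
    moves by eta * delta_j * o_i plus a momentum term beta * dw_ij(m).
    eta (learning rate) and beta (momentum rate) are taken from Table 2."""
    delta = output * (target - output) * (1.0 - output)            # Eq. (5)
    new_delta_w = [eta * delta * o_i + beta * dw
                   for o_i, dw in zip(inputs, prev_delta_w)]       # momentum
    new_weights = [w + dw for w, dw in zip(weights, new_delta_w)]  # Eq. (4)
    return new_weights, new_delta_w

# Arbitrary illustrative values; not taken from the trained models
w, dw = backprop_step(inputs=[0.4, 0.9], weights=[0.1, -0.2],
                      prev_delta_w=[0.0, 0.0], target=1.0, output=0.6)
```

For hidden-layer neurons, the error signal of Eq. (6) would replace the `delta` line, summing the weighted error signals propagated back from the layer above.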


In the training and testing of the ANN-I and ANN-II models, various experimental data from two different sources [2,6] were used. In the ANN-I and ANN-II models, 130 experimental data records were used for training, whereas 65 were employed for testing. All results obtained from the experimental studies [2,6] and predicted by using the training and testing results of the ANN-I and ANN-II models, for the 1, 3, 7, 28, 56, 90 and 180 days fc, are given in Figs. 5 and 6, respectively. The linear least squares fit line, its equation and the R2 values are shown in these figures for the training and testing data. Also, the input values and the experimental results, together with the testing results obtained from the ANN-I and ANN-II models, are given in Table 3. As is visible in Figs. 5 and 6, the values obtained from training and testing in the ANN-I and ANN-II models are very close to the experimental results. The result of the testing phase in Figs. 5 and 6 shows that the ANN-I and ANN-II models are capable of generalizing between the input and output variables with reasonably good predictions.
The performance of the ANN-I and ANN-II models for fc is shown in Figs. 5 and 6, respectively. The statistical values RMS, R2 and MAPE for both models, for both training and testing, are given in Table 4. While the statistical values of RMS, R2 and MAPE from training in the ANN-I model were found to be 2.8664, 99.87% and 3.9604%, respectively, in testing these values were found to be 3.1019, 99.85% and 4.1883%, respectively. Similarly, while the statistical values of RMS, R2 and MAPE from training in the ANN-II model were found to be 1.8452, 99.95% and 2.4762%, respectively, in testing these values were found to be 2.9618, 99.86% and 3.5979%, respectively. The best value of R2 is 99.95%, for the training set in the ANN-II model. The minimum value of R2 is 99.85%, for the testing set in the ANN-I model. All of the statistical values in Table 4 show that the proposed ANN-I and ANN-II models are suitable and predict the 1, 3, 7, 28, 56, 90 and 180 days fc values very close to the experimental values.

Table 4
The fc statistical values of proposed ANN-I and ANN-II models

Statistical parameters   ANN-I                       ANN-II
                         Training set  Testing set   Training set  Testing set
RMS                      2.8664        3.1019        1.8452        2.9618
R2                       0.9987        0.9985        0.9995        0.9986
MAPE (%)                 3.9604        4.1883        2.4762        3.5979
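The statistics in Table 4 follow Eqs. (7)–(9); a sketch of how they can be computed is shown below on a small set of made-up target/output pairs, for illustration only, not on the study's data:

```python
import math

def rms(t, o):
    """Root-mean-squared error, Eq. (7)."""
    return math.sqrt(sum((ti - oi) ** 2 for ti, oi in zip(t, o)) / len(t))

def r2(t, o):
    """Absolute fraction of variance, Eq. (8)."""
    return 1.0 - sum((ti - oi) ** 2 for ti, oi in zip(t, o)) / sum(oi ** 2 for oi in o)

def mape(t, o):
    """Mean absolute percentage error, Eq. (9)."""
    return 100.0 * sum(abs((ti - oi) / oi) for ti, oi in zip(t, o)) / len(t)

# Made-up experimental targets and model outputs (MPa), for illustration only
targets = [35.0, 67.0, 89.0, 95.0]
outputs = [37.0, 64.0, 90.0, 93.0]
stats = (rms(targets, outputs), r2(targets, outputs), mape(targets, outputs))
```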


4. Conclusions

Artificial neural networks are capable of learning and generalizing from examples and experience. This makes artificial neural networks a powerful tool for solving some complicated civil engineering problems. In this study, using these beneficial properties of artificial neural networks, two different multilayer artificial neural network architectures, namely ANN-I and ANN-II, were developed in order to predict the 1, 3, 7, 28, 56, 90 and 180 days compressive strength values of concretes containing metakaolin and silica fume without conducting any experiments. In the two models developed by the ANN method, a multilayered feed forward neural network with a back propagation algorithm was used. In the ANN-I model, one hidden layer with 10 neurons was selected. In the ANN-II model, two hidden layers were selected, with nine neurons in the first hidden layer and eight neurons in the second. The models were trained with input and output data. Using only the input data in the trained models, the 1, 3, 7, 28, 56, 90 and 180 days compressive strength values of concretes containing metakaolin and silica fume were found. The compressive strength values predicted from training and testing, for the ANN-I and ANN-II models, are very close to the experimental results. Furthermore, of the compressive strength results predicted by the ANN-I and ANN-II models, the results of the ANN-II model are closer to the experimental results. The RMS, R2 and MAPE statistical values calculated to compare the experimental results with the ANN-I and ANN-II model results have shown this.
As a result, the compressive strength values of concretes containing metakaolin and silica fume can be predicted with multilayer feed forward artificial neural network models, without conducting any experiments, in a quite short period of time and with tiny error rates. The conclusions obtained demonstrate that multilayer feed forward artificial neural networks are a practicable method for predicting the compressive strength values of concretes containing metakaolin and silica fume.

References
[1] Vu DD, Stroeven P, Bui VB. Strength and durability aspects of calcined kaolin-blended Portland cement mortar and concrete. Cement Concrete Compos 2001;23(6):471–8.
[2] Poon CS, Kou SC, Lam L. Compressive strength, chloride diffusivity and pore structure of high performance metakaolin and silica fume concrete. Constr Build Mater 2006;20(10):858–65.
[3] Parande AK, Babu BR, Karthik MA, Deepak Kumaar KK, Palaniswamy N. Study on strength and corrosion performance for steel embedded in metakaolin blended concrete/mortar. Constr Build Mater 2008;22(3):127–34.
[4] Al-Amoudi OSB, Maslehuddin M, Shameem M, Ibrahim M. Shrinkage of plain and silica fume cement concrete under hot weather. Cement Concrete Compos 2007;29(9):690–9.
[5] Curcio F, DeAngelis BA. Dilatant behavior of superplasticized cement pastes containing metakaolin. Cement Concrete Res 1998;28(5):629–34.
[6] Wong HS, Abdul Razak H. Efficiency of calcined kaolin and silica fume as cement replacement material for strength performance. Cement Concrete Res 2005;35(4):696–702.
[7] Bai J, Wild S, Ware JA, Sabir BB. Using neural networks to predict workability of concrete incorporating metakaolin and fly ash. Adv Eng Software 2003;34(11–12):663–9.
[8] Topçu İB, Sarıdemir M. Prediction of rubberized concrete properties using artificial neural network and fuzzy logic. Constr Build Mater 2008;22(4):532–40.
[9] Topçu İB, Sarıdemir M. Prediction of properties of waste AAC aggregate concrete using artificial neural network. Comp Mater Sci 2007;41(1):117–25.
[10] Topçu İB, Sarıdemir M. Prediction of compressive strength of concrete containing fly ash using artificial neural network and fuzzy logic. Comp Mater Sci 2008;41(3):305–11.
[11] Pala M, Özbay E, Öztaş A, Yüce MI. Appraisal of long-term effects of fly ash and silica fume on compressive strength of concrete by neural networks. Constr Build Mater 2007;21(2):384–94.
[12] Adhikary BB, Mutsuyoshi H. Prediction of shear strength of steel fiber RC beams using neural networks. Constr Build Mater 2006;20(9):801–11.
[13] Ince R. Prediction of fracture parameters of concrete by artificial neural networks. Eng Fract Mech 2004;71(15):2143–59.
[14] Kewalramani MA, Gupta R. Concrete compressive strength prediction using ultrasonic pulse velocity through artificial neural networks. Auto Constr 2006;15(3):374–9.
[15] Rafiq MY, Bugmann G, Easterbrook DJ. Neural network design for engineering applications. Comput Struct 2001;79(17):1541–52.
[16] Demir F. Prediction of elastic modulus of normal and high strength concrete by artificial neural network. Constr Build Mater 2008;22(7):1428–35.
[17] Mansour MY, Dicleli M, Lee JY, Zhang J. Predicting the shear strength of reinforced concrete beams using artificial neural network. Eng Struct 2004;26(6):781–99.
[18] Mukherjee A, Biswas SN. Artificial neural networks in prediction of mechanical behavior of concrete at high temperature. Nucl Eng Design 1997;178(1):1–11.
[19] Wu X, Lim SY. Prediction of maximum scour depth at spur dikes with adaptive neural networks. In: Neural networks and combinatorial optimization in civil and structural engineering. Edinburgh: Civil-Comp Press; 1993. p. 61–6.
[20] Lippmann RP. An introduction to computing with neural nets. In: Artificial neural networks: theoretical concepts. Washington: The Computer Society; 1988. p. 36–54.
[21] Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL, editors. Parallel distributed processing: foundations, vol. 1. Cambridge: MIT Press; 1986.
[22] Topçu İB, Sarıdemir M. Prediction of mechanical properties of recycled aggregate concretes containing silica fume using artificial neural networks and fuzzy logic. Comp Mater Sci 2008;41(1):74–82.
[23] Karataş Ç, Sözen A, Arcaklıoğlu E, Ergüney S. Modelling of yield length in the mould of commercial plastics using artificial neural networks. Mater Design 2007;28(1):278–86.
