
Construction and Building Materials 23 (2009) 2214–2219


Neural networks for predicting compressive strength of structural light weight concrete

Marai M. Alshihri a, Ahmed M. Azmy b, Mousa S. El-Bisy a,*
a Civil Engineering Dept., College of Engineering and Islamic Architecture, Umm Al-Qura University, Makkah, Saudi Arabia
b Civil Engineering Dept., Higher Technological Institute, Ramadan Tenth City, Egypt

Article history: Received 13 April 2008; received in revised form 18 November 2008; accepted 1 December 2008; available online 4 January 2009.

Keywords: Neural networks; Back propagation; Cascade correlation; Compressive strength; Light weight concrete

Abstract

Neural network procedures provide reliable analyses in several fields of science and technology. Neural networks are often applied to develop statistical models for intrinsically non-linear systems because they have the advantage of simulating the complex behavior of many problems. In this investigation, neural networks (NNs) are used to predict the compressive strength of light weight concrete (LWC) mixtures after 3, 7, 14, and 28 days of curing. Two models, feed-forward back propagation (BP) and cascade correlation (CC), were used. The compressive strength was modeled as a function of eight variables: sand, water/cement ratio, light weight fine aggregate, light weight coarse aggregate, silica fume used in solution, silica fume used in addition to cement, superplasticizer, and curing period. It is concluded that the CC neural network model predicted slightly more accurate results and learned much more quickly than the BP procedure. The findings of this study indicate that neural network models are sufficient tools for estimating the compressive strength of LWC, which will reduce cost and save time in this class of problems.
© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Lightweight aggregates are broadly classified into two types: natural (pumice, diatomite, volcanic cinders, etc.) and artificial (perlite, expanded shale, clay, slate, sintered PFA, etc.). Lightweight aggregates can be used to produce the low density concretes required for building applications such as cladding panels, curtain walls, composite flooring systems, and load-bearing concrete blocks [1,2]. Structural lightweight concrete has the obvious advantages of a higher strength-to-weight ratio, better tensile strain capacity, a lower coefficient of thermal expansion, and superior heat and sound insulation characteristics due to the air voids in the lightweight aggregate. The reduction in the dead weight of the construction materials through the use of lightweight aggregate in concrete can also result in a decrease in the cross-section of concrete structural elements (columns, beams, plates, and foundations), and it is possible to reduce the steel reinforcement [3,4].

Most of the mathematical models used to study the behavior of concrete mixes consist of mathematical rules and expressions that capture the relationships between the components of the mix. Using mathematical models to capture and describe experience from experimental data on concrete mix behavior is among the most reliable, accurate, and widely applicable approaches. Mathematical models based on experimental data are called "free models", and generally take regression forms. However, if the problem contains many independent variables, regression methods suffer from low accuracy and from the assumptions embedded in the chosen regression form (linear, non-linear, exponential, etc.). In recent years, modeling techniques such as artificial neural networks and expert systems, acting as free models, have been able to approximate the non-linear and complex relationships underlying a phenomenon by learning from real records, without any presumptions [5,6].

Free from such limitations, the NN approach (a class of soft computing techniques) has recently been used in predictive models in materials engineering [7-19] because of its specific features: non-linearity, adaptivity (i.e., learning from input parameters), generalization, and model independence (no a priori model is needed).

The present study aims to evaluate NN models for the prediction of the 3, 7, 14, and 28 days compressive strength of LWC mixtures.

2. Materials and methods

2.1. Data (mix design) collection

The main objective of this study is to develop a neural network model to predict the compressive strength of LWC. To this end, it is first necessary to prepare the data and construct a database for training and testing the neural network model.

* Corresponding author. Tel.: +966 500518353. E-mail address: melbisy@gmail.com (M.S. El-Bisy).
doi:10.1016/j.conbuildmat.2008.12.003

The present study covers the use of crushed hollow block as lightweight coarse aggregate in concrete containing silica fume as a supplementary cementitious material at different levels, namely 0%, 5%, 10%, and 15% as an addition to cement. The crushed hollow block aggregate was treated with a solution of silica fume and calcium hydroxide (the concentration of silica fume is 10% and 20%, and that of calcium hydroxide is 1%). The performance of lightweight concrete made with crushed hollow block as coarse aggregate was studied in terms of compressive strength at 3, 7, 14, and 28 days. The main variables of the experimental program are described and reported in Table 1.

Mix proportions of concrete were designed to select suitable materials (cement, fine aggregate, coarse aggregate, water, etc.) and to determine the quantities of these ingredients needed to meet the desired compressive strength. The economy and performance characteristics of the concrete product thus depend on the proportioning of these ingredients. The procedures adopted for mix proportioning are still empirical, in spite of a considerable amount of work on the theoretical aspects of mix proportioning of normal weight and lightweight concretes.

2.2. Material properties

Crushed hollow block was used as lightweight aggregate; in this investigation, the blocks were broken manually. ASTM D-75, ASTM C-136, and ASTM C-29 were used for sampling, grading, unit weight, and fineness modulus of the aggregate. The maximum aggregate size was 10 mm. The fine aggregate conformed to ASTM C-33 requirements.

A locally produced ordinary Portland cement was used in this investigation. The cement contents were about 450, 400, and 350 kg/m3, respectively. The water used for mixing and curing of all concrete mixes and specimens was clean, fresh, and free from impurities.

Silica fume is a fine powder which acts as a microscopic pore filler in concrete. It is based on a chloride-free pozzolanic material consisting of over 90% silicon dioxide. Its addition to the cement mix yields a concrete especially able to cope with the Middle Eastern environment. The physical and chemical properties of the silica fume used in this investigation are reported in Tables 2 and 3, respectively.

The superplasticizer is a chloride-free superplasticizing admixture based on selected sulphonated naphthalene polymers. It is supplied as a brown solution which instantly disperses in water. It disperses the fine particles in the concrete mix, enabling the water content of the concrete to perform more effectively; the very high levels of water reduction possible allow major increases in strength to be obtained. It was added at dosages of 1%, 2%, 3%, and 4% by weight of cement.

The concrete specimens were 150 x 150 x 150 mm (6 x 6 x 6 in.) cubes used to determine the compressive strength of the above-mentioned lightweight aggregate concrete. All specimens were immersed in a water tank for 24 h after casting (after the initial setting time) so that the hydration reaction could proceed over different periods; the curing periods were 3, 7, 14, and 28 days. At the end of each curing period, the specimens were removed from the water tank, left to dry under the sun for at least 6 h, and then tested. Compressive strength tests were performed on a universal testing machine, which measures compressive and splitting bending strength directly according to ASTM C-39.

2.3. Model induction from experimental data

A neural network consists of many simple elements, called neurons, which are grouped together in layers. A neuron has many inputs and a single output, and each input has a coefficient, referred to as a weight, assigned to it. A neuron works in the following way: the inputs are multiplied by the corresponding weights, the products are summed, and the sum is applied to a transfer function to form the output. This can be expressed as

z = f\left( \sum_{i=1}^{n} w_i x_i + d \right)   (1)

where z is the output of the neuron; x_1, x_2, ..., x_n are the input values; w_1, w_2, ..., w_n are the connection weights; d is the bias value; and f is the activation function.

An artificial neural network can be visualized as a set of interconnected neurons arranged in layers. The input layer contains one neuron for each input variable. In a multi-layer network, the output of one layer constitutes the input to the next. The ANN architecture shown in Fig. 1 is a feed-forward network, in which computations proceed along the forward direction only; it has one input layer, two hidden layers, and one output layer. The output obtained from the output neurons constitutes the network output.

The connection weights and bias values are initially chosen as random numbers and then fixed by the results of the training process. Several alternative training processes are available, notably the back propagation (BP) and cascade correlation (CC) schemes. The goal of any training algorithm is to minimize the mean square error (MSE) between the predicted and observed outputs in the training dataset while maintaining good generality of the network. Generality (network performance) is assessed by testing on a new dataset. A reasonably good learning process can be achieved by choosing an appropriate network configuration with regard to the number of hidden layers and their hidden neurons.

2.3.1. Back propagation algorithm

Back propagation learning is an iterated search process which adjusts the weights from the output layer back to the input layer in each run until no further improvement in the MSE is found. The BP algorithm calculates the error, which is then used to adjust the weights first in the output layer and then, distributed backward, in the hidden and input nodes (Fig. 2). This follows the steepest gradient descent principle, with the change in weight directed towards the negative of the error gradient, i.e.

\Delta w_n = \alpha \, \Delta w_{n-1} - \eta \, \frac{\partial E}{\partial w}   (2)

where w is the weight between any two nodes; \Delta w_n and \Delta w_{n-1} are the changes in this weight at iterations n and n-1; \alpha is the momentum factor; and \eta is the learning rate.

After training is completed, the final connection weights are kept fixed, and new input patterns are presented to the network to produce the corresponding output, consistent with the internal representation of the input/output mapping.

2.3.1.1. Topology of the BPNN. Every stage of a NN project requires some trial and error to establish a suitable and stable network. Trial and error may extend to building several networks, stopping and testing the networks at different stages of learning, and initializing the networks with different random weights. Each network must be tested and analyzed, and the most appropriate network chosen for the particular project.

Before deciding on the topology of the network, it is important to select the required number of input and output parameters. The function of the hidden layer neurons is to detect the relationships between network inputs and outputs.
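The neuron computation of Eq. (1) and the momentum-based weight update of Eq. (2) can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: a sigmoid is assumed for the activation f, and the function names are ours.

```python
import math

def neuron_output(x, w, d):
    """Eq. (1): z = f(sum_i w_i * x_i + d), here with a sigmoid activation f."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + d
    return 1.0 / (1.0 + math.exp(-s))

def bp_update(w, grad, prev_dw, lr=0.1, momentum=0.9):
    """Eq. (2): dw_n = alpha * dw_{n-1} - eta * dE/dw, applied element-wise.

    w       -- current weights
    grad    -- dE/dw, the error gradient for each weight
    prev_dw -- the weight changes from the previous iteration (dw_{n-1})
    """
    new_dw = [momentum * pdw - lr * g for pdw, g in zip(prev_dw, grad)]
    return [wi + dwi for wi, dwi in zip(w, new_dw)], new_dw
```

With zero inputs the sigmoid neuron outputs 0.5, and with no momentum history the first update reduces to minus the learning rate times the gradient.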

Table 1
Mix proportions.

Sand replacement (%) | Cement (kg) | Water (kg) | LW coarse aggregate (kg) | LW fine aggregate (kg) | Normal weight fine aggregate (kg) | Silica fume used in solution (kg) | Silica fume used in addition to cement (kg)

a. Cement content = 450 kg/m3
0  | 29.250 | 19.013 | 21.571 | 33.786 | 0      | 1.901 | 2.925
25 | 29.250 | 19.013 | 21.571 | 25.340 | 12.995 | 1.901 | 2.925
50 | 29.250 | 19.013 | 21.571 | 16.893 | 25.989 | 1.901 | 2.925
75 | 29.250 | 19.013 | 21.571 | 8.447  | 38.984 | 1.901 | 2.925

b. Cement content = 400 kg/m3
0  | 26.325 | 17.112 | 21.571 | 33.786 | 0      | 1.711 | 2.632
25 | 26.325 | 17.112 | 21.571 | 25.340 | 12.995 | 1.711 | 2.632
50 | 26.325 | 17.112 | 21.571 | 16.893 | 25.989 | 1.711 | 2.632
75 | 26.325 | 17.112 | 21.571 | 8.447  | 38.984 | 1.711 | 2.632

c. Cement content = 350 kg/m3
0  | 23.034 | 14.972 | 21.571 | 33.786 | 0      | 1.711 | 2.303
25 | 23.034 | 14.972 | 21.571 | 25.340 | 12.995 | 1.711 | 2.303
50 | 23.034 | 14.972 | 21.571 | 16.893 | 25.989 | 1.711 | 2.303
75 | 23.034 | 14.972 | 21.571 | 8.447  | 38.984 | 1.711 | 2.303

Mix A: contains the first five columns; Mix B: contains the first six columns; Mix C: contains all seven columns.

There is no general rule for selecting the number of neurons in a hidden layer. The choice of hidden layer size is mainly problem specific and to some extent depends on the number and quality of the training patterns. The number of neurons in a neural network must be sufficient for the correct modeling of the problem, but should be kept low enough to ensure generalization [20]. Too large a number of hidden neurons encourages over-fitting, i.e. modeling the noise in the data, or modeling the data with unnecessarily complex functions. In general, a smooth interpolation between training data is sought.

Some authors relate the number of neurons to the number of input and output variables and to the number of training patterns [20,21], but these rules cannot be generalized. Some researchers have suggested that the upper bound for the required number of neurons in the hidden layer should be one greater than twice the number of input units; this rule does not guarantee generalization of the network. To design a stable BP network, it is more appropriate to carry out a parametric study, changing the number of neurons in the hidden layer in order to test the stability of the network.

2.3.1.2. Ratio of first to second hidden layer neurons. A single layer with an optimum number of neurons will be sufficient for modeling many practical problems. It has been suggested that for continuous functions a single hidden layer with a sufficient number of neurons is suitable, whilst a second layer may be needed for discontinuous functions [22]. Maier and Dandy [23] have shown that a ratio of 3:1 between first and second hidden layer neurons yields the lowest root mean square error (RMSE) compared with other combinations.

2.3.2. Cascade correlation algorithm

Cascade correlation (CC) is another network type, one that modifies its own architecture as training progresses. A CC network starts with a minimal topology, consisting only of the required input and output units. This net is trained until no further improvement is obtained; the error for each output unit is then computed.

Next, one hidden unit is added to the net in a two-step process. During the first step, a candidate unit is connected to each of the input units, but not to the output units. The weights on the connections from the input units to the candidate unit are adjusted to maximize the correlation between the candidate's output and the residual error at the output units. The residual error is the difference between the target and the computed output, multiplied by the derivative of the output unit's activation function, i.e. the quantity that would be propagated back from the output units in the back propagation algorithm. When this training is completed, the weights are frozen and the candidate unit becomes a hidden unit in the net.

The second step, in which the new unit is added to the net, then begins: the new hidden unit is connected to the output units, with the weights on these connections adjustable, and all connections to the output units are trained.

A second hidden unit is then added using the same process. However, this unit receives an input signal from both the input units and the previous hidden unit. All weights on these connections are adjusted and then frozen. The connections to the output units are then established and trained. The process of adding a new unit, training its weights from the input units and the previously added hidden units, freezing those weights, and then training all connections to the output units, is continued until the error reaches an acceptable level or the maximum number of epochs (or hidden units) is reached.

For a more detailed discussion of the learning algorithms, the interested reader is referred to Hoehfeld and Fahlman [24], Adams and Waugh [25], and Wu [26].

Table 2
Physical properties of the silica fume.

Appearance: fine powder
Bulk density (kg/m3): 300-600
Surface area (m2/kg): 18,000-22,000
Chloride content: nil to BS 5075
Ignition loss (%): <3
Flammability: non-flammable

Table 3
Chemical composition of the silica fume.

Oxide (%): SiO2 >90; Fe2O3 <1.5; Al2O3 <1.0; MgO <1.0; CaO <1.0; Na2O <0.5; K2O <0.5

Fig. 1. Structure of an artificial neural network (input layer, two hidden layers, and output layer).

Fig. 2. Neuron weight adjustments.

2.4. Pre-processing of data

NNs have been shown to be able to process data from a wide variety of sources. They are, however, only able to process data in a certain format, and the way the data are presented to the network affects its learning. A certain amount of data processing is therefore required before presenting the training patterns to the network; in this study a linear scaling was used.

One of the reasons for pre-processing the output data is that a sigmoidal transfer function is usually used within the network. The upper and lower limits of the output from a sigmoid transfer function are generally 1 and 0, respectively. Scaling the inputs to the range [-1, +1] greatly improves the learning speed, as these values fall in the region of the sigmoid transfer function where the output is most sensitive to variations in the input values. It is therefore recommended to normalize the input and output data before presenting them to the network. Scaling can be linear or non-linear, depending on the distribution of the data; the most common functions are linear and logarithmic. A linear normalization onto the range 0 to 1 is

S = (V - V_{min}) / (V_{max} - V_{min})   (3)

where S is the normalized value of the variable V, and V_{min} and V_{max} are its minimum and maximum values, respectively.

2.5. Preparation of the training and validation data sets

ANNs are data intensive. NNs learn the underlying physics of the system of interest from the training samples, which are basically cause-effect samples. Therefore, the number of training samples significantly influences the network's predictive performance [27]. Increasing the number of training samples provides more information about the shape of the solution surface(s) and thus increases the potential level of accuracy that can be achieved by the network; having too few data samples leads to poor generalization. An optimal data set for training is one that fully represents the modeling domain and has the minimum number of repetitive samples (i.e. identical inputs with different outputs). In this study, nearly 70% of the whole data set was randomly chosen for model calibration (training) and the remaining 30% was kept for model validation.

2.6. Model assessment

To examine how close the predictions are to the measured compressive strength of the LWC mixtures, four indices, the mean absolute error (MAE), root mean square error (RMSE), correlation coefficient (R), and coefficient of efficiency (E_f), were employed to evaluate the performance of the neural networks:

MAE = \frac{1}{N} \sum_{i=1}^{N} |P_i - O_i|   (4)

RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (P_i - O_i)^2 }   (5)

R = \frac{ \sum_{i=1}^{N} (P_i - \bar{P})(O_i - \bar{O}) }{ \sqrt{ \sum_{i=1}^{N} (P_i - \bar{P})^2 \, \sum_{i=1}^{N} (O_i - \bar{O})^2 } }   (6)

E_f = \frac{ \sum_{i=1}^{N} (O_i - \bar{O})^2 - \sum_{i=1}^{N} (P_i - O_i)^2 }{ \sum_{i=1}^{N} (O_i - \bar{O})^2 }   (7)

where O_i is the observed compressive strength of the LWC mixtures, P_i is the predicted value, N is the total number of data points in validation, \bar{O} is the mean value of the observations, and \bar{P} is the mean value of the predictions.

Fig. 3. Performance of BP with different numbers of hidden neurons (training and test MSE versus number of hidden neurons).

Fig. 5. Comparison of predicted and actual values of the 28 days compressive strength of the test data for the BPNN (8-14-6-4) model.
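The pre-processing and data-splitting steps of Sections 2.4 and 2.5 can be sketched as follows. This is a minimal illustration under the paper's stated choices (linear scaling per Eq. (3), random 70/30 split), not the authors' code:

```python
import random

def linear_scale(values):
    """Eq. (3): S = (V - Vmin) / (Vmax - Vmin), mapping a variable onto [0, 1]."""
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) for v in values]

def split_70_30(samples, seed=42):
    """Randomly reserve about 70% of the samples for training, 30% for validation."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = round(0.7 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```

For example, linear_scale([10, 20, 30]) returns [0.0, 0.5, 1.0], and split_70_30 on ten samples yields a 7/3 partition.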

3. Analysis of results and discussion

The compressive strength of LWC mixes is greatly influenced by several parameters: sand, lightweight coarse aggregate, lightweight fine aggregate, water/cement ratio, and the percentage of silica fume used in solution and used in addition to cement. Consequently, developing a standard mix design procedure for LWC mixes requires an extensive understanding of the relation between these parameters and the properties of the resulting matrix. The inputs for the LWC models were chosen as sand S, water/cement ratio WC, lightweight fine aggregate LFA, lightweight coarse aggregate LCA, silica fume used in solution SFS, silica fume used in addition to cement SFC, superplasticizer Su, and curing period CP; the outputs are the strength of LWC after 3, 7, 14, and 28 days. The data were collected from only one source, so aggregate quality, cement characteristics, curing conditions, and loading speed were assumed to be constant. After identifying the parameters with a significant effect on the compressive strength of LWC mixtures, the NN-based model was given an input layer with eight neurons (S, WC, LFA, LCA, SFS, SFC, Su, and CP) and an output layer with four neurons (the compressive strength of LWC after 3, 7, 14, and 28 days).

To design a stable BPNN based on the determined number of inputs, a parametric study was carried out by changing the number of neurons in the hidden layers (one hidden layer and two hidden layers) in order to test the stability of the network. Fig. 3 shows the performance of the BP networks with various numbers of neurons in one hidden layer, measured by the MSE. As can be seen from Fig. 3, 14 neurons in the one hidden layer results in a stable and optimum network.

Fig. 6. Scatter of forecasted and experimental values of the 28 days compressive strength of the training data for the BPNN (8-14-6-4) model.
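The parametric study described above, training one-hidden-layer networks of increasing size and keeping the one with the lowest MSE, can be sketched as below. This is a toy reconstruction on synthetic data, not the paper's code: it uses plain batch gradient descent (without the momentum term of Eq. (2)) on a 1-input, 1-output net, and the learning-rate heuristic, epoch count, and candidate sizes are our assumptions.

```python
import math
import random

def train_mlp(data, n_hidden, epochs=3000, seed=1):
    """Train a 1-input / n_hidden-sigmoid / 1-linear-output net by batch
    gradient descent on the mean squared error; return the final MSE."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b1 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    lr = 1.0 / (n_hidden + 1.0)  # crude stability heuristic: shrink step for wider nets
    n = len(data)
    for _ in range(epochs):
        gw1 = [0.0] * n_hidden
        gb1 = [0.0] * n_hidden
        gw2 = [0.0] * n_hidden
        gb2 = 0.0
        for x, t in data:
            h = [1.0 / (1.0 + math.exp(-(w1[j] * x + b1[j]))) for j in range(n_hidden)]
            y = sum(w2[j] * h[j] for j in range(n_hidden)) + b2
            dy = 2.0 * (y - t) / n  # d(MSE)/dy contribution of this sample
            gb2 += dy
            for j in range(n_hidden):
                gw2[j] += dy * h[j]
                dz = dy * w2[j] * h[j] * (1.0 - h[j])  # error propagated to hidden unit j
                gw1[j] += dz * x
                gb1[j] += dz
        for j in range(n_hidden):
            w1[j] -= lr * gw1[j]
            b1[j] -= lr * gb1[j]
            w2[j] -= lr * gw2[j]
        b2 -= lr * gb2
    mse = 0.0
    for x, t in data:
        h = [1.0 / (1.0 + math.exp(-(w1[j] * x + b1[j]))) for j in range(n_hidden)]
        mse += (sum(w2[j] * h[j] for j in range(n_hidden)) + b2 - t) ** 2 / n
    return mse

def pick_hidden_size(data, candidates=(2, 4, 8, 14)):
    """Parametric study: train one network per candidate size, keep the lowest MSE."""
    results = {h: train_mlp(data, h) for h in candidates}
    return min(results, key=results.get), results
```

On a smooth toy target such as t = x^2 every candidate reaches a small MSE; in the paper's setting, a sweep of this kind over the concrete data identified 14 hidden neurons as optimal.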

Fig. 4. Comparison of predicted and actual values of the 28 days compressive strength of the training data for the BPNN (8-14-6-4) model.

Fig. 7. Scatter of forecasted and experimental values of the 28 days compressive strength of the test data for the BPNN (8-14-6-4) model.

Table 4
Performance of NN models.

ANN model | RMSE (MPa) | Max. abs. error (%) | MAE (%) | Min. abs. error (%) | Ef (%) | R

a. Training data
BP (8-14-4)   | 3.875 | 9.721 | 4.02  | 0.15   | 88.87  | 0.964
BP (8-14-6-4) | 2.492 | 5.45  | 2.22  | 0.061  | 91.16  | 0.972
CC (8-6-4)    | 2.363 | 5.057 | 1.927 | 0.0341 | 93.4   | 0.974

b. Test data
BP (8-14-4)   | 3.812 | 9.132 | 3.92  | 0.134  | 88.92  | 0.975
BP (8-14-6-4) | 2.325 | 5.23  | 1.987 | 0.053  | 91.165 | 0.977
CC (8-6-4)    | 2.289 | 5.01  | 1.797 | 0.042  | 93.662 | 0.982
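The four indices reported in Table 4 follow Eqs. (4)-(7). A direct transcription of those formulas (the function names are ours):

```python
import math

def mae(obs, pred):
    """Eq. (4): mean absolute error."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def rmse(obs, pred):
    """Eq. (5): root mean square error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def corr(obs, pred):
    """Eq. (6): correlation coefficient R between predictions and observations."""
    n = len(obs)
    ob, pb = sum(obs) / n, sum(pred) / n
    num = sum((p - pb) * (o - ob) for p, o in zip(pred, obs))
    den = math.sqrt(sum((p - pb) ** 2 for p in pred) * sum((o - ob) ** 2 for o in obs))
    return num / den

def efficiency(obs, pred):
    """Eq. (7): coefficient of efficiency Ef (Nash-Sutcliffe form)."""
    ob = sum(obs) / len(obs)
    ss_tot = sum((o - ob) ** 2 for o in obs)
    ss_err = sum((p - o) ** 2 for p, o in zip(pred, obs))
    return (ss_tot - ss_err) / ss_tot
```

For a perfect prediction MAE and RMSE are 0 while R and Ef are 1; Ef falls below 0 when the model predicts worse than the mean of the observations.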

The compressive strength prediction efficiencies of almost all cases of a three-layer BPNN, with different numbers of neurons in the hidden layer, were found to be more than 86%. Overall, the BPNN with architecture 8-14-4 (eight input neurons, 14 neurons in the hidden layer, and four output neurons) produced the best result among the one-hidden-layer cases in this study.

A further test of whether an additional second hidden layer could improve the network performance was carried out. In this test, the 14 neurons in the first hidden layer were kept fixed and various numbers of neurons in the second hidden layer were used. The BPNN with structure 8-14-6-4 (eight input neurons, 14 neurons in the first hidden layer, six neurons in the second hidden layer, and four output neurons) produced the optimum result for the available inputs. The findings were very similar to those of Maier and Dandy [23]: to yield the lowest RMSE compared with other combinations, the ratio between first and second hidden layer nodes should not exceed 3:1.

For the four-layer BPNN (8-14-6-4) model mentioned above, the results of the trained and tested instances for compressive strength prediction are shown in Figs. 4 and 5. The predicted results are fairly close to the corresponding actual measurements. Figs. 6 and 7 present the scatter diagrams of predicted and experimental values of compressive strength for the trained and tested instances; the predictions again lie fairly close to the corresponding actual values. For the training data, a maximum absolute error of 5.45%, a minimum absolute error of 0.061%, and a mean absolute error of 2.22% were obtained for compressive strength prediction; for the test data, the corresponding values were 5.23%, 0.053%, and 1.987%. Correlation coefficients of 0.972 and 0.977 were obtained for the training and test data, respectively. The results indicate that the 3, 7, 14, and 28 days compressive strength of light weight concrete (LWC) mixtures can be predicted using the four-layer BPNN (8-14-6-4) model from the sand, lightweight coarse aggregate, lightweight fine aggregate, water, cement, and the percentage of silica fume used in solution and used in addition to cement.

The prediction of LWC strength was then repeated using the CC scheme of training, to see whether the training improves by adopting a different algorithm. The results for this network are shown in Table 4. For the training data, a maximum absolute error of 5.057%, a minimum absolute error of 0.0341%, and a mean absolute error of 1.927% were obtained; for the test data, the corresponding values were 5.01%, 0.042%, and 1.797%. Correlation coefficients of 0.974 and 0.982 were obtained for the training and test data, respectively.

The results indicated that the CC neural network has several advantages over the BPNN: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back propagation of error signals through the connections of the network.

4. Conclusions

Concrete is a highly non-linear material, so modeling its behavior is a difficult task. The neural network is a good tool for modeling non-linear systems. In this paper, the application of two NN types, back propagation and cascade correlation, to the estimation of the 3, 7, 14, and 28 days compressive strength of LWC mixtures has been outlined.

The study used samples tested in the laboratory to train and validate the neural networks. The models of the compressive strength of LWC mixtures were developed on the basis of the data collected, with the data set divided into two files: a training file and a test file. Promising results were obtained with both neural network models, but it was observed that the cascade correlation model performs better than the back propagation (8-14-6-4) model. The cascade correlation model also has several advantages over the BPNN (8-14-6-4) model: it learns very quickly, and the network determines its own size and topology.

Materials engineers may use these models to predict the strength of LWC mixtures and avoid conducting costly experimental tests that require specialized equipment and expertise.

References

[1] Short A, Kinniburgh W. Lightweight concrete. 3rd ed. London: Applied Science Publishers; 1978.
[2] ACI Committee 213R-87. Guide for structural lightweight aggregate concrete. ACI manual of concrete practice, part 1. Farmington Hills: American Concrete Institute; 1987.
[3] Topcu IB. Semi-lightweight concretes produced by volcanic slugs. Cem Concr Res 1997;27(1):15-21.
[4] Al-Khaiat H, Haque MN. Effect of initial curing on early strength and physical properties of lightweight concrete. Cem Concr Res 1998;28(6):859-66.
[5] Pooliyadda SP, Dais WPS. Neural networks for predicting properties of concrete with admixtures. Constr Build Mater 2001;15(7):371-9.
[6] Ramezanianpour AA, Davapanah A. Concrete properties estimation and mix design optimization based on neural networks. In: World conference on concrete materials and structures (WCCNS), Kuala Lumpur, Malaysia; 2002.
[7] Kiewicz KJ et al. HPC strength prediction using artificial neural network. J Comput Civil Eng 1995;9(4).
[8] Sergio L, Mauro S. Concrete strength prediction by means of neural network. Constr Build Mater 1997;11(2):93-8.
[9] Yeh IC. Modeling of strength of high performance concrete using artificial neural networks. Cem Concr Res 1998;28(12):1797-808.
[10] Yeh IC. Design of high performance concrete mixture using neural networks and nonlinear programming. J Comput Civil Eng 1999;13(1):36-42.
[11] Oh JW, Lee IW, Kim JT, Lee GW. Application of neural networks for proportioning of concrete mixes. ACI Mater J 1999;96(1):61-7.
[12] Ni HG, Wang JZ. Prediction of compressive strength of concrete by neural networks. Cem Concr Res 2000;30:1245-50.

[13] Dias WPS, Pooliyadda SP. Neural networks for predicting properties of concretes with admixtures. Constr Build Mater 2001;15:371-9.
[14] Mostofi D, Samaee A. HPC strength prediction using artificial neural network. In: Conference proceedings, first international concrete and development conference, Tehran, Iran; 2001.
[15] Lee SC. Prediction of concrete strength using artificial neural network. Eng Struct 2003;25:849-57.
[16] Kim JI, Kim DK, Feng MQ, Yazdani F. Application of neural networks for estimation of concrete strength. J Mater Civil Eng 2004;16(3):257-64.
[17] Nahhas TM, El-Bisy MS. Neural networks for predicting properties of high strength concrete. Al-Azhar Univ, Civil Eng Res Mag 2005;27(2):474-86.
[18] Mirza FAM, El-Bisy MS. Predictions of compressive strength of structural lightweight concrete using artificial neural networks. In: 5th International conference of engineering computational technology, Spain; 2006.
[19] Alshihri M, Azmy AM, El-Bisy MS. Application of neural networks in the prediction of compressive strength of high strength concrete. Al-Azhar Univ, Civil Eng Res Mag 2007;29(2):573-89.
[20] Swingler K. Applying neural networks: a practical guide. New York, London: Academic Press; 1996.
[21] Rogers JL. Simulating structural analysis with neural networks. ASCE J Comput Civil Eng 1994;8(2):252-65.
[22] Goh AT. Some civil engineering applications of neural networks. Proc Inst Civil Eng Struct Build 1994;104:463-9.
[23] Maier HR, Dandy GC. The effect of internal parameters and geometry on the performance of back propagation neural networks: an empirical study. Environ Modell Softw 1998;13(2):193-209.
[24] Hoehfeld M, Fahlman SE. Learning with limited numerical precision using the cascade correlation learning algorithm. IEEE Trans Neural Networks 1992;3(4):602-11.
[25] Adams A, Waugh S. Function evaluation and the cascade correlation architecture. Perth (Western Australia): ICNN; 1995. p. 1126-9.
[26] Wu JK. Neural networks and simulation methods. New York: Marcel Dekker; 1994.
[27] Flood I, Kartam N. Neural networks in civil engineering I: principles and understanding. ASCE J Comput Civil Eng 1994;8(2).
