
ACI MATERIALS JOURNAL    TECHNICAL PAPER

Title no. 96-M21

Prediction of Cement Degree of Hydration Using Artificial Neural Networks

by Adnan A. Basma, Samer A. Barakat, and Salim Al-Oraimi
This paper presents the development of a computer model for the prediction of cement degree of hydration. The model is established by incorporating large experimental data sets using neural networks (NNs) technology. NNs are computational paradigms, primarily based on the structural formation and the knowledge-processing faculties of the human brain. Initially, the degree of hydration was estimated in the laboratory by preparing portland cement paste with water-cement ratios (w/c) ranging from 0.2 to 0.6, curing times from 0.25 to 90 days, and curing temperatures from 3 to 43 C (37 to 109 F). A total of 390 specimens were tested, producing 195 data points divided into five sets. The networks were trained using the data in Set 1, 2, and 3. Once the NNs were deemed fully trained, their performance was verified using Set 4 and 5 of the experimental data, which were not included in the training phase. The results indicated that the NNs are very efficient in predicting concrete degree of hydration with great accuracy using minimal processing of data.

Keywords: curing; hydration; models.

INTRODUCTION
Hydraulic cements are defined as cements that not only
harden by reacting with water, but also form a water-resistant product. Cements derived from calcination of gypsum or
carbonates, such as limestone, are nonhydraulic because
their products of hydration are not resistant to water. Anhydrous portland cement does not bind sand and gravel; it
acquires the adhesive property only when mixed with water.
This is because the chemical reaction of cement with water,
commonly referred to as the hydration of cement, yields
products that possess setting and hardening characteristics.
The compounds of portland cement are nonequilibrium
products of high temperature reactions and are therefore in a
high-energy state. When cement is hydrating, the
compounds react with water to acquire stable low-energy
states, and the process is accompanied by the release of
energy in the form of heat. The heat of cement hydration is
of great significance in concrete technology. In some cases,
it can be a hindrance, while in other cases, it can help. The
reactions between cement compounds and water tend to raise
the temperature of concrete. This rise in temperature generally subjects the freshly hardened concrete to both thermal
and drying shrinkage.
It is currently well-recognized that drying shrinkage of concrete is a property that is affected by several parameters, such as the elastic properties of the paste and aggregate and their shrinkage, as well as the restraint imposed by the aggregate and unhydrated cement.1,2 The shrinkage of paste is also influenced by the relative humidity, drying time, water content, degree of hydration, and admixture.1-3 However, as stated by Almudaiheem,4 the complexity of the subject is such that the effect of the mix design parameters on the equilibrium drying shrinkage of concrete, and the fundamental shrinkage parameters of the paste, are not well understood. Moreover, the uncertainties associated with
parameters affecting the drying shrinkage of concrete make
it difficult to estimate exactly how much shrinkage will
occur. Consequently, this paper will introduce a computer
model based on the neural networks (NNs) technology to
assess one particular variable that affects concrete shrinkage,
namely the degree of hydration.
RESEARCH SIGNIFICANCE
This work will demonstrate that the NNs modeling of
concrete degree of hydration is effective, accurate, and
simple to implement. Only minor processing of the data is
necessary to obtain the degree of hydration for a given set of
easily accessible conditions with minimal computation time
(< 5 msec). This expediency and relatively high accuracy are the most significant advantages of using NNs. The experimental time needed to evaluate the degree of hydration is generally several orders of magnitude greater.
THEORETICAL BACKGROUND
To evaluate the drying shrinkage of concrete, Pickett2
derived a model from the elastic theory based on mix composition and material properties. This model is expressed by

εc/εp = (1 − Va)^α    (1)

where εc/εp is the ratio of the shrinkage of concrete to that of the paste, Va is the volume fraction of the aggregate, and α is expressed as

α = 3(1 − νc) / [1 + νc + 2(1 − 2νa)(Ec/Ea)]    (2)

with ν and E being, respectively, Poisson's ratio and modulus of elasticity, while subscripts c and a stand for concrete and aggregate, respectively. Pickett2 found that a constant value of α = 1.7 showed good agreement with shrinkage of paste, even though he realized that it should vary. Almudaiheem,4 on the other hand, introduced a modified model in which he replaced Va, νa, and Ea by VR, νR, and ER, respectively, in Eq. (1) and (2), where the subscript R
ACI Materials Journal, V. 96, No. 2, March-April 1999.
Received June 23, 1997, and reviewed under Institute publication policies. Copyright © 1999, American Concrete Institute. All rights reserved, including the making of copies unless permission is obtained from the copyright proprietors. Pertinent discussion including author's closure, if any, will be published in the January-February 2000 ACI Materials Journal if the discussion is received by October 1, 1999.


Adnan A. Basma is an associate professor of civil engineering at Sultan Qaboos University. He received his BS, MS, and PhD from the University of Mississippi in 1980, 1981, and 1985, respectively. His research interests include reliability-based studies and neural networks as applied to structural/geotechnical engineering.
Samer A. Barakat is an assistant professor of civil engineering at Jordan University
of Science and Technology (JUST). He received his MS from JUST in 1989, and his
PhD from the University of Colorado, Boulder, Colo., in 1994. His research interests
include reliability-based structural optimization, earthquake structural resistance,
and composite materials.
Salim Al-Oraimi is an assistant professor of civil engineering and Dean of Student
Affairs at Sultan Qaboos University. He received his PhD from the University of
Wales, UK. His research interests include fiber reinforced concrete, fracture mechanics,
and computer applications to structural engineering.

Fig. 1—Schematic diagram of single neuron.

Fig. 2—Schematic diagram of typical neural network structure with two hidden layers.

denotes the restraining phase. The volume of the restraining phase can be calculated by4

VR = [(1 − α)/(1 + ρc(w/c))](1 − Va) + Va    (3)

where ρc is the density of cement, w/c is the water-cement ratio, and α is the degree of hydration. In Eq. (3), the degree of hydration is the only parameter that needs to be estimated. This is done here by conducting an experimental program, as will be seen in a later section. Once the experimental data are obtained, the NNs are first trained for a specific paradigm using part of the data. The remainder of the data, to which the NNs were not initially exposed, is then used to test the accuracy of the developed model. A general description of the NNs and the experimental program follows.
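As a numerical illustration, the three equations above can be combined in a short script. Note that the exact placement of the (1 − α) factor in Eq. (3) is reconstructed here from the surrounding definitions (unhydrated cement as a fraction of the paste volume), and all input values are illustrative, not taken from the paper.

```python
# Sketch of Eq. (1)-(3): concrete-to-paste shrinkage ratio and restraining
# volume. Symbols follow the text; numerical inputs are illustrative only.

def alpha_pickett(nu_c, nu_a, E_c, E_a):
    # Eq. (2): exponent in Pickett's shrinkage model
    return 3.0 * (1.0 - nu_c) / (1.0 + nu_c + 2.0 * (1.0 - 2.0 * nu_a) * E_c / E_a)

def shrinkage_ratio(V_a, alpha):
    # Eq. (1): ratio of concrete shrinkage to paste shrinkage
    return (1.0 - V_a) ** alpha

def restraining_volume(V_a, w_c, rho_c, alpha_h):
    # Eq. (3) (reconstructed form): aggregate plus unhydrated cement,
    # where alpha_h is the degree of hydration and rho_c the cement
    # density relative to water (an assumption of this sketch)
    return (1.0 - alpha_h) / (1.0 + rho_c * w_c) * (1.0 - V_a) + V_a

# Illustrative values (not from the paper)
a = alpha_pickett(nu_c=0.20, nu_a=0.25, E_c=30.0, E_a=70.0)
print(a)
print(shrinkage_ratio(V_a=0.70, alpha=a))
print(restraining_volume(V_a=0.70, w_c=0.4, rho_c=3.15, alpha_h=0.8))
```

At full hydration (α = 1) the restraining phase reduces to the aggregate alone, which is a quick sanity check on the reconstructed form.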
DEFINITION OF NEURAL NETWORKS (NNs)
NNs could be defined as information-processing structures that consist of many simple processing elements, i.e., neurons, with dense parallel interconnections. The connections between the neurons are called synapses. Each neuron receives weighted inputs from many other neurons and communicates its output to many other neurons by using an activation function f (Fig. 1). Thus, information is represented by massive cross-weighted neuron interconnections. NNs may be single-layered or multilayered. The single-layer
NNs are those whose processing units take their inputs from outside the network and send their outputs outside the network; otherwise, the NNs are considered multilayered.5,6
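The single-neuron operation of Fig. 1 (weighted inputs, a bias, and an activation function f) can be sketched in a few lines. The log-sigmoidal activation and the example numbers are illustrative choices, not the paper's software.

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus bias, passed through a log-sigmoidal
    # activation function f, as in Fig. 1
    n = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-n))   # logsig maps any n into (0, 1)

print(neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=0.2))
```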
The basic methodology of NNs consists of three stages:
network training, testing, and implementation. The weights
are adjustable through the programming training process,
while the training effect is called learning. The learning
process is done by giving weights and biases computed from
a set of training data or by adjusting the weights according to
a certain condition. The initial weights and biases joining nodes of the input layer, hidden layers, and output layer are commonly assigned randomly. The weights and biases are then adjusted iteratively so that the output of the network matches the required data values. As the input data are passed through the hidden layers, a sigmoidal activation function is generally used. During the training procedure, the data are
selected uniformly. A specific pass is completed when all
data sets have been processed. Generally, several passes are
required to attain a desired level of prediction accuracy. The
final sets of weights and biases comprise the long-term
memory, or synapses of the respective events. Consequently,
learning corresponds to determining the weights and biases
associated with the connections in the networks. The NNs
model thus determines the structure (adaptively, incrementally, and automatically) when the input data is presented.
Currently, there are several learning algorithms available in
the literature. However, the most commonly used is the
back-propagation paradigm.7 This paradigm works as
follows. The data consist of input-output pairs. The network
will produce an output vector A(:, j) for each input vector X(:, j).
Thus, application of R input vectors would produce an output
matrix A with S rows and R columns. In the case of multilayered NNs, layers whose output becomes the network
output are called output layers. All other layers (with the
exception of the input layer) are termed hidden layers. A
typical three-layer network is shown in Fig. 2. This network
has R inputs (R = 3), S1 neurons in hidden Layer 1 (S1 = 3)
and S2 neurons in hidden Layer 2 (S2 = 1) with a constant
input B fed as biases for each neuron. Observe that the
outputs of each intermediate layer are the inputs of the
following layers. Therefore, if X is the input and A1 is the
output of hidden Layer 1, then A1 will be the input and A2

Fig. 3—Log-sigmoidal transformation function.

Fig. 4—Tan-sigmoidal transformation function.
the output of hidden Layer 2. This implies that A2 = Y is the
final output of the network and is calculated by
Y = f2{W2[f1(W1X + B1)] + B2}    (4)

where f1 and f2 are the transformation functions selected to best suit the data used.
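Eq. (4) amounts to two matrix-vector products separated by activation functions. A minimal NumPy sketch follows, with placeholder weights; pairing tansig with f1 and logsig with f2 matches the architecture adopted later in the paper, but any sigmoidal pair could be substituted.

```python
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

def forward(X, W1, B1, W2, B2):
    # Eq. (4): Y = f2{ W2 [ f1(W1 X + B1) ] + B2 }
    A1 = np.tanh(W1 @ X + B1)     # hidden Layer 1 output (f1 = tansig)
    return logsig(W2 @ A1 + B2)   # hidden Layer 2, i.e. network output (f2 = logsig)

# Illustrative 3-3-1 shapes matching Fig. 2 (values are placeholders)
rng = np.random.default_rng(0)
X = rng.random(3)
W1, B1 = rng.random((3, 3)), rng.random(3)
W2, B2 = rng.random((1, 3)), rng.random(1)
print(forward(X, W1, B1, W2, B2))
```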
The discrepancy, or delta, between the actual and desired
behavior of the network is determined by subtracting the
output vector A from the target or desired vector T. Under the
delta rule, the post-trial change in weight ΔWij of a connection between the input and output is estimated by

ΔWij = η(Tj − Aj)Xi    (5)

where η represents the trial-independent learning rate. The weights are continuously adjusted over each training epoch (or iteration) until the difference between the output and target values reaches a desired limit or until a preset number of epochs or iterations is completed.
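The delta rule of Eq. (5) can be sketched for a single linear layer; full back-propagation generalizes this update through the hidden layers. The data and learning rate below are illustrative, not the paper's values.

```python
# Minimal sketch of the delta rule, Eq. (5): dW_ij = eta * (T_j - A_j) * X_i,
# shown for one linear layer (back-propagation extends this through hidden
# layers). Training data and learning rate are illustrative.

def train_epoch(W, data, eta=0.02):
    # data: list of (input vector X, target vector T) pairs
    for X, T in data:
        A = [sum(W[j][i] * X[i] for i in range(len(X))) for j in range(len(T))]
        for j in range(len(T)):
            for i in range(len(X)):
                W[j][i] += eta * (T[j] - A[j]) * X[i]   # Eq. (5)
    return W

W = [[0.0, 0.0]]
data = [([1.0, 0.0], [1.0]), ([0.0, 1.0], [0.5])]
for _ in range(200):
    train_epoch(W, data)
print(W)  # W[0] approaches [1.0, 0.5] as the epochs accumulate
```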
As the name implies, the concept behind the back-propagation is to propagate the errors back through the system
based on observed discrepancies. As the weights are
adjusted, they are received by the processing elements or
neurons to produce an output through an activation or transformation function. Several activation functions are
currently available. However, the two most commonly used
are the log-sigmoidal and tan-sigmoidal transformations.
The log-sigmoidal function receives inputs (which may have
any value between minus and plus infinity) and maps them
into the range of 0 to +1. The tan-sigmoidal function, on the other hand, maps inputs into the range −1 to +1. Such functions prevent the signals from growing without bound as they are successively summed and passed on to other neurons. Furthermore, they introduce nonlinearity into the network, without which the network output would be limited to linear combinations of the inputs. The respective graphical representations of the log-sigmoidal and tan-sigmoidal activation functions are shown in Fig. 3 and 4.
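Both activation functions can be written directly: logsig is the standard logistic function, and tansig is equivalent to the hyperbolic tangent.

```python
import math

def logsig(n):
    # Maps (-inf, +inf) into (0, 1)
    return 1.0 / (1.0 + math.exp(-n))

def tansig(n):
    # Maps (-inf, +inf) into (-1, +1); equivalent to tanh(n)
    return math.tanh(n)

for n in (-5.0, 0.0, 5.0):
    print(n, logsig(n), tansig(n))
```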
EXPERIMENTAL WORK TO DETERMINE DEGREE
OF HYDRATION
The degree of hydration α was estimated in the laboratory by conducting five sets of tests. In Set 1, 2, and 3, portland cement paste was prepared, respectively, at water-cement ratios (w/c) of 0.3, 0.4, and 0.6, while w/c in Set 4 and 5 was, respectively, 0.2 and 0.5. For all five sets, several specimens were prepared and cured for 0.25, 0.5, 1, 3, 7, 14, 28, and 90 days at curing temperatures of 3, 13, 23, 33, and 43 C (37, 55, 73, 91, and 109 F), with the exception of the 0.25-day specimens, which were not cured at 3 C. For each test combination, two specimens were prepared and their results averaged to assess α. Consequently, this laboratory work used a total of 390 specimens. The nonevaporable water content wn was used as a measure to evaluate the degree of hydration, which was estimated by the following

α = wn/wnu    (6)

where wnu is the nonevaporable water content at complete hydration, taken equal to 0.23 g/g of cement.
To determine the nonevaporable water content at a specific age, small specimens (about 1.5 mm thick) were cut from the original sample (50-mm cube) and immersed in methanol to stop hydration. After about 7 days in methanol, the specimens were oven-dried at 105 ± 3 C (221 ± 5 F), then ground into fine particles, and a weighed sample was ignited at 1000 C (1832 F). The nonevaporable water content wn per g of cement was determined from the following equation

wn = (w1 − w2)/w2    (7)
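Eq. (6) and (7) reduce to two one-line functions. The specimen weights below are hypothetical; only the value wnu = 0.23 g/g of cement comes from the text.

```python
# Sketch of Eq. (6) and (7): degree of hydration from the nonevaporable
# water content. w1 = oven-dried weight, w2 = weight after ignition at
# 1000 C; w_nu = 0.23 g/g of cement at complete hydration (from the text).

def nonevaporable_water(w1, w2):
    return (w1 - w2) / w2                       # Eq. (7), per g of ignited cement

def degree_of_hydration(w1, w2, w_nu=0.23):
    return nonevaporable_water(w1, w2) / w_nu   # Eq. (6)

# Hypothetical weights in grams (not measured values from the paper)
print(degree_of_hydration(w1=11.8, w2=10.0))    # wn = 0.18 -> alpha ~ 0.78
```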


Fig. 5—Measured degree of hydration with w/c = 0.3 for Set 1 (deg F = 1.8 C + 32).

Fig. 6—Measured degree of hydration with w/c = 0.4 for Set 2 (deg F = 1.8 C + 32).

Fig. 7—Measured degree of hydration with w/c = 0.6 for Set 3 (deg F = 1.8 C + 32).

Fig. 8—Measured degree of hydration with w/c = 0.2 for Set 4 (deg F = 1.8 C + 32).

where w1 is the oven-dried weight, and w2 is the weight after ignition. The degree of hydration was then calculated by Eq. (6). Fig. 5 to 9 show, respectively, the average measured values of α (from two tested specimens) for Set 1 to 5. As can be noted from these figures, α increases with temperature and curing time, while w/c has a minor effect. In general, however, the degree of hydration was found to remain almost constant for specimens cured for 28 and 90 days.

NEURAL NETWORKS SOLUTION
As stated earlier, five sets of experimental data were obtained. Set 1, 2, and 3 (shown respectively in Fig. 5, 6, and 7), with 117 points, were initially used to train the NNs and produce a model from a certain paradigm recognition. Consequently, this model is expected to map and transform the input or causative variable space (curing period, temperature, and w/c) into the output or target variable space (degree of hydration α). Once this is achieved, the data in these sets are thus assumed to have been combined and encased in the derived model to the degree that they can be replaced by the model.
The software was used to perform the necessary computations. The complete listing of the subroutine used can be seen in the Appendix.* For the purpose of this research work, several single and multilayered NNs with various activation functions were used to determine the most appropriate model

*The Appendix is available in xerographic or similar form from ACI headquarters, where it will be kept permanently on file, at a charge equal to the cost of reproduction plus handling at time of request.


Table 1(a)—Connection weights and biases for hidden Layer 1

Hidden neuron |    Tc    |    tc    |   w/c    |   Bias
      1       | +0.45498 | +0.83475 | +0.42987 | -0.03594
      2       | +0.63014 | +0.32891 | +0.65910 | -0.20280
      3       | +0.84907 | +0.29735 | +0.32447 | -0.01644

Table 1(b)—Connection weights and biases for hidden Layer 2

Hidden neuron |  HD1-N1  |  HD1-N2  |  HD1-N3  |   Bias
      1       | 1.22227  | 0.05960  | 2.46489  | 0.06590

HD = hidden layer; N = neuron.

Table 2—Regression of predicted α-values by neural networks on target experimental values

Neural network stage | No. of data points | Regression model*  |  r2   | 95 percent confidence interval
Training             |        117         | Y = 0.973X + 0.011 | 0.973 | ±0.081
Testing              |         78         | Y = 0.877X + 0.049 | 0.877 | ±0.130
Training and testing |        195         | Y = 0.929X + 0.029 | 0.929 | ±0.107

*Regression analysis performed on data in Fig. 11.
Note: X = α predicted by neural networks; Y = experimental value of α.

Fig. 9—Measured degree of hydration with w/c = 0.5 for Set 5 (deg F = 1.8 C + 32).

Fig. 10—Variation of sum-squared error and learning rate with training iterations.

to predict the degree of hydration. The best suited NNs architecture was:
1. Three-layer back-propagation with adaptive learning rate and momentum;
2. Three input neurons (w/c, curing period tc, and temperature Tc);
3. Two hidden layers, with three neurons in the first and one neuron in the second hidden layer;
4. One output neuron (degree of hydration α);
5. Tan-sigmoidal function in hidden Layer 1 (f1); and
6. Log-sigmoidal function in hidden Layer 2 (f2).
This architecture can be seen schematically in Fig. 2.
The training stage and the associated NNs analyses were carried out with an adaptive momentum factor μ and learning rate η. The optimal values of these latter parameters were determined as μ = 0.95 and η = 0.02. Training was carried out for 35,000 epochs or until the average sum-square error over all training epochs was minimized. Training time on a personal computer was less than 15 min. The progress of the network training was monitored by observing the learning rate and the output sum-square error after each training epoch. Figure 10 shows the training progress of the final network. This figure was developed using sample-moving averages of the network output errors obtained at the end of each training epoch. The asymptotic shape of the curve implies that the network learning was notably complete by the end of the training. Furthermore, this figure indicates that approximately 20,000 passes were required for convergence. The final weights and biases produced by the NNs are listed in Table 1(a) and (b). The accuracy of the NNs model obtained in the training stage was tested versus Sets 4 and 5 of the experimental data (Fig. 8 and 9).
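A minimal sketch of a 3-3-1 network of this kind (tansig in hidden Layer 1, logsig in hidden Layer 2) is shown below. The weights are placeholders, the trained values being those of Table 1(a) and (b), and any input scaling used during training would also have to be applied before calling the function.

```python
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

def predict_alpha(Tc, tc, wc, W1, B1, W2, B2):
    # 3 inputs -> 3 tansig neurons -> 1 logsig neuron, per the listed
    # architecture; logsig keeps the predicted degree of hydration in (0, 1)
    X = np.array([Tc, tc, wc])
    A1 = np.tanh(W1 @ X + B1)
    return float(logsig(W2 @ A1 + B2))

# Placeholder weights (substitute the trained values of Table 1(a) and (b))
W1, B1 = np.ones((3, 3)) * 0.5, np.zeros(3)
W2, B2 = np.ones((1, 3)) * 0.5, np.zeros(1)
print(predict_alpha(Tc=23.0, tc=28.0, wc=0.4, W1=W1, B1=B1, W2=W2, B2=B2))
```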

PREDICTION ACCURACY OF NEURAL NETWORKS
To check the accuracy of the NNs solution, the final adopted model was called upon to recall the data used in the training stage, i.e., data Set 1 to 3. Furthermore, the prediction accuracy of the networks was tested against the data in Set 4 and 5. It should be stressed that all of the data in these latter sets were initially withheld from the NNs. In a similar fashion as in the recall test, the input values from these sets of data were presented to the model to perform the necessary calculations and produce corresponding outputs. Figure 11 shows a comparison between the experimental values of α and the recalled and predicted values by the NNs. The closeness of the points to the equality line serves to indicate the validity of the NNs model. A regression analysis of the points in this figure was performed, and the results are listed in Table 2. The high


values of the correlation coefficient further substantiate the NNs accuracy.

Fig. 11—Recalled and predicted α values by neural networks versus experimental data.

SUMMARY AND CONCLUSIONS
The work presented herein uses NNs technology to model and predict concrete degree of hydration. By using such a technology, the final model is said to have encapsulated the data sets in such a way that very little prior assumption about the relationship between the data attributes is needed. As long as the important parameters are present in the data analyzed, the training process will imprint the most fundamental relationship(s) on the model's long-term memory. Any combination of the data attributes will invoke the appropriate reaction from the memory. The final proof of applicability of the NNs model is provided through its ability to predict output values from data that it had never encountered.
Based on the results of this investigation, it was concluded that the performance of the NNs was superior. The model, with two hidden layers (three neurons in the first and one in the second) and the curing period, temperature, and w/c as input variables, was found to be very successful in predicting the degree of hydration α.

REFERENCES


1. Carlson, R. W., "Drying Shrinkage of Concrete as Affected by Many Factors," Proceedings, ASTM, V. 38, Part 2, 1939, pp. 419-437.
2. Pickett, G., "Effect of Aggregate on Shrinkage of Concrete and Hypothesis Concerning Shrinkage," ACI JOURNAL, Proceedings V. 52, No. 5, 1956, pp. 581-590.
3. Pihlajavaara, S. E., "A Review of Some of the Main Results of a Research on Aging Phenomena of Concrete—Effect of Moisture on Concrete," Cement and Concrete Research, V. 4, No. 1, 1974, pp. 761-771.
4. Almudaiheem, J. A., "An Improved Model to Predict the Ultimate Drying Shrinkage of Concrete," Magazine of Concrete Research, V. 44, No. 159, 1992, pp. 81-85.
5. Garson, G. D., "Interpreting Neural-Network Connection Weights," AI Expert, V. 6, No. 7, 1991, pp. 47-51.
6. Simpson, P. K., Artificial Neural System, Pergamon Press, Inc., New York, 1990, 659 pp.
7. McClelland, J. L., and Rumelhart, D. E., Explorations in Parallel Distributed Processing, MIT Press, Cambridge, Mass., 1988.

