
CHAPTER 2

REVIEW OF RELATED LITERATURE

2.1 BASIC CONCEPTS

2.1.1 Load-Deflection Behavior Prediction

Predicting the deflection of a reinforced concrete slab is complex because of the wide variety
of factors and parameters that affect its load-deflection behavior. The deflection of an RC slab
is always checked during design to assess the possibility of excessive deflections occurring
in a structure, which can lead to cracking and reduce the overall structural integrity.
In the academic literature, many publications and design codes introduce short- and
long-term methods to approximate deflections in structures. Some of this experience
can be adopted directly, but some must be modified to improve its accuracy.
Most importantly, deflection prediction models should be improved periodically to keep
pace with material and environmental changes.

Several techniques have been identified for load-deflection behavior prediction. These
techniques can be classified into empirical hand calculation methods and methods based
on artificial neural network (ANN) approaches. Empirical hand calculation may be the
fastest way to approximate deflection; in general, however, ANN methods trained on
observed data predict deflection with greater accuracy.

2.1.2 Empirical Hand Calculation Method

Deflection calculations for slabs are complex, even when linear elastic behavior is assumed.
The serviceability limit state often governs the design of slender reinforced concrete
members. A reinforced concrete slab is thin relative to its span length, and thus
prone to serviceability problems. The deflections of concrete members are closely linked
to the extent of cracking and to the degree to which the cracking capacity is exceeded. The
point at which cracking occurs is determined by the moment induced in
the slab and the tensile strength of the concrete. Tension reinforcement has a minor effect on
the deflection of heavily reinforced members but is highly significant in lightly
reinforced members such as slabs. The empirical hand calculation method follows the
procedures specified in structural codes to compute approximate deflections.
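As one concrete example of such a code procedure, ACI 318 uses Branson's effective moment of inertia to interpolate between the gross and cracked section properties for deflection checks. The sketch below is a minimal illustration; the function name and the numerical values are our own, not taken from a specific design problem.

```python
def effective_moment_of_inertia(m_cr, m_a, i_g, i_cr):
    """Branson's equation: Ie = (Mcr/Ma)^3 * Ig + (1 - (Mcr/Ma)^3) * Icr <= Ig.

    m_cr: cracking moment, m_a: applied service moment,
    i_g: gross (uncracked) moment of inertia, i_cr: cracked moment of inertia.
    All quantities are assumed to be in consistent units.
    """
    if m_a <= m_cr:
        return i_g  # section remains uncracked under the applied moment
    ratio = (m_cr / m_a) ** 3
    return min(ratio * i_g + (1.0 - ratio) * i_cr, i_g)

# illustrative numbers only
i_e = effective_moment_of_inertia(m_cr=40.0, m_a=80.0, i_g=1.0e9, i_cr=3.0e8)
```

The computed Ie then replaces Ig in the usual elastic deflection formulas, which is why the approach remains a hand calculation.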

2.1.3 ANN Methods

ANN methods are widely used for prediction in structural engineering because they provide
rapid solutions to problems that are difficult to evaluate and analyze by hand. The technique
was adopted because previous studies found it to be a quick and reliable alternative to
lengthy experimental testing or detailed calculations. The advantage of an ANN is that it
learns the relationship between the inputs and outputs through a non-statistical approach:
ANN-based methodologies do not require any predefined mathematical model. If the same or
similar input patterns are encountered, the ANN model will produce an output with
minimal error. In engineering, ANNs serve two main functions: as pattern classifiers and as
non-linear adaptive filters. As a model of the human brain, a neural network is an adaptive
system, meaning that its variables are changed during operation as it is deployed to solve
the problem at hand; this is called the training phase. An artificial neural network is
developed with a systematic step-by-step procedure governed by a standard commonly
known as the learning rule. The input and output training data are important because they
carry the information needed to identify the significant features of the data. An ANN is a
system that receives an input, processes the data, and provides an output. An error is
computed as the difference between the desired response and the actual system output; a
model trained on a large dataset will generally have a smaller error than one trained on a
small dataset. The error is fed back to the system, which adjusts its variables in a
systematic way (the learning rule). The process is repeated until the output is acceptable.
Note that the performance of the system relies heavily on the training data.

2.2 CONCEPTS RELATED TO THE METHODOLOGIES

2.2.1 Artificial Neural Network

Neural networks are information-processing systems whose architecture is designed based
on how the brain works. They store data among individual neurons and process it in a
parallel manner without needing algorithmic instructions, unlike conventional programs,
which operate in a digital and serial manner. Neural networks are trained to extract the
relationships between the input parameters and their resulting conclusions (Tully,
1997).

Figure 2.1: Simplified Neural Network Model (Tully 1997), showing input, hidden, and output layers of neurons linked by connection weights.

There are three basic components in each network illustrated in the figure:
1. Input neurons, which represent the parameters of the problem,
2. Hidden layers, which connect and weigh the connection of input neurons and
output neurons,
3. Output neurons, which represent the output of the problem.
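The three components above can be sketched as a single forward pass through such a network. This is a minimal illustration assuming a sigmoid activation and hand-picked weights; it is not the specific model described by Tully.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_weights, output_weights):
    # hidden layer: each neuron weighs all input neurons, then applies the activation
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # output layer: weighted sum of the hidden activations
    return [sum(w * h for w, h in zip(ws, hidden)) for ws in output_weights]

# two input neurons -> two hidden neurons -> one output neuron
y = forward([0.5, -1.0], [[0.1, 0.4], [-0.3, 0.2]], [[0.7, -0.5]])
```

The connection weights here are fixed; in practice they would be adjusted by a training algorithm, as discussed in the following subsection.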

Neural networks encompass a variety of tools. Among these, they have been reported as
self-learning models that capture the cause-effect relationships of a particular problem
without exploring the underlying rationale used to model the behavior (Hegazy et al.,
1996). Depending on the complexity of the model, a neural network can be composed of a
single layer or many layers. Multi-layer neural networks may contain one or more middle
layers. These middle, or hidden, layers consist of neurons that have no direct connection
to either the inputs or the outputs; rather, they are used to refine training by adjusting
connection weights. These connection weights are applied at the links connecting the
inputs to the outputs, and they represent the effect of the inputs on each output (Tully,
1997). Neural networks trained using unsupervised training, in which correct solutions
are not provided, are usually capable of self-organization and independent classification
of the input data; that is, the network itself must decide how it will classify or partition
the input data (Caudill and Butler, 1990).

2.2.1.1 Feed-Forward Back Propagation

Back-propagation networks are trained with an algorithm in which the patterns recognized
by the network are passed through the layers, so that information flows in one direction at a
time, either forward or backward. These networks require at least three layers in order to
work correctly, and training is conducted in a supervised manner (Tully, 1997).
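A supervised back-propagation loop for a small three-layer network can be sketched as follows. The target function (logical AND), layer sizes, learning rate, and epoch count are illustrative assumptions, not values from the cited work.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5, seed=0):
    """Supervised back-propagation on a 2-input, 2-hidden, 1-output sigmoid net."""
    rnd = random.Random(seed)
    w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]  # + bias
    w_o = [rnd.uniform(-0.5, 0.5) for _ in range(3)]                      # + bias

    def predict(x):
        xb = list(x) + [1.0]                      # append bias input
        h = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in w_h] + [1.0]
        return sigmoid(sum(w * v for w, v in zip(w_o, h))), h, xb

    for _ in range(epochs):
        for x, target in samples:
            y, h, xb = predict(x)
            # error = desired response minus actual output, fed back through the net
            d_o = (target - y) * y * (1.0 - y)
            d_h = [d_o * w_o[j] * h[j] * (1.0 - h[j]) for j in range(2)]
            w_o = [w + lr * d_o * v for w, v in zip(w_o, h)]
            for j in range(2):
                w_h[j] = [w + lr * d_h[j] * v for w, v in zip(w_h[j], xb)]
    return lambda x: predict(x)[0]

# train on logical AND, an easy linearly separable target
net = train([([0, 0], 0.0), ([0, 1], 0.0), ([1, 0], 0.0), ([1, 1], 1.0)])
```

The repeated feedback of the error into the weight updates is exactly the "learning rule" described above.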

2.2.2 Empirical Mode Decomposition

Empirical Mode Decomposition (EMD) adaptively and locally decomposes any non-
stationary time series into a sum of Intrinsic Mode Functions (IMFs), which represent
zero-mean amplitude- and frequency-modulated components. EMD is a fully data-
driven, unsupervised signal decomposition and does not need any a priori defined basis
system (Zeiler et al., 2010).

According to Huang et al., these IMFs satisfy two conditions: (1) in the whole data set, the
number of extrema and the number of zero crossings must either be equal or differ by at
most one; and (2) at any point, the mean value of the envelope defined by the local maxima
and the envelope defined by the local minima is zero.
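The first condition is straightforward to check numerically; the second requires the envelope construction described below, so the sketch here tests only the extrema/zero-crossing count. The helper names are ours, not from the cited papers.

```python
import math

def count_zero_crossings(x):
    # count sign changes between consecutive samples
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0)

def count_extrema(x):
    # count sign changes of the first difference (local maxima and minima)
    d = [b - a for a, b in zip(x, x[1:])]
    return sum(1 for a, b in zip(d, d[1:]) if a * b < 0)

def satisfies_condition_1(x):
    # numbers of extrema and zero crossings equal or differ by at most one
    return abs(count_extrema(x) - count_zero_crossings(x)) <= 1

clean = [math.sin(0.3 * i) for i in range(100)]   # oscillates about zero
offset = [v + 2.0 for v in clean]                 # same shape, shifted upward
```

The shifted signal keeps all its extrema but loses every zero crossing, so it fails the condition; this is precisely the kind of component that sifting removes.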

Having defined the concept of an intrinsic mode function, the vibration signal must be
divided into its family of intrinsic modes (Pines et al., 2006). First, the signal must have
at least two extrema, one maximum and one minimum; second, the characteristic time
scale is defined by the time lapse between the extrema; and third, if the data were totally
devoid of extrema but contained inflection points, the signal could be differentiated one
or more times to reveal the extrema, and the final result obtained by integrating all
components. Thus, essential to applying this method is obtaining the intrinsic oscillatory
modes of a vibration signal. A schematic of the process used to identify the intrinsic
modes of a signal is summarized in Fig. 2.2. By virtue of the IMF definition, the
decomposition method can simply use the envelopes defined by the local maxima and
minima separately. Once the extrema are identified, all the local maxima are connected by
a cubic spline function to define the upper envelope curve, shown in Fig. 2.2. The
procedure is repeated for the local minima to produce the lower envelope. The upper and
lower envelopes should cover all the data between them. The mean of the upper and lower
envelope curves is designated as

𝑋𝑚1 = (𝑋𝑚𝑎𝑥 + 𝑋𝑚𝑖𝑛) / 2
Subtracting the mean from the original signal generates the first estimate of an intrinsic
mode function.

𝑋(𝑡) − 𝑋𝑚1 (𝑡) = ℎ1


This operation is called sifting and leads to the third curve displayed in Fig. 2.2. While
ℎ1 represents a first estimate of the first intrinsic mode function, further sifting is usually
needed; performing successive siftings yields a better estimate. The next sifting treats
ℎ1 as the signal, so that the next estimate becomes

ℎ1 − 𝑋𝑚11 (𝑡) = ℎ11

Figure 2.2: Sifting Process to Obtain Intrinsic Mode Functions
After 𝑘 siftings, we define the first intrinsic mode function to be

𝑐1(𝑡) = ℎ1𝑘
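A single sifting step, as defined by the equations above, can be sketched as follows. To keep the example dependency-free we use piecewise-linear envelopes with naive endpoint handling, whereas the method as described uses cubic splines; the helper names are ours.

```python
import math

def local_maxima(x):
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

def local_minima(x):
    return [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]

def envelope(x, knots):
    """Piecewise-linear curve through the extrema (a cubic spline in the real method)."""
    pts = [0] + knots + [len(x) - 1]   # naive: pin the envelope to the endpoints
    env = [0.0] * len(x)
    for a, b in zip(pts, pts[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = x[a] + t * (x[b] - x[a])
    return env

def sift_once(x):
    """h1 = X - Xm1, where Xm1 is the mean of the upper and lower envelopes."""
    upper = envelope(x, local_maxima(x))
    lower = envelope(x, local_minima(x))
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, upper, lower)]

signal = [math.sin(0.3 * i) + 2.0 for i in range(100)]  # oscillation riding on an offset
h1 = sift_once(signal)
```

One pass already pulls the local mean toward zero; repeating the step drives the estimate toward a true IMF.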
When performing the sifting process there are a number of issues to be aware of,
including the elimination of riding waves and the smoothing of uneven amplitudes. One
must take care not to perform too many siftings, which can create spurious amplitudes
that have no physical meaning. In addition, Huang points out that particular attention
must be paid to the cubic spline fits near the boundaries. Huang therefore defines a
measure for terminating the sifting process, the standard deviation between two
successive siftings:
S.D. = ∑𝑡 [(ℎ1(𝑘−1)(𝑡) − ℎ1(𝑘)(𝑡))² / ℎ1(𝑘−1)(𝑡)²]


Typical stopping values of S.D. are between 0.2 and 0.3. To obtain the second and
subsequent intrinsic mode functions, the residual signal is computed as

𝑋 (𝑡) − 𝑐1 (𝑡) = 𝑟1 (𝑡)


The residual 𝑟1 now becomes the new data, which can be subjected to the same sifting
process to extract further intrinsic mode functions. This process is repeated to obtain
𝑐2 through 𝑐𝑛. By summing all of the intrinsic mode functions and the final residue,
the vibration signal can be represented as

𝑋(𝑡) = ∑ 𝑐𝑖(𝑡) + 𝑟𝑛

Thus, the original data 𝑋(𝑡) has been divided into 𝑛 empirical modes, 𝑐𝑖, plus a residue, 𝑟𝑛.
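Putting the pieces together, the full decomposition loop, which sifts until the S.D. criterion is met, subtracts the resulting IMF, and repeats on the residue, can be sketched as a self-contained toy. Linear envelopes again stand in for the cubic splines of the real method, and the iteration caps are our own safeguards.

```python
import math

def _extrema(x, is_peak):
    # indices where both neighbours satisfy is_peak relative to the centre sample
    return [i for i in range(1, len(x) - 1)
            if is_peak(x[i - 1], x[i]) and is_peak(x[i + 1], x[i])]

def _lin_envelope(x, knots):
    pts = [0] + knots + [len(x) - 1]
    env = [0.0] * len(x)
    for a, b in zip(pts, pts[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = x[a] + t * (x[b] - x[a])
    return env

def emd(x, sd_stop=0.3, max_imfs=6, max_sifts=50):
    below = lambda a, b: a < b   # neighbour below centre -> local maximum
    above = lambda a, b: a > b   # neighbour above centre -> local minimum
    imfs, residue = [], list(x)
    for _ in range(max_imfs):
        if len(_extrema(residue, below)) + len(_extrema(residue, above)) < 2:
            break                # residue is (near) monotone: stop decomposing
        h = list(residue)
        for _ in range(max_sifts):
            upper = _lin_envelope(h, _extrema(h, below))
            lower = _lin_envelope(h, _extrema(h, above))
            h_new = [v - (u + l) / 2.0 for v, u, l in zip(h, upper, lower)]
            sd = sum((a - b) ** 2 / (a * a) for a, b in zip(h, h_new) if a != 0.0)
            h = h_new
            if sd < sd_stop:     # Huang's stopping criterion
                break
        imfs.append(h)
        residue = [r - c for r, c in zip(residue, h)]
    return imfs, residue

signal = [math.sin(0.2 * i) + 0.5 * math.sin(1.1 * i) for i in range(200)]
imfs, residue = emd(signal)
```

By construction, summing the extracted modes and the residue reproduces the original signal, which is the identity stated above.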
Based on the methods above, the complex load-deflection data will be transformed into
small, finite components, which are easier to analyze. Hence, summing the forecasts made
on the decomposed load-deflection data should produce a more accurate overall forecast.

2.2.2.1 Comparison of EMD with Fourier and Wavelet Transform

The Fourier Transform (FFT) has several shortcomings. The first is its inability to
accurately represent functions that have non-periodic components localized in time or
space, such as transient impulses; this is because the FFT assumes the signal to be
transformed is periodic in nature and infinite in length. Another deficiency is its inability
to provide any information about the time dependence of a signal, since results are
averaged over the entire signal duration. It is often beneficial to be able to acquire a
correlation between the time and frequency domains of a signal, as is frequently the case
when monitoring mechanical vibrations (Pines et al., 2006).

The Wavelet Transform, like the Fourier Transform, performs the decomposition on a
fixed basis of functions (Hassan, 2015). The Wavelet Transform is well suited to handling
non-stationary data, but it is poor at processing nonlinear data. Additionally, the basis
functions used in FFT and Wavelet methods are fixed and do not necessarily match the
varying nature of signals, which can lead to the loss of some useful information in the
signal. Since potential field data are in general nonlinear and non-stationary in nature,
limitations can be expected when processing such data using FFT or Wavelet methods.

EMD makes no assumptions regarding the composition of the signal; rather, it uses spline
interpolation between the maxima and minima to define the IMFs. Since IMFs can change
over time, EMD is better suited to nonlinear signals than either FFT or Wavelets. This
makes EMD particularly attractive for analyzing the researchers' raw data, and it is thus
the strategic choice of the team.

2.2.3 Matrix Laboratory (MATLAB) as an ANN Tool

Figure 2.3: MATLAB Sample Interface

The name MATLAB stands for matrix laboratory. It was originally written to provide
easy access to matrix software developed by the LINPACK (linear system package) and
EISPACK (eigensystem package) projects. Today, its engine incorporates the LAPACK
and BLAS libraries, known for efficient matrix computations. MATLAB is a high-
performance language for technical computing, with a visualization and programming
environment. It has sophisticated data structures, contains built-in editing and debugging
tools, and supports object-oriented programming. It also has powerful built-in routines
that enable a wide variety of calculations (Houcque, 2005).

Millions of engineers and scientists use MATLAB. It can be used for a range of
applications: deep learning and machine learning, signal processing and communications,
image and video processing, control systems, test and measurement, computational
finance, and computational biology.

MATLAB's interactive system does not require dimensioning, unlike non-interactive
languages that take more time to use. As its usage has risen, MATLAB has evolved over
the years. It is not only a standard instructional tool for advanced courses in mathematics,
engineering, and the sciences, but also a widely used tool in industry for high-productivity
research, development, and analysis.

