
©Civil-Comp Press, 2006.
Paper 65
Proceedings of the Fifth International Conference on Engineering Computational Technology,
B.H.V. Topping, G. Montero and R. Montenegro, (Editors),
Civil-Comp Press, Stirlingshire, Scotland.

A Comparison of ANN Models for Local Scour around a Pier
D.S. Jeng†, S.M. Bateni‡ and E. Lockett†
† School of Civil Engineering
University of Sydney, Australia
‡ Department of Civil and Environmental Engineering
University of Alberta, Canada

Abstract
Since the mechanism of flow around a pier-type structure is complicated, it is
difficult to establish a general empirical model to estimate the local scour around a
pier. Interestingly, each empirical formula yields good agreement for a particular
data set, but is inapplicable to other databases. In this study, an alternative
approach, artificial neural networks (ANN), is proposed to estimate the equilibrium
and time-dependent scour depth using a large, reliable database. Several ANN
models were used: the multi-layer perceptron with the back-propagation algorithm
(MLP/BP), the radial basis function network with the orthogonal least-squares
algorithm (RBF/OLS) and the Bayesian neural network (BNN). The equilibrium
scour depth was modelled as a function of five variables: flow depth, mean velocity,
critical flow velocity, mean grain diameter and pier diameter. The time variation of
scour depth was also modelled in terms of the equilibrium scour depth, equilibrium
scour time, scour time, mean flow velocity and critical flow velocity. The training
and testing data were selected from the experimental data of several valuable references.

Keywords: neural networks, bridge pier, back-propagation algorithm, orthogonal least-squares algorithm, scour depth.

1 Introduction
Placing a hydraulic structure in either a river or marine environment will alter the
flow patterns in the vicinity of the structure. The changes to the flow pattern cause
an increase in sediment movement leading to the phenomenon of scour.
Understanding the phenomenon of bridge pier scour is of paramount concern to the
hydraulics engineering profession as without this detailed knowledge bridge failures
can occur, resulting in loss of life and devastating destruction (Figure 1). From a
purely economic standpoint, businesses of all sizes depend on major interstates, city
streets and rural roads to move products and services. Therefore, where roads and
bridges are temporarily or permanently closed due to damage sustained because of
scour, the economy will suffer.
Given the importance of understanding the stability of hydraulic structures
exposed to scour, extensive research has been conducted on the mechanisms and
dynamics of scour and scour patterns around different objects [6, 19]. Blodgett [1]
studied 383 bridge failures caused by catastrophic floods. Approximately half of
these failures were caused by local scour. Although some of the scour was attributed
to the increased local and contraction scour, due to accumulation of ice and debris, a
large portion resulted from erroneous prediction of scour depth during engineering
design. In the United States, 86% of the 577,000 bridges in the National Bridge
Inventory (NBI) are built over waterways. More than 26,000 of these bridges have
been found to be scour critical, meaning that the stability of the bridge foundation
has been, or could be, affected by the removal of bed materials.

Figure 1: Examples of scour around piers (a) scour and bed degradation at Kaoping
Bridge, Taiwan, (b) failure of Kaoping Bridge, Taiwan due to combination of
general and local scour [11].

The depth of scour is an important parameter for determining the minimum depth
of foundations as it reduces the lateral capacity of the foundation. It is for this reason
that extensive experimental investigation has been conducted in an attempt to
understand the complex process of scour and to determine a method of predicting
scour depth for various pier situations. To date, no generic formula has been
developed that can be applied to all pier cases to determine the extent of scour that
will develop. Numerous empirical formulae have been presented to estimate
equilibrium scour depth at bridge piers. These approaches are summarised in Table
1. Each approach varies significantly, highlighting the fact that there is a lack of
knowledge in predicting scour depth and that a more universal solution would be
beneficial. It is the lack of knowledge in predicting scour depth for all pier
conditions that has led to the undertaking of this paper.
In this study, an alternative approach, artificial neural network models, will be
used to estimate the scour depth around piers. Several neural network models
will be outlined, and some numerical examples will be used to demonstrate the
capacity of the ANN models.

2 Local Scour around a Pier

Equilibrium scour depth around a circular pier in a steady flow over a bed of
uniform, spherical and cohesionless sediment depends on numerous groups of
variables, such as the flow characteristics, sediment characteristics, and pier geometry.

Reference                    Proposed theory
Laursen and Toch [9]         d_se = 1.35 D^{0.7} Y^{0.3}
Shen [18]                    d_se = 0.00022 (UD/ν)^{0.619}
Hancu [5]                    d_se/D = 2.42 (2U/U_c − 1) (U_c²/(gD))^{1/3}
Breusers et al. [2]          d_se/D = 2 f(U/U_c) tanh(Y/D)
Melville & Sutherland [15]   d_se/D = K_l K_d K_y K_a K_s
U.S. DOT [20]                d_se/D = K_3 (Y/D)^{−0.65} (U/√(gY))^{0.43}
Melville and Chiew [14]      d_se = K_{yD} K_l K_d

Table 1: Different approaches for the prediction of scour depth.

The scour situation involves an approach flow over a loose bed and a complex
three-dimensional flow field at the pier. The basic similitude requirements for
hydraulically modeling even the simplest pier-scour situation are difficult to satisfy.
Scour depth at a pier (as shown in Figure 2) depends on variables characterizing the
fluid, flow, bed sediment, and pier. Thus, the following functional relationship can
describe scour depth [4]:

d_se = f(ρ, μ, U, Y, g, d_50, U_c, D),   (1)

where ρ = fluid density; μ = fluid dynamic viscosity; U = average velocity of the
approach flow; Y = flow depth; g = gravitational acceleration; d_50 = particle mean
diameter; U_c = critical value of U associated with the initiation of motion of
particles on the bed surface; D = pier diameter; and d_se = equilibrium scour depth.
The eight independent variables in (1) are reducible to a set of five non-dimensional
parameters. If ρ, U, and D are chosen as repeating variables, the following
relationship describes scour depth normalized with the pier diameter:

d_se/D = Ψ( U/U_c, U/√(gY), Y/D, D/d_50, ρUD/μ ).   (2)

Figure 2: Flow and local scour around a circular pier.

A choice of other repeating variables would result in a different set of non-dimensional
parameters. However, as pointed out by Ettema et al. [4], the non-dimensional
parameters in (2) mainly control the scour process around bridge piers.
Although the parameters that affect scour depth have been selected in (1) and (2),
other parameters, such as flow direction and pier geometry, can also be included
later [15]. Also, the process of local scour at bridge piers is time dependent. Peak
flood flows may last only a few hours or days in the field, which may be insufficient
time to generate the equilibrium depth. Thus, according to Melville and Chiew [14],
the relation for the depth of local scour at a bridge pier (d_s) at a particular time (t)
in a steady flow can be written

d_s = f(ρ, μ, U, Y, D, g, d_50, U_c, t, t_e),   (3)

where t_e is the time for the equilibrium depth of local scour to develop. Based on
(3), Melville and Chiew [14] presented the following formula to predict the local
scour depth (d_s):

d_s/d_se = exp{ −0.03 | (U_c/U) ln(t/t_e) |^{1.6} }.   (4)

According to (4), the relationship between d_s and its dependent parameters can
be written

d_s = f(d_se, U, U_c, t, t_e).   (5)

Now, (5) can further be written in the following non-dimensional form:

d_s/d_se = f( U/U_c, t/t_e ).   (6)
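Equation (4) can be evaluated directly for any time t; a minimal sketch (the function name and sample values are illustrative):

```python
import math

def scour_depth_ratio(U, Uc, t, te):
    """Relative scour depth d_s/d_se from equation (4):
    exp(-0.03 * |(Uc/U) * ln(t/te)|^1.6)."""
    return math.exp(-0.03 * abs((Uc / U) * math.log(t / te)) ** 1.6)

# At t = te the logarithm vanishes, so the ratio is exactly 1
# (the scour hole has reached its equilibrium depth).
print(scour_depth_ratio(0.8, 0.5, 100.0, 1000.0))
```

Note that the ratio approaches 1 from below as t approaches t_e, consistent with scour developing towards its equilibrium depth.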

3 Artificial Neural Network Models


3.1 Multilayer perceptron network (MLP/BP)

A typical configuration of a multilayer perceptron, a special class of artificial neural
network that will be used in this research, is shown in Figure 3. A set of data
(x_1, x_2, …) is first fed directly into the network through the input layer, and
subsequently the multilayer perceptron produces an expected result y in the output
layer. The number of hidden layers establishes the complexity of the network,
because a greater number of hidden layers increases the number of connections in
the ANN. Determining the correct number of hidden layers required to solve a
specific task remains an open problem. The number of nodes in each layer is
evaluated by trial and error.

Figure 3: Structure of typical MLP model.

In summary, each node multiplies every input by its interconnection weight,
sums the products, and then passes the sum through a transfer function to produce
its result. This transfer function is usually a steadily increasing S-shaped curve
called the sigmoid function. Under this transfer function, the output y_j from the
j-th neuron in a layer is
y_j = f( Σ_i w_ij x_i ) = 1 / ( 1 + exp(−Σ_i w_ij x_i) ),   (7)

where w_ij = weight of the connection joining the j-th neuron in a layer with the
i-th neuron in the previous layer, and x_i = value of the i-th neuron in the previous layer.
The ANNs are trained with a training set of input and known output data. Many
learning examples are repeatedly presented to the network, and the process is
terminated when either the difference between the network output and the target
output is less than a specified value or the number of training epochs exceeds a
specified limit. At this stage, the ANN is considered trained. The back-propagation
algorithm based upon the generalized delta rule proposed by Rumelhart et al. [17]
was used to train the ANN in this study.
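The node operation just described, the weighted sum of equation (7) passed through the sigmoid, can be sketched as follows (the weights and inputs are illustrative, not taken from the paper):

```python
import math

def sigmoid(s):
    # S-shaped transfer function of equation (7): 1 / (1 + e^(-s))
    return 1.0 / (1.0 + math.exp(-s))

def node_output(weights, inputs):
    """Output y_j of one neuron: each input multiplied by its
    interconnection weight, summed, then passed through the sigmoid."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(s)

print(node_output([0.5, -0.25], [1.0, 2.0]))  # weighted sum is 0, so y = 0.5
```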

3.2 Radial basis network (RBF/OLS)


The RBF network is similar in topology to the MLP network. Figure 4 shows a
schematic diagram of a general RBF network with N , L , and M nodes in the input,
hidden and output layers, respectively. It shows the N-dimensional input patterns
[X] being mapped to the M-dimensional outputs [Z], with the nodes in adjacent
layers exhaustively connected. The nodes in the hidden layer are each specified by a
transfer function f, which transforms the incoming signals. For the p-th input
pattern X_p, the response of the j-th hidden node y_j is of the form

y_j = f( ‖X_p − U_j‖² / (2σ_j²) ),   (8)

where ‖·‖ = Euclidean norm; U_j = center of the j-th radial basis function f; and
σ_j = spread of the RBF, indicative of the radial distance from the RBF center
within which the function value is significantly different from zero.
Figure 4: Structure of a general RBF network (f: transfer function; U_j, σ_j: RBF
parameters; W_kj: weights of the output-layer connections; p refers to the p-th
pattern, p = 1, 2, …, N, where N is the number of patterns in the training set).

The network output is given by a linear weighted summation of the hidden node
responses at each node in the output layer. The output of the k-th node in the
output layer, z_pk, is computed as

z_pk = Σ_{j=1}^{L} y_j w_kj,   (9)

where w_kj = weight of the connection between the hidden and output nodes. From
the several possible radial basis functions, the most common choice is the Gaussian.
The Gaussian RBF of the j-th hidden node is specified by its center U_j and
spread σ_j.

Training an RBF network involves two stages: (1) determining the basis functions
of the hidden-layer nodes and (2) determining the output-layer weights. Fitting the
RBF involves finding suitable RBF centers and spreads. A variety of techniques
have been developed to optimize the number of RBF centers. The present study
employed the minimum description length algorithm [10] to optimize the
parameters of the RBF networks.
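Equations (8) and (9) with a Gaussian transfer function can be sketched as follows (the centers, spreads and weights are illustrative placeholders, not fitted values):

```python
import math

def rbf_hidden(x, center, sigma):
    """Gaussian response of one hidden node, per equation (8):
    exp(-||x - U_j||^2 / (2 * sigma_j^2))."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def rbf_output(x, centers, sigmas, weights):
    # Equation (9): linear weighted sum of the hidden-node responses
    return sum(w * rbf_hidden(x, c, s)
               for w, c, s in zip(weights, centers, sigmas))

# A pattern lying exactly on a center gives that node a response of 1.
print(rbf_output([0.0], [[0.0], [2.0]], [1.0, 1.0], [1.0, 1.0]))
```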

3.3 Bayesian Neural Network (BNN)


In normal regression methods, the analysis begins with the prior choice of a
relationship (usually linear) between the output and input variables. A neural
network is capable of realizing a greater variety of non-linear relationships of
considerable complexity. The data are presented to the network in the form of input
and output parameters, and the optimum non-linear relationship is found by
minimizing a penalized likelihood. In fact, the network tests many kinds of
relationships in its search for an optimum fit. As in regression analysis, the results
then consist of a specification of the function, which in combination with a series of
coefficients (called weights), relates the inputs to the outputs. The search for the
optimum representation can be computer intensive, but once the process is
completed (that is, the network has been trained), the estimation of outputs is very
rapid. The neural network can be susceptible to over-fitting. We have used a
Bayesian framework [12] to control this problem.
MacKay [12, 13] developed a Bayesian framework for neural networks. This
framework allows quantitative assessment of the relative probabilities of models of
different complexity, and quantitative errors can be applied to the predictions of the
models. This work has been applied to the complex problem of predicting scour
depth around bridge piers and its time variation in the study reported herein.
Figure 5 shows the structure of the neural network used in our model. A set of
data ( x1 , x 2 ,.... ) is first fed directly into the network through the input layer and,
subsequently, the Bayesian neural network produces an expected result ( y ) in the
output layer. The output ( y ) is determined by the architecture of the network.
To predict the output, that is the scour depth, hidden nodes were used between
the inputs and the output so that more complex relationships could be expressed.
The transfer function relating the inputs to the i-th hidden node is given by

h_i = tanh( Σ_j w_ij^{(1)} x_j + θ_i^{(1)} ).   (10)

The relationship between the hidden nodes and the output is linear, that is

y = Σ_i w_i^{(2)} h_i + θ^{(2)}.   (11)
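A forward pass through equations (10) and (11) can be sketched as follows (the weight matrices and biases here are illustrative placeholders):

```python
import math

def bnn_forward(x, w1, b1, w2, b2):
    """Network output per equations (10)-(11): a tanh hidden layer
    followed by a linear output node."""
    # Equation (10): h_i = tanh(sum_j w_ij x_j + theta_i)
    h = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
         for row, bi in zip(w1, b1)]
    # Equation (11): y = sum_i w_i h_i + theta
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

print(bnn_forward([1.0, 0.5], [[0.0, 0.0]], [0.0], [1.0], 0.5))  # hidden = tanh(0) = 0, so y = 0.5
```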

The coefficients w and biases θ of these equations are determined in such a way
as to minimize the energy function, as explained later. Because the hyperbolic
tangent is a non-linear function, a non-linear relationship can be predicted using
this model.
Both the input and output variables were first normalized within the range 0 to
1 as follows:

x_N = (x − x_min) / (x_max − x_min),   (12)

where x_N is the normalized value of x, and x_max and x_min are the maximum and
minimum values of each variable in the original data. This normalization is not
essential to the neural network approach, but allows the network to be trained better.
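The min-max scaling of equation (12) in code (a sketch; the sample values echo the flow-depth range in Table 2):

```python
def normalize(values):
    """Scale one variable to the range [0, 1] per equation (12):
    x_N = (x - x_min) / (x_max - x_min)."""
    x_min, x_max = min(values), max(values)
    return [(x - x_min) / (x_max - x_min) for x in values]

# Flow depths of 0.02-0.7 m map to 0 at the minimum and 1 at the maximum.
print(normalize([0.02, 0.36, 0.7]))
```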
Using the normalized data, the coefficients (weights) w and biases θ were
determined in such a way as to minimize the following energy function [12]:

M(w) = βE_D + Σ_c α_c E_w^{(c)}.   (13)

The minimization was implemented using a variable metric optimizer. The
gradient of M(w) was computed using a back-propagation algorithm [17]. The
energy function consists of the error function E_D and the regularization terms E_w.
The error function is the sum-squared error:

E_D(w) = (1/2) Σ_m ( y(x_m; w) − t_m )²,   (14)

where {x_m, t_m} is the data set, x_m represents the inputs, t_m represents the targets,
and m labels the pair. The error function E_D is smallest when the model fits
the data well, that is, when y(x_m; w) is close to t_m. The coefficients w and biases
θ, shown in the previous equations, make up the parameter vector w. A number of
regularizers E_w^{(c)} are added to the data error. These regularizers favour functions
y(x; w) that are smooth functions of x. The simplest regularization method uses
a single regularizer, E_w = (1/2) Σ_i w_i². A slightly more complicated regularization
method, known as the automatic relevance determination model [12], is used in this
study. Each weight is assigned to a class c, depending on which neurons it connects.
For each input, all the weights connecting that input to the hidden nodes are in a
single class. The biases of the hidden nodes are in another class, and all the weights
from the hidden nodes to the outputs are in a final class. E_w^{(c)} is defined as the sum
of the squares of the weights in class c [12]:

E_w^{(c)}(w) = (1/2) Σ_{i∈c} w_i².   (15)
This additional term favours small values of w and decreases the tendency of the
model to over-fit noise in the data set. The control parameters α_c and β, together
with the number of hidden nodes, determine the complexity of the model. These
hyper-parameters define the assumed Gaussian noise level σ_v² = 1/β and the
assumed weight variances σ_w²(c) = 1/α(c). The noise level inferred by the model is
σ_v. The parameter α has the effect of encouraging the weights to decay. Therefore,
a high value of σ_w implies that the particular input parameter explains a relatively
large amount of the variation in the output. Thus, σ_w is regarded as a good
expression of the significance of each input, though not of the sensitivity of the
output to that input. The values of the hyper-parameters are inferred from the data
using the Bayesian methods given in MacKay [12]. In this method, the hyper-parameters
were initialized to values chosen by the operator and the weights were
set to small initial values. The objective function M(w) was minimized to a chosen
tolerance, and the values of the hyper-parameters were then updated using a
Bayesian approximation given in MacKay [12]. The M(w) function was minimized
again, starting from the final state of the previous optimization, and the hyper-parameters
were updated again.
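The penalized objective of equations (13)-(15) can be assembled as follows; the grouping of weights into classes and the hyper-parameter values here are illustrative:

```python
def energy(beta, alphas, outputs, targets, weight_classes):
    """M(w) = beta * E_D + sum_c alpha_c * E_w^(c), where E_D is the
    half sum-squared error (eq. 14) and E_w^(c) is the half sum of
    squared weights in class c (eq. 15)."""
    e_d = 0.5 * sum((y - t) ** 2 for y, t in zip(outputs, targets))
    e_w = sum(a * 0.5 * sum(w ** 2 for w in cls)
              for a, cls in zip(alphas, weight_classes))
    return beta * e_d + e_w

# One weight class [1.0, -1.0] with alpha = 1.0, and beta = 2.0:
print(energy(2.0, [1.0], [1.0, 2.0], [0.0, 2.0], [[1.0, -1.0]]))
```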

3.4 Development of ANN Models
Three ANN models, namely MLP/BP, RBF/OLS and BNN, were developed using the
same input variables. The current study used thirteen sets of data to predict
equilibrium scour depth [4, 5, 8]. The whole data set, consisting of 263 data points,
was divided randomly into two parts: a training or calibration set of 180 data points
and a validation or testing set of 83 data points. The data reported by Melville and
Chiew [14], Kothyari et al. [8] and Oliveto and Hager [16] were used to predict
scour depth at a particular time t. The whole data set of 1700 data points was
divided randomly into two parts: a training set of 1138 data points and a validation
or testing set of 562 data points. Details of the database are available in Jeng et
al. [7]. The ranges of the different parameters involved in this study are given in
Table 2.

Parameter                          Range
Flow depth (Y)                     0.02 – 0.7 m
Flow mean velocity (U)             0.165 – 1.503 m/s
Grain mean diameter (d_50)         0.2 – 7.8 mm
Critical flow velocity (U_c)       0.222 – 1.652 m/s
Pier diameter (D)                  0.01 – 1 m
Equilibrium scour depth (d_se)     0.004 – 0.440 m

Table 2a: Range of different input-output parameters used for the estimation of
equilibrium scour depth.

Parameter                          Range
Scour time (t)                     0.5 – 67500 min
Equilibrium scour time (t_e)       45 – 67500 min
Flow mean velocity (U)             0.154 – 1.270 m/s
Critical flow velocity (U_c)       0.222 – 1.299 m/s
Equilibrium scour depth (d_se)     0.004 – 0.321 m
Scour depth (d_s)                  0.0005 – 0.321 m

Table 2b: Range of different input-output parameters used for the estimation of
time-dependent scour depth.

The performance of all the ANN configurations was assessed by calculating the
mean absolute error (MAE) and the root mean square error (RMSE). The
coefficient of determination, R², of the linear regression line between the values
predicted by the neural network model and the desired outputs was also used as a
measure of performance. The three statistical parameters used to compare the
performance of the various ANN configurations are:

MAE = (1/N) Σ_{i=1}^{N} | O_i − t_i |,   (16)

RMSE = [ Σ_{i=1}^{N} (O_i − t_i)² / N ]^{1/2},   (17)

R² = 1 − [ Σ_{i=1}^{N} (O_i − t_i)² ] / [ Σ_{i=1}^{N} (O_i − Ō)² ],   (18)

where O_i and t_i are the target and network output for the i-th event, Ō is the
average of the target outputs, and N is the total number of events considered.
The ANN configuration that minimized the two error measures defined above
(and gave the optimum R²) was selected as the optimum. The whole analysis was
repeated several times.
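The three measures of equations (16)-(18) are easily computed; a sketch (O_i are the targets and t_i the network outputs, as in the text):

```python
import math

def mae(targets, outputs):
    # Equation (16): mean absolute error
    return sum(abs(o - t) for o, t in zip(targets, outputs)) / len(targets)

def rmse(targets, outputs):
    # Equation (17): root mean square error
    return math.sqrt(sum((o - t) ** 2 for o, t in zip(targets, outputs)) / len(targets))

def r_squared(targets, outputs):
    # Equation (18): coefficient of determination
    mean_o = sum(targets) / len(targets)
    ss_res = sum((o - t) ** 2 for o, t in zip(targets, outputs))
    ss_tot = sum((o - mean_o) ** 2 for o in targets)
    return 1.0 - ss_res / ss_tot

print(mae([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))        # one error of 1 over 3 events
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))
```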
In this study, two types of MLP/BP models were developed: (1) single hidden-layer
ANN models consisting of only one hidden layer; and (2) multiple hidden-layer
ANN models consisting of two hidden layers. The task of identifying the
number of neurons in the input and output layers is normally simple, as it is dictated
by the input and output variables chosen to model the physical process. As
mentioned above, however, the number of neurons in the hidden layer(s) must be
determined through a trial-and-error procedure [3]. The optimal architecture was
determined by varying the number of hidden neurons (from 1 to 20), and the best
structure was selected. The training of the ANN models was stopped when either
the acceptable level of error was achieved or the number of iterations exceeded a
prescribed maximum of 3000. A learning rate of 0.05 was used.

3.5 Data presentation


How the data are presented for training is one of the most important aspects of
the neural network method. Often this can be done in more than one way, with the
best configuration determined by trial and error. It can also be beneficial to examine
the input/output patterns or data sets that the network finds difficult to learn.
Therefore, two combinations of data were considered as inputs.
Three of the eight parameters, namely the fluid density, fluid dynamic viscosity, and
gravitational acceleration, are constant in all experiments. Therefore, the first
combination involves just five of the eight parameters as the input pattern and the
equilibrium scour depth (d_se) as the output pattern, while the second combination
includes the five non-dimensional parameters as the input pattern and the
normalized equilibrium scour depth (d_se/D) as the output pattern. Both
combinations of inputs have been used for the two ANN types, enabling a
comparison of the performance of the MLP and RBF models for these two
combinations of data.
Also, two combinations of data were used to predict the time-dependent scour
depth. The first combination involves five parameters as the input pattern and the
time-dependent scour depth (d_s) as the output pattern, while the second
combination includes two non-dimensional parameters as the input pattern and the
relative scour depth (d_s/d_se) as the output pattern.

4 Results and discussion


4.1 Equilibrium scour depth prediction using the original data set
In this section, the original data are used to establish the ANN models. The MAE
and RMSE results of the two ANN models are presented in Figure 5. The MLP
models had very small RMSE values during training (ranging from 0.0056 m to
0.0327 m). However, the values were slightly higher during validation (0.0078 m to
0.0457 m). The models showed consistently good correlation throughout training
and testing (R² > 0.7 for all models). The MLP configuration that included one
hidden layer with 16 neurons gave the minimum error and was selected as the
optimum model (Figure 5(a) and (b)).

(a) MAE (b) RMSE


Figure 5: Error variation as a function of number of hidden nodes in MLP/BP and
RBF/OLS networks.

In the RBF model, the center selection process found an appropriate tolerance
value of 0.005 and a radial basis spread of 1. For this tolerance to be achieved,
50 significant regressors were required; they were automatically selected by the
algorithm in sequence. Thus, the model based on the RBF network was composed
of 50 nodes in its hidden layer (Figure 5(a) and (b)). As illustrated in Figure 5,
increasing the number of hidden nodes significantly improves the ability of the
ANN to predict the values of interest.
It must also be noted that the reliability of the forecasted values depends not
only on the ANN structure (which needs to be carefully chosen through the
training-validation process), but also on the input data. To obtain reliable results,
the input data must also be trustworthy. Thus, the two models are compared using
the aforementioned data set.

Figures 6 and 7 suggest that the RBF model has a lower training error than the
MLP model (Figures 6(a) and 7(a)), but its validation error is greater than that of
the MLP (Figures 6(b) and 7(b)). In other words, the MLP/BP validation results
show less scatter than those of the RBF.

(a) Training: MAE = 0.00219 m, RMSE = 0.00567 m, R² = 0.9941
(b) Validation: MAE = 0.0051 m, RMSE = 0.0078 m, R² = 0.9879

Figure 6: Plot of observed and predicted equilibrium scour depth with the original
data set using the MLP model: (a) training; (b) validation.

(a) Training: MAE = 0.0011 m, RMSE = 0.0052 m, R² = 0.9948
(b) Validation: MAE = 0.018 m, RMSE = 0.025 m, R² = 0.842

Figure 7: Plot of observed and predicted equilibrium scour depth with the original
data set using the RBF model: (a) training; (b) validation.

To predict the equilibrium scour depth, the network configuration that included
two hidden layers and 15 neurons within each hidden layer gave the minimum error
(graph not shown here). The network configuration consisting of one hidden layer
and 14 neurons within that layer gave the minimum error to predict the temporal
variation of scour depth (graph not shown here). These two models were selected as
the optimum models.

Two combinations of the data were used to predict time-dependent scour depth.
Figure 8 shows training and validation results for the first and second combinations
of the data, respectively. The Bayesian model yielded much better results when
analyzed with the dimensional set of data.

(R² = 0.973 and 0.981 for the two panels)

Figure 8: Comparison between observed and predicted values by the committee
models: (a) equilibrium scour depth and (b) time-dependent scour depth.

4.2 Comparison of ANN models with existing equilibrium scour depth prediction equations

To evaluate the accuracy of the neural network models in predicting equilibrium
scour depth, a comparison between the new model and seven of the existing
formulae was undertaken using the same set of 83 observed data points, as shown
in Table 3. The new model gives improved predictions of scour depth. For the best
existing method, the U.S. DOT approach [20], MAE = 0.0361 m, compared with
MAE = 0.0051 m for the new model; the corresponding values of the coefficient of
determination are 0.6300 and 0.9879. Figure 9 compares equilibrium scour depth
values estimated using the present model, the U.S. DOT method [20] and that of
Melville and Chiew [14].
Approach                     MAE      RMSE     R²
Laursen and Toch [9]         0.0497   0.0727   0.5288
Shen [18]                    0.0444   0.0640   0.5815
Hancu [5]                    0.0720   0.1188   0.3284
Breusers et al. [2]          0.0613   0.1039   0.1221
Melville & Sutherland [15]   0.0761   0.1326   0.5350
U.S. DOT [20]                0.0361   0.0463   0.6300
Melville and Chiew [14]      0.0393   0.0519   0.6074
ANN (MLP/BP)                 0.0051   0.0078   0.9879
ANN (BNN)                    0.0066   0.0093   0.99

Table 3: Performance indices of the various approaches.

(a) Predicted versus observed equilibrium scour depth: U.S. DOT (1993) and present
study (MLP/BP); (b) predicted versus observed time-dependent scour depth:
Melville and Chiew (1999) and ANN (MLP/BP).

Figure 9: Comparison between the ANN method and (a) U.S. DOT [20] and (b)
Melville and Chiew [14] predictions.

5 Conclusions

In this study, an alternative approach, artificial neural networks, is proposed for the
estimation of local scour around a bridge pier. Three different ANN models were
outlined in this paper. The study includes the manipulation of the collected
laboratory data to train and validate the networks. It shows that the neural
network approach predicts scour depth much more accurately than the existing
methods.
The study also draws on past theoretical treatments of the scour problem to
obtain its dominant parameters. The selection of input variables for the network has
a large impact on model accuracy; therefore, based upon the dominant parameters,
two combinations of them, the original and the non-dimensional data sets, were
used in the analysis of the networks. The analyses show that the raw data produce
better results than the transformed data.

References
[1] B.A. Blodgett, "Countermeasures for Hydraulic Problems at Bridges", Federal
Highway Administration, U.S. Department of Transportation, 1, 10-23, 1978.
[2] H.N.C. Breusers, G. Nicollet, H.W. Shen, "Local scour around cylindrical
piers", Journal of Hydraulic Research, 15(3), 211-252, 1977.
[3] R.C. Eberhart, R.W. Dobbins, "Neural Network PC Tools: A Practical Guide",
Academic Press, San Diego, 1990.
[4] R. Ettema, B.W. Melville, B. Barkdoll, "Scale effect of pier-scour
experiments", Journal of Hydraulic Engineering, ASCE, 124(6), 639-642, 1998.
[5] S. Hancu, "Sur le calcul des affouillements locaux dans la zone des piles des
ponts", Proc. 14th IAHR Congress, Paris, France, 3, 299-313, 1971.
[6] J.B. Herbich, "Seafloor Scour: Design Guidelines for Ocean-Founded
Structures", Marine Technology Society, New York, 1984.
[7] D.-S. Jeng, S.M. Bateni, E. Lockett, "Neural Network Assessment for Scour
Depth around Bridge Piers", Research Report No. R855, School of Civil
Engineering, The University of Sydney, Sydney, Australia, November 2005,
http://www.civil.usyd.edu.au/publications/2005rreps.shtml#r855.
[8] U.C. Kothyari, R.J. Garde, K.G. Ranga Raju, "Temporal variation of scour
around circular bridge piers", Journal of Hydraulic Engineering, ASCE,
118(8), 1091-1106, 1992.
[9] E.M. Laursen, A. Toch, "Scour around bridge piers and abutments", Bulletin
No. 4, Iowa Highway Research Board, 1956.
[10] A. Leonardis, H. Bischof, "An efficient MDL-based construction of RBF
networks", Neural Networks, 11, 963-973, 1998.
[11] C. Lin, "Analysis of Disasters of Cross-River Bridge Foundations in the
Western Area of Taiwan and Establishment of a Data Base System for Their
Protection Works", Research Report, Department of Civil Engineering,
National Chung-Hsing University, Taichung, Taiwan, ROC (in Chinese), 1998.
[12] D.J.C. MacKay, "Bayesian interpolation", Neural Computation, 4(3), 415-447, 1992.
[13] D.J.C. MacKay, "A practical Bayesian framework for backpropagation
networks", Neural Computation, 4(3), 448-472, 1992.
[14] B.W. Melville, Y.M. Chiew, "Time scale for local scour depth at bridge
piers", Journal of Hydraulic Engineering, ASCE, 125(1), 59-65, 1999.
[15] B.W. Melville, A.J. Sutherland, "Design method for local scour at bridge
piers", Journal of Hydraulic Engineering, ASCE, 114(10), 1210-1226, 1988.
[16] G. Oliveto, W.H. Hager, "Temporal evolution of clear-water pier and
abutment scour", Journal of Hydraulic Engineering, ASCE, 128(9), 811-820, 2002.
[17] D.E. Rumelhart, G. Hinton, R. Williams, "Learning internal representations by
error propagation", in Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, D. Rumelhart and J. McClelland, eds., 1, MIT
Press, Cambridge, Mass., 318-362, 1986.
[18] H.W. Shen, "Scour near piers", in River Mechanics, II, Chap. 23, Ft. Collins,
Colo., 1971.
[19] B.M. Sumer, J. Fredsøe, "The Mechanics of Scour in the Marine
Environment", World Scientific, 2002.
[20] U.S. DOT, "Evaluating Scour at Bridges", Hydraulic Engineering Circular
No. 18, FHWA-IP-90-017, Federal Highway Administration, U.S. Department
of Transportation, McLean, VA, 1993.
