Abstract
Since the mechanism of flow around a pier-type structure is complicated, it is
difficult to establish a general empirical model to estimate the local scour around a
pier. Interestingly, each empirical formula yields good agreement for a particular
data set but is inapplicable to other databases. In this study, an alternative
approach, artificial neural networks (ANN), is proposed to estimate the equilibrium
and time-dependent scour depth using a large, reliable database. Several ANN
models were used: a multi-layer perceptron trained with the back-propagation
algorithm (MLP/BP), a radial basis function network trained with the orthogonal
least-squares algorithm (RBF/OLS), and a Bayesian neural network (BNN). The
equilibrium scour depth was modelled as a function of five variables: flow depth,
mean velocity, critical flow velocity, mean grain diameter and pier diameter. The
time variation of scour depth was also modelled in terms of equilibrium scour depth,
equilibrium scour time, scour time, mean flow velocity and critical flow velocity.
The training and testing data were selected from the experimental data of several
reliable sources.
1 Introduction
Placing a hydraulic structure in either a river or marine environment will alter the
flow patterns in the vicinity of the structure. The changes to the flow pattern cause
an increase in sediment movement leading to the phenomenon of scour.
Understanding the phenomenon of bridge pier scour is of paramount concern to the
hydraulics engineering profession as without this detailed knowledge bridge failures
can occur, resulting in loss of life and devastating destruction (Figure 1). From a
purely economic standpoint, businesses of all sizes depend on major interstates, city
streets and rural roads to move products and services. Therefore, where roads and
bridges are temporarily or permanently closed due to damage sustained because of
scour, the economy will suffer.
Given the importance of understanding the stability of hydraulic structures
exposed to scour, extensive research has been conducted on the mechanisms and
dynamics of scour and scour patterns around different objects [6, 19]. Blodgett [1]
studied 383 bridge failures caused by catastrophic floods. Approximately half of
these failures were caused by local scour. Although some of the scour was attributed
to the increased local and contraction scour, due to accumulation of ice and debris, a
large portion resulted from erroneous prediction of scour depth during engineering
design. In the United States, 86% of the 577,000 bridges in the National Bridge
Inventory (NBI) are built over waterways. More than 26,000 of these bridges have
been found to be scour critical, meaning that the stability of the bridge foundation
has been, or could be, affected by the removal of bed materials.
Figure 1: Examples of scour around piers: (a) scour and bed degradation at Kaoping
Bridge, Taiwan; (b) failure of Kaoping Bridge, Taiwan, due to a combination of
general and local scour [11].
The depth of scour is an important parameter for determining the minimum depth
of foundations as it reduces the lateral capacity of the foundation. It is for this reason
that extensive experimental investigation has been conducted in an attempt to
understand the complex process of scour and to determine a method of predicting
scour depth for various pier situations. To date, no generic formula has been
developed that can be applied to all pier cases to determine the extent of scour that
will develop. Numerous empirical formulae have been presented to estimate
equilibrium scour depth at bridge piers. These approaches are summarised in Table
1. Each approach varies significantly, highlighting the fact that there is a lack of
knowledge in predicting scour depth and that a more universal solution would be
beneficial. It is this lack of knowledge in predicting scour depth for all pier
conditions that motivated the present study.
In this study, an alternative approach, artificial neural network (ANN) models, will
be used to estimate the scour depth around piers. Several neural network models
will be outlined, and numerical examples will be used to demonstrate the capacity
of the ANN models.
Table 1. Empirical formulae for equilibrium scour depth at bridge piers.

Reference                      Proposed formula
Laursen and Toch [9]           $d_{se} = 1.35\, D^{0.7} Y^{0.3}$
Shen [18]                      $d_{se} = 0.00022\, (UD/\nu)^{0.619}$
Hancu [5]                      $d_{se}/D = 2.42\, (2U/U_c - 1)\, (U_c^2/(gD))^{1/3}$
Breusers et al. [2]            $d_{se}/D = 2 f(U/U_c) \tanh(Y/D)$
Melville and Sutherland [15]   $d_{se}/D = K_l K_d K_y K_a K_s$
US DOT [20]                    $d_{se}/D = K_3\, (Y/D)^{-0.65} (U/\sqrt{gY})^{0.43}$
Melville and Chiew [14]        $d_{se} = K_{yD} K_l K_d$
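Two of the formulae in Table 1 can be written as short helper functions. This is an illustrative sketch in SI units, not code from the study; the function names are our own.

```python
def laursen_toch(D, Y):
    """Laursen and Toch [9]: d_se = 1.35 * D**0.7 * Y**0.3 (SI units assumed)."""
    return 1.35 * D**0.7 * Y**0.3

def hancu(D, U, Uc, g=9.81):
    """Hancu [5]: d_se/D = 2.42 * (2U/Uc - 1) * (Uc**2 / (g*D))**(1/3)."""
    return 2.42 * D * (2.0 * U / Uc - 1.0) * (Uc**2 / (g * D)) ** (1.0 / 3.0)

# Example: a 0.1 m pier in a 0.2 m deep flow (hypothetical values)
d1 = laursen_toch(D=0.1, Y=0.2)
d2 = hancu(D=0.1, U=0.35, Uc=0.30)
```

Note how the two formulae disagree even for the same flow conditions, which is precisely the scatter that motivates the ANN approach.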
A pier-scour situation involves an approach flow over a loose bed and a complex
three-dimensional flow field at the pier. The basic similitude requirements for
hydraulically modelling even the simplest pier-scour situation are difficult to satisfy.
Scour depth at a pier (as shown in Figure 2) depends on variables characterizing the
fluid, flow, bed sediment, and pier. Thus, the following functional relationship can
describe scour depth [4]:

$$ d_{se} = f(\rho, \mu, U, Y, g, d_{50}, U_c, D), \qquad (1) $$

where ρ = fluid density; μ = fluid dynamic viscosity; U = average velocity of the
approach flow; Y = flow depth; g = gravitational acceleration; d_50 = mean particle
diameter; U_c = critical value of U associated with the initiation of motion of
particles on the bed surface; D = pier diameter; and d_se = equilibrium scour depth.
The eight independent variables in (1) are reducible to a set of five non-dimensional
parameters. If ρ, U, and D are chosen as repeating variables, the following
relationship describes the scour depth normalized by the pier diameter:

$$ \frac{d_{se}}{D} = \Psi\!\left( \frac{U}{U_c}, \frac{U}{\sqrt{gY}}, \frac{Y}{D}, \frac{D}{d_{50}}, \frac{\rho U D}{\mu} \right). \qquad (2) $$
Figure 2: Flow and local scour around a circular pier.
In the MLP model, a set of data (x_1, x_2, ...) is first fed directly into the network
through the input layer; subsequently, the multi-layer perceptron produces an
expected result y in the output layer. The number of hidden layers establishes the
complexity of the network, because a greater number of hidden layers increases the
number of connections in the ANN. Determining the correct number of hidden
layers required to solve a specific task remains an open problem, and the number of
nodes in each layer is evaluated by trial and error.
where w_ij = weight of the connection joining the j-th neuron in a layer with the i-th
neuron in the previous layer, and x_i = value of the i-th neuron in the previous layer.
The ANNs are trained with a training set of input and known output data.
Many learning examples are repeatedly presented to the network, and the process is
terminated when either the difference between predicted and known outputs is less
than a specified value or the number of training epochs exceeds the specified epoch
number. At this stage the ANN is considered trained. The back-propagation
algorithm, based upon the generalized delta rule proposed by Rumelhart et al. [17],
was used to train the ANN in this study.
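As a concrete illustration of the back-propagation procedure described above, the following minimal sketch trains a tiny 1-2-1 perceptron with the generalized delta rule on a toy linear target. The network size, toy data, and seed are our own choices, not the study's 5-input configuration; only the learning rate of 0.05 and the 3000-epoch cap mirror values reported later in the paper.

```python
import math
import random

random.seed(0)  # deterministic toy run; the paper does not report a seed

w1 = [random.uniform(-0.5, 0.5) for _ in range(2)]   # input -> hidden weights
b1 = [0.0, 0.0]                                      # hidden biases
w2 = [random.uniform(-0.5, 0.5) for _ in range(2)]   # hidden -> output weights
b2 = 0.0                                             # output bias
lr = 0.05                                            # learning rate, as in the study

data = [(x / 10.0, 0.8 * x / 10.0) for x in range(11)]   # toy target y = 0.8 x

for epoch in range(3000):                            # prescribed maximum of epochs
    for x, t in data:
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(2)]   # forward pass
        y = sum(w2[i] * h[i] for i in range(2)) + b2
        err = y - t                                            # output error
        for i in range(2):
            gh = err * w2[i] * (1.0 - h[i] ** 2)   # hidden delta (uses old w2)
            w2[i] -= lr * err * h[i]               # output-layer update
            w1[i] -= lr * gh * x                   # hidden-layer update
            b1[i] -= lr * gh
        b2 -= lr * err

def predict(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(2)]
    return sum(w2[i] * h[i] for i in range(2)) + b2
```

After training, the prediction at a training point such as x = 0.5 should lie close to the target 0.4.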
Each hidden node is associated with a transfer function f, which transforms the
incoming signals. For the input pattern X^p, the response of the j-th hidden node y_j
is of the form

$$ y_j = f\!\left( \frac{\lVert X^p - U_j \rVert^2}{2\sigma_j^2} \right), \qquad (8) $$

where ‖·‖ = Euclidean norm; U_j = center of the j-th radial basis function f; and
σ_j = spread of the RBF, indicative of the radial distance from the RBF center
within which the function value is significantly different from zero.
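A single hidden-node response per (8), with the common Gaussian choice f(r) = exp(-r), can be sketched as follows; the function name and sample points are illustrative.

```python
import math

def gaussian_rbf(x, center, sigma):
    """Response of one hidden node per (8): f(||x - U_j||^2 / (2 sigma_j^2)),
    with the Gaussian choice f(r) = exp(-r). Illustrative helper only."""
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

# The response is 1 at the center and decays with distance, controlled by sigma
r0 = gaussian_rbf([0.3, 0.7], center=[0.3, 0.7], sigma=1.0)   # at the center
r1 = gaussian_rbf([1.3, 0.7], center=[0.3, 0.7], sigma=1.0)   # one unit away
```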
[Figure: Architecture of the RBF network, with inputs X_p1, ..., X_pN, hidden
nodes applying radial basis functions f centered at U_j, weights W_kj, and outputs
Z_p1, ..., Z_pK.]
where w_kj = weight of the connection between the j-th hidden node and the k-th
output node. Of several possible radial basis functions, the most common choice is
the Gaussian. The Gaussian RBF of the j-th hidden node is specified by its center
U_j and spread σ_j.
Training an RBF network involves two stages: (1) determining the basis functions
of the hidden-layer nodes, and (2) determining the output-layer weights. Fitting the
RBF involves finding suitable centers and spreads. A variety of techniques have
been developed to optimize the number of RBF centers; the present study employed
the minimum description length algorithm [10] to optimize the parameters of the
RBF networks.
A Bayesian neural network is capable of realizing a greater variety of non-linear
relationships of considerable complexity. The data are presented to the network in
the form of input and output parameters, and the optimum non-linear relationship is
found by minimizing a penalized likelihood. In effect, the network tests many kinds
of relationships in its search for an optimum fit. As in regression analysis, the
results then consist of a specification of the function which, in combination with a
series of coefficients (called weights), relates the inputs to the outputs. The search
for the optimum representation can be computationally intensive, but once the
process is completed (that is, the network has been trained), the estimation of
outputs is very rapid. Neural networks can be susceptible to over-fitting; we have
used a Bayesian framework [12] to control this problem.
MacKay [12, 13] developed a Bayesian framework for neural networks. This
framework allows quantitative assessment of the relative probabilities of models of
different complexity, and quantitative errors can be applied to the predictions of the
models. This work has been applied to the complex problem of predicting scour
depth around bridge piers and its time variation in the study reported herein.
Figure 5 shows the structure of the neural network used in our model. A set of
data ( x1 , x 2 ,.... ) is first fed directly into the network through the input layer and,
subsequently, the Bayesian neural network produces an expected result ( y ) in the
output layer. The output ( y ) is determined by the architecture of the network.
To predict the output, that is the scour depth, hidden nodes were used between
the inputs and the output so that more complex relationships could be expressed.
The transfer function relating the inputs to the i-th hidden node is given by

$$ h_i = \tanh\!\left( \sum_j w_{ij}^{(1)} x_j + \theta_i^{(1)} \right). \qquad (10) $$

The relationship between the hidden nodes and the output is linear, that is,

$$ y = \sum_i w_i^{(2)} h_i + \theta^{(2)}. \qquad (11) $$
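The forward pass of (10)-(11) can be sketched directly; the weight layout and sample values below are illustrative only.

```python
import math

def bnn_forward(x, w1, b1, w2, b2):
    """Forward pass per (10)-(11): tanh hidden layer, linear output.
    w1[i][j] = weight from input j to hidden node i (shapes are illustrative)."""
    h = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
         for row, bi in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

# Hypothetical two-input, two-hidden-node example
y = bnn_forward(x=[0.5, -0.2],
                w1=[[0.1, 0.3], [-0.4, 0.2]], b1=[0.0, 0.1],
                w2=[0.7, -0.5], b2=0.05)
```

With all weights zero, the output reduces to the output bias θ^(2), which is a quick sanity check of the linear output layer.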
The minimization was implemented using a variable metric optimizer. The
gradient of M (w) was computed using a back-propagation algorithm [17]. The
energy function consists of the error function, E D and regularization, E w . The error
function is the sum-squared error as follows
1
E D ( w) = ∑
2 m
( y ( x m ; w) − t m ) 2 , (14)
where {x m , t m } is the data set, x m represents the inputs, t m represents the targets,
and m is a label of the pair. The error function E D is smallest when the model fits
the data well, that is when y ( x m ; w) is close to t m . The coefficients w and biases
θ , shown in previous equations, make up the parameter vector w . A number of
regularizers E_w^(c) are added to the data error. These regularizers favour functions
y(x; w) that are smooth functions of x. The simplest regularization method uses a
single regularizer, $E_w = \tfrac{1}{2}\sum_i w_i^2$. A slightly more complicated regularization
method, known as the automatic relevance determination model [12], is used in this
study. Each weight is assigned to a class c, depending on which neurons it connects.
For each input, all the weights connecting that input to the hidden nodes are in a
single class. The biases of the hidden nodes are in another class, and all the weights
from the hidden nodes to the outputs are in a final class. E_w^(c) is defined as the
sum of the squares of the weights in class c [12]:

$$ E_w^{(c)}(w) = \frac{1}{2} \sum_{i \in c} w_i^2. \qquad (15) $$
This additional term favours small values of w and decreases the tendency of a
model to over-fit noise in the data set. The control parameters α_c and β, together
with the number of hidden nodes, determine the complexity of the model. These
hyper-parameters define the assumed Gaussian noise level σ_v² = 1/β and the
assumed weight variances σ_w²(c) = 1/α_(c). The noise level inferred by the model
is σ_v. The parameter α has the effect of encouraging the weights to decay.
Therefore, a high value of σ_w implies that the particular input parameter explains a
relatively large amount of the variation in the output. Thus, σ_w is regarded as a
good expression of the significance of each input, though not of the sensitivity of
the output to that input. The values of the hyper-parameters are inferred from the
data using the Bayesian methods given in MacKay [12]. In this method, the hyper-
parameters were initialized to values chosen by the operator, and the weights were
set to small initial values. The objective function M(w) was minimized to a chosen
tolerance, and the values of the hyper-parameters were then updated using a
Bayesian approximation given in MacKay [12]. The M(w) function was minimized
again, starting from the final state of the previous optimization, and the hyper-
parameters were updated again.
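The objective minimized above can be sketched as follows. The composition M(w) = β E_D + Σ_c α_c E_w^(c) is the standard form in MacKay's framework [12]; the excerpt does not write M(w) explicitly, so this composition is an assumption, and all names and toy values below are illustrative.

```python
def data_error(weights, data, model):
    """E_D per (14): half the sum of squared residuals."""
    return 0.5 * sum((model(x, weights) - t) ** 2 for x, t in data)

def weight_error(weights, cls):
    """E_w^(c) per (15): half the sum of squared weights in class c."""
    return 0.5 * sum(weights[i] ** 2 for i in cls)

def objective(weights, data, model, alphas, classes, beta):
    """Penalized objective, assumed form M(w) = beta*E_D + sum_c alpha_c*E_w^(c)."""
    ed = data_error(weights, data, model)
    ew = sum(a * weight_error(weights, cls) for a, cls in zip(alphas, classes))
    return beta * ed + ew

# Toy linear model y = w0*x + w1 with the two weights in separate classes
model = lambda x, w: w[0] * x + w[1]
M = objective([2.0, 0.5], data=[(1.0, 2.5), (2.0, 4.5)],
              model=model, alphas=[0.01, 0.01],
              classes=[[0], [1]], beta=1.0)
```

In this toy case the model fits the two data points exactly, so M(w) consists purely of the weight penalty.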
3.4 Development of ANN Models
Three ANN models namely MLP/BP, RBF/OLS and BNN were developed using the
same input variables. The current study used thirteen sets of data to predict
equilibrium scour depth [4, 5, 8]. The whole data set consisted of 263 data points,
which were divided randomly into two parts: a training (calibration) set of 180 data
points and a validation (testing) set of 83 data points. The data reported by Melville
and Chiew [14], Kothyari et al. [8] and Oliveto and Hager [16] were used to predict
scour depth at a particular time t. This data set consisted of 1700 data points,
divided randomly into a training set of 1138 data points and a validation (testing)
set of 562 data points. Details of the database are available in Jeng et al. [7]. The
ranges of the different parameters involved in this study are given in Table 2.
Table 2a. Range of input-output parameters used for the estimation of equilibrium
scour depth.

Parameter                        Range
Flow depth (Y)                   0.02 – 0.7 m
Flow mean velocity (U)           0.165 – 1.503 m/s
Grain mean diameter (d50)        0.2 – 7.8 mm
Critical flow velocity (Uc)      0.222 – 1.652 m/s
Pier diameter (D)                0.01 – 1 m
Equilibrium scour depth (d_se)   0.004 – 0.440 m

Table 2b. Range of different input-output parameters used for the estimation of
time-dependent scour depth.
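The random partition described above (263 points split into 180 for training and 83 for testing) can be sketched as follows; the seed is illustrative, since the paper does not report one.

```python
import random

def split_data(records, n_train, seed=42):
    """Randomly partition records into training and testing sets, mirroring
    the 263 -> 180 + 83 split used for the equilibrium-scour data set."""
    shuffled = records[:]                # copy so the original order is kept
    random.Random(seed).shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

records = list(range(263))               # stand-ins for the 263 data points
train_set, test_set = split_data(records, n_train=180)
```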
$$ \mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \lvert O_i - t_i \rvert , \qquad (16) $$

$$ \mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( O_i - t_i \right)^2 } , \qquad (17) $$

$$ R^2 = 1 - \frac{ \sum_{i=1}^{N} \left( O_i - t_i \right)^2 }{ \sum_{i=1}^{N} \left( O_i - \bar{O} \right)^2 } , \qquad (18) $$

where O_i and t_i are the target and network output for the i-th event, Ō is the
average of the target outputs, and N is the total number of events considered.
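The three error measures of (16)-(18) translate directly into code; this is an illustrative sketch, with argument names matching the symbols in the equations.

```python
import math

def mae(o, t):
    """Mean absolute error, per (16)."""
    return sum(abs(oi - ti) for oi, ti in zip(o, t)) / len(o)

def rmse(o, t):
    """Root-mean-square error, per (17)."""
    return math.sqrt(sum((oi - ti) ** 2 for oi, ti in zip(o, t)) / len(o))

def r_squared(o, t):
    """Coefficient of determination, per (18); the denominator uses the
    spread of O_i about its mean, as written in the text."""
    o_bar = sum(o) / len(o)
    ss_res = sum((oi - ti) ** 2 for oi, ti in zip(o, t))
    ss_tot = sum((oi - o_bar) ** 2 for oi in o)
    return 1.0 - ss_res / ss_tot
```

For a perfect fit, MAE and RMSE are zero and R² equals one.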
The ANN configuration that minimized the two error measures described in
the previous section (and gave the optimum R²) was selected as the optimum. The
whole analysis was repeated several times.
In this study, two types of MLP/BP models were developed: (1) single-hidden-
layer ANN models consisting of only one hidden layer; and (2) multiple-hidden-
layer ANN models consisting of two hidden layers. The task of identifying the
number of neurons in the input and output layers is normally simple, as it is dictated
by the input and output variables used to model the physical process. As mentioned,
however, the number of neurons in the hidden layer(s) must be determined through
a trial-and-error procedure [3]. The optimal architecture was determined by varying
the number of hidden neurons (from 1 to 20), and the best structure was selected.
The training of the ANN models was stopped when either the acceptable level of
error was achieved or the number of iterations exceeded a prescribed maximum of
3000. A learning rate of 0.05 was used.
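The trial-and-error architecture search described above can be sketched generically. The callback and its scoring curve below are hypothetical stand-ins for training a network and measuring its validation error; they are not the study's code.

```python
def select_hidden_size(candidates, train_and_score):
    """Trial-and-error search over hidden-layer sizes: train a network for
    each candidate size and keep the one with the lowest validation error.
    `train_and_score` is a hypothetical callback returning validation RMSE."""
    best_size, best_err = None, float("inf")
    for n_hidden in candidates:
        err = train_and_score(n_hidden)
        if err < best_err:
            best_size, best_err = n_hidden, err
    return best_size, best_err

# Illustrative scoring curve with its minimum at 14 hidden neurons
fake_score = lambda n: (n - 14) ** 2 / 100.0 + 0.01
size, err = select_hidden_size(range(1, 21), fake_score)
```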
time-dependent scour depth (d_s) as the output pattern, and the second combination
includes two non-dimensional parameters and the relative scour depth (d_s/d_se) as
the input and output patterns, respectively.
[Figure 5: MAE (left) and RMSE (right) of the equilibrium-scour-depth prediction
versus the number of hidden nodes (0 to 50) for the MLP/BP and RBF/OLS
models.]
In the RBF model, the center-selection process used a tolerance value of 0.005
and a radial basis spread of 1. For this tolerance to be achieved, 50 significant
regressors were required, and they were selected automatically by the algorithm in
sequence. Thus, the model based on the RBF network was composed of 50 nodes in
its hidden layer [Figure 5(a) and (b)]. As illustrated in Figure 5, increasing the
number of hidden nodes significantly improves the ability of the ANN to predict
the values of interest.
Note also that the reliability of the forecasted values depends not only on the
ANN structure (which needs to be chosen carefully through the training-validation
process) but also on the input data: to obtain reliable results, the input data must
themselves be trustworthy. Thus, the two models are compared using the
aforementioned data set.
Figures 6 and 7 suggest that the RBF model has a lower training error than the
MLP [Figures 6(a) and 7(a)], but a higher validation error [Figures 6(b) and 7(b)].
In other words, the MLP/BP validation results show less scatter than those of the
RBF.
[Figures 6 and 7: Predicted versus observed equilibrium scour depth. MLP/BP
training: MAE = 0.00219, RMSE = 0.00567; MLP/BP validation: MAE = 0.0051.
RBF/OLS training: MAE = 0.0011, RMSE = 0.0052, R² = 0.9948; RBF/OLS
validation: MAE = 0.018, RMSE = 0.025, R² = 0.842.]
To predict the equilibrium scour depth, the network configuration that included
two hidden layers and 15 neurons within each hidden layer gave the minimum error
(graph not shown here). The network configuration consisting of one hidden layer
and 14 neurons within that layer gave the minimum error to predict the temporal
variation of scour depth (graph not shown here). These two models were selected as
the optimum models.
Two combinations of the data were used to predict time-dependent scour depth.
Figure 8 shows training and validation results for the first and second combinations
of the data, respectively. The Bayesian model yielded much better results when
analyzed with the dimensional set of data.
[Figure 8: Predicted versus observed (a) equilibrium scour depth and (b) time-
dependent scour depth for the Bayesian model (R² values of 0.981 and 0.973 for
the two panels).]
Figure 9: Comparison of the present MLP/BP predictions with (a) the U.S. DOT
[20] predictions of equilibrium scour depth and (b) the Melville and Chiew [14]
predictions of time-dependent scour depth.
6 Conclusions
In this study, an alternative approach, artificial neural networks, was proposed for
the estimation of local scour around a bridge pier. Three different ANN models
were outlined. The study included the use of collected laboratory data to train and
validate the networks, and showed that the neural network approach predicts scour
depth much more accurately than the existing methods.
The study also drew on the theoretical basis of the scour problem to obtain its
dominant parameters. The selection of input variables has a large impact on model
accuracy; therefore, based upon the dominant parameters, two combinations of
them, an original (dimensional) and a non-dimensional data set, were used in the
analysis of the networks. The analyses show that the raw data produce better results
than the transformed data.
References
[1] B.A. Blodgett, “Countermeasures for Hydraulic Problems at Bridges”, Federal
Highway Administration, US Department of Transportation, 1, 10-23, 1978.
[2] H.N.C. Breusers, G. Nicollet, H.W. Shen, “Local scour around cylindrical
piers,” Journal of Hydraulic Research, 15(3), 211-252, 1977.
[3] R.C. Eberhart, R.W. Dobbins, “Neural Network PC tools: A practical guide”,
Academic, San Diego, 1990.
[4] R. Ettema, B.W. Melville, B. Barkdoll, “Scale effect of pier- scour
experiments” Journal of Hydraulic Engineering, ASCE, 124(6), 639-642,
1998.
[5] S. Hancu, “Sur le calcul des affouillements locaux dans la zone des piles des
ponts,” Proc. 14th IAHR Congress, Paris, France, 3, 299-313, 1971.
[6] J.B. Herbich, “Seafloor Scour: Design Guidelines For Ocean-Founded
Structures,” Marine Technology Society, New York, 1984.
[7] D.-S. Jeng, S. M. Bateni, E. Lockett, “Neural Network assessment for scour
depth around bridge piers”, Research Report No R855, School of Civil
Engineering, The University of Sydney, Sydney, Australia, November, 2005,
http://www.civil.usyd.edu.au/publications/2005rreps.shtml#r855.
[8] U.C. Kothyari, R.J. Garde, K.G. Ranga Raju, “Temporal variation of scour
around circular bridge piers,” Journal of Hydraulic Engineering, ASCE,
118(8), 1091-1106, 1992.
[9] E. M. Laursen, A. Toch, “Scour around bridge piers and abutments,” Bulletin
No. 4, Iowa Road Res. Board, 1956.
[10] A. Leonardis, H. Bischof, “An efficient MDL-based construction of RBF
networks,” Neural Networks, 11, 963-973, 1998
[11] C. Lin, “Analysis of Disasters of Cross-River Bridge Foundations in the
Western Area of Taiwan and Establishment of a Data Base System for Their
Protection Works,” Research Report, Department of Civil Engineering,
National Chung-Hsing University, Taichung, Taiwan, ROC (in Chinese),
1998.
[12] D.J.C. MacKay, “Bayesian interpolation,” Neural Computation, 4(3),
415-447, 1992.
[13] D.J.C. MacKay, “A practical Bayesian framework for backpropagation
networks,” Neural Computation, 4(3), 448-472, 1992.
[14] B.W. Melville, Y.M. Chiew, “Time scale for local scour depth at bridge
piers,” Journal of Hydraulic Engineering, ASCE, 125(1), 59-65, 1999.
[15] B.W. Melville, A.J. Sutherland, “Design method for local scour at bridge
piers,” Journal of Hydraulic Engineering, ASCE, 114(10), 1210-1226, 1988.
[16] G. Oliveto, W.H. Hager, “Temporal evolution of clear-water pier and
abutment scour,” Journal of Hydraulic Engineering, ASCE, 128(9), 811-820,
2002.
[17] D.E. Rumelhart, G. Hinton, R. Williams, “Learning internal representations by
error propagation,” Parallel distributed processing: Exploration in the
microstructure of cognition, D. Rumelhart and J. McClelland, eds., 1, MIT
Press, Cambridge, Mass., 318-362, 1986.
[18] H.W. Shen, “Scour near piers”. In: River Mechanics, II, Chap. 23, Ft. Collins,
Colo, 1971.
[19] B. M. Sumer, J. Fredsoe, “The Mechanics of Scour in the Marine
Environment,” World Scientific, 2002.
[20] U.S. DOT, “Evaluating scour at bridges,” Hydraulic Engineering Circular
No. 18, FHWA-IP-90-017, Federal Highway Administration, U.S. Department
of Transportation, McLean, VA, 1993.