By
James-Alexandre Goulet
Switzerland
2012
Acknowledgements
I would like to acknowledge the contribution of all collaborators I had the opportunity to
work with during my doctoral studies. I also thank my family, friends and colleagues for their
continuous support over the years. I would like to give special thanks to:
Close collaborators
Prof. Ian F. C. Smith (EPFL, Switzerland)
Dr. Clotaire Michel (ETHZ, Switzerland)
Sylvain Coutu (Ph.D. candidate, EPFL, Switzerland)
External collaborations
Prof. James M. H. Brownjohn (University of Sheffield, UK)
Prof. Luc Chouinard (McGill University, Canada)
Prof. William O'Brien (University of Texas at Austin, USA)
Prof. Alain Nussbaumer (EPFL, Switzerland)
Prof. Franklin Moon (Drexel University, USA)
Prof. Branko Glisic (Princeton, USA)
Prof. Benny Raphael (NUS, Singapore)
Key discussions
Prof. Michael Faber (DTU, Denmark)
Prof. James L. Beck (Caltech, USA)
Prof. Eugen Brühwiler (EPFL, Switzerland)
Prof. Prakash Kripakaran (University of Exeter, UK)
Prof. Nizar Bel Hadj Ali (University of Gabès, Tunisia)
Dr. Sandro Saitta (External lecturer, EPFL, Switzerland)
Master students
Romain Pasquier (Ph.D. student, M.Sc., EPFL, Drexel1 )
Marie Texier (M.Sc., EPFL, McGill2 )
Olivier Egger (M.Sc. candidate, EPFL, Princeton3 )
Alban Nguyen (M.Sc. candidate, EPFL)
Interns
Emma Hoffman
Youssef Mezdani (M.Sc., EPFL)
Proofread
Ian F. C. Smith
Romain Pasquier
Austin Ivey
Belinda Bates
1 Master project conducted in collaboration with Drexel University, USA
2 Master project conducted in collaboration with McGill University, Canada
3 Master project conducted in collaboration with Princeton University, USA
Support
My family
Dr. Alexis Kalogeropoulos
Daphné Dethier
IMAC-laboratory and EPFL colleagues
Software
Part of the work done in this thesis used an academic license of the software ANSYS
from ANSYS Inc.
Abstract
Most infrastructure in the western world was built in the second half of the 20th century.
Transportation structures, water-distribution networks and energy-production systems are
now aging, and this leads to safety and serviceability issues. In situations where conservative
routine assessments fail to justify adequate safety and serviceability, advanced structural-evaluation
methodologies are attractive. These advanced methodologies employ measurements
to help understand structural behavior more accurately. A better understanding typically
results in more accurate reserve-capacity evaluations, along with other advantages. Many of
the available approaches originate from the fields of statistics, signal processing and control
engineering, where it is common to assume that modeling errors can be treated as Gaussian
noise. Such an assumption is not generally applicable to civil infrastructure because, in these
systems, systematic biases in models can be very important and their effects often vary with
location. Most importantly, little is known of the dependencies between the errors.
This thesis includes a proposal for a model-based data-interpretation methodology that builds
on the concept of probabilistic falsification. This approach can identify the properties of
structures in situations where there are aleatory and systematic errors, without requiring
definitions of dependencies between uncertainties. Prior knowledge is used to bound the
ranges of the parameters to be identified and to build an initial set of possible model instances.
Then, predictions from each model are compared with measurements gathered on-site so that
inadequate model instances and model classes are falsified using threshold bounds. These
bounds are defined using measurement and modeling uncertainties. The probability of
discarding a valid model instance is regulated using the Šidák correction to account for
multiple measurements.
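The falsification logic described above can be illustrated with a minimal sketch. This is not the thesis implementation: it assumes a zero-mean Gaussian combined uncertainty, and the numbers (target probability 0.95, three comparison points, the standard deviations) are illustrative assumptions only.

```python
from statistics import NormalDist

# Illustrative settings (assumed, not from the thesis case studies).
phi = 0.95                  # target probability of keeping a valid model instance
n_m = 3                     # number of measurements (comparison points)
sigma_c = [0.4, 0.5, 0.3]   # combined-uncertainty std. dev. at each point

# Sidak correction: the per-measurement probability is raised so that the
# joint probability over all n_m measurements remains equal to phi.
phi_i = phi ** (1.0 / n_m)

# Symmetric threshold bounds for each zero-mean Gaussian combined uncertainty.
z = NormalDist().inv_cdf(0.5 + phi_i / 2.0)
bounds = [(-z * s, z * s) for s in sigma_c]

def is_falsified(predicted, measured):
    """A model instance is falsified if, at any comparison point, the
    residual (prediction minus measurement) lies outside the bounds."""
    return any(not (lo <= g - y <= hi)
               for (lo, hi), g, y in zip(bounds, predicted, measured))

print(is_falsified([1.0, 2.0, 3.0], [1.1, 2.0, 2.9]))  # small residuals: kept
print(is_falsified([1.0, 2.0, 3.0], [1.1, 2.0, 4.5]))  # one large residual: falsified
```

Note the design choice this encodes: a single residual outside its bounds falsifies the whole instance, which is why the per-measurement probability must be Šidák-adjusted as the number of measurements grows.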
A new metric called expected identifiability probabilistically quantifies the utility of
monitoring interventions. Expected identifiability quantifies the effect of hypotheses and choices
such as the uncertainty level, model-class refinement, measurement locations, measurement
types and sensor accuracy. Results show that using too many measurements may decrease
data-interpretation performance.
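The idea behind expected identifiability can be sketched as a Monte Carlo loop: before any monitoring takes place, simulate plausible measurements and count how many model instances would remain candidates. All numbers below (the one-parameter model family, the uncertainty level, the threshold) are hypothetical assumptions for illustration, not values from the thesis.

```python
import random
from statistics import NormalDist

random.seed(1)

# A hypothetical one-parameter model family: 41 instances, each predicting
# responses at three comparison points.
model_predictions = [[p * x for x in (1.0, 2.0, 3.0)]
                     for p in [0.8 + 0.01 * k for k in range(41)]]
sigma_c = 0.2  # assumed combined-uncertainty standard deviation

# Sidak-adjusted falsification threshold for 3 measurements, target 0.95.
z = NormalDist().inv_cdf(0.5 + (0.95 ** (1 / 3)) / 2)
threshold = z * sigma_c

candidate_counts = []
for _ in range(1000):
    # Simulated measurements: a randomly chosen "true" instance plus noise.
    true_model = random.choice(model_predictions)
    y = [v + random.gauss(0.0, sigma_c) for v in true_model]
    # Count instances that survive falsification against this realization.
    n_candidates = sum(
        all(abs(g - yi) <= threshold for g, yi in zip(instance, y))
        for instance in model_predictions)
    candidate_counts.append(n_candidates)

# The distribution of candidate-set sizes quantifies, probabilistically,
# how useful this monitoring configuration is expected to be.
expected = sum(candidate_counts) / len(candidate_counts)
print(f"expected candidate models: {expected:.1f} of {len(model_predictions)}")
```

Comparing this expectation across candidate sensor layouts, measurement types or uncertainty levels is what turns the metric into a design tool.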
Probabilistic model falsification, expected identifiability and measurement-system design
methodologies are applied to several full-scale case studies. The work shows that data
interpretation is limited by factors such as robustness with respect to inaccurate uncertainty
definitions and the exponential complexity of exploring high-dimensional solution spaces.
Paths for tackling these issues are proposed as guidance for future research.
Keywords
System identification, falsification, uncertainties, infrastructure diagnosis, data interpretation,
measurement-system design, bridge monitoring, performance evaluation
Résumé
La majorité des infrastructures des pays industrialisés ont été construites dans la seconde
partie du 20ème siècle. Ces infrastructures de transport, de distribution d'eau et de production
d'énergie vieillissent et montrent aujourd'hui des signes de détérioration. Durant les dernières
années, ces signes ont remis en cause l'utilisation et la sécurité de ces ouvrages. Lorsque les
contrôles traditionnels ne suffisent pas à justifier la sécurité et la fonctionnalité des
infrastructures, des méthodes avancées basées sur la mesure du comportement structural des ouvrages
peuvent permettre de mieux comprendre leur fonctionnement et leur capacité réelle. La majorité
de ces méthodes avancées vise la minimisation des différences entre les valeurs mesurées
in situ et celles prédites à l'aide de modèles. Plusieurs de ces techniques d'interprétation des
données proviennent du domaine de la statistique, du traitement de signal et du contrôle
des systèmes, où il est courant de traiter les erreurs de modélisation comme des variables
aléatoires, gaussiennes et indépendantes. Cette hypothèse sur les erreurs n'est pas toujours
satisfaite lorsqu'elle est appliquée aux ouvrages du génie civil car les relations de dépendance
entre les incertitudes sont souvent inconnues.
Cette thèse présente une méthode d'interprétation probabiliste basée sur la falsification de
modèles. L'approche permet l'identification des propriétés des structures, sans avoir à faire
d'hypothèse concernant la dépendance entre les incertitudes. Les connaissances à disposition
sont utilisées afin de générer un ensemble de modèles pouvant expliquer le comportement
de l'ouvrage étudié. Les valeurs prédites par chacun des modèles sont comparées avec les
mesures afin d'écarter les modèles dépassant des valeurs seuils. Les seuils de falsification sont
définis par la combinaison des incertitudes provenant des mesures ainsi que des modèles.
Mots-clés
Identification des systèmes, falsification, incertitudes, diagnostic des infrastructures,
interprétation des données, systèmes de mesure, mesure des structures, évaluation de performance
Contents
Acknowledgements
Abstract (English/Français)
List of figures
List of tables
Notation
Introduction
1 Literature review
1.6 Conclusions
2.1 Introduction
2.2 Methodology
2.3 Uncertainty dependencies
2.9 Conclusions
3.4 Conclusions
4 Measurement-system design
4.1 Introduction
4.3.1 Methodology
4.4.1 Methodology
4.5 Conclusions
5 Case studies
5.1 Introduction
5.2.3 Discussion
5.3 Langensand Bridge
Bibliography
Academic Curriculum
List of Publications
List of Figures
1.1 A comparison between the state of knowledge in the field of sensing technologies and in data interpretation. The relative size of each circle describes the extent of the current state of knowledge.
1.2 Likelihood function based on an L2-norm.
1.3 Generalized Gaussian distribution for shape parameters {1, 2, 10, ∞}. Adapted from [225].
1.4 Comparison of the probability contained in rectangular and ellipsoidal confidence regions when varying the correlation between two random variables. In all three cases, the correlation used to compute the ellipsoidal region is set to 0.9. However, the correlation used to generate realizations of X is a) 0.9, b) 0.4 & c) -0.9. Only the rectangular regions include in all situations a proportion of the sample at least equal to the target (0.95).
1.5 Comparison of the area enclosed in rectangular (a) and ellipsoidal (b) confidence regions including a target probability content of 0.95. The correlation used to compute the ellipsoidal region is set to 0. However, the correlation used to generate realizations of X is 0.9. The area enclosed in the ellipsoidal region is 15% larger than the region defined by rectangular bounds.
1.6 pdf of a random variable X, where a confidence interval bounded by T_low and T_high contains a probability φ ∈ ]0, 1].
1.7 Uncertainty in a variable X can be represented by probability bounds (T_low and T_high) without defining a probability density function.
1.8 Propagation of model-parameter uncertainties using Monte Carlo sampling. Adapted from [106].
1.9 Curvilinear distribution used to describe a uniform distribution whose bounds are inexactly defined.
1.10 Central-composite design for three parameters, where each axis represents the normalized parameter range.
1.11 Conceptual illustration of random-walk sampling.
1.12 Timeline of key contributions made in scientific fields related to data interpretation. These contributions are a non-exhaustive survey of the vast literature available in each field.
1.13 Schematic comparison of current data-interpretation techniques.
2.1 The combined probability density function describes the outcome of the random variable U_c,i, i.e. the combination of modeling and measurement uncertainties.
2.2 Threshold definition. Threshold bounds T_low,i and T_high,i are found separately for each combined uncertainty for a probability φ^(1/2). When these threshold bounds are projected on the bi-variate pdf, they define a rectangular boundary that is used to separate candidate and falsified models.
2.3 Error dependency between degrees of freedom in a finite-element beam model. If errors are random (a), predicted displacements at any location are independent of each other and appear to vary around the real displacement. On the other hand, systematic effects introduce dependencies in the error structure (b). For example, if the boundary condition is not adequately modeled, the displacement may be biased at several locations.
2.4 Propagation of secondary-parameter uncertainty in a physical model to obtain prediction uncertainty. Uncertainties in secondary model-parameter values are propagated through the template model while primary parameters are kept at their most likely values. Several thousand evaluations of the template model are made to compute the prediction uncertainty due to uncertain secondary-parameter values.
2.5 An initial set of model instances is generated based on a grid, where the template model is evaluated using each parameter set. The grid is defined using the minimal and maximal bounds for parameter ranges, defined based on engineering judgment. Discretization intervals are also provided to specify the sampling density.
2.6 Initial model set organized in an n_p-parameter grid used to explore the space of possible solutions. Inadequate model instances are falsified using Equation 2.8 and models that are not falsified are classified as candidate models.
2.7 Two-dimensional likelihood function used to generate parameter samples. The likelihood L_cm is maximal when the observed residuals ε_o,i are within the threshold bounds [T_low,i, T_high,i]. This example was created using Equation 2.11 with a shape-function parameter of 10.
2.8 Flowchart describing error-domain model falsification.
2.9 Composite beam cross-section. The structure studied is a simply supported, ten-meter-long composite beam. The beam is modeled by shell elements using the finite-element method.
2.10 Combined-uncertainty probability density function for mid-span and quarter-span comparison points. For each comparison point, uncertainties are separated into modeling (U_model,i) and measurement (U_measure,i) uncertainties. These are subtracted from each other to obtain the combined uncertainties U_c,1 and U_c,2.
2.11 The two combined-uncertainty pdfs are presented in a bivariate probability density function. This bivariate pdf is used to define the threshold bounds (T_low,i, T_high,i) including a target probability φ = 0.95. Minimal and maximal bounds for each location are found numerically by satisfying Equation 2.14 while minimizing the area enclosed in the projection of the threshold bounds. Model instances are falsified if, for any comparison point, the difference between predicted and measured values (g_i(θ) − y_i) lies outside the rectangular threshold bounds.
2.12 Representation of the initial model set with candidate and falsified models. By comparing the difference between model predictions and measurements with threshold bounds, 58 model instances out of 100 are falsified. The 42 candidate models are outlined by the shaded region.
2.13 Exploration of the model-instance space using random-walk MCMC. The same 42 candidate models were found; however, it required 22% fewer samples than an exhaustive grid sampling. The vertices between samples correspond to the path followed by the random walk. The starting point is highlighted by a circle.
2.14 An illustration of the effects of model simplifications and omissions on time-domain structural identification. Graph a) represents the true displacement of a structure over time. When monitoring a system, noise is usually recorded in addition to the system behavior (b). Simulations of the system behavior are inevitably inexact (c). When comparing the time-domain measured and predicted signals (d), a bias is present. In such a situation, the residual cannot be described using stationary Gaussian noise.
2.15 Two-dimensional L∞-norm likelihood function generated from Equation 2.13 using a shape parameter of 100. The projection of vertical walls on the horizontal plane corresponds to threshold bounds. Large shape-parameter values can be used to approximate the L∞-norm likelihood function. This can be used as an alternative to L2-norm Bayesian inference.
2.16 Posterior pdf for the illustrative example presented in §2.6.1. This posterior pdf is computed using the L∞-norm likelihood function presented in Figure 2.15. The prior distribution is set as uniform over the range of 190-212 GPa for the steel and 15-45 GPa for the concrete Young's modulus. The candidate-model region found corresponds to the set found using error-domain model falsification.
3.1 Schematic representation of the inclusion of modeling and measurement errors in the generation of simulated measurements. Modeling error is added to the predicted behavior of a model instance to obtain the assumed true behavior. Simulated measurements (y_s,i) are obtained by adding a measurement error to the assumed true behavior obtained previously.
3.2 Illustration of the process of simulating measurements based on the predictions of model instances and on uncertainties. Note that the generation of simulated measurements includes the correlation between expected residual pdfs.
3.3 Qualitative-reasoning description used to define the uncertainty correlation. The correlation value is presented on the horizontal axis. The vertical axis corresponds to the probability of occurrence of a given correlation value depending on its qualitative description: "low", "moderate", "high", "positive" and "negative".
3.4 a) Example of an empirical cumulative distribution function (cdf), F_CM(n_CM), used to compute the expected size of the candidate model set. F_CM(n_CM) depends on the target identification reliability, the uncertainties, the measurement configuration used and the uncertainty dependencies. In this example, there is a probability φ = 0.95 of falsifying at least 60% of the models (F⁻¹_CM(0.95) = 40%) and a probability φ = 0.50 of falsifying at least 75% of the models (F⁻¹_CM(0.50) = 25%). b) Effect of uncertainties, dependencies and target identification reliability on F_CM(n_CM). There is no unidirectional trend associated with the choice of measurement configurations.
3.5 Flowchart representing the steps involved in the computation of the expected identifiability. These metrics quantify the utility of monitoring for better understanding the behavior of a system.
4.1 Schematic representation of the phenomena involved in the design of measurement systems. The total number of candidate models decreases as the number of measurements increases, until the point where additional observations are not useful (solid curve). Over-instrumentation is due to the combined effects of the increased amount of information and threshold adjustments (dashed curves).
4.2 Conceptual example used to illustrate the situation where additional measurements can lead to over-instrumentation.
4.3 Flowchart summarizing the steps involved in the optimization of measurement-system performance.
4.4 An example of the growth in the number of iterations required by the greedy algorithm compared with the solution-space growth.
4.5 Schematic representation of the effect of the number of measurement locations on the mode-match criterion. The initial configuration using all sensors results in a 100% mode match (a), the same match may be possible with a configuration using fewer sensors (b), and a tradeoff between information and costs may be fixed with fewer sensors (c). The mode-match criterion quantifies the capacity to find a correspondence between predicted and measured mode shapes.
4.6 Histograms representing the relative frequency of usage of each sensor. a) This figure represents the result obtained after the first greedy optimization loop. Measurement locations used with a frequency less than q are removed during the subsequent greedy optimization loop. b) This figure represents the final sensors removed after two or more optimization loops.
4.7 General framework describing the measurement-system optimization methodology.
5.1 True and idealized cantilever beams. Parameters to be identified using the idealized beam are the Young's modulus (E) and the value of the vertical force applied (F).
5.2 Comparison of parameter values identified using least-squares parameter identification and Bayesian inference with the correct parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this scenario, there are no systematic errors and uncertainties are correctly assumed to be independent. The labels "correct" and "biased identification" apply to Bayesian inference.
5.3 Comparison of the candidate model set found using error-domain model falsification with the true parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding uncertainty dependencies. The shaded area represents the candidate model set.
5.4 Comparison of parameter values identified using least-squares parameter identification and Bayesian inference with the true parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this scenario, uncertainties are wrongly assumed to be independent. The labels "correct" and "biased identification" apply to Bayesian inference.
5.5 Comparison of parameter values identified using Bayesian inference with the true parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The shaded area represents the region including the 95% credible regions obtained when varying the correlation from 0 to 0.99, for all covariance terms simultaneously.
5.6 Comparison of the candidate model set found using error-domain model falsification with the true parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding uncertainty dependencies. The shaded area represents the candidate model set.
5.7 Comparison of parameter values identified using least-squares and Bayesian inference with the true parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The systematic bias in model predictions and in measurement-error estimation is not recognized. The labels "correct" and "biased identification" apply to Bayesian inference.
5.8 Comparison of the candidate model set found using error-domain model falsification with the true parameter value for the Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The systematic bias in model predictions and in measurement-error estimation is not recognized.
5.9 Langensand Bridge elevation representation.
5.10 Langensand Bridge cross-section.
5.11 Test-truck layout for load cases 1 to 5 (Phase 1) for the Langensand Bridge.
5.12 Langensand Bridge finite-element model (Phase 1).
5.13 Relative importance of each primary parameter on the model predictions of the Langensand Bridge.
5.14 Correlation between predictions for the Langensand Bridge due to uncertainties in secondary-parameter values. Results obtained from the Langensand Bridge model do not reflect the common assumption of independence.
5.15 Example of uncertainty relative importance for the Langensand Bridge. Except for strain, the dominant uncertainty sources are model simplification, secondary-parameter uncertainty and measurement repeatability.
5.16 Pairwise comparison of parameters found in the candidate model set for the Langensand Bridge.
5.17 Example of uncertainty relative importance when using a surrogate model to evaluate the initial model set. The contribution of the surrogate-model approximation (uncertainty source no. 7) is small compared with other sources of uncertainty.
5.18 Accelerometer layout for the Langensand Bridge. Each triangle represents a recording point. Labels "Ref." represent reference sensors.
5.19 Average power spectral density (PSD) of the recordings made on the Langensand Bridge.
5.20 Mode shapes of the Langensand Bridge computed from ambient-vibration monitoring. Large oscillations on the left side of the structure correspond to walkway vibration modes.
5.21 Comparison of two recordings taken in the centre of the Langensand Bridge along the 3 axes, with and without traffic.
5.22 Relative importance of uncertainty sources for the Langensand Bridge. The dominant component of the combined uncertainty is the measurement variability.
5.23 Correlation between the predicted frequencies of different natural modes. Predictions are obtained by varying secondary-parameter values.
5.24 Mode shapes computed from the Langensand Bridge model and used for the identification.
5.25 Comparison of the model-prediction scatter and measured value for the first two frequencies of the Langensand Bridge.
5.26 Pairwise comparison of parameters found in the candidate model set for the Langensand Bridge. Candidate models were found using dynamic data.
5.58 MAC-value relative frequency quantifying the correspondence between predicted and measured mode shapes for the Tamar Bridge.
5.59 Schematic representation of parameters to be identified for the Tamar Bridge.
5.60 Primary-parameter relative importance for each mode of the Tamar Bridge.
5.61 Tamar Bridge uncertainty relative importance.
5.62 Comparison of model-prediction scatters with measured values for global and torsional modes (modes 1, 2, 3, 10, 12 and 14).
5.63 Comparison of model-prediction scatters with measured values for vertical bending deck modes (modes 4, 5, 8, 11, 13, 16 and 18).
5.64 Pairwise representation of the candidate-model-set parameter values for the Tamar Bridge.
5.65 Cumulative distribution function (F_CM) showing the probability of a maximum candidate model set size for the Tamar Bridge. The polygonal sign corresponds to the number of candidate models obtained using on-site measurements.
5.66 Measurement-system design multi-objective optimization results for the Tamar Bridge. The expected number of candidate models is reported for a probability φ = 0.50 (F⁻¹_CM(50%)).
5.67 The effect of the number of acceleration sensors on the mode-match criterion used to associate predicted and measured mode shapes. A minimum of 16 sensors is necessary to satisfy the mode-match criterion with a target of 0.99.
5.68 Optimized accelerometer layout Q_opt for the Tamar Bridge obtained using existing mode-shape data. This configuration with 16 sensors corresponds to the layout identified in Figure 5.67.
5.69 General framework for the detection of leaks in pressurized pipe networks.
5.70 Schematic representation of the water-distribution network studied.
5.71 Typical hourly averaged water consumption measured over one day.
5.72 Examples of simulated measurements for the Lausanne fresh-water distribution network.
5.73 Relation between the expected number of candidate leak scenarios and the number of flow measurements used.
5.74 Relation between the radius including all leak scenarios and the number of flow measurements used.
5.75 Optimized sensor configuration using 14 flow-velocity measurements.
5.76 Expected number of candidate leak scenarios identified for several leak intensities.
5.77 Relation between the expected number of candidate leak scenarios and the number of flow-measurement points used, for a leak level of 25 L/min.
7.1 Schematic representation of the relationship between the number of measurements used
for data interpretation and the probability of committing a Type-I diagnosis error, in case
of misevaluation of uncertainties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.2 Schematic representation of the relationship between the number of non-redundant
measurements used for data interpretation and the probability of a Type-II diagnosis
error, in case of misevaluation of uncertainties. . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.3 Schematic representation of the relationship between the number of measurements used
during data interpretation and the probability of either a Type-I or a Type-II error, in case
of misevaluation of uncertainties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.4 Two-dimensional stochastic fields representing the Young's modulus spatial variability in concrete bridge decks . . . 161
7.5 Two-dimensional likelihood function (Equation 7.1), used to generate parameter samples on the limit separating candidate and falsified models. The likelihood is maximal when the observed residuals are equal to the threshold bounds [T_low,i, T_high,i]. This example was created for shape-function parameters equal to 10 and 0.8 . . . 162
7.6 Comparison of model instance space exploration using grid-based random walk and falsification-limit sampling. The vertices between samples correspond to the path followed by the random walk. The shaded area is the candidate model set identified in the example presented in Section 2.6.1 . . . 163
7.7 Sensor interaction relative importance, quantifying the contribution of single-sensor removal compared with multiple-sensor removal . . . 164
7.8 Future work in relation to the general framework for structural evaluation of existing structures presented in Figure 3 . . . 165
7.9 Future perspectives for measurement-system and test-setup design, where the objective functions are money invested versus return on investment in terms of savings on maintenance . . . 166
7.10 Framework representing steps leading to an overall measurement-system cost optimization . . . 167
A.1 Extended uniform distribution that includes several orders of uncertainty . . . 169
List of Tables
1.1 Relations between right and wrong diagnoses in hypothesis testing . . . 15
2.1 Modeling and measurement uncertainty sources for the beam example . . . 43
5.1 Summary of aspects covered by each case study . . . 72
5.2 Summary of the comparison of identification methodologies on the basis of their capacity to provide correct identification for the cantilever beam example . . . 81
5.3 Static measurements taken on the Langensand Bridge. Mean and standard deviation represent the measurement variability obtained by repeating each load case three times . . . 83
5.4 Ranges and discretization intervals for parameters to be identified on the Langensand Bridge . . . 84
5.5 Secondary-parameter uncertainties for the Langensand Bridge . . . 85
5.6 Other uncertainty sources for the Langensand Bridge . . . 86
5.7 Comparison of frequency prediction ranges computed using the initial and the candidate model sets. For these five modes, predictions made using the candidate model set lead to reductions in prediction ranges of 55% up to 82% compared with predictions made using the initial model set . . . 89
5.8 Observed frequencies (f, in Hz) with their standard deviations (same unit) for the Langensand Bridge . . . 93
5.9 Secondary-parameter uncertainties for the identification of the Langensand Bridge using dynamic data . . . 95
5.10 Other uncertainty sources for the Langensand Bridge . . . 96
5.11 Qualitative evaluation of uncertainty correlation between comparison points for each uncertainty source . . . 100
5.12 Optimized measurement configurations are shown by a vertical set of symbols for a given sensor type and location. The cost of the load test, along with the expected number of candidate models computed for a probability of 95%, is reported for each configuration . . . 109
5.13 Relative error in predicted natural frequencies (%) and losses in the MAC criteria due to model simplifications . . . 117
5.14 Relative error in natural frequencies (%) due to the exclusion of secondary structural elements . . . 120
5.15 Values for parameters (3 for each parameter) used to create the initial model set for the Grand-Mère Bridge . . . 123
5.16 Secondary-parameter uncertainties for the Grand-Mère Bridge . . . 124
5.17 Other uncertainty sources for the Grand-Mère Bridge . . . 124
5.18 Summary of observed frequencies for the Tamar Bridge . . . 133
5.19 Secondary-parameter uncertainties for the Tamar Bridge . . . 135
Notation
Latin capital letters

B    Bayes factor
D    Data
H0   Null hypothesis
H1   Alternative hypothesis
I    Vector containing discretization intervals
L    Subset of simulated mode shapes
L    Likelihood function
N    Number of loops
N    Gaussian distribution
P    Probability
Q    Subset of sensors
Q    The true value for a quantity
T    Threshold lower and upper bounds
T    Multidimensional domain where threshold bounds are defined
U    Uncertainty source described by a random variable
X    Random variable
W    Weighting matrix

Time-domain data
Model class
Set of simulated mode shapes
Norm
Number
Matrix containing predicted values for several comparison points
Predicted value returned by a model
Vector containing dummy variables used for optimizing sensor configurations
Time
Matrix containing error realizations for several comparison points
Random integer number
Measured value
xxv
Notation
Greek capital letters
Parameter domain
Error domain
Covariance matrix
Vector containing quantities used during the computation of expected identifiability metrics
Random variable describing quantities used during the computation of expected identifiability metrics
Mode shape
Mode shape correspondence matrix
MAC
q
2
Indices
0
a
c
CM
CR
CS
i
k
l
L
lc
low, high
m
o
opt
p
PR
q
s
sp
y
Acronyms

AVM
cdf    Cumulative distribution function
COV    Coefficient of variation
DOF    Degree of freedom
EUD    Extended uniform distribution
FDD    Frequency domain decomposition
FEM    Finite-element model
MAC    Modal assurance criterion
pdf    Probability density function
PSD    Power spectral density
QR

Operators and symbols

           Diagonal matrix
∀          For all
|          Such that
Δ          Variation
∝          Proportional to
∈          An element of
~          Probability distribution
e^x        Exponential function
^T         Transpose
^H         Complex conjugate transpose
f_X        Probability density function (pdf) of a random variable X
F_X        Cumulative distribution function (cdf) of a random variable X
F_X^{-1}   Inverse cumulative distribution function of a random variable X
Γ          Gamma function
arg min_x y(x)   Argument x that returns the minimum value for y(x)
∩          Intersection of sets
⊂          Subset
{·, ·}    A set
[·, ·]    A vector, a matrix or an interval
]·, ·[    An interval excluding its bounds
|...|     Absolute value
|         Conditional probability
x̄         Mean of x
x̂         Best estimate of x
x*        True or correct value for x
xEy       x × 10^y
ℝ         Set of real numbers
ℕ         Set of natural numbers
O         Big O notation
Candidate model set
Comparison point
Confidence interval / region
Correlation
Credible interval / region    Refers to the interval defined by T1 and T2 for a Bayesian posterior pdf, within which the true quantity for a variable X should lie with a given confidence: P(T1 ≤ X ≤ T2) equals the chosen confidence level. When X is a vector of variables, the term interval is replaced by region.
Dependency    Relationship between either quantities or random variables. Dependencies can occur over time, over space and between several quantities studied simultaneously at the same location.
Error
Error structure
Expected identifiability
Expected residual
Falsification
Mode match
Posterior pdf
Primary parameter
Observed residual
Secondary parameter
System identification
Systematic error    A bias error that either remains constant or varies in a predictable manner.
Template model
Threshold bounds
Uncertainty
Introduction
System identification involves comparing models with measurements to identify the properties of systems. It plays a crucial role in the diagnosis, evaluation, repair and replacement of civil infrastructure and other complex systems.
[Figure 1: Percent of GDP going to infrastructure, 1980–2005.]
Figure 2 reports the result of a recent study on the maturity of infrastructure in several countries [163]. It shows that most western countries have infrastructure in an advanced stage of maturity. The OECD noted that by 2030 "...a larger effort will need to be directed towards maintenance and upgrading of existing infrastructure and to getting infrastructure to work more efficiently" [164]. This challenge involves prioritizing budget expenditures and investments by improving the way structures are currently being evaluated.
[Figure 2: Ranking versus degree of maturity of infrastructure, grouped as emerging, maturing and mature. The countries and regions ranked include Australia, United Kingdom, Canada, Nordic countries, South Europe, United States, France, Benelux, Germany, Japan, Other Asia, Mexico, China and Latin America.]
A general framework for the evaluation of existing structures is presented in Figure 3. The first step is to perform limit-state verifications using simplified conservative models. If at this level the performance is adequate, no intervention is required because the structure conservatively meets requirements. Otherwise, the conservative provisions of simplified models can be reduced using on-site investigations. Monitoring data can be used to improve knowledge related to the behavior of structures (i.e. system identification). If these refined models lead to the satisfaction of requirements, again no intervention is required. Otherwise, decision makers may opt for interventions on structures, for performing additional site investigations, or for further refining limit-state verifications using reliability analyses. When the path of the reliability analysis is chosen, the outcome provides indications about the necessity of interventions. An alternative is again to perform further site investigations in order to reduce the conservatism in behavior models.
The goal of this framework is to provide a safe way to evaluate the condition of structures while avoiding unnecessary interventions. This is justified not only by the direct cost of strengthening interventions and replacements; it is also supported by the indirect societal costs due to the unavailability of infrastructure. For instance, Xie and Levinson [238] estimated that the indirect costs of re-routing traffic during the reconstruction of the I-35W Bridge (USA) after its collapse in 2007 ranged from US$ 71,000 to US$ 220,000 per day.
When structural verifications made using conservative practices do not meet requirements,
[Figure 3 flowchart: starting from the limit-state verification of an existing structure, each stage asks whether performance is adequate; if not, site investigation and, subsequently, refined reliability analysis follow, leading either to "no interventions" or to "interventions required". The scope of this thesis covers the site-investigation stage.]
Figure 3: General framework for evaluation of existing structures. Adapted from [12].
What is an appropriate probabilistic framework for structural identification, and how does this framework influence the analysis and design of monitoring strategies?

What are the consequences for system identification of having incomplete information about uncertainty dependencies?
Objectives

1. Propose a system identification methodology that can be used in situations where the dependencies between uncertainties are only partially defined.
2. Propose a metric to quantify the utility of measurements for obtaining new knowledge about a system's physical properties.
3. Propose a methodology compatible with objectives 1 & 2 to analyze and design measurement-system configurations.
4. Test the methodologies with data obtained on full-scale civil structures and systems.
Outline

The scientific questions and the objectives derived from them are addressed in five chapters. The first presents a literature survey of fields related to system identification. It shows that concepts issued from hypothesis testing can be used to identify probabilistically the properties of systems when uncertainty dependencies are incompletely defined. Hypothesis falsification is thus the core of the work presented here. The scientific contribution of the thesis builds around this central idea, as illustrated conceptually in Figure 4, where each new chapter builds on all previous ones in a nested configuration.
[Figure 4: Nested configuration of chapters: Chapter 1 (literature review), Chapter 2 (error-domain model falsification) and Chapter 3 (expected identifiability), together with Chapter 6 (conclusion and limitations) and Chapter 7 (future work).]
Chapter 2 presents the error-domain model falsification approach. Chapter 3 describes how this methodology is used to propose a new quantitative metric (expected identifiability) predicting whether measuring is likely to improve our understanding of a system. Chapter 4 builds on the expected identifiability metric to provide a methodology to evaluate the performance of, and to design, efficient measurement systems, including the detection of over-instrumentation. Chapter 5 presents applications of the methodologies to an illustrative example and to full-scale civil systems. Finally, Chapter 6 contains conclusions and discusses the limitations of the approaches proposed. Promising concepts requiring further research are presented in Chapter 7.
1 Literature review

Summary

This chapter presents a literature review covering aspects of system identification and diagnostics related to civil infrastructure. Research conducted in other closely related domains is also presented. This includes fields such as probability interpretations, measurement-system design and high-dimensional solution-space sampling. This review shows that current system identification methodologies either rely on assumptions that are seldom fulfilled for civil structures or rely on subjective choices not tied to a systematic methodology.
Figure 1.1: A comparison between the state of knowledge in the field of sensing technologies and in data interpretation. The relative size of each circle describes the extent of the current state of knowledge.
Parameter values θ = [θ_1, . . . , θ_{n_p}] found using a least-squares fit correspond to the most likely ones. The variable n_p refers to the number of parameters to be identified. The simplest form of least-squares fit is presented in Equation 1.1, where n_m is the number of measurements. This methodology was developed in parallel by Gauss and Legendre in the 19th century [215, 220]. Weighted versions of this methodology are also used to equalize the effect of amplitude [155].

θ̂ = arg min_θ Σ_{i=1}^{n_m} (g_i(θ) − y_i)^2    (1.1)
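As a sketch of the least-squares criterion in Equation 1.1, the following searches a parameter grid for the value minimizing the sum of squared residuals; the linear model g and all numbers are hypothetical illustrations, not from the thesis.

```python
# Grid-search least-squares identification of a single stiffness-like parameter.
# The model g_i(theta) = theta * x_i and the data are invented for this sketch.

def g(theta, x):
    """Model prediction at input x for parameter theta."""
    return theta * x

def least_squares_fit(xs, ys, grid):
    """Return the grid value minimizing sum_i (g_i(theta) - y_i)^2 (Equation 1.1)."""
    def cost(theta):
        return sum((g(theta, x) - y) ** 2 for x, y in zip(xs, ys))
    return min(grid, key=cost)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]                  # noisy observations of theta_true = 2
grid = [i / 100 for i in range(100, 301)]  # candidate theta values in [1, 3]
theta_hat = least_squares_fit(xs, ys, grid)
```

For this linear model the closed-form optimum is Σx_i y_i / Σx_i², which the grid search recovers to grid resolution.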
P(θ|y) = P(y|θ) P(θ) / P(y)    (1.2)
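Equation 1.2 can be illustrated on a discrete parameter grid; the uniform prior, Gaussian likelihood and single observation below are invented for the sketch.

```python
# Posterior on a grid: posterior ∝ likelihood × prior, normalized by P(y).
import math

thetas = [i / 10 for i in range(0, 41)]          # grid over [0, 4]
prior = {t: 1 / len(thetas) for t in thetas}     # uniform prior P(theta)

def likelihood(y, t, sigma=0.5):
    """Gaussian likelihood of one observation y given parameter t (up to a constant)."""
    return math.exp(-0.5 * ((y - t) / sigma) ** 2)

y_obs = 2.0
evidence = sum(likelihood(y_obs, t) * prior[t] for t in thetas)   # P(y)
posterior = {t: likelihood(y_obs, t) * prior[t] / evidence for t in thetas}
```

The posterior sums to one by construction and, for this symmetric setup, peaks at the observed value.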
Likelihood functions

In most formulations reported in the literature, the likelihood function P(y|θ) is based on an ℓ2-norm criterion (see Equation 1.3). Tarantola [225] mentioned that "Because of its simplicity, the least-squares criterion (ℓ2-norm criterion) is widely used for the resolution of inverse problems, even if its basic underlying hypothesis (Gaussian uncertainties) is not always satisfied." Furthermore, the popularity of likelihood functions based on the ℓ2-norm is in part due to the large number of observable phenomena that actually follow this distribution. Equation 1.3 presents the simple form of a likelihood function P(y|θ) that is based on an ℓ2-norm criterion. In this equation, g(θ) is a vector containing model predictions and y a vector containing measurements. Σ is a covariance matrix containing uncertainties and correlation coefficients for each location where predicted and measured values are compared. Such an ℓ2-norm likelihood takes the form

P(y|θ) = const · exp( −(1/2) (g(θ) − y)^T Σ^{−1} (g(θ) − y) )    (1.3)
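A minimal sketch of the ℓ2-norm likelihood of Equation 1.3, restricted to the special case of a diagonal covariance matrix Σ (no correlation) so that no matrix inversion is needed; all values are illustrative.

```python
# ℓ2-norm (Gaussian) likelihood for a diagonal covariance matrix, up to a constant.
import math

def l2_likelihood(g_theta, y, variances):
    """exp(-0.5 * (g - y)^T Σ^{-1} (g - y)) for diagonal Σ."""
    q = sum((gi - yi) ** 2 / v for gi, yi, v in zip(g_theta, y, variances))
    return math.exp(-0.5 * q)

predictions = [1.0, 2.0, 3.0]
measurements = [1.1, 1.9, 3.2]
variances = [0.04, 0.04, 0.09]
L = l2_likelihood(predictions, measurements, variances)
```

Perfect agreement between predictions and measurements yields the maximum value of 1.0 (the omitted normalization constant aside), and any misfit strictly reduces the likelihood.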
f(x, p) = p^{1−1/p} / (2 σ_p Γ(1/p)) · exp( −|x − x_0|^p / (p σ_p^p) )    (1.4)

Figure 1.3: Generalized Gaussian distribution for p ∈ {1, 2, 10, ∞}. Adapted from [225].
The modern Bayesian inference scheme presented here was popularized in the 1960s [36, 55, 70, 104, 110]. Its first applications to structural identification date from the 1990s [7, 19, 116]. Since these pioneering works, many applications and extensions have been reported in the literature [47–49, 83, 84, 91, 93, 118, 153, 182, 197, 221, 241, 243, 245]. In most applications on civil structures, authors assumed that the error structure for the modeling and
Model-class selection

Bayesian methods have been extended for the selection of model classes. Such an approach compares the relative credibility of several model classes (g(θ), h(θ), . . .) based on observed data y. For instance, when comparing two model classes g(θ) and h(θ), the Bayes factor can be computed as the ratio of the likelihoods of each model, B = P(y|g(θ))/P(y|h(θ)) [24, 110]. When B > 1, the data favor the model class g(θ) over h(θ). Otherwise, h(θ) is favored over g(θ). With this methodology, model classes having more parameters are automatically penalized because their posterior pdf is spread over more dimensions. Several authors [20, 109] drew parallels between this intrinsic penalizing factor and Ockham's razor, also known as the parsimony principle [205]. However, such a methodology only provides relative information about the plausibility of model classes. Therefore, regardless of whether the models compared are right or wrong, one model class is shown to be either superior or equal to the others. This is a direct consequence of using the law of total probability 1 with a finite number of model classes. Therefore, if wrong choices of model classes are initially provided, this type of approach may not be able to detect it. Further formal descriptions of Bayesian inference and model-class selection are presented in references [130, 225].
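The Bayes-factor comparison can be sketched as follows; the two model classes, the grid prior and the single observation are hypothetical and chosen only to make the computation concrete.

```python
# Bayes factor B = P(y|g)/P(y|h), with each marginal likelihood obtained by
# averaging the likelihood over a uniform prior on a parameter grid.
import math

def gauss(y, mu, sigma=0.3):
    """Gaussian measurement likelihood density."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def marginal_likelihood(model, y, thetas):
    """P(y|model): likelihood averaged over a uniform prior on the grid."""
    return sum(gauss(y, model(t)) for t in thetas) / len(thetas)

g = lambda t: t          # hypothetical model class g(θ)
h = lambda t: t ** 2     # hypothetical model class h(θ)
thetas = [i / 20 for i in range(0, 41)]   # θ grid over [0, 2]
y = 1.0
B = marginal_likelihood(g, y, thetas) / marginal_likelihood(h, y, thetas)
```

For this observation the data favor g(θ) (B > 1); note that B only ranks the two classes relative to each other, as discussed above.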
Identifiability

Ljung [131] described identifiability as a criterion that determines whether an identification procedure leads to unique values for parameters. Katafygiotis and Beck [116] also applied the concepts of identifiability to structural identification. Their approach builds on the work of Bellman and Åström [22]. They proposed that a system is locally identifiable if there are maxima in the posterior pdf based on the difference between predicted and measured values. A system is defined to be globally identifiable if there is a single maximum. More details on Bayesian inference and on identifiability can be found in [242].
In this framework, ε is the tolerance used to reject a model. If ε → ∞, all parameter sets are accepted. If ε = 0, the algorithm only accepts models that predict exactly the measured data. In the case of high-dimensional problems, predicted (g(θ)) and measured (y) values are replaced by summary statistics. Sampling methodologies based on Markov chain Monte Carlo are presented by Marjoram et al. [140]. The method has also been extended for model-class selection [229]. However, Robert et al. [187] recently argued that these methodologies may not yet provide trustworthy posterior probabilities of models. Approximate Bayesian Computation methodologies have high potential because they do not require a likelihood function. However, they are sensitive to the choice of summary statistic [139, 172].
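A minimal rejection-sampling sketch of this idea follows; the forward model, prior range, tolerance ε and sample counts are all invented for illustration.

```python
# Approximate Bayesian Computation (rejection): keep parameter samples whose
# simulated data fall within a tolerance epsilon of the measured data.
import random

random.seed(0)

def simulate(theta):
    """Hypothetical forward model with simulated measurement noise."""
    return theta + random.gauss(0.0, 0.1)

y_obs = 1.5
epsilon = 0.2
accepted = []
for _ in range(5000):
    theta = random.uniform(0.0, 3.0)             # draw from the prior
    if abs(simulate(theta) - y_obs) <= epsilon:  # distance to observed data
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted samples approximate the posterior without ever evaluating a likelihood function; shrinking ε sharpens the approximation at the cost of more rejections.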
              Reject H0         Fail to reject H0
H0 is true    Type-I error      Right diagnosis
H0 is false   Right diagnosis   Type-II error
Methodologies related to hypothesis testing have been used by Mahadevan et al. [111, 146, 183] together with Bonferroni and Šidák [211] corrections. Corrected alpha values that can be used to test hypotheses are presented in Equation 1.5, where the Bonferroni correction is the first term of the Taylor expansion of the Šidák correction.

α_Šidák = 1 − (1 − α)^{1/N}    α_Bonferroni = α/N    (1.5)
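Equation 1.5 translates directly into code; the family-wise level of 0.05 and the number of tests N = 10 below are arbitrary illustrations.

```python
# Per-test significance levels for N simultaneous hypothesis tests
# at a family-wise level alpha (Equation 1.5).

def sidak(alpha, n):
    """Šidák-corrected per-test level."""
    return 1 - (1 - alpha) ** (1.0 / n)

def bonferroni(alpha, n):
    """Bonferroni-corrected per-test level (first-order approximation of Šidák)."""
    return alpha / n

a_s = sidak(0.05, 10)
a_b = bonferroni(0.05, 10)   # 0.005, slightly more conservative than Šidák
```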
The Šidák correction can be used to define coverage regions for a target probability content. Figure 1.4 compares the probability contained in rectangular and ellipsoidal coverage regions commonly used in multivariate hypothesis testing [146, 183, 217]. In this example, realizations of a bivariate Gaussian random variable X ~ N(μ, Σ) are generated for a mean μ = [0, 0]^T and unit variances. In the plots presented in Figure 1.4 a), b) and c), the correlation coefficient defining the ellipsoidal region is set to 0.9, while the correlation used to generate the realizations varies.
Figure 1.4: Comparison of the probability contained in rectangular and ellipsoidal confidence regions when varying the correlation between two random variables. In all three cases, the correlation used to compute the ellipsoidal region is set to 0.9. However, the correlation used to generate realizations of X is a) 0.9, b) 0.4 & c) −0.9. Only the rectangular regions include, in all situations, a proportion of the sample at least equal to the target (0.95).
For a Gaussian random variable, the smallest region including a given probability content is bounded by the Mahalanobis distance D_M(x) defined in Equation 1.6, where x is a vector containing a realization of X. Even if the size of the region defined by the Mahalanobis distance is minimal, its computation requires the definition of the correlation coefficients in the covariance matrix Σ. In order to calculate the Mahalanobis distance, the correlation is set to 0.9 for all three scenarios a), b) and c).

D_M(x)^2 = (x − μ)^T Σ^{−1} (x − μ)    (1.6)
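Equation 1.6 can be sketched for the bivariate case used in Figure 1.4, inverting the 2×2 covariance matrix in closed form; the numbers are illustrative.

```python
# Squared Mahalanobis distance D_M(x)^2 = (x - mu)^T Σ^{-1} (x - mu) in 2D.

def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance for a 2-dimensional realization x."""
    (a, b), (c, d) = cov
    det = a * d - b * c                       # closed-form 2x2 inverse
    inv = [[d / det, -b / det], [-c / det, a / det]]
    r = [x[0] - mu[0], x[1] - mu[1]]
    return (r[0] * (inv[0][0] * r[0] + inv[0][1] * r[1])
            + r[1] * (inv[1][0] * r[0] + inv[1][1] * r[1]))

mu = [0.0, 0.0]
cov = [[1.0, 0.9], [0.9, 1.0]]   # unit variances, correlation 0.9
d_along = mahalanobis_sq([1.0, 1.0], mu, cov)    # along the correlated direction
d_across = mahalanobis_sq([1.0, -1.0], mu, cov)  # against it
```

With correlation 0.9, the point [1, 1] lying along the correlated direction is "close" (2/1.9 ≈ 1.05) while [1, −1] is far (20), which is exactly why the ellipsoidal region is so sensitive to the assumed correlation.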
The probabilities that realizations of X are included in the ellipsoidal and rectangular regions, P_M and P_T respectively, are expressed in Equations 1.7 and 1.8. In Equation 1.7, χ²(n_m) is the value of a chi-squared distribution having n_m degrees of freedom, found for a target cumulative probability 1 − α with α = 0.05. The rectangular coverage region is defined by threshold bounds (Equation 1.8).

P_M = P( D_M(x)^2 ≤ χ²(n_m) )    (1.7)

P_T = P( ∩_{i=1}^{n_m} { T_low,i ≤ x_i ≤ T_high,i } )    (1.8)
For each scenario a), b) and c), 1000 realizations x = [x_1, x_2]^T of X are generated. The ellipsoidal region bounded by the solid line (i.e. the Mahalanobis distance) includes a proportion P_M = 0.95 of the realizations of X only when the correlation is correctly evaluated (i.e. scenario a)). In Figure 1.4 b) & c), the ellipsoidal regions (supposed to include 95% of the realizations of X) include only 64% and 42% respectively. For all three scenarios, the rectangular regions bounded by dashed lines contain a proportion P_T of the realizations of X at least equal to the target. Square regions lead to less precise results than when using the ellipsoidal bound
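The effect illustrated in Figure 1.4 can be reproduced with a small Monte Carlo experiment; the seed and sample counts are arbitrary, and the χ²(2) quantile at 0.95 is taken as 5.991. It shows an ellipsoidal region built with an assumed correlation of 0.9 losing coverage when the true correlation differs, while Šidák rectangular bounds keep at least the target coverage.

```python
# Coverage of ellipsoidal vs Šidák rectangular regions under a mis-specified correlation.
import random
from statistics import NormalDist

random.seed(1)
chi2_95_2dof = 5.991                       # χ²(2) quantile at probability 0.95
alpha_i = 1 - (1 - 0.05) ** 0.5            # Šidák per-dimension level for 2 dims
z = NormalDist().inv_cdf(1 - alpha_i / 2)  # rectangular half-width (≈ 2.24)

def sample(rho):
    """One draw of a standard bivariate normal with correlation rho."""
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    return u, rho * u + (1 - rho ** 2) ** 0.5 * v

def in_ellipse(x1, x2, rho):
    """Mahalanobis test against the 0.95 chi-squared bound."""
    d2 = (x1 ** 2 - 2 * rho * x1 * x2 + x2 ** 2) / (1 - rho ** 2)
    return d2 <= chi2_95_2dof

rho_true, rho_assumed = 0.4, 0.9
xs = [sample(rho_true) for _ in range(20000)]
p_ellipse = sum(in_ellipse(a, b, rho_assumed) for a, b in xs) / len(xs)
p_rect = sum(abs(a) <= z and abs(b) <= z for a, b in xs) / len(xs)
```

Here `p_ellipse` falls well below the 0.95 target while `p_rect` stays at or above it, mirroring scenario b) of Figure 1.4.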
Figure 1.5: Comparison of the area enclosed in rectangular (a) and ellipsoidal (b) confidence regions including a target probability content of 0.95. The correlation used to compute the ellipsoidal region is set to 0. However, the correlation used to generate realizations of X is 0.9. The area enclosed in the ellipsoidal region is 15% larger than the region defined by rectangular bounds.
Figure 1.6: pdf of a random variable X, where a confidence interval bounded by T_low and T_high contains a probability in ]0, 1].
Bayesian probabilities

Bayesian probabilities assign degrees of belief to events that can be either random or epistemic. They were mainly popularized by Laplace, who stated that "probability theory is nothing but common sense reduced to calculation" [125]. In the 1950s, Jaynes [104] also contributed widely to their development with the maximum entropy principle, used to define pdfs when only partial knowledge is available. This principle says that the best pdf to represent our knowledge is the one with the largest entropy [210]. Shafer [208] describes Bayesian probabilities as "...a special case of the theory of evidence where all beliefs can be expressed in the form of probabilities".

The same notation is used for both frequentist and Bayesian estimators because both approaches are compatible and can be used together: Bayesian estimators can be evaluated from a frequentist perspective, and vice versa [218]. ISO guidelines [105] mention that using Bayesian probability can be as reliable as a frequentist evaluation, especially in situations where a frequentist evaluation is based on a small number of statistically independent observations.
Even if they are not limited to it, this interpretation of probabilities is closely related to Bayesian conditional probability, where prior knowledge is updated using evidence.

3 Epistemic uncertainties are associated with a lack of knowledge and simplifications.
Imprecise probabilities

The development of imprecise probabilities was motivated by a desire to separate the ways random and epistemic uncertainties are represented. Several authors [21, 67–69, 75, 152, 161, 207] suggested that when dealing with epistemic uncertainties, no information is available to define a unique probability density function. Researchers such as Ferson [66] argued that the maximum entropy principle cannot be justified in real-life problems. For instance, if one only knew the upper and lower bounds that a variable X can take, it is not right to suppose that any value of X between these bounds has an equal probability of occurrence (i.e. a uniform distribution). Figure 1.7 presents how the uncertainty on a variable X can be described by probability bounds (T_low and T_high). Note that no probability distribution is defined.

Figure 1.7: Uncertainty on a variable X can be represented by probability bounds (T_low and T_high) without defining a probability density function.

Other approaches such as the Dempster-Shafer theory [68, 85, 208] can be used to describe incomplete knowledge without using frequentist or Bayesian probabilities. There is no consensus within the scientific community regarding which probability interpretation is best.
Sensor resolution

Sensor resolution describes the irreducible variations that one can expect when using a measurement device. Note that measurement devices often have to be corrected for systematic errors such as temperature effects and long-term drift. This type of uncertainty is one of the
Model simplification

When modeling complex systems such as civil structures, omissions and simplifications are inevitable. The scale of elements contained in a structure can vary by several orders of magnitude. Due to limited computing resources, geometrical complexity and engineering costs, only a limited degree of model refinement is currently possible. In most models, assembly details, secondary structural elements and boundary conditions are either simplified or omitted. Usually, model simplifications of civil structures result in underestimates of the stiffness of real structures because of omitted elements. NAFEMS publications provide guidance on how to avoid modeling errors and inconsistencies in finite-element models [2, 94, 95, 100]. Studies have also addressed the influence of secondary structural elements on the resistance and transverse load distribution of bridges [58, 59, 156, 157]. Results indicated that for these structures, secondary elements may affect internal forces by up to 40% [58].
Mesh refinement

When using finite-element models, the number of degrees of freedom is always less than in the real system studied (which contains an infinite number). As the number of elements increases, the model prediction discretization error converges asymptotically [94]. Due to limited computing resources, the number of elements used to obtain solutions is often several
Model-parameter uncertainties

When studying physics-based models, not all parameters are exactly known. Parameters such as geometry parameters, some material properties and solicitations can contribute to modeling uncertainty. For instance, the thickness of concrete decks for girder bridges in the USA was found to have a coefficient of variation (COV) between 0.02 [1] and 0.07 [113]. These values were reported in a study by Mirza and MacGregor [151]. Regarding material properties, the mean value of the Young's modulus of steel is reported to vary between 200 and 206 GPa, with a COV varying between 0.02 and 0.06 [61, 78, 108, 212]. The Poisson's ratio of steel is reported to have a COV of 0.03 [61, 78]. Based on the work of Loo and Base [132], the COV of the Poisson's ratio of concrete is estimated to be 0.02. Uncertainties in many other parameters such as material densities, the Young's modulus of concrete and geometrical dimensions are case-specific. Therefore, in the absence of statistical surveys, these uncertainties can be based on engineering heuristics.
The influence of parameter uncertainties on predicted values is found by propagating the parameter uncertainties (θ = [θ_1, θ_2, . . . , θ_{n_sp}]) through the model g(θ) using numerical sampling techniques. Each parameter value can be described by a random variable U_i having a probability density function f_{U,i}. Figure 1.8 illustrates the process of parameter uncertainty propagation, where parameter samples are drawn from their respective distributions and provided to the model. Propagating uncertainty in this way returns the model prediction variability (r_i) due to the uncertainty in parameter values.
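The propagation process described above can be sketched as follows; the toy model, the parameter distributions (loosely inspired by the steel and geometry values reported above) and the sample count are hypothetical.

```python
# Monte Carlo parameter uncertainty propagation: draw parameter samples from
# their distributions, push them through the model, and summarize the spread
# of the resulting predictions.
import random
import statistics

random.seed(2)

def g(E, t):
    """Toy stiffness-like prediction from a Young's modulus E and a thickness t."""
    return E * t ** 3 / 12.0

samples = []
for _ in range(10000):
    E = random.gauss(203.0, 4.0)    # GPa, within the reported 200-206 range
    t = random.gauss(0.25, 0.005)   # m, COV of 0.02
    samples.append(g(E, t))

mean_r = statistics.mean(samples)
std_r = statistics.stdev(samples)   # prediction variability from parameter uncertainty
```

The standard deviation of the samples is the prediction variability discussed above; monitoring how `mean_r` and `std_r` stabilize as samples accumulate is one way to decide when to stop sampling.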
Theoretically, the mean value of the combined uncertainty converges as 1/n_samples^{1/2} [185]. In practical applications, the number of samples (n_samples) required to obtain a stable solution is problem-dependent and may vary according to the input pdfs. A method for detecting that the uncertainty propagation has converged is to monitor the standard deviations of the model responses and to stop sampling when these values reach steady state. Further details regarding the numerical implementation of such a procedure are presented by Cox and Siebert [52].
An alternative to numerical sampling is to use Polynomial Chaos Expansion or Stochastic Galerkin methods to propagate uncertainties more efficiently than random sampling [57, 223, 224]. These approaches can be several orders of magnitude more efficient than Monte Carlo methods. However, they lose generality since they require additional assumptions.
[Figure 1.8: Parameter samples drawn from their probability distributions are provided to the template model, yielding the template model response.]

[Figure 1.9 legend: main uncertainty; uncertainty related to bound definition; combined uncertainty (curvilinear distribution); horizontal axis: error.]

Figure 1.9: Curvilinear distribution used to describe a uniform distribution whose bounds are inexactly defined.
Review of applications

Many authors have developed measurement-system design techniques based on the principle that the quality of a measurement system depends upon the independence of the information acquired. For example, several sensors measuring perfectly dependent quantities would not provide more information than a single sensor. Yeo et al. [240] have used parameter grouping techniques to find independent measurement locations for damage localization using static tests. Kang et al. [115] proposed a similar approach using a genetic algorithm to position dynamic measurements. Another approach, by Worden and Burrows [236], uses neural networks to find optimized sensor placement for dynamic measurements. Stephan [219] proposed a methodology that places sensors at locations where the Fisher information matrix is maximized while minimizing the amount of redundancy in acquired data.

Papadimitriou [167] and Robert-Nicoud et al. [189] used entropy to select optimal sensor configurations. These methods find the configuration of sensors that maximizes the disorder in the predicted values for model parameters. In another proposal, Papadimitriou [168] used an evolutionary algorithm to place sensors. More recently, Papadimitriou and Lombaert [169] have studied the effect of prediction-error dependencies on sensor placement. The method described by Meo and Zumpano [148] concentrates sensor positions in high-energy-content regions (for dynamic measurement). This approach favors sensor locations having a high signal-to-noise ratio. The main limitation of these approaches is that even if they can maximize the performance of a measurement system with respect to some criteria, they do not quantify in absolute terms the utility of measurements for interpreting data.
Value of information

Pozzi and Der Kiureghian [176] observed that the value of a piece of information depends on its ability to guide our decisions. This supports the idea that measurement systems should be designed according to interpretation goals. Currently, there is a lack of systematic methodologies.
ŷ = Mθ    (1.9)

contains parameters θ that minimize, in a least-squares sense, the difference between the response surface and the model response (Equation 1.10).

θ = (MᵀM)⁻¹Mᵀy    (1.10)
Design-of-experiment theory provides efficient ways to decide where samples should be taken in the parameter space in order to best fit a polynomial function [72]. For instance, using the Box-Behnken design [33], a second-order polynomial function can be built for 16 initial parameters using only 385 evaluations of the physics-based model (e.g. a finite-element model). Other designs, such as the central-composite design and full and fractional factorial designs, can be used as well. Note that each requires a specific number and configuration of samples. Figure 1.10 presents the sample configurations of a central-composite design for three parameters, where each axis is the normalized parameter range of the original system.
Details regarding sample arrangements can be found in [122]. Even though more advanced techniques can outperform polynomial regression, this method remains a good tradeoff between accuracy and computational cost.

Figure 1.10: Central-composite design for three parameters where each axis represents the normalized parameter range.
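As an illustration, the sample configurations of a face-centered central-composite design can be enumerated with a few lines of code. This is a minimal sketch: the function name and the default axial distance (alpha = 1) are choices made here, not taken from the text.

```python
from itertools import product

def central_composite_design(k, alpha=1.0):
    """Enumerate a central composite design for k normalized parameters:
    2^k factorial corners, 2k axial (star) points and one center point."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]  # 2^k corners
    axial = []
    for i in range(k):                                           # 2k axial points
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k]                                         # center point
    return corners + axial + center

samples = central_composite_design(3)
print(len(samples))  # 2**3 + 2*3 + 1 = 15 sample configurations
```

With alpha = 1 the axial points lie on the faces of the normalized cube, which keeps all samples inside the parameter ranges.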
Greedy algorithm

One well-known heuristic-based optimization technique is the greedy algorithm [50]. Greedy algorithms break an optimization task into a sequence of steps; at each step, a locally optimal choice is made without revisiting previous choices. This type of approach is not guaranteed to find the global optimum. It is, however, well suited to a wide range of problems.
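As a sketch of this idea, the following greedy selection picks items one at a time by largest marginal gain without revisiting earlier choices. The sensor-coverage example (location names and coverage sets) is invented for illustration.

```python
def greedy_select(candidates, k, gain):
    """Greedy optimization: at each step take the locally optimal choice
    given earlier selections, never revisiting past choices."""
    selected = []
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda c: gain(selected, c))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: choose 2 sensor locations covering the most response quantities.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}

def marginal(selected, c):
    """Number of quantities newly covered by candidate c."""
    covered = set().union(*(coverage[s] for s in selected)) if selected else set()
    return len(coverage[c] - covered)

print(greedy_select(coverage, 2, marginal))  # ['C', 'A']
```

Note that the greedy result is only locally optimal: a different coverage structure could make the first (largest) pick preclude a better combination.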
Random walk
Several approaches are based on random walks to guide parameter sampling depending on
the response of a likelihood function. In these approaches, samples are drawn with respect to
a likelihood function P (y|) dened for the residual (()) of the difference between predicted
27
Likelihood function
Parameter space
a)
b)
This process is a Markov chain in the sense that the generation of a new step is based only on the current step [92]. This sampling technique ensures that the samples [θ₁, θ₂] are taken such that their sampling density is proportional to the likelihood function P(y|θ). Note that the random steps can be obtained using any probability distribution. The size of the steps [Δθ₁, Δθ₂] influences the acceptance rate of the random walk. Good-practice rules recommend that the step size be fixed so that the acceptance rate lies between 30% and 50% [225]. Further details regarding the convergence of Markov chains can be found in [51]. This concept of random-walk sampling is behind several techniques used to explore high-dimensional solution spaces. The first was proposed by Metropolis et al. [149]. Later, Hastings generalized their formulation in the Metropolis-Hastings algorithm [92]. Random-walk sampling techniques are extensively used in Bayesian-inference applications (see §1.2.2). For instance, Cheung and Beck [47] used a methodology called hybrid Monte Carlo simulation to infer the properties of structures using dynamic data. A Metropolis-Hastings algorithm is used by Most [153] for similar purposes. In contexts other than Bayesian inference, this technique offers the possibility to explore the space without looking for a single optimal solution.
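The random-walk Metropolis scheme described above can be sketched as follows for a one-dimensional target. The target density and step size are illustrative; in the thesis context, the target would be the likelihood of the residuals.

```python
import math
import random

def metropolis(logp, x0, step, n):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step^2) and
    accept with probability min(1, p(x')/p(x)); the resulting sample
    density is proportional to the target density."""
    x, lp = x0, logp(x0)
    samples, accepted = [], 0
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = logp(xp)
        if math.log(random.random()) < lpp - lp:  # accept/reject step
            x, lp = xp, lpp
            accepted += 1
        samples.append(x)
    return samples, accepted / n

random.seed(1)
# Illustrative target: standard Gaussian; `step` is tuned so that the
# acceptance rate falls in the recommended 30-50% range.
samples, rate = metropolis(lambda x: -0.5 * x * x, 0.0, 2.4, 20000)
```

Too small a step gives near-100% acceptance but slow exploration; too large a step gives near-zero acceptance, which is why the 30-50% rule of thumb is used.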
1.6 Conclusions
This chapter covered several concepts that are related to system identification. Figure 1.12 presents a timeline of contributions reported in this literature survey. It illustrates the temporal relationships between existing contributions on which this thesis builds.
(Figure 1.12 content: a timeline from 1700 to the present marking Bayes 1763 (Bayes' theorem), Legendre 1806 and Gauss 1809 (least squares), Laplace 1812 (Bayesian probability), Popper 1934 (falsification), Shannon 1949 (entropy), Metropolis 1953 (random walk), Jaynes 1957 (Bayesian probability), Sidak 1967 (hypothesis testing), Hastings 1970 (random walk) and Bellman 1970 (identifiability).)

Figure 1.12: Timeline of key contributions made in scientific fields related to data interpretation. These contributions are a non-exhaustive survey of the vast literature available in each field.
An aspect discussed in this chapter is that our capacity to install measuring systems is much greater than our capacity to interpret data. This happened, in part, because interpretation methods are based on approaches that were originally developed in the fields of statistics, signal processing and control engineering. Researchers in these fields commonly assume that modeling errors can be treated as Gaussian noise (i.e. independent random variables). Such an assumption is not generally applicable to civil infrastructure because in these systems, systematic biases are present and their effects on the uncertainty dependencies involved in the error structure are often difficult to quantify.
Figure 1.13 presents a schematic comparison of current data-interpretation techniques with respect to the subjectivity involved and to the difficulty of meeting hypotheses. In the context of structural identification, available system-identification methodologies either rely on restrictive hypotheses that are seldom fulfilled in practice for complex systems, or they rely on subjective choices not tied to a systematic methodology. Therefore, there is a need for new approaches that offer tradeoffs between subjectivity and the difficulty of satisfying hypotheses.
(Figure 1.13 content: methods arranged along two axes, subjectivity versus restrictive hypotheses (decreasing relevance to real situations). Generalized likelihood (GLUE) and approximate Bayesian computation lie toward high subjectivity; Bayesian inference and residual minimization lie toward restrictive hypotheses. The gap between them marks the need for new approaches offering a compromise between subjectivity and restrictivity of hypotheses.)
Summary

This chapter describes the error-domain model-falsification approach: a model-falsification methodology that identifies parameter values of complex systems without requiring definitions of dependencies between uncertainties. Prior knowledge is used to bound the ranges of parameters to be identified and to build an initial set of model instances. Predictions from each model are compared with measurements so that inadequate model instances and model classes are falsified using threshold bounds. The probability content included in each set of …
2.1 Introduction
The literature review reported that many system-identification methodologies require the definition of uncertainties and dependencies for comparison points¹ i, where predictions and measurements are compared.

¹ A comparison point is a quantity common to a model (prediction location and type) and a system (measurement location and type).
2.2 Methodology
When trying to identify the behavior of a system, there may be several potentially adequate model classes to represent it (g(...), h(...), etc.). In the context of structural engineering, a structure could be represented, for example, by several different finite-element models. Note that a model class is a synonym for template model. Model classes take n_p physical parameters, θ = [θ₁, θ₂, ..., θ_{n_p}]ᵀ, as arguments, which correspond to system properties such as geometry, material characteristics, boundary conditions and loading. Each combination of model class and parameter set leads to n_m predictions g_i(θ) obtained at each location i ∈ {1, ..., n_m}. When taking a model class g(...) and the right values θ* for the parameters, the difference between a prediction g_i(θ*) and its modeling error ε_model,i is equal to the true value Q_i for the real system. The true value Q_i is also equal to the difference between the measured value y_i and the measurement error ε_measure,i. This relation is presented in Equations 2.1 & 2.2.

g_i(θ*) − ε_model,i = Q_i = y_i − ε_measure,i    (2.1)

g_i(θ*) − y_i = ε_model,i − ε_measure,i = ε_c,i    (2.2)
² Methods such as Bayesian inference can be referred to as parameter-domain system identification because the plausibility of models is based on a posterior probability function defined in the parameter domain.
In practice, neither the true value Q_i nor the error values ε_i can be known. Only uncertainties described by a probability density function (pdf) of errors ε_i can be estimated. Uncertainties are defined prior to computing threshold bounds, using either statistical data or engineering heuristics. Such a pdf, f_{U_i}(ε_i), represents the distribution of a continuous random variable U_i. f_{U_c,i}(ε_c,i) describes the probability of the residual of differences between predicted and measured values (ε_c,i). U_c,i is obtained using Equation 2.2 by subtracting the modeling (U_model,i) and measurement (U_measure,i) uncertainties. The pdf of U_c,i is presented in Figure 2.1. Note that random variables are used here to describe the outcome of either stochastic or deterministic processes.

Figure 2.1: The combined probability density function describes the outcome of the random variable U_c,i, i.e. the combination of modeling and measurement uncertainties.
T = [T_low,1, T_high,1] × [T_low,2, T_high,2] × ... × [T_low,n_m, T_high,n_m] ⊆ ℝ^{n_m}    (2.3)

…    (2.4)
When no information is available to quantify dependencies between residuals ε_c,i, threshold bounds can be computed for each residual ε_c,i as the shortest set of threshold bounds satisfying

P(T_low,i ≤ U_c,i ≤ T_high,i) = φ^{1/n_m},  ∀i ∈ {1, ..., n_m}    (2.5)

As in the previous case, for each comparison point i, T_low,i and T_high,i define the shortest interval including a probability content equal to φ^{1/n_m} for the combined uncertainty pdf of U_c,i.

T_low,i = F⁻¹_{U_c,i}( ½(1 − φ^{1/n_m}) )
T_high,i = F⁻¹_{U_c,i}( 1 − ½(1 − φ^{1/n_m}) )    (2.6)
Threshold bounds define the limits of a hyper-rectangular domain T that has a probability larger than or equal to φ of containing the correct residuals between predicted and measured values. This relation is expressed in Equation 2.7, where U_c,i is the random variable describing the possible residual outcomes. With this methodology, the adequacy of the identification depends on the correct definition of the uncertainties associated with the model and measurements. When using threshold bounds to falsify inadequate model instances, there is a probability larger than or equal to φ of not discarding valid model instances, regardless of the values of dependencies between residuals ε_c,i and regardless of the number of measurements (n_m) used. Threshold bounds are defined once and are then used to evaluate which initial model instances can be falsified.
P( ∩_{i=1}^{n_m} {T_low,i ≤ U_c,i ≤ T_high,i} ) ≥ φ    (2.7)
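Under the assumption of Gaussian combined uncertainties, the per-comparison-point threshold bounds of Equations 2.5-2.6 can be sketched as follows. The function name and numerical values are illustrative; for a symmetric unimodal pdf, the shortest interval holding the required probability content is the central one used here.

```python
from statistics import NormalDist

def threshold_bounds(combined, phi, n_m):
    """Shortest central interval per comparison point holding a probability
    content of phi**(1/n_m) of the combined uncertainty (Sidak-type
    correction over n_m comparison points), as in Equations 2.5-2.6."""
    p = phi ** (1.0 / n_m)
    t_low = combined.inv_cdf(0.5 * (1.0 - p))
    t_high = combined.inv_cdf(1.0 - 0.5 * (1.0 - p))
    return t_low, t_high

# Two comparison points, target reliability phi = 0.95, and an assumed
# combined uncertainty N(0, 0.05 mm):
lo, hi = threshold_bounds(NormalDist(0.0, 0.05), 0.95, 2)
```

Because the exponent 1/n_m widens each individual interval, the joint probability of the hyper-rectangle stays at least φ whatever the dependencies between residuals.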
Model instances are falsified if they do not satisfy the inequalities presented in Equation 2.8. Model instances that are not falsified are candidate models and are all considered equal in the sense that they are all possible explanations of the observed behavior. A model class g(...) is falsified if all possible sets of parameter values are falsified by observations. When a whole model class is falsified, it is generally an indication that there are flaws in assumptions related to model adequacy.

T_low,i ≤ g_i(θ) − y_i ≤ T_high,i,  ∀i ∈ {1, ..., n_m}    (2.8)
Figure 2.2 illustrates the concept of threshold definition using multiple measurements. In this figure, threshold bounds T_low,i and T_high,i are found separately for each combined uncertainty, for a probability φ^{1/2}. When these bounds are projected on the bivariate pdf, they define a rectangular boundary used to separate candidate and falsified models. This criterion for falsifying models does not require knowledge of dependencies between uncertainties. Also, the probability of falsifying an adequate model instance does not increase with the number of measurements used. This makes the approach suitable for situations where dependencies cannot be evaluated, such as when model simplifications introduce systematic bias for several model predictions. Examples are shown in §5.2.
Figure 2.2: Threshold definition. Threshold bounds T_low,i and T_high,i are found separately for each combined uncertainty (comparison points #1 and #2) for a probability φ^{1/2}. When these threshold bounds are projected on the bivariate pdf, they define a rectangular boundary that is used to separate candidate and falsified models.
Figure 2.3: Error dependency between degrees of freedom in a finite-element beam model. If errors are random (a), predicted displacements at any location are independent of each other and appear to vary around the real displacement. On the other hand, systematic effects introduce dependencies in the error structure (b). For example, if the boundary condition is not adequately modeled, the displacement may be biased at several locations.
(Figure: the template model takes primary-parameter expected values and sampled secondary-parameter values from the model-instance parameter space, and returns multiple predicted values of the responses r_i.)

Several thousand evaluations of the template model are made to compute the prediction uncertainty due to uncertain secondary-parameter values.

u_θ,i = [r_{i,1}, r_{i,2}, ..., r_{i,n_r}]ᵀ    (2.9)

U_θ = [u_θ,1, u_θ,2, ..., u_θ,n_m]    (2.10)
Figure 2.5: An initial set of model instances is generated based on a grid, where the template model is evaluated using each parameter set. The grid is defined using the minimal and maximal bounds for parameter ranges, defined based on engineering judgment. Discretization intervals are also provided to specify the sampling density.
The template model is evaluated for each parameter combination. This entire set of parameter combinations is named the initial model set. Predictions from each model instance are stored in a matrix having n_k = ∏_{j=1}^{n_p} I_j rows and n_m columns. Inadequate instances from the initial model set are falsified using Equation 2.8. Instances that are not falsified are classified as candidate models. This model-falsification operation is illustrated in Figure 2.6.

The size of the initial model set increases exponentially with the number of parameters to identify (n_p). Considering that the space of possible model instances can hardly be represented by fewer than three subdivisions per parameter, if ten parameters have to be identified, 3¹⁰ ≈ 60 000 samples are required. Such a number of evaluations is already difficult to achieve with current computing capacity when dealing with finite-element models. For 20 parameters, the number of evaluations required increases to more than three billion.
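The growth of the initial model set can be illustrated by enumerating the grid with a few lines of code (function name illustrative):

```python
from itertools import product

def initial_model_set(ranges, subdivisions):
    """Enumerate the n_p-dimensional grid of parameter combinations;
    the set size is the product of the per-parameter subdivision counts."""
    axes = []
    for (lo, hi), n in zip(ranges, subdivisions):
        step = (hi - lo) / (n - 1)
        axes.append([lo + i * step for i in range(n)])
    return list(product(*axes))  # one tuple per model instance

# Three subdivisions per parameter quickly become intractable:
grid10 = initial_model_set([(0.0, 1.0)] * 10, [3] * 10)
print(len(grid10))  # 3**10 = 59049
```

Each returned tuple is one parameter combination θ_k to be solved with the template model, which is why the exponential growth directly translates into model evaluations.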
Figure 2.6: Initial model set organized in an n_p-parameter grid used to explore the space of possible solutions. Inadequate model instances are falsified using Equation 2.8 and models that are not falsified are classified as candidate models.
Instead of creating samples along a grid, the space of possible model instances can be explored using techniques such as Monte Carlo and Latin-hypercube random sampling [147]. However, by proceeding in such a way, users may lose the sense of how densely the space is sampled. Several researchers have already underlined that high-dimensional spaces tend to be extremely empty [112, 231]. For instance, when sampling over 20 parameters one can be misled into thinking that a million samples is sufficient when the space is in fact only sparsely sampled.
Figure 2.7: Two-dimensional likelihood function used to generate parameter samples. The likelihood L_cm is maximal when the observed residuals ε_o,i are within the threshold bounds [T_low,i, T_high,i]. This example was created using Equation 2.11 with a shape-function parameter p = 10.

Samples are generated with respect to the likelihood function presented in Equation 2.11.

L_cm(ε_o,1, ε_o,2) = [ p^{1−1/p} / (2 ΔT₁ ΔT₂ Γ(1/p)) ] exp( −(1/p) [ (|ε_o,1 − T̄₁| / ΔT₁)^p + (|ε_o,2 − T̄₂| / ΔT₂)^p ] )    (2.11)

T̄_i = ½(T_low,i + T_high,i)    (2.12)

ΔT_i = ½(T_high,i − T_low,i)    (2.13)
Reducing the number of model evaluations by using random-walk grid sampling

Traditional random-walk methodologies can create steps leading to any location in the parameter domain ℝ^{n_p}. The number of evaluations of the model can be reduced by performing the random walk over points aligned on a grid, as presented in §2.5.1. If the random walk leads to a point in the parameter domain that has already been evaluated, it can re-use the result previously computed. The spacing and limits of the grid are defined to cover all plausible combinations of parameters, with a density sufficient so that solutions that might be considered equivalent are not sampled. An illustrative example of this random-walk sampling technique is presented in §2.6.1.
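The grid-constrained random walk with re-use of previous evaluations can be sketched as follows. The likelihood, grid bounds and spacing are illustrative; the cache stands in for stored finite-element results.

```python
import random

def grid_random_walk(likelihood, start, spacing, bounds, n_steps):
    """Random walk constrained to grid points; evaluations are cached so
    that revisited grid points re-use the stored result."""
    cache = {}

    def L(pt):
        if pt not in cache:
            cache[pt] = likelihood(pt)  # expensive model evaluation
        return cache[pt]

    x = start
    visited = [x]
    for _ in range(n_steps):
        # Gaussian step rounded to an integer number of grid intervals.
        step = tuple(round(random.gauss(0, 1)) * s for s in spacing)
        cand = tuple(min(max(a + d, lo), hi)
                     for a, d, (lo, hi) in zip(x, step, bounds))
        # Metropolis-type acceptance on the likelihood ratio.
        if random.random() < min(1.0, L(cand) / max(L(x), 1e-300)):
            x = cand
        visited.append(x)
    return visited, len(cache)  # cache size = distinct model evaluations

random.seed(0)
path, n_eval = grid_random_walk(lambda p: 1.0 / (1.0 + p[0] ** 2 + p[1] ** 2),
                                (0, 0), (1, 1), [(-5, 5), (-5, 5)], 500)
```

Because the walk only visits grid points, `n_eval` is bounded by the grid size (here 11 × 11 = 121) however many steps are taken.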
(Figure: flowchart of the error-domain model-falsification procedure. Inputs: template model(s), uncertainties (model and measurements), measurements, and ranges of values for the primary parameters to be identified. Model falsification is applied for all measurement locations to separate candidate models from falsified models. If all model instances are falsified, new objectives are defined, such as more measurements or a new model class.)
Figure 2.9: Composite beam cross-section (dimensions shown: 600 mm, 400 mm, 25 mm). The structure studied is a simply supported, ten-meter-long composite beam. The beam is modeled by shell elements using the finite-element method.
In this illustrative example, the values of two primary parameters have to be identified: the concrete and the steel Young's moduli. Parameter ranges are respectively [15, 45] GPa and [190, 212] GPa. Each parameter range is subdivided into ten parts to generate an initial model set containing 100 model instances.
Uncertainties

For this example, only three uncertainty sources are considered. The first two are sensor resolution and model simplifications. They are described by uniform distributions whose minimal and maximal bounds are presented in Table 2.1. These values were chosen arbitrarily for illustration.

Table 2.1: Modeling and measurement uncertainty sources for the beam example

Uncertainty source                                min         max
Sensor resolution                                 -0.025 mm   0.025 mm
Model simplifications (vertical displacement)     0%          5%
The third source of uncertainty is due to secondary parameters of the model. Three secondary parameters contribute to model-prediction uncertainty: the inaccuracy in the thickness of the slab and of the two steel elements (web and flange). These inaccuracies are represented by Gaussian distributions having a mean of zero and a standard deviation of 1 mm for concrete and 0.05 mm for steel elements. The uncertainty in model predictions is obtained by taking 1000 combinations of these three secondary parameters and then evaluating the template model for each set. During these simulations, primary model parameters are kept at their mean values. The distribution of secondary-parameter model uncertainty is combined with the other uncertainty sources.
Figure 2.10: Combined uncertainty probability density functions for the mid-span and quarter-span comparison points. For each comparison point, the secondary-parameter, sensor-resolution and model-simplification uncertainties are separated into modeling (U_model,i) and measurement (U_measure,i) uncertainties. These are subtracted from each other, through a numerical sampling process, to obtain the combined uncertainties U_c,1 and U_c,2.
These two pdfs are presented in Figure 2.11 as a bivariate probability density function. This bivariate pdf is used to define the threshold bounds (T_low,i, T_high,i) including a target probability φ = 0.95. Minimal and maximal bounds for each location are found numerically by satisfying Equation 2.14 while minimizing the area enclosed in the projection of the threshold bounds on the bivariate pdf.

∫_{T_low,1}^{T_high,1} ∫_{T_low,2}^{T_high,2} f_{U_c}(ε_c,1, ε_c,2) dε_c,2 dε_c,1 ≥ φ    (2.14)
Model falsification

For the first measurement location (quarter-span), a model instance is falsified if the residual between its predicted and measured values (ε_o,i) is either lower than -0.09 mm or higher than 0.03 mm (i.e. the threshold bounds). For the second measurement location (mid-span), the threshold bounds are set at -0.14 mm and 0.04 mm. These bounds define a rectangular coverage region that has a probability at least equal to φ of not wrongly discarding the right model instance. In order to falsify inadequate models, simulated measurements are generated by randomly selecting the prediction of one model instance and then adding errors randomly drawn from the combined uncertainty pdfs. The simulated measurements obtained are -1.3 mm and -2.0 mm for the quarter-span and mid-span vertical displacements.
The observed residuals ε_o,1 and ε_o,2 are computed by subtracting the measured values from the predictions of each model instance. Then, by comparing these observed residuals with the threshold bounds, inadequate model instances are falsified.
Figure 2.11: The two combined uncertainty pdfs are presented as a bivariate probability density function. This bivariate pdf is used to define the threshold bounds (T_low,i, T_high,i) including a target probability φ = 0.95. Minimal and maximal bounds for each location are found numerically by satisfying Equation 2.14 while minimizing the area enclosed in the projection of the threshold bounds. Model instances are falsified if, for any comparison point, the difference between predicted and measured values (g_i(θ) − y_i) is outside the rectangular threshold bounds.
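The falsification rule of Equation 2.8 applied to this two-comparison-point example can be sketched as follows. The three model-instance predictions are invented for illustration; the threshold bounds and simulated measurements are the ones quoted above.

```python
def falsify(predictions, measurements, bounds):
    """Keep a model instance only if, at every comparison point, the
    residual g_i(theta) - y_i lies inside [T_low_i, T_high_i] (Eq. 2.8)."""
    candidates = []
    for k, pred in enumerate(predictions):
        residuals = [g - y for g, y in zip(pred, measurements)]
        if all(lo <= r <= hi for r, (lo, hi) in zip(residuals, bounds)):
            candidates.append(k)
    return candidates

# Quarter-span and mid-span thresholds and simulated measurements (mm):
bounds = [(-0.09, 0.03), (-0.14, 0.04)]
measurements = [-1.3, -2.0]
# Hypothetical predictions of three model instances:
predictions = [(-1.32, -2.05), (-1.20, -1.80), (-1.31, -1.98)]
print(falsify(predictions, measurements, bounds))  # [0, 2]
```

Instance 1 is falsified because its quarter-span residual (0.10 mm) exceeds the 0.03 mm upper bound, even though no single-point criterion is applied in isolation: all comparison points are checked for each instance.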
Figure 2.12: Representation of the initial model set (concrete Young's modulus versus steel Young's modulus) with candidate, falsified and correct models. By comparing the difference between model predictions and measurements with threshold bounds, 58 model instances out of 100 are falsified. The 42 candidate models are outlined by the shaded region.
The shaded region corresponding to the candidate model set includes the correct parameter set. In this case, parameter compensation leads to more than one model that can explain the measurements when uncertainties are included. If new predictions have to be made using the template model, all combinations of parameters defined by the candidate model set should be used to compute uncertainties for these new predictions.
Random-walk sampling

The random-walk grid sampling technique proposed in §2.5.3 can be used to reduce the number of evaluations required for defining the candidate model set. The likelihood function used for sampling is based on Equation 2.11 with p = 100. The random walk is based on the generation of samples from a Gaussian distribution having a mean of 0 and a standard deviation of 1, multiplied by the grid spacing. The value obtained is rounded to an integer corresponding to a point arranged on the grid. As mentioned in §2.5.3, a step toward a new position is accepted in two circumstances. The first is when its likelihood is larger than the likelihood obtained at the previous position. Second, if its likelihood is lower than the likelihood obtained at the previous position, the new position has a probability of being accepted corresponding to the ratio of these two likelihoods (see §1.5.2).
Figure 2.13: Exploration of the model-instance space (concrete Young's modulus versus steel Young's modulus) using random-walk MCMC. The same 42 candidate models were found; however, 22% fewer samples were required than with an exhaustive grid sampling. The vertices between samples correspond to the path followed by the random walk. The starting point is highlighted by a circle.
This figure shows that the grid-based random-walk technique can efficiently create samples in the candidate model set. This technique can reduce the number of evaluations of inadequate model instances compared with the systematic evaluation of all possible parameter combinations. The performance of the approach depends on the size of the candidate model set compared with the initial model set size.
Figure 2.14: An illustration of the effects of model simplifications and omissions on time-domain structural identification. Graph a) represents the true displacement of a structure over time. When monitoring a system, noise is usually recorded in addition to the system behavior (b). Simulations of the system behavior are inevitably inexact (c). When comparing the time-domain measured and predicted signals (d), a bias is present. In such a situation, the residual cannot be described using stationary Gaussian noise.
MAC(y_j, g_l(θ_k)) = |y_jᴴ g_l(θ_k)|² / [ (y_jᴴ y_j)(g_l(θ_k)ᴴ g_l(θ_k)) ]    (2.15)

{(j, k, l) ∈ ℕ³ : MAC(y_j, g_l(θ_k)) ≥ MAC_lim}    (2.16)
The result of this correspondence check is a matrix of size n_k by n_y, where each row corresponds to a model instance and each column to a measured mode. The matrix is filled with indexes l mapping, for each model instance, which predicted mode corresponds to which measured mode. When a set {j, k, l} does not satisfy Equation 2.16, the index l is set to zero. The matrix is used during the falsification process to indicate which predicted frequency is compared with which measured frequency, based on the method presented in §2.2.
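A minimal sketch of this correspondence check, assuming real-valued mode shapes (so the Hermitian transpose reduces to an ordinary dot product) and an illustrative MAC limit of 0.8:

```python
def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors
    (Eq. 2.15, real-valued case)."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

def mode_correspondence(measured, predicted, mac_lim=0.8):
    """For each measured mode, return the 1-based index of the
    best-matching predicted mode when MAC >= mac_lim, else 0 (Eq. 2.16)."""
    row = []
    for y in measured:
        scores = [mac(y, g) for g in predicted]
        best = max(range(len(scores)), key=scores.__getitem__)
        row.append(best + 1 if scores[best] >= mac_lim else 0)
    return row

# Hypothetical measured and predicted mode shapes at three sensors:
measured = [[1.0, 2.0, 1.0], [1.0, 0.0, -1.0]]
predicted = [[0.9, 2.1, 1.1], [1.0, 0.1, -0.9]]
print(mode_correspondence(measured, predicted))  # [1, 2]
```

Each call to `mode_correspondence` produces one row of the n_k-by-n_y matrix, one model instance at a time.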
Figure 2.15: Two-dimensional ∞-norm likelihood function generated from Equation 2.13 using p = 100. The projection of the vertical walls on the horizontal plane corresponds to the threshold rectangular region. Large values of p can be used to approximate the ∞-norm likelihood function. This can be used as an alternative to ℓ2-norm Bayesian inference.
Figure 2.16: Posterior pdf (candidate versus falsified models) for the illustrative example presented in §2.6.1. This posterior pdf is computed using the ∞-norm likelihood function presented in Figure 2.15. The prior distribution is set as uniform over the ranges 190-212 GPa for steel and 15-45 GPa for concrete Young's moduli. The candidate-model region found corresponds to the set found using error-domain model falsification.
2.9 Conclusions

This chapter introduces the error-domain model-falsification approach. Concepts related to hypothesis falsification are used to discard inadequate model instances based on the residual of the difference between predicted and measured values. Specific conclusions are:

1. Error-domain model falsification does not require knowledge of the dependence between uncertainties to identify possible parameter values of models. With this methodology, the probability of falsifying a correct model instance does not increase with the number of measurements used.

2. Several sampling procedures can explore the space of possible solutions satisfactorily. These solutions are best for different types of problems. Nonetheless, complexity is a challenge when dealing with high-dimensional model spaces.

3. Error-domain model falsification is compatible with Bayesian inference that uses a likelihood function based on the ∞-norm. An example shows that results from the two approaches are equivalent when the same initial assumptions are made.
Summary

This chapter describes the expected identifiability metrics. These metrics predict probabilistically to what extent measuring a system can be useful to falsify models from an initial set and to reduce future prediction ranges (F⁻¹_CM(φ) and F⁻¹_PR(φ), respectively). The expected identifiability quantifies the effects on data interpretation of a-priori choices, such as model class, measurement location, measurement type and sensor accuracy, and of constraints, such as uncertainty level and dependencies.
3.1 Introduction

Our capacity to interpret data depends on aspects such as the choice of model classes, model parameters (and their ranges of possible values) and the extent of uncertainties influencing models and measurements. This chapter presents a methodology to predict probabilistically to what degree measurements are useful for reducing the number of candidate models and their prediction ranges, with respect to the aspects mentioned above. These metrics were introduced by Goulet and Smith as the expected identifiability [87]. It is based on the generation of simulated measurements.
Figure 3.1: Schematic representation of the inclusion of modeling and measurement errors in the generation of simulated measurements. Modeling error is added to the predicted behavior of a model instance to obtain the assumed true behavior. Simulated measurements (y_s,i) are obtained by adding a measurement error to the assumed true behavior obtained previously.
The process whereby simulated measurements are generated is illustrated in Figure 3.2. Before making observations on a structure, any model instance from the initial model set can be an adequate explanation of the system behavior. Therefore, any model can be randomly chosen to generate simulated measurements. Each model instance in the initial model set is made from a combination of parameters (θ_k = [θ₁, θ₂, ..., θ_{n_p}]_k, k ∈ {1, ..., n_k}) used in the template model.
Figure 3.2: Illustration of the process of simulating measurements based on the predictions of model instances (template model, predicted responses r_i) and on the modeling and measurement uncertainties at each measurement location. Note that the generation of simulated measurements includes the correlation between expected residual pdfs.
55
Independent
Moderate -
Low -
Low +
Moderate +
High +
Probability
High -
-0.80
1.00
-0.60
-0.75
-0.40
-0.50
-0.20
-0.25
0.20
0.40
0.25
0.60
0.50
0.80
0.75
1.00
Correlation value
Figure 3.3: Qualitative reasoning description used to dene the uncertainty correlation. The
correlation value is presented on the horizontal axis. The vertical axis corresponds to the
probability of occurrence a given correlation value depending on its qualitative description,
"low", "moderate", "high", "positive" and "negative".
Uncertainty correlations are included in the process of simulating measurements by generating correlated error samples from the combined uncertainty pdf (U_c). Details regarding the sampling of multivariate correlated random variables can be found in the references [80, 81]. Each realization of error uses a different correlation value distributed according to the density functions provided in the qualitative reasoning scheme.
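Correlated bivariate error samples of this kind can be generated with the 2×2 Cholesky factor of the correlation matrix. This is a sketch in which the standard deviations, and the 0.75-1.0 range assumed here for a "high +" correlation class, are illustrative.

```python
import math
import random

def correlated_errors(sigmas, rho, n):
    """Draw n bivariate Gaussian error samples with standard deviations
    `sigmas` and correlation `rho`, via the 2x2 Cholesky factor."""
    s1, s2 = sigmas
    out = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        e1 = s1 * z1
        e2 = s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
        out.append((e1, e2))
    return out

random.seed(2)
# Correlation drawn per realization from an assumed qualitative class:
rho = random.uniform(0.75, 1.0)  # illustrative "high +" range
errors = correlated_errors((0.05, 0.08), rho, 10000)
```

Drawing a fresh `rho` for each realization, as described in the text, spreads the simulated measurements over the whole qualitative correlation class rather than a single fixed value.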
Figure 3.4: a) Example of an empirical cumulative distribution function (cdf), F_CM(n_CM), used to compute the expected size of the candidate model set. F_CM(n_CM) depends on {φ, U_c, n_m, ...}: the target identification reliability, the uncertainties, the measurement configuration used and the uncertainty dependencies. In this example, there is a probability P = 0.95 of falsifying at least 60% of the models (F⁻¹_CM(0.95) = 40%) and a probability P = 0.50 of falsifying at least 75% of the models (F⁻¹_CM(0.50) = 25%). b) Effect of uncertainties, dependencies and target identification reliability on F_CM(n_CM). There is no unidirectional trend associated with the choice of measurement configurations.
set, simulated measurements are generated and used to falsify model instances. For each set of simulated measurements, the number of candidate models n_CM and the prediction ranges n_PR obtained are stored. When the number of simulated measurement sets generated reaches 1000, the empirical cumulative distribution functions F_CM(n_CM) and F_PR(n_PR) are computed. These steps are repeated to verify whether these cdfs converge to stable results. If not, more simulated measurements are generated. Otherwise, the answer is returned to users, who decide whether the expected performance is sufficient. If it is, proceeding with in-situ measurements is justified. Otherwise, users can choose to revise their initial assumptions, for instance by using more accurate sensors or better model classes to reduce uncertainties. If no improvement in the initial assumptions is possible, the expected identifiability justifies not performing a monitoring intervention on the structure.
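The simulate-falsify-repeat loop described above can be sketched as follows. This is a toy illustration under stated assumptions (a one-parameter model set on a grid, a scalar prediction, an independent Gaussian combined uncertainty and a fixed threshold); it is not the thesis implementation, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative initial model set: each instance predicts a scalar response.
predictions = np.linspace(0.0, 10.0, 101)
sigma_c = 1.0                      # combined uncertainty (assumed Gaussian)
threshold = 1.96 * sigma_c         # fixed ~95% threshold bound

n_cm_samples = []
for _ in range(1000):
    # 1. Randomly select a model instance to play the "true" system.
    true_pred = rng.choice(predictions)
    # 2. Generate a simulated measurement with a combined-error sample.
    y_sim = true_pred + rng.normal(0.0, sigma_c)
    # 3. Falsify instances whose residual lies outside the threshold bounds.
    candidates = np.abs(predictions - y_sim) <= threshold
    n_cm_samples.append(int(candidates.sum()))

# Empirical cdf F_CM(n_CM) of the candidate-model-set size.
n_cm_samples = np.sort(np.array(n_cm_samples))
def F_CM(n):
    """P(n_CM <= n), estimated from the stored simulation results."""
    return np.searchsorted(n_cm_samples, n, side="right") / len(n_cm_samples)
```

In practice the loop would be repeated until F_CM converges to a stable result, as in the flowchart of Figure 3.5.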
Reductions in the expected number of candidate models and in prediction ranges are indicators of the usefulness of measurements for obtaining new knowledge and for making better prognoses. When threshold bounds are large in comparison with model-prediction variability, monitoring the system is unlikely to provide useful decision support because model instances cannot be falsified in significant numbers. Computing the expected identifiability using the number of candidate models and prediction ranges as metrics can help determine whether or not measuring is likely to reveal new knowledge about the system behavior.
3.4 Conclusions
In this chapter, a methodology is proposed to predict probabilistically to what extent measuring would be useful to falsify model instances and to reduce future prediction ranges. This tool can be used to support decision-making regarding monitoring interventions. Specific conclusions are:
1. The effects on data interpretation of a-priori choices (such as model classes, measurement locations, measurement types and sensor accuracy) and constraints (such as uncertainty levels and dependencies) can be quantified by the expected-identifiability metrics. This quantification is compatible with the falsification framework that is described in Chapter 2.
2. Two metrics, the expected number of falsified models and the expected prediction ranges, are useful to predict in absolute terms the usefulness of measurements.
3. The methodology is able to generate simulated measurements that include location-specific, correlated and systematic uncertainties.
[Figure 3.5 flowchart: random selection of a model instance from the initial model set → model instances split into falsified models and candidate models → store results → repeat with N = N + 1 until N ≥ 1000 → is the expected performance sufficient? → monitoring, or no monitoring interventions]
Figure 3.5: Flowchart representing the steps involved in the computation of the expected identifiability. These metrics quantify the utility of monitoring for better understanding the behavior of a system.
4 Measurement-system design
Summary
This chapter describes the measurement-system design methodology proposed to maximize the usefulness of measurements for data interpretation. This approach uses the expected-identifiability metric to evaluate the performance of several measurement and test configurations. It shows that over-instrumentation is possible and that too much data may hinder interpretation.
4.1 Introduction
The expected identifiability described in Chapter 3 is used as a performance metric to optimize the efficiency of measurement systems for falsifying model instances. In common monitoring interventions, a balance is sought between performance and cost. Thus, the methodology uses cost as a second objective for optimizing measurement systems. The cost of a measurement system is computed as the sum of sensor costs and the expenses related to testing equipment, such as trucks in the case of static-load tests.
In the absence of computational support, engineers usually measure structures where the
[Figure 4.1: number of candidate models (%) plotted against the number of measurements (|cost), distinguishing useful measurements from over-instrumentation]
Figure 4.1: Schematic representation of the phenomena involved in the design of measurement systems. The total number of candidate models decreases as the number of measurements increases, until the point where additional observations are not useful (solid curve). Over-instrumentation is due to the combined effects of the increased amount of information and threshold adjustments (dashed curves).
In Figure 4.1 the total number of candidate models (solid line) decreases as the number of measurements increases, until the point where additional observations are not useful. Beyond this point, additional measurements may decrease the efficiency of the identification by increasing the number of candidate models (i.e. reducing the number of falsified models). Over-instrumentation is due to the combined effects of two competing trends: an increase in performance due to the additional information brought by new observations and a decrease in performance due to threshold adjustments (dashed lines).
In order to avoid over-falsification, threshold bounds are conservatively adjusted using the Šidák correction (see §2.2). Threshold corrections ensure that the reliability of the identification meets the target when multiple measurements are used simultaneously to falsify
model instances. In other words, the criteria used to falsify models (threshold bounds) depend upon the number of measurements used. Over-instrumentation occurs when including a new measurement falsifies fewer model instances than the number of additional instances accepted, due to threshold-bound adjustments. Such a situation is likely to happen when the information contained in several measurements is not independent. Furthermore, poor identification performance can be expected when modeling and measuring uncertainties are large in comparison with the prediction variability within the initial model-instance set.
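The threshold widening driven by the Šidák correction can be illustrated as follows. This sketch assumes independent zero-mean Gaussian combined errors with a common standard deviation, which is a simplification for illustration; the function name is hypothetical.

```python
from statistics import NormalDist

def sidak_bounds(sigma_c, n_m, target=0.95):
    """Symmetric threshold bounds on each residual such that the joint
    probability of all n_m residuals falling inside the bounds reaches
    the target reliability, for independent Gaussian combined errors.

    Sidak correction: per-measurement coverage p = target**(1/n_m).
    """
    p = target ** (1.0 / n_m)
    # Two-sided interval with coverage p for N(0, sigma_c).
    z = NormalDist().inv_cdf(0.5 + p / 2.0)
    return -z * sigma_c, z * sigma_c

# Bounds widen as measurements are added:
lo1, hi1 = sidak_bounds(1.0, n_m=1)     # ~ +/- 1.96 sigma
lo10, hi10 = sidak_bounds(1.0, n_m=10)  # ~ +/- 2.8 sigma
```

The widening is exactly the mechanism behind over-instrumentation: each added measurement relaxes the falsification criterion at every comparison point.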
Figure 4.2 presents a conceptual example where the Young's modulus (E) of a beam is sought using a static load test and vertical displacement measurements. It is intuitively known that the vertical displacement measured directly over the support is not able to distinguish between values of E. Nevertheless, using this measurement would require widening the threshold bounds for the two other comparison points, thereby decreasing the overall interpretation capacity.
[Figure 4.2: beam with unknown E and vertical displacement measurements, one of which (over the support) is useless]
Figure 4.2: Conceptual example used to illustrate the situation where additional measurements can lead to over-instrumentation.
Σ_{k=1}^{n_m} n_m! / (k!(n_m − k)!) = 2^{n_m} − 1    (4.1)
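Equation 4.1 counts the non-empty subsets of n_m candidate measurements, i.e. the number of possible measurement configurations. A quick check of the identity (function name hypothetical):

```python
from math import comb

def n_configurations(n_m):
    """Number of possible measurement configurations (Equation 4.1):
    the count of non-empty subsets of n_m candidate measurements."""
    return sum(comb(n_m, k) for k in range(1, n_m + 1))

# The sum of binomial coefficients for k = 1..n_m equals 2**n_m - 1.
```

This exponential growth is what motivates the greedy search of the next section.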
4.3.1 Methodology
The methodology used to design efficient measurement systems is based on a greedy algorithm. It identifies the single measurement that can be removed from an initial configuration containing n_m measurements while minimizing the change in the value of an objective function. This process is repeated until a single measurement is left.
Figure 4.3 presents a flowchart describing the steps involved in the optimization of the performance of measurement systems. In this flowchart, the vector s contains n_m dummy variables indicating whether or not a potential measurement location is used.
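The backward greedy search can be sketched as follows. The objective function here is a stand-in: in the methodology it would be the expected identifiability of the configuration, whereas this example uses a hypothetical additive sensor-value model purely for illustration.

```python
def greedy_backward(all_sensors, objective):
    """Backward greedy search sketch: starting from the full sensor set,
    repeatedly drop the single sensor whose removal degrades the
    objective least, until one sensor remains.

    objective : callable mapping a tuple of sensors to a performance
                score (higher is better), e.g. expected identifiability.
    Returns the sequence of configurations visited.
    """
    current = tuple(all_sensors)
    path = [current]
    while len(current) > 1:
        # Evaluate every configuration with exactly one sensor removed.
        options = [tuple(s for s in current if s != drop) for drop in current]
        current = max(options, key=objective)
        path.append(current)
    return path

# Hypothetical objective: each sensor contributes an independent
# information value, so the greedy search drops the weakest sensor first.
value = {"S7": 5.0, "S12": 3.0, "S13": 1.0, "S17": 2.0}
path = greedy_backward(["S7", "S12", "S13", "S17"],
                       lambda cfg: sum(value[s] for s in cfg))
```

The stored path lets users pick, afterwards, the smallest configuration whose performance is still acceptable.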
[Figure 4.3 flowchart: START → identify initial potential measurement locations → evaluate configurations and store results → repeat with N = N − 1 until N = 1 → compute measurement-configuration cost and remove dominated solutions → monitoring, or no monitoring interventions]
The greedy algorithm first evaluates the expected identifiability when using all possible measurement types and locations simultaneously. Once the expected identifiability (F_CM^-1(φ)) is computed using all sensors, each sensor is removed successively from the set of N = n_m
Complexity
For static monitoring, if one load case is possible, the greedy algorithm performs the measurement-system optimization in fewer than n_m²/2 iterations, where n_m is the maximal number of measurements. Figure 4.4 compares the number of iterations required with the number of possible sensor combinations. It shows that the greedy algorithm's complexity (O(n²)) leads to a number of sensor combinations to test that is significantly smaller than the number of possible combinations (O(2^n)).
[Figure 4.4: number of combinations (up to ×10^5) plotted against the number of possible measurement locations (2-20), comparing the number of possible combinations O(2^n) with the greedy algorithm's polynomial complexity O(n²); a magnified panel shows the range 0-400]
Figure 4.4: An example of the growth in the number of iterations required by the greedy algorithm compared with the growth of the solution space.
4.4.1 Methodology
As is done for static analyses, simulated measurements can be used prior to acquiring data to quantify the utility of measurements for falsifying models. The process is divided in two steps: the determination of the modes of interest and the determination of where to measure to obtain information about these modes.
y_s = r_v − U_c    (4.2)
Instances of simulated natural frequencies are used to emulate the model-falsification process. Each time simulated natural frequencies are generated, the number of candidate models obtained, n_CM, is stored in a vector C_M. When this process is repeated a sufficient number of times to obtain a stable statistical distribution, an empirical cumulative distribution
[Figure 4.5: mode match (expressed as a percentage of the mode match using all possible sensors) plotted against the number of sensors]
Figure 4.5: Schematic representation of the effect of the number of measurement locations on the mode-match criterion. The initial configuration using all sensors results in a 100% mode match (a), the same match may be possible with a configuration using fewer sensors (b), and a tradeoff between information and costs may be fixed with fewer sensors (c). The mode-match criterion quantifies the capacity to find a correspondence between predicted and measured mode shapes.
shapes does not influence the measurement configurations obtained. The measurement configuration having the smallest number of sensors and reaching the target mode-match criterion is stored each time a reference mode shape is tested. The results obtained using several reference mode shapes are summarized in a histogram describing the frequency of use of each sensor contained in the initial configuration. Sensors that are used with a relative frequency less than a target q ∈ [0, 1] are discarded. Stricter or looser target values can be used depending on the application. The set of optimal sensors found is denoted Q_opt.
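The frequency-based filtering step can be sketched as follows; the function name and the example configurations are hypothetical, and sensors are represented by plain identifiers.

```python
from collections import Counter

def select_sensors(configs_per_mode, q):
    """Keep sensors whose relative frequency of use, across the optimized
    configurations (one per reference mode shape), reaches the target q;
    sensors used less frequently are discarded.

    configs_per_mode : list of sensor sets, each being the smallest
                       configuration reaching the mode-match target
                       for one reference mode shape
    """
    counts = Counter(s for cfg in configs_per_mode for s in set(cfg))
    n = len(configs_per_mode)
    return {s for s, c in counts.items() if c / n >= q}

# Hypothetical smallest configurations for four reference mode shapes:
configs = [{1, 2, 3}, {1, 2}, {2, 3, 4}, {1, 2, 3}]
Q_opt = select_sensors(configs, q=0.5)
```

With q = 0.5, sensor 4 (used for only one of four reference mode shapes) is discarded.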
Optimization strategy
Each methodology presented in §4.4.1 involves a number of combinations that grows exponentially with both the number of sensors and the number of mode shapes. In most practical applications, an exhaustive search of the solution space is not possible. In order to find efficiently optimized sets of mode shapes and sensors, search algorithms are necessary. Many choices are available in the literature (see §1.5.2). A methodology suited to this type of problem is the inverse greedy algorithm proposed in §4.3.1. This heuristic-based optimization technique is used recursively to determine which sensor or which mode can be removed from a set while leading to the best performance. This methodology can provide optimized sets of either mode shapes or sensors in fewer than n² iterations.
In order to find optimized sets of sensors (§4.4.1), multiple loops of greedy optimization may increase the performance. For instance, when no measured data are available to describe mode shapes, the configuration returned using a single loop of the greedy algorithm can be over-conservative (see Figure 4.6a). A solution to this challenge is to run the greedy algorithm several times and remove the sensors used with a relative frequency less than q before the subsequent optimization loops, until no additional sensor can be removed. Figure 4.6b presents the sensors removed after several optimization iterations.
[Figure 4.6: histograms of relative frequency against measurement location (1, 2, 3, …, n_a), highlighting the sensors removed for the next greedy optimization iteration]
Figure 4.6: Histograms representing the relative frequency of usage of each sensor. a) Result obtained after the first greedy optimization loop; measurement locations used with a relative frequency less than q are removed during the subsequent greedy optimization loop. b) The final sensors removed after two or more optimization loops.
4.5 Conclusions
This chapter describes a methodology to analyze the performance of, and to design, optimized measurement systems. Too many redundant measurements may decrease data-interpretation performance. This aspect is incorporated in a systematic and quantitative framework that is able to help prevent over-instrumentation in measurement systems. Specific conclusions are:
1. The criteria used to falsify models (threshold bounds) depend upon the number of measurements. If the error structure is incompletely defined and too many measurements are used, data interpretation can be hindered by over-instrumentation.
2. Optimizing measurement-system configurations involves treatment of large amounts of data that would be unreasonable to analyze manually. The measurement-system design methodology can be used to determine good tradeoffs with respect to interpretation goals and available resources.
5 Case studies
Summary
This chapter describes validation tests and applications of the error-domain model-falsification approach, as well as of the complementary methodologies presented in Chapters 2-4. A first illustrative example demonstrates the capacity of the approach to perform correct diagnoses in situations where there are systematic errors and where the error structure is unknown. Four additional case studies are drawn from performance-evaluation applications to show the potential of the approaches for understanding the behavior of full-scale systems.
5.1 Introduction
Validation tests and applications of the error-domain model-falsification approach are presented in this chapter. Note that the term validation refers here to the comparison of predicted quantities with measurements. As exposed in §1.2.1, complete validation of theories and hypotheses is often not possible. Therefore, the applicability of the solutions proposed is demonstrated for five case studies covering the identification of the behavior of structures and the identification of leaks in a pressurized pipe network. Aspects covered by each case
[Table: aspects covered by each case study. Case studies: cantilever beam example (§5.2), Langensand Bridge (§5.3), Grand-Mere Bridge (§5.4), Tamar Bridge (§5.5) and Lausanne fresh-water distribution network (§5.6). Aspects: structural identification (static/dynamic), model-class comparison, measurement-system design and comparison with other approaches.]
Figure 5.1: True and idealized cantilever beams. Parameters to be identified using the idealized beam are the Young's modulus (E) and the value of the vertical force applied (F).
The vertical displacement v(x) of the beam at any location x ∈ [0, l] (l = 3000 mm) is described by Equation 5.1. For any location x, the error introduced by the idealized model is ε(x) = F l x/K.
v(x) = F x²(3l − x) / (6EI)    (5.1)
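Equation 5.1 can be evaluated directly; the sketch below uses a consistent unit set (x and l in mm, F in N, E in MPa, I in mm⁴), which is an assumption for illustration, and a hypothetical value of I.

```python
def deflection(x, F, E, I, l=3000.0):
    """Vertical displacement of the idealized cantilever (Equation 5.1):
    v(x) = F x^2 (3l - x) / (6 E I), with x and l in mm, F in N,
    E in MPa and I in mm^4 (result in mm)."""
    return F * x**2 * (3.0 * l - x) / (6.0 * E * I)

# At the tip (x = l), Equation 5.1 reduces to the familiar F l^3 / (3 E I).
tip = deflection(3000.0, F=1000.0, E=70000.0, I=1e8)
```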
Simulated measured values, y(x), are obtained according to Equation 5.2, where v*(x) is the model displacement computed with the correct parameter values E* and F*, and ε*(x) is the corresponding model-simplification error. u_meas is a realization of U_meas ∼ N(μ_meas, σ_meas), a Gaussian random variable describing sensor-resolution uncertainty. The mean of this random variable is 0 and its standard deviation is 0.02 mm. Sensor-resolution errors are independent of the measured locations. The combined uncertainty variance is obtained by summing the variances of the model-simplification and sensor-resolution uncertainties.

y(x) = v*(x) + ε*(x) + u_meas    (5.2)
o(θ) = g(θ) − y    (5.3)
Residual minimization
The first identification approach finds parameters θ̂ that are optimal in a least-squares sense (see Equation 5.4). The weighting matrix W is set to [diag(y)]⁻². In this approach, the goal is to calibrate model parameters to obtain the smallest weighted sum of the squares of the residuals. Uncertainties are assumed to be Gaussian and independent.

θ̂ = arg min_θ o(θ)ᵀ W o(θ)    (5.4)
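A minimal grid-search version of this residual minimization can be sketched as follows. The two-parameter linear model and the grid are hypothetical; the weights normalize residuals by the measured values, one common convention for this kind of weighting matrix.

```python
import numpy as np

def weighted_lsq(g, y, theta_grid):
    """Grid-based residual minimization sketch: return the parameter set
    minimizing o(theta)^T W o(theta), with W = diag(y)^-2 so that each
    residual is normalized by the corresponding measured value.

    g          : callable mapping a parameter tuple to predictions
    y          : measured values (assumed nonzero)
    theta_grid : iterable of candidate parameter tuples
    """
    y = np.asarray(y, dtype=float)
    W = np.diag(1.0 / y**2)
    def cost(theta):
        o = g(theta) - y        # residual vector o(theta)
        return float(o @ W @ o)
    return min(theta_grid, key=cost)

# Hypothetical linear model with two parameters (slope, intercept):
x = np.array([1.0, 2.0, 3.0])
g = lambda th: th[0] * x + th[1]
y = g((2.0, 1.0))               # noise-free "measurements"
grid = [(a, b) for a in (1.0, 2.0, 3.0) for b in (0.0, 1.0, 2.0)]
best = weighted_lsq(g, y, grid)
```

With noise-free data the grid search recovers the generating parameters; with noisy data it can land in a biased optimum, which is exactly the failure mode discussed below.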
P(θ) = constant, if θ ∈ Θ; 0, otherwise    (5.5)
The function mapping the residual values to the likelihood of parameter values is presented in Equation 5.6, where μ_Uc is a vector containing the mean of the combined uncertainty pdf for each location i. Equation 5.6 is the multivariate Gaussian distribution. The normalization constant P(y) (see Equation 1.2) is computed by integrating ∫ P(y|θ)P(θ) dθ. This integral can be evaluated over the domain Θ only, because the prior knowledge assigns a credibility of 0 outside this domain.
P(y|θ) = (2π)^(−n_m/2) |Σ|^(−1/2) exp(−½ (o(θ) − μ_Uc)ᵀ Σ⁻¹ (o(θ) − μ_Uc))    (5.6)
For this method, uncertainties are assumed to be independent. Thus, the covariance matrix Σ is a diagonal matrix containing the variance σ_c² for each comparison point where measured and predicted values are available.
the least-squares optimal solution does not match the true solution, even if in one case (c) the results are close. This is because there is more than one optimal solution and the least-squares optimal solution is trapped in a local minimum created by measurement noise. Therefore, selecting a single best parameter set can lead to a biased identification. For Bayesian inference, the true parameter values are in the 95% credible region for all values of n_m. Note that the size of the 95% credible region is reduced as n_m increases. This indicates that the more measurements are used, the more precise the identification is, because each new measurement brings additional information. For this scenario, Bayesian inference leads to a correct identification for any n_m.
Figure 5.2: Comparison of parameter values identified using least-squares parameter identification and Bayesian inference with the correct parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this scenario, there are no systematic errors and uncertainties are rightfully assumed to be independent. The labels correct and biased identification apply to Bayesian inference.
Figure 5.3 presents the identification results obtained using error-domain model falsification. The shaded region represents the candidate models and the white region the falsified model instances. Analogously to Bayesian inference, the identification is correct for any value of n_m. However, with this method, the size of the candidate model set is larger than the Bayesian 95% credible region.
Figure 5.3: Comparison of the candidate model set found using error-domain model falsification with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding uncertainty dependencies. The shaded area represents the candidate model set.
For the first scenario, where no systematic errors are present, both the Bayesian inference and error-domain model-falsification approaches lead to correct identifications. On the other
Figure 5.4: Comparison of parameter values identified using least-squares parameter identification and Bayesian inference with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this scenario, uncertainties are wrongfully assumed to be independent. The labels correct and biased identification apply to Bayesian inference.
Figure 5.5: Comparison of parameter values identified using Bayesian inference with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The shaded area represents the region including the 95% credible regions obtained when varying the correlation from 0 to 0.99, for all covariance terms simultaneously.
set increases with the number of measurements used because errors are strongly correlated. Therefore, each new measurement brings almost no new information to further discard model instances. When more measurements are included, threshold bounds are widened to include the effect of unknown dependencies between uncertainties, according to Equation 2.5. With this approach, the increase in the candidate-model-set size is smaller than for the sets found when varying the correlation coefficient for the Bayesian inference.
This example illustrates that using wrong values of uncertainty correlation with Bayesian inference may lead to biased identifications. It also shows that error-domain model falsification can achieve correct identifications without having to define uncertainty dependencies. Finally, over-instrumentation is possible when the error structure is incompletely defined, since for both Bayesian inference and error-domain model falsification the precision of the identification decreases when measurements are added (i.e. the size of the candidate model set increases).
Figure 5.6: Comparison of the candidate model set found using error-domain model falsification with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding uncertainty dependencies. The shaded area represents the candidate model set.
Figure 5.7 compares identification results for Bayesian inference and weighted least-squares residual minimization with the true parameter values E* and F*, for numbers of measurements n_m ∈ {1, 2, 10, 50}. It shows that for any n_m, these methods lead to biased identifications. As in the previous case, the sizes of the credible regions decrease with the number of measurements used. Again, this can lead to the belief that the identification is correct because the identification results are restricted to a small region. These approaches are unable to signal systematically that the initial assumptions regarding the model adequacy were wrong.
Figure 5.7: Comparison of parameter values identified using least-squares and Bayesian inference with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The systematic biases in model predictions and in measurement-error estimation are not recognized. The labels correct and biased identification apply to Bayesian inference.
Figure 5.8 presents the identification results obtained using error-domain model falsification. When one or two measurement locations are used, the approach leads to biased identifications because it finds candidate model sets that do not include the correct solution. For any value of n_m > 2, the approach identified that the assumptions made regarding the model adequacy were wrong. It leads to a correct identification by returning no candidate model. Therefore, given a sufficient number of measurements, error-domain model falsification can perform correct identifications even in the presence of unrecognized errors. For this third scenario, the correct identification was to recognize that the initial assumptions that created the entire model class were flawed with respect to the uncertainties defined.
Figure 5.8: Comparison of the candidate model set found using error-domain model falsification with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The systematic biases in model predictions and in measurement-error estimation are not recognized.
5.2.3 Discussion
For the previous example, it is trivial either to quantify the dependencies introduced by model simplifications or to parametrize the boundary conditions, as has already been proposed in several approaches [4, 82]. However, for full-scale civil structures, model simplifications are inevitable and uncertainty dependencies are in most cases unquantifiable. Thus, most full-scale identification tasks correspond to either the second or the third scenario, where systematic bias is introduced by model simplifications and may or may not be recognized.
[Table: identification performance of residual minimization, Bayesian inference and error-domain model falsification under three scenarios — no systematic errors, recognized systematic errors and unrecognized errors.]
1 Provided that dependencies are either known or that all possible values are tested.
2 Provided that a sufficient number of measurements are used.
When using Bayesian inference to identify properties of complex structures, varying all covariance terms simultaneously may not correctly capture the effect of systematic errors. Moreover, this could lead to computational-complexity issues because the number of covariance terms is defined by n_m²/2 − n_m. Thus, when using 10 measurements, if three correlation values, say −0.9, 0 and +0.9, are tested for each covariance term, the number of evaluations required is 3⁴⁰ ≈ 10¹⁹. One alternative is to combine the potential of both error-domain model falsification and Bayesian inference, as proposed in §2.8.
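The combinatorial blow-up can be checked directly; the count of covariance terms below follows the expression used in the text, and the function name is hypothetical.

```python
def n_covariance_evaluations(n_m, n_corr_values):
    """Number of correlation-value combinations to test when each
    covariance term can take one of n_corr_values candidate values.
    The term count n_m**2/2 - n_m follows the expression in the text."""
    n_terms = n_m**2 // 2 - n_m
    return n_corr_values ** n_terms

# 10 measurements, 3 candidate correlation values per term: 3**40 cases.
n_eval = n_covariance_evaluations(10, 3)
```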
Model falsification using static and dynamic data (§2.2 and §2.7)
Usage of surrogate models (§2.5.2)
Expected-identifiability computation (§3.3)
Load-test configuration optimization (§4.2)
This case study involves the Langensand Bridge, Lucerne, Switzerland. This structure was under construction (half of its width launched) when tested. The bridge is approximately 80 m long and has a slender profile; see Figure 5.9.
[Figure 5.9: girder profile with axes X and Y; clearance = 0.15 m]
Figure 5.10 represents the parts of the bridge in place during the first and second construction phases. The bridge consists of a concrete deck acting in composite action with a steel girder. The central part of the bridge is used as a roadway and the external parts are sidewalks.
[Figure 5.10: cross-section (axes 112-124) showing the roadway and sidewalk portions built in Phase 1 and Phase 2]
[Figure 5.11: plan views for load cases 2-5 showing test-truck positions (T1, T2) between axes 112-116 and sensor locations S7, S12, S13, S17 and A1]
Figure 5.11: Test-truck layout for load cases 1 to 5 (Phase 1) for the Langensand Bridge.
Table 5.3: Static measurements taken on the Langensand Bridge. Mean and standard deviation
represent the measurement variability obtained by repeating each load case three times.
Values are given as mean (standard deviation) per sensor.

Load case | UY-S7-112 (mm) | UY-S12-112 (mm) | UY-S17-112 (mm) | RZ-A1-113 (μrad) | RZ-S7-113 (μrad) | EX-L1 (μm/m) | EX-L2 (μm/m)
1 | -17.9 (0.09) | -22.5 (0.27) | -17.6 (0.33) | -1113 (5.5) | -644 (1.7) | -18.5 (0.81) | -14.5 (0.81)
2 | -17.1 (0.09) | -26.5 (0.21) | -23.7 (0.42) | -1020 (3.8) | -730 (2.1) | -31.9 (1.01) | -21.3 (0.40)
3 | -18.9 (0.07) | -28.0 (0.17) | -23.1 (0.40) | -1126 (11.5) | -734 (5.0) | -30.3 (0.23) | -20.6 (0.69)
4 | -9.4 (0.06) | -14.6 (0.14) | -12.7 (0.18) | -553 (1.9) | -382 (1.3) | -17.0 (0.40) | -11.1 (0.31)
5 | -8.5 (0.13) | -12.9 (0.30) | -11.6 (0.08) | -506 (2.9) | -377 (2.0) | -14.3 (0.12) | -9.7 (0.70)
[Figure 5.12: cross-section between axes 112 and 116 showing the roadway, road surface, concrete reinforcement, concrete barrier, sidewalk, transverse girder stiffeners and orthotropic deck stiffeners]
Table 5.4: Ranges and discretization intervals for parameters to be identified on the Langensand Bridge.

Parameter to be identified | Units | Range | Number of discretization intervals
Concrete Young's modulus | GPa | 20-44 | 13
Pavement Young's modulus | GPa | 2-20 | 10
Steel Young's modulus | GPa | 200-212 | 7
Stiffness of the horizontal support | kN/m | 0-1000 | 11
Within the initial model set, the concrete Young's modulus has the highest influence on model predictions. Figure 5.13 presents the relative importance of each parameter. The high importance of the concrete Young's modulus is due to the combination of prediction sensitivity and the possible parameter range.
[Figure 5.13: bar chart of relative importance (0-0.6) for the stiffness of the horizontal support, the steel Young's modulus, the concrete Young's modulus and the pavement Young's modulus]
Figure 5.13: Relative importance of each primary parameter on the model predictions of the Langensand Bridge.
Uncertainties
Several secondary parameters (not to be identified) contribute to model-prediction uncertainty (see §1.3.2 and §2.4.1), for instance the uncertainties related to the geometry of the structure (variation in the thickness of the elements, t), the Poisson's ratio of concrete (ν), the truck weight and the variation of strain-sensor positioning. All of these uncertainties are represented by normal distributions; the details of each are summarized in Table 5.5. Estimations of these uncertainties are based on values found in the literature (see §1.3.2) and on site observations. Variations in temperature during the tests affected the properties of the concrete and pavement materials. The uncertainty in the change in ambient temperature during the tests is represented as a uniform distribution defined between 0 and 5 degrees Celsius. The uncertainty associated with temperature is the maximal variation of temperature measured during the tests. Based on the relationship proposed by Bangash and England [14], the variation in percentage of the concrete Young's modulus is equal to the variation of temperature divided by 137. For the road surface, the relation between temperature and Young's modulus is taken to be the temperature variation divided by 30. This last relationship is based on the experimental work conducted by Perret [171] on similar materials.
Table 5.5: Secondary-parameter uncertainties for the Langensand Bridge

Uncertainty source | Unit | Mean | Standard deviation
ν concrete | - | 0 | 0.025
t steel plates | % | 0 | 1
t pavement | % | 0 | 5
t concrete | % | 0 | 2.5
Truck weight | Ton | 35 | 0.125
Strain-sensor positioning | mm | 0 | 5
For the quantification of other uncertainty sources (except for sensor resolution), no information other than engineering heuristics is available. In all cases, the values provided are intended to be conservative evaluations of the minimal and maximal bounds that should include the true error. Details regarding other uncertainty sources are presented in Table 5.6. In this
Table 5.6 (min/max uncertainty bounds per measurement type):
Displacement: min -0.2 mm, 0%, -1%, -1%; max 0.2 mm, 7%, 0%, 1%
Rotation: min -4 μrad, 0%, -1%, -1%; max 4 μrad, 7%, 0%, 1%
Strains: min -4 μm/m, 0%, -2%, -1%; max 4 μm/m, 20%, 0%, 1%
Uncertainty correlation
Evaluation of dependencies between uncertainty sources is a difficult task since such information is rarely available. The dependence between secondary-parameter uncertainties does not require a direct evaluation since these are obtained through their propagation in the finite-element model. Figure 5.14 shows the correlations between prediction types and locations for secondary-parameter uncertainties. These correlations can be compared with the assumption of independence commonly used by other identification approaches (see §1.2.2). In this figure, correlation matrices are presented where each axis on the horizontal plane represents a prediction type and the height of each bar represents the absolute correlation level between two prediction types. The predictions resulting from uncertainties in secondary-parameter values are highly correlated for static behavior.
[Figure 5.14: 3D bar plots of uncertainty correlation (0-1) between prediction types (displacements, rotations, strains)]
Figure 5.14: Correlation between predictions for the Langensand Bridge due to uncertainties in
secondary-parameter values. Results obtained from the Langensand Bridge model do not
reflect the common assumption of independence.
Uncertainty combination
The uncertainty sources mentioned above are combined and threshold bounds are
computed for each measurement location based on a target reliability of 0.95. The relative
importance of each uncertainty source is presented in Figure 5.15. Labels associated with
predictions and measurements refer to their type and location as presented in Figure
5.11. For displacement and rotation quantities, the dominant uncertainty sources are model
simplifications, secondary-parameter uncertainty and measurement repeatability. Other
uncertainty sources contribute less to the total uncertainty. Note that for
rotations, the contribution of sensor resolution is negligible. For strain measurements, the
dominant uncertainty sources are sensor resolution, measurement repeatability and model
simplifications.
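A minimal sketch of this combination step, assuming independent draws per source and illustrative bounds (not the values of Tables 5.5 and 5.6). Bounds covering 95% of the combined uncertainty are taken here as the 2.5% and 97.5% quantiles, which is one possible definition of the threshold bounds:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative draws for one measurement location: model simplifications,
# mesh refinement, additional uncertainties (uniform) and a Gaussian term
# for sensor resolution / repeatability.
u_combined = (rng.uniform(0.00, 0.07, n)      # model simplifications & FEM
              + rng.uniform(-0.01, 0.00, n)   # mesh refinement
              + rng.uniform(-0.01, 0.01, n)   # additional uncertainties
              + rng.normal(0.0, 0.002, n))    # sensor resolution / repeatability

# Threshold bounds containing the combined uncertainty with probability 0.95.
lo, hi = np.quantile(u_combined, [0.025, 0.975])
```

With a Monte-Carlo sample this size, roughly 95% of the combined-uncertainty draws fall between the two bounds.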
Model falsification
Model instances are falsified if, for any measurement location, the residual of the difference
between the predicted and measured values lies outside the threshold bounds. Model instances
that cannot be falsified are part of the candidate model set. Starting from an initial model set of
10 010 instances, 7 578 instances are falsified. This leads to a candidate model set containing
Figure 5.15: Example of uncertainty relative importance for the Langensand Bridge. Except
for strain, the dominant uncertainty sources are model simplification, secondary-parameter
uncertainty and measurement repeatability.
2 432 instances, a reduction of more than 75% in comparison with the initial model set. Figure
5.16 presents a matrix of plots illustrating the pairwise combinations of parameters found in
the candidate model set. The range of each plot corresponds to the range of parameters. Only
the concrete Young's modulus parameter range has been reduced by measurements. This is
because this parameter has the largest combination of sensitivity and parameter range.
The 2 432 instances are used here to perform predictions for quantities other than those used
to obtain the candidate model set. Table 5.7 compares the prediction ranges of the initial
model set for the first five dynamic excitation frequencies with the ranges obtained using the
candidate model set. The frequency prediction range is reduced by 55% to 82% across modes
compared with the predictions made using the initial model set. This indicates that the
prediction range of models can be reduced by using in-situ measurements.
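The falsification rule used above — reject a model instance as soon as one residual falls outside its threshold bounds — can be sketched as follows (toy numbers, not the Langensand data):

```python
import numpy as np

def falsify(predictions, measurements, lo, hi):
    """Keep a model instance only if, at every measurement location, the
    residual (prediction - measurement) lies inside the threshold bounds."""
    residuals = predictions - measurements          # shape (n_models, n_sensors)
    inside = (residuals >= lo) & (residuals <= hi)  # per-sensor bounds
    return inside.all(axis=1)                       # candidate-model mask

# Toy example: 4 model instances, 2 sensors, bounds of +/-1 per sensor.
preds = np.array([[0.2, 0.1], [1.5, 0.0], [0.9, -0.8], [0.0, -2.0]])
meas = np.array([0.0, 0.0])
mask = falsify(preds, meas, lo=np.array([-1.0, -1.0]), hi=np.array([1.0, 1.0]))
# Instances 0 and 2 remain candidates; instances 1 and 3 are falsified.
```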
Figure 5.16: Pairwise comparison of parameters found in the candidate model set for the
Langensand Bridge.
Table 5.7: Comparison of frequency prediction ranges computed using the initial and the
candidate model sets. For these five modes, predictions made using the candidate model set
lead to reductions in prediction ranges of 55% to 82% compared with the predictions
made using the initial model set.

Mode   Initial model set (IMS), Hz    Candidate model set (CM), Hz   Range
       min     max     range          min     max     range          reduction (%)
1      0.93    1.11    0.17           1.03    1.07    0.03           -82
2      2.61    3.04    0.42           2.83    2.95    0.12           -71
3      3.16    3.67    0.51           3.47    3.57    0.10           -80
4      4.15    4.62    0.46           4.37    4.57    0.19           -58
5      7.76    8.60    0.83           8.16    8.53    0.37           -55
Figure 5.17: Example of uncertainty relative importance when using a surrogate model to evaluate the initial model set. The contribution of the surrogate-model approximation (uncertainty
source no. 7) is small compared with other sources of uncertainty.
Figure 5.18: Accelerometer layout for the Langensand Bridge. Each triangle represents a
recording point. Labels Ref. represent reference sensors.
The average power spectral density function (PSD) of the Langensand Bridge recordings
is displayed in Figure 5.19. The number of singular values showing a peak at a particular
frequency indicates the number of modes having energy at this frequency.
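The operation just described — peaks in the singular values of the cross-PSD matrix — is the core of the frequency-domain decomposition (FDD) used here. A minimal numpy sketch, with a synthetic two-sensor record standing in for the bridge data:

```python
import numpy as np

def fdd_singular_values(records, fs, nseg=16):
    """Frequency-domain decomposition sketch: average the cross-spectral
    matrix over data segments, then take singular values per frequency line.
    Several large singular values at one frequency indicate several modes
    with energy at that frequency."""
    n_sensors, n = records.shape
    seg_len = n // nseg
    f = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    G = np.zeros((len(f), n_sensors, n_sensors), dtype=complex)
    for s in range(nseg):
        seg = records[:, s * seg_len:(s + 1) * seg_len]
        Y = np.fft.rfft(seg * np.hanning(seg_len), axis=1)  # windowed FFT
        # Accumulate the averaged spectral matrix Y(f) Y(f)^H per frequency.
        G += np.einsum('if,jf->fij', Y, Y.conj()) / nseg
    return f, np.linalg.svd(G, compute_uv=False)

# Toy signals: two sensors measuring a 2 Hz sine in noise.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 2.0 * t)
records = np.vstack([x + 0.1 * rng.standard_normal(t.size),
                     0.8 * x + 0.1 * rng.standard_normal(t.size)])
f, sv = fdd_singular_values(records, fs)
```

The first singular value peaks near 2 Hz for this toy record; with real multi-setup recordings the mode shapes are recovered from the corresponding singular vectors.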
(Figure: singular values of the PSD matrix versus frequency, with peaks labelled Vertical 1, Vertical 2, Torsion 1 and Transverse 1.)
Figure 5.19: Average power spectral density function (PSD) of the recordings made on the
Langensand Bridge.
The modes identified are detailed in Table 5.8 and the corresponding mode shapes are displayed in Figure 5.20. The first four modes of the girder, from 1.27 Hz up to 4.4 Hz, are easily
identifiable (Figure 5.19): first vertical bending, first lateral bending, second vertical bending
and first torsion modes. Other peaks in the spectra are not all linked to structural behavior, for example at 1.8 Hz, where a transient disturbance is noticed. Moreover, two peaks are found
in the spectra in all datasets with the same modal shape corresponding to the first transverse
mode (peak #2), whereas in the numerical model they are likely to correspond to the same mode. Both
are used for structural identification.
Between 7 Hz and 8 Hz, as mentioned above, there should be three close modes, but only two could
be found: the third vertical mode of the girder and a second torsion mode of the girder.
Another second torsion mode, affecting mostly the walkway, is found around 9.3 Hz. Then,
from 10 Hz on, 19 more modes up to 34 Hz related to the walkway or the bridge bottom flange
(not instrumented) were found.
Figure 5.20: Mode shapes of the Langensand Bridge computed from ambient vibration monitoring. Large oscillations on the left side of the structure correspond to walkway vibration
modes.
Table 5.8: Modes identified on the Langensand Bridge using the FDD method.

Interpretation          f (Hz)   Std. dev. (Hz)
Vertical 1              1.27     0.02
Transverse 1 peak 1     2.58     N/A
Transverse 1 peak 2     2.83     N/A
Vertical 2              3.53     0.02
Torsion 1               4.40     0.04
Vertical 3              7.29     0.11
Torsion 2a              7.95     0.15
Torsion 2b              9.33     0.15
Walkway 1               10.03    0.12
Walkway 2               10.88    0.15
Walkway 3               11.57    0.11
Walkway 4               12.30    0.11
Walkway 5               13.06    0.10
Walkway 6               13.43    0.05
Walkway 7               14.34    N/A
Walkway 8               14.33    0.07
Walkway 9               15.88    0.08
Walkway 10              17.76    0.09
Walkway 11              19.60    0.07
Walkway 12              21.72    0.10
Walkway 13              23.64    0.15
Walkway 14              27.72    0.20
Walkway 15              29.48    0.15
Walkway 16              31.12    0.16
Walkway 17              32.77    0.08
Walkway 18              34.05    0.09
Walkway 19              34.83    0.05
(Figure: PSD amplitude versus frequency from 0.2 Hz to 5 Hz, in the longitudinal, transverse and vertical directions.)
Figure 5.21: Comparison of two recordings taken in the centre of the Langensand Bridge along
the three axes, with and without traffic.
Table 5.9: Secondary-parameter uncertainties used for the dynamic study of the Langensand Bridge.

Uncertainty source   Unit     Mean   Standard deviation
ν concrete           –        0      0.025
t steel plates       %        0      1
t pavement           %        0      5
t concrete           %        0      2.5
density steel        Ton/m3   7.85   0.025
density concrete     Ton/m3   2.40   0.05
density pavement     Ton/m3   2.30   0.05
Other uncertainty sources are reported in Table 5.10, where the lower and upper bounds for
each uncertainty source are provided. These uncertainty distributions are modeled as an
extended uniform distribution (EUD) (see Appendix A) having a value of 0.3.
Estimation of uncertainties in the experimental parameters is not straightforward. A part of the
uncertainty is related to the variability of the frequencies themselves and another part is related
to errors in the estimations. Measurement variability is assessed by estimating the distribution
of the results along the dataset (see Table 5.8), inferred by a normal distribution. The standard
deviation observed does not exceed 2%. Based on references provided in Section 1.3.2, ambient
vibration monitoring epistemic uncertainties are represented by a uniform distribution with
bounds at ±2% of the frequency values.
Model simplification uncertainty evaluation is based on previous studies (Section 5.3.2) that estimated the prediction uncertainty related to degrees of freedom (DOFs) (displacements and
rotations) to be between 0% and 7% of the averaged predicted value. Since the natural frequency is proportional to the square root of the stiffness, the uncertainty in frequency prediction is
estimated as the square root of the DOF uncertainty, rounded up to the nearest integer.
Note that the sign of model simplification uncertainty is inverted compared with values reported in Section 5.3.2 because simplifications and omissions now decrease the model fundamental
frequencies. Mesh refinement errors are evaluated by refining the mesh of the model until
predictions converge to a stable value. Additional uncertainties are provided to include other minor
factors that could have been neglected.
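As a worked example of this conversion — assuming the rounding is applied to the uncertainty expressed in percent, which is one reading of the rule stated above:

```python
import math

def frequency_uncertainty_bound(dof_bound_percent):
    """Map a stiffness-related prediction uncertainty of p% (on displacements
    and rotations) to a frequency uncertainty of sqrt(p)%, rounded up to the
    next integer percent, since f is proportional to the square root of k."""
    return math.ceil(math.sqrt(dof_bound_percent))

bound = frequency_uncertainty_bound(7)  # a 7% DOF bound gives 3% on frequency
```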
Combining all the previous sources of uncertainty for each vibration mode leads to the combined uncertainty pdfs. The relative importance of uncertainty sources, averaged over all
frequencies, is shown in Figure 5.22. The main components of uncertainty are the measurement variability, the uncertainty introduced by secondary-parameter uncertainties (Table 5.9)
and the model simplifications. Threshold bounds are computed for each mode with a target
reliability of 0.95.

Table 5.10: Minimal and maximal frequency-uncertainty bounds for each source: measurement variability, AVM epistemic variability, model simplifications & FEM, mesh refinement and additional uncertainties.
(Figure: bar chart of relative importance per uncertainty source: 1 Measurement variability, 2 Secondary parameters, 3 Model simplifications, 4 Additional uncertainty, 5 AVM epistemic variations, 6 Mesh refinement.)
Figure 5.22: Relative importance of uncertainty sources for the Langensand Bridge. The
dominant component of the combined uncertainty is the measurement variability.
Uncertainty dependencies
Of all the uncertainty sources presented in the previous section, only the dependencies due
to the secondary-parameter uncertainty can be evaluated, during their propagation through
the template model. The result of this evaluation is presented in Figure 5.23. The
horizontal axes represent the modes studied and the height of each bar is the absolute value
of the correlation between pairs of secondary-parameter uncertainties. The frequency correlation between modes tends to decrease for higher modes (f > 12 Hz).
The results presented in this figure show that a high correlation between uncertainties is
expected. This does not correspond to the idealized case assumed by traditional approaches,
where uncertainties are all independent (see Section 1.2).
Model falsification
For a predicted mode to be associated with an observed one, the MAC value computed from
these two must be greater than or equal to a limit value of 0.8. Fifteen modes have such correspondence.
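The MAC value between a predicted and an observed mode shape is a standard quantity; a minimal implementation of the check described above:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion: 1.0 for identical shapes up to scaling,
    near 0 for unrelated shapes."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    return float(num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real))

phi = np.array([1.0, 2.0, 3.0])
paired = mac(phi, 2.5 * phi) >= 0.8  # True: scaling does not affect the MAC
```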
Figure 5.23: Correlation between the predicted frequencies of different natural modes. Predictions are obtained by varying secondary-parameter values.
These modes are presented in Figure 5.24. Model instances for which predicted mode shapes
do not pass the MAC correspondence check are discarded. The remaining instances are
classified as either candidate or falsified models, based on the residual of the difference
between the predicted and observed frequency values. Figure 5.25 compares the scatter
in model predictions with the observed frequency for the first and second modes. Model
instances are represented on the horizontal axis and the vertical axis corresponds to either the
predicted or measured value. Falsified models are represented by points and candidate models
by crosses. Threshold bounds used to falsify models and combined uncertainty distributions
are also presented in this figure. The candidate models found all have a partial limitation in
the free movement of the bearing devices.
The stratified pattern in model predictions for the first frequency is due to the discrete grid
sampling used to generate the initial model set. The first frequency is found to be mainly
influenced by the stiffness of the bearing-device hindrance. For mode 1, lower frequencies
(1.00–1.05 Hz) correspond to either low restriction or free bearing-device movement, and higher
frequencies (> 1.15 Hz) to heavily restricted longitudinal displacement. The candidate models
are those within the threshold bounds for all modes. Thresholds falsified all model instances
having no restriction (k = 0 kN/m) on the bearing-device movement. Models with high values
of restriction (k = 2000 kN/m) are also discarded. These trends are presented in Figure 5.26,
where the candidate model set is shown as pairwise combinations of parameter values.
The effect of the three other parameters is not significant enough, compared with the uncertainties, to reject further instances. The bearing-device hindrance only significantly influences
the first frequency of the structure. Its effect is less important for higher modes, for which the
scatter in the data is similar to that of the second frequency presented in Figure 5.25. For the
third frequency and higher modes, all models lie within the threshold bounds. Therefore,
measurements of these modes do not lead to the rejection of any model instances.
Figure 5.24: Mode shapes computed from the Langensand Bridge model and used for the
identification.
(Figure: predicted frequencies for Mode #1 (about 1.0–1.45 Hz) and Mode #2 (about 2.5–3.2 Hz) for each model instance, with threshold bounds, the measured value, the combined uncertainty pdf, and candidate versus falsified models indicated.)
Figure 5.25: Comparison of the model prediction scatter and measured value for the first two
frequencies for the Langensand Bridge.
(Figure: scatter plots over pairwise combinations of the parameters BC-STIFF, UXSTIFF, EXCONC and EXPAV.)
Figure 5.26: Pairwise comparison of parameters found in the candidate model set for the
Langensand Bridge. Candidate models were found using dynamic data.
This shows that results from ambient vibration monitoring may not always be directly interpreted. In this case, the small amplitudes of the input appear to be insufficient to overcome the
cohesion and friction involved with the longitudinal movement of the bearing devices. Due to
the arched profile of the structure, these restraints inadvertently increased the first natural
frequency of the structure by 8% and thereby its apparent stiffness by 17%. Such apparent
stiffness was not observed during static measurements because the horizontal forces (truck
loading) were high enough to overcome friction forces in the bearing devices. The most likely
hypothesis is that during the second measurement [246], the higher noise level due to
traffic partially freed the bearings and noticeably modified the fundamental mode, causing a
decrease in the resonance frequency and an increase in the damping ratio (Figure 5.21). These
findings can influence the remaining fatigue life of the structure by modifying the
stress-cycle amplitudes under service loads.
Model-class falsification
Interpreting data using an approach that adjusts the structural parameters of a model to
minimize the discrepancy between predicted and measured values can be dangerous. If wrong
assumptions are made at the beginning, for instance if the hypothesis that bearing devices
do not work properly under the measured conditions is not included, wrong conclusions
are obtained. In the case of the approach presented here, when the comparison of model
instances and measurements was first performed without the hypothesis of bearing-device
malfunction, all model instances were falsified. Such a situation indicated that the model
class itself was inadequate.
Table 5.11: Qualitative evaluation of uncertainty correlation between comparison points for
each uncertainty source.

Uncertainty source            Correlation between displacement, rotation and strain comparison points
Model simplification & FEM    High +
Mesh refinement               High +
Additional uncertainties      Moderate +
Sensor resolution             Low +
(Figure: cumulative distribution function of the expected candidate-model-set size, with the P = 50% level marked; probability versus maximal number of candidate models as a percentage of the initial model set.)
The probability density function (pdf) is the derivative of the cumulative distribution function
(cdf). Therefore, the high probability content of the domain is found where the cdf
slope is steep. The polygonal sign represents the actual candidate-model-set size that is
obtained in Section 5.3.2 using real measurements. The expected number of candidate models is in
agreement with the number obtained using real observations. In this situation, a significant
reduction in the number of candidate models (≈65%) is expected with a high probability
(95%). Therefore, if the objective is to reduce the number of possible model instances that
remain, this measurement system is expected to be useful.
(Figure annotation: frequency range of the candidate model set obtained from on-site observations.)
Figure 5.29: The correlation choice proposed in the qualitative reasoning scheme (Section 3.2.1) is
varied by ±0.2. This variation is used to test the robustness of the expected identifiability
with respect to the uncertainty-dependency choice.
(Figure: cumulative distribution curves for independent uncertainties and for the proposed qualitative reasoning scheme with correlations shifted by -0.20, 0 and +0.20; horizontal axis: maximal number of candidate models as a percentage of the initial model set. Predictions under the independence assumption are located in the distribution tail, with low probability of occurrence.)
Figure 5.30: Comparison of the cumulative distribution function of the expected candidate-model-set size obtained for several assumptions of correlation. The vertical dashed line corresponds to the number of candidate models obtained using on-site measurements (see Section 5.3.2).
Assuming that uncertainties are independent does not lead to conservative predictions.
Structure description
The finite-element template model used to generate model instances is presented in Figure
5.31. The primary parameters to identify are the concrete Young's modulus for the slab poured
during construction phases one and two, the asphalt Young's modulus for phases one and
two, and the stiffness of the horizontal restriction that could take place at the longitudinally
free bearing devices. Details of the construction phases are presented in Figure 5.32. The
possible range for the concrete Young's modulus varies from 15 GPa to 40 GPa, the asphalt
Young's modulus from 2 GPa to 15 GPa, and the bearing-device restriction from 0 kN/mm to 1000 kN/mm.
Each parameter range is subdivided into five intervals to generate 3 125 initial model instances.
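The grid sampling just described can be sketched directly; parameter names are illustrative labels for the five primary parameters, not identifiers from the thesis model:

```python
import itertools
import numpy as np

# Five primary parameters, each discretized into five values over its range:
# concrete E (construction phases 1 and 2), asphalt E (phases 1 and 2) and
# the bearing-device restriction stiffness.
grids = {
    'E_conc_1': np.linspace(15e9, 40e9, 5),   # Pa
    'E_conc_2': np.linspace(15e9, 40e9, 5),
    'E_asph_1': np.linspace(2e9, 15e9, 5),
    'E_asph_2': np.linspace(2e9, 15e9, 5),
    'k_bearing': np.linspace(0, 1000, 5),     # kN/mm
}
instances = list(itertools.product(*grids.values()))
# 5**5 = 3125 model instances, matching the initial model set size.
```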
(Figures: cross-sections showing the roadway, sidewalk, road surface, concrete barrier, concrete reinforcement, transverse girder stiffeners and orthotropic deck stiffeners; construction phases 1 and 2 with primary parameters EX-1 to EX-5; measurement labels U: displacement, R: rotation, E: strain.)
Figure 5.32: Langensand Bridge cross-section and potential sensor layout to be used for future
monitoring.
(Figure: load-cases 1 to 4, showing truck positions on the cross-section and the sensors A1, S7, S12, S13 and S17.)
Uncertainty dependencies
Dependencies between uncertainty sources and locations are included in the process of
simulating measurements. These dependencies are described by correlation coefficients.
Since little information is available for evaluating these quantities, the qualitative reasoning
scheme proposed in Section 3.2.1 is used to describe them. Correlation definitions are the same as
those presented in Table 5.11. For each measurement location, a combined uncertainty pdf is
computed. Threshold bounds are determined for a target probability fixed at 0.95. The
width of the threshold bounds depends on the number of measurements used in
the falsification process. Therefore, specific threshold bounds are computed for each sensor
configuration.
(Figure: expected number of candidate models versus load-test cost (×10,000 $); the trade-off curve runs between the maximal cost and the minimal number of candidate models, with regions labelled 'useful measurements' and 'over-instrumentation'.)
Figure 5.34: Measurement-system design multi-objective optimization results for the Langensand Bridge.
The sensor and load-case configurations associated with each dot in Figure 5.34 are reported
in Table 5.12. In this table, the columns containing diamond signs indicate which sensors and
load-cases are selected for each configuration reported in Figure 5.34. The expected number
of candidate models is obtained for a probability of 95%. This is the upper bound for the
number of candidate models that should be obtained when using real measurements. This
means that individual results are likely to be better.
In this case, the best measurement system found uses four sensors with three load-cases and would
result in almost 80% of model instances being falsified. This measurement-system configuration is halfway between the cheapest and most expensive measurement systems. It leads
to a reduction of monitoring costs by up to 50% compared with the maximal cost. Results
presented here indicate that over-instrumenting a structure is possible. The approach proposed is not intended to replace engineering judgment; it is presented as a tool for exploring
the benefits of a wide selection of possible sensor types and locations. In the case where the
optimal configuration uses too few sensors, additional provisions might have to be taken to
account for possible sensor breakage and malfunction, and to provide robustness through redundancy.
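The trade-off curve of Figure 5.34 is a Pareto front over two objectives to minimize: load-test cost and expected number of candidate models. A sketch using a few (cost, expected CM) pairs taken from Table 5.12:

```python
def pareto_front(configs):
    """Return configurations not dominated in (cost, expected number of
    candidate models), both minimized. Weak dominance; assumes no duplicate
    configurations."""
    front = []
    for c in configs:
        dominated = any(o['cost'] <= c['cost'] and o['cm'] <= c['cm'] and o != c
                        for o in configs)
        if not dominated:
            front.append(c)
    return front

configs = [
    {'name': 'A', 'cost': 3400, 'cm': 1252},
    {'name': 'B', 'cost': 5300, 'cm': 909},
    {'name': 'C', 'cost': 8400, 'cm': 783},
    {'name': 'D', 'cost': 6900, 'cm': 861},
    {'name': 'E', 'cost': 11500, 'cm': 676},
    {'name': 'F', 'cost': 12500, 'cm': 711},  # dominated by E: costs more, keeps more models
]
front = pareto_front(configs)
```

Only non-dominated configurations (A to E here) appear on the curve; configuration F is removed because E is both cheaper and more discriminating.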
Table 5.12: Optimized measurement configurations. For each configuration, the selected
sensors (among UY-S03-114, UY-S07-114, UY-S12-114, UY-S17-114, UY-S21-114, UY-S03-124,
UY-S07-124, UY-S12-124, UY-S17-124, UY-S21-124, RZ-A1-113, RZ-S4-113, RZ-S7-113,
RZ-S10-113 and EX-L1 to EX-L5) and load-cases (LC1 to LC4) are shown by a vertical set of
symbols, along with the cost of the load-test and the expected number of candidate models
computed for a probability of 95%.

Cost ($)   Expected CM
3400       1252
3800       1092
5300        909
8400        783
6900        861
9900        722
10200       712
11500       676
11700       694
12500       711
12900       720
15600       720
16000       725
16600       725
18000       729
18200       755
19500       773
19900       781
(Figure: elevation of the Grand-Mere Bridge: 181.4 m total length with 39.6 m spans and 12.2 m end spans, east and west bound; locations of observed cracks and sand filling are indicated.)
(Figure: cross-section, 12.8 m wide, with the pavement and girder modeled as shell elements.)
Figure 5.38: Grand-Mere Bridge cross-section and isometric view of the simplified shell-based
model.
Shell-solid model
The shell-solid model is built from a combination of shell and solid elements, as presented in
Figure 5.39a. The level of refinement used in this model class is above what is usually employed
in practice because of the increase in modeling and computational effort required. Structural
elements that have a small thickness compared to their other dimensions, such as the web,
the upper flanges and the cantilevered deck, are modeled as shell elements. Elements having
variable thickness are approximated by step-wise constant thicknesses. The wedge-shaped
ends are modeled as solid elements, as are the fillets of the cross-section and the roadway
barriers. To ensure compatibility between shell elements (six degrees of freedom (DOFs) per node)
and solid elements (three DOFs per node), constraint equations are applied to the nodes located at the shell-solid
interfaces. For support conditions, bearing plates are represented using constraint equations
where supports are defined for relevant master nodes only. The cross-section of the shell-solid
model is presented in Figure 5.39b.
Figure 5.39: Grand-Mere Bridge cross-section and isometric view of the shell-solid model
Figure 5.40: Grand-Mere Bridge cross-section and isometric view of the solid-based model
(Figure: half of the main span, L/2 = 90.7 m.)

Let r_S,i be the value predicted by the solid model, r_SS,i the value predicted by the
shell-solid model and r_SO,i the value predicted by the shell-based model. The relative
errors are defined as:

ε_SS,i = (r_SS,i - r_S,i) / r_S,i    (5.7)

ε_SO,i = (r_SO,i - r_S,i) / r_S,i    (5.8)
The most detailed model (i.e. solid-based) is expected to be more rigid than the shell-solid
model, and the latter more rigid than the shell-based model. This is due to load-carrying
elements that are neglected in both the shell-solid and shell-based models. More rigid
models lead to smaller static predictions and higher natural frequencies. Therefore, modeling
simplification errors are expected to be positive for the static predictions and negative for
frequencies. Furthermore, errors are expected to be larger for the shell-based model than for
the shell-solid model.
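Equations 5.7 and 5.8 in code form, with an illustrative value rather than a result from the Grand-Mere models:

```python
def relative_error(r_model, r_solid):
    """Relative prediction error of a simplified model with respect to the
    solid-based reference model (Equations 5.7-5.8)."""
    return (r_model - r_solid) / r_solid

# A shell-based static prediction 12% above the solid-model value gives +0.12,
# consistent with the positive errors expected for static predictions.
err = relative_error(r_model=1.12, r_solid=1.00)
```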
Vertical displacement
Relative errors computed for vertical displacements on the central part of the main span
are presented in Figure 5.42. The average relative error on the central part of the main span
is +12% for the shell-based model and +2% for the shell-solid model. The error from the
shell-based model increases at locations near the intermediate supports. Zones around the
supports have complex geometries (simultaneous variation in web, lower flange and fillet
dimensions) that are not well captured by simplified shell models.
The errors obtained for the shell-solid model are smaller because fewer geometrical simplifications are made. However, negative errors are observed at the supports on the side spans due
to the level of simplification involved, which affects the force redistribution.
115
SO:+18%
SS:+2%
SO:+18%
SS:-12%
SO:+13%
SS:+2%
SO:+11%
SS:+2%
SO:+11%
SS:+2%
SO:+19%
SS:+2%
SO:+25%
SS:-8%
SO:+13%
SS:+2%
SO:+25%
SS:+1%
Figure 5.42: Relative error for vertical displacement predictions due to model simplifications.
(Figure: SO errors range from +10% to +24%; SS errors from -1% to +2%.)
Figure 5.43: Relative error for rotation predictions around the Z-axis due to model simplifications.
Longitudinal strain
Strain relative errors are computed at different locations on the sections, as shown in Figure
5.44. Predictions for the cantilever deck are taken on the upper fibre whereas other predictions
are made on the inside of the box girder. The results obtained show that strain
predictions are very sensitive to local model simplifications.
(Figure: SO: shell-only model; SS: shell-solid model. Most SO strain errors lie between -10% and +33% and SS errors between -11% and +4%, but they reach +151% (SO) and +127% (SS) at some locations.)
Figure 5.44: Relative error in strain prediction along the X-axis due to model simplifications.
Natural frequency
The relative errors computed for natural frequencies are presented in Table
5.13. The shell-based model systematically predicts lower frequencies than the solid-based
model. In the case of the shell-solid model, prediction errors are smaller than for the shell-based
model for vertical bending modes. The accuracy of the shell-solid model is lower for modes
corresponding to lateral bending and torsion.
Table 5.13: Relative error in predicted natural frequencies (%) and losses in the MAC criteria
due to model simplifications.

                               Errors in natural frequencies (%)   MAC criteria
Mode   Description             Shell-based   Shell-solid           Shell-based   Shell-solid
1      Vertical bending*       -4.4          –                     1.00          –
2      Lateral bending*        -4.1          -9.6                  1.00          0.99
3      Vertical bending*       -4.4          -1.1                  1.00          1.00
4      Lateral bending         -3.1          -16.3                 1.00          0.89
5      Vertical bending*       -2.6          -0.2                  0.98          0.99
6      Torsion*                -7.5          -18.9                 0.99          0.51
6      Torsion-2               -5.3          +3.6                  1.00          0.59
7      Vertical bending*       -8.8          -1.4                  0.97          1.00
8      Torsion                 -6.1          +1.9                  0.99          0.61
9      Vertical bending        -0.5          -1.2                  1.00          1.00

*: measured mode
Mode shape vectors are extracted for each model and for each natural frequency reported
in Table 5.13. A MAC test is performed on mode shapes. The results of the comparison are
shown in Table 5.13. For the shell-solid model, there are two close modes that present a mix of
torsion and lateral bending (Mode 6), and both fail the MAC test with a value below 0.6. Mode
number 8 (torsion) also has a poor MAC value. This indicates that torsional behavior is not
adequately captured by the refined shell-solid model. Therefore, this model class should not
be used to explain observations involving torsional behavior.
Vertical displacement
Relative errors for the vertical displacement predictions are presented in Figure 5.45. The
exclusion of the three first features (barriers, pavement and concrete reinforcement) gives
constant prediction errors of +7%, +0% and +2%, respectively, over the length of the bridge.
The exclusion of diaphragms and the simplification of the support conditions are discrete
simplifications. Therefore, the error on the prediction is not the same for all locations. The
errors increase as the predicted values are extracted closer to the location of the omitted
element. The cumulative effect of all secondary elements significantly affects the predicted
displacement values. Note that the sum of the effects of individual components is not equal to the
effect of all components taken together.
(Figure legend: G: no barrier; P: no pavement; R: no reinforcement; B: no diaphragm; S: simplified support conditions; A: all simplifications. Errors for all simplifications together range from -5% to +16% along the bridge.)
Figure 5.45: Relative error for vertical displacement prediction due to the omission of secondary elements.
(Figure legend: G: no barrier; P: no pavement; R: no reinforcement; B: no diaphragm; S: simplified support conditions; A: all simplifications. Errors for all simplifications together range from +13% to +21%.)
Figure 5.46: Relative error for rotation prediction around the Z-axis due to the omission of secondary elements.
Longitudinal strain
Figure 5.47 shows that the relative errors in longitudinal strain predictions are dependent
upon their location. Quantitatively, the elements having the most influence are the barriers.
The errors in the predictions obtained with the model that excludes the diaphragms, and the
predictions obtained with simplified support conditions, present anomalies such as localized
high values and negative values. The predictions made on the bottom chord, far away from
any secondary elements, are less affected by their omission.
(Figure legend: G: no barrier; P: no pavement; R: no reinforcement; B: no diaphragm; S: simplified support conditions; A: all simplifications. Errors for all simplifications together range from +2% to +42%.)
Figure 5.47: Relative error for strain prediction along X-axis due to secondary-elements omission.
119
Relative error in natural-frequency predictions due to secondary-element omission:

Mode                    Barrier  Pavement  Reinforcement  Diaphragms  Support conditions  All parameters
1 - Vertical bending    -0.07    +3.5      +0.08          -0.3        -1.1                +2.5
2 - Lateral bending     -5.4     +3.6      +0.06          -8.9        -3.1                -10.8
3 - Vertical bending    -0.2     +3.2      +0.03          -0.4        -1.4                +1.5
4 - Vertical bending    -0.4     +2.9      -0.05          -0.1        -0.9                +1.5
5 - Torsion             +12.9    +3        +0.08          -16.6       -1.8                -12.9
6 - Vertical bending    +0.3     +2.9      +0.01          -0.2        -1.0                -7.4
Result summary
The level of complexity of the model, in terms of geometry simplification, element-type combinations and secondary structural elements, has a significant influence on predictions. Also, zones with localized simplifications, such as intermediate supports presenting complex geometries, are more sensitive to simplifications and result in higher prediction errors.
The estimation of the prediction errors in natural frequencies shows that modes involving lateral bending and torsion are more sensitive than the others to the exclusion of secondary structural elements. Furthermore, simplified models may not adequately represent local behavior, since prediction errors in longitudinal strain may be important (>100%). Global behavior, such as natural frequencies, displacements and rotation around the transversal axis, should be favored as quantities to be compared with measurements during structural identification. Although the solid-based model accurately represents the geometry of the bridge, it remains an approximation of the real structure. Additional errors should be expected between the predictions given by this model and the real behavior. The results of this study provide lower bounds of model-simplification errors that can be used for structural identification.
The most important aspect of this study is that it shows that model simplifications and omissions systematically affect the error structure. When an aspect of a structure is neglected in the model, it may systematically affect the prediction errors at several prediction locations and for several prediction types, to varying degrees. Modeling each of these errors by zero-mean independent Gaussian noise would not represent the error structure observed in this case study.
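This point can be illustrated with a small numerical sketch (all numbers below are invented for illustration): a systematic bias shared across prediction locations produces errors with a non-zero mean and strong cross-correlation, which zero-mean independent Gaussian noise cannot reproduce.

```python
import numpy as np

# Toy illustration (invented numbers): an omitted element biases predictions
# at many locations at once, so errors have a non-zero mean and are
# correlated across locations.
rng = np.random.default_rng(0)
n_locations, n_cases = 10, 2000
bias = rng.uniform(0.0, 0.08, n_locations)        # systematic, per location
shared = rng.normal(0.0, 0.02, (n_cases, 1))      # common-cause component
noise = rng.normal(0.0, 0.01, (n_cases, n_locations))
errors = bias + shared + noise

corr = np.corrcoef(errors, rowvar=False)
off_diag = corr[~np.eye(n_locations, dtype=bool)]
# Both statistics are clearly non-zero, unlike iid zero-mean Gaussian noise
mean_error = errors.mean()
mean_corr = off_diag.mean()
```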
In order to be interpreted, measured accelerations have to be synthesized into natural frequencies and mode shapes (experimental modal analysis). The technique used here is the Frequency Domain Decomposition (FDD) method [38]. The six first singular values of the averaged power spectral density matrices (FDD spectrum) are shown in Figure 5.49. The number of singular values showing a peak in a given frequency range corresponds to the number of modes in this range. For instance, around the frequency 1.04 Hz, two singular values show a peak, indicating the presence of two modes. A total of six modes were identified with confidence. The natural frequencies of these modes and plots of the mode shapes are given in Figure 5.50.
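The first FDD step can be sketched as follows, assuming a (samples x channels) array of measured accelerations; the function name and the Welch-style averaging settings are illustrative, not the exact implementation used here.

```python
import numpy as np

def fdd_singular_values(acc, fs, nperseg=1024):
    """First FDD step (sketch): averaged cross-power-spectral-density
    matrices and their singular values at each frequency line."""
    n_samples, n_ch = acc.shape
    win = np.hanning(nperseg)
    step = nperseg // 2                      # 50% overlap, Welch-style averaging
    n_seg = (n_samples - nperseg) // step + 1
    G = np.zeros((nperseg // 2 + 1, n_ch, n_ch), dtype=complex)
    for s in range(n_seg):
        seg = acc[s * step: s * step + nperseg] * win[:, None]
        X = np.fft.rfft(seg, axis=0)         # (n_freq, n_ch)
        G += X[:, :, None] * np.conj(X[:, None, :])   # G_ij(f) = X_i X_j*
    G /= n_seg
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    sv = np.linalg.svd(G, compute_uv=False)  # peaks in sv[:, 0] indicate modes
    return freqs, sv

# Example: two noisy channels dominated by a 1.04 Hz component
fs = 50.0
t = np.arange(0.0, 120.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1.04 * t)
acc = np.column_stack([x + 0.1 * rng.standard_normal(t.size),
                       0.5 * x + 0.1 * rng.standard_normal(t.size)])
freqs, sv = fdd_singular_values(acc, fs)
peak = freqs[np.argmax(sv[:, 0])]            # close to 1.04 Hz
```

The number of singular values that peak at a given frequency is then inspected, as described above, to count coincident modes.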
[Figure 5.49 plots the FDD spectrum: singular values versus frequency (0.2 to 50 Hz), with peaks labeled with mode numbers and types (e.g. 1-Vertical bending and 2-Lateral bending at 1.04 Hz, 4-Vertical bending at 3.71 Hz, 5-Torsion, 6-Vertical bending at 5.66 Hz).]
Figure 5.49: Averaged singular values of the power spectral density matrices for Grand-Mere Bridge.
(a) Mode shape number one - Vertical bending (1.04 Hz)
(b) Mode shape number two - Lateral bending (1.04 Hz)
(c) Mode shape number three - Vertical bending (2.12 Hz)
(d) Mode shape number four - Vertical bending (3.71 Hz)
(e) Mode shape number five - Torsion (4.68 Hz)
(f) Mode shape number six - Vertical bending (5.66 Hz)
Figure 5.50: Measured mode shapes and frequencies for Grand-Mere Bridge.
[Figure: elevation of the bridge (east bound) showing the parameter zones E0, E1, E2, E3 and the lengths L1, L2.]
Table 5.15: Values for parameters (3 for each parameter) used to create the initial model set for the Grand-Mere Bridge.

Free parameter                                              Units
Concrete Young's modulus                                    GPa
Bearing device restriction                                  kN/mm
Effective Young's modulus of cracked concrete at west pile  GPa
Effective Young's modulus of cracked concrete at mid-span   GPa
Effective Young's modulus of cracked concrete at east pile  GPa
Length of crack zone over supports                          m
Length of crack zone at mid-span                            m
Uncertainties
Secondary parameters are those that have a marginal effect on the structural response and are considered as uncertainties. Natural frequencies are influenced by the mass and the rigidity of the structure. Therefore, the secondary parameters are the concrete, pavement and sand densities, along with the pavement Young's modulus.
Table 5.16: Secondary-parameter uncertainty sources for Grand-Mere Bridge.

Extended uniform distribution:
Uncertainty source         Unit    Min    Max
Concrete density           T/m3    2.2    2.6
Pavement Young's modulus   MPa     1000   10000
Pavement density           T/m3    2.0    2.4
Sand density               T/m3    1.1    2.0

Gaussian distribution:
Uncertainty source         Unit    Mean     Standard deviation
Steel Young's modulus      MPa     202000   6000
Details of other uncertainty sources are summarized in Table 5.17. These sources are described by the extended uniform distribution (see Appendix A). This distribution includes uncertainty regarding the bounds defining the uniform distributions: an additional parameter represents the uncertainty of the bound positions as a fraction of the initial interval width. The element-discretization error estimation is based on a mesh refinement analysis conducted to determine the maximal plausible prediction error for each natural frequency. The evaluation of the uncertainties related to model simplifications is based on the study presented in 5.4.3. An uncertainty of 4% is added to the errors estimated in the previous study to represent the fact that the solid-based model used as reference is also an approximation of the real structure. Additional provisions are taken for mode #5, since predictions of the torsional behavior were found in 5.4.3 to be less accurate than other predictions.
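Sampling from such an extended uniform distribution can be sketched as follows; the exact parameterization is given in Appendix A, so this is one plausible reading in which both bounds are themselves uniformly distributed within a fraction `delta` of the initial interval width.

```python
import numpy as np

def sample_eud(lo, hi, delta, size, rng):
    """Sample an extended uniform distribution (one plausible reading):
    the bounds of a uniform distribution are themselves uncertain by a
    fraction `delta` of the initial interval width."""
    width = hi - lo
    lo_s = rng.uniform(lo - delta * width, lo + delta * width, size)
    hi_s = rng.uniform(hi - delta * width, hi + delta * width, size)
    return rng.uniform(np.minimum(lo_s, hi_s), np.maximum(lo_s, hi_s))

# e.g. model-simplification error for one mode: bounds -8% to -1%, delta = 0.30
rng = np.random.default_rng(0)
x = sample_eud(-0.08, -0.01, 0.30, 10_000, rng)
```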
Table 5.17: Other uncertainty sources for Grand-Mere Bridge.

Extended uniform distribution:
Uncertainty source            Min     Max    Fraction
AVM epistemic variation       -2%     +2%    0.30
Model simplifications
  Mode 1                      -8%     -1%    0.30
  Mode 2                      -8%     -1%    0.30
  Mode 3                      -8%     -1%    0.30
  Mode 4                      -7%     -0%    0.30
  Mode 5                      -15%    -1%    0.30
  Mode 6                      -9%     -2%    0.30
Mesh refinement               0%      +1%    0.30
Additional uncertainties      -1%     +1%    0.30

Gaussian distribution:
Uncertainty source            Mean    Standard deviation
Measurement variability       0%      2%
[Figure 5.52: bar chart of the relative importance of each uncertainty source for Grand-Mere Bridge: measured frequencies, model simplifications, AVM epistemic variation, additional uncertainties, mesh refinement, concrete density, pavement Young's modulus, steel Young's modulus, pavement density and sand density.]
The relative importance of uncertainties is shown in Figure 5.52. The most important sources of uncertainty are frequency measurements, model simplifications and concrete density. All uncertainty sources are combined together to obtain threshold bounds for each mode. Thresholds are computed for a target reliability of 0.95.
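The combination step can be sketched with Monte Carlo sampling; the distributions below are stand-ins with invented values, and the quantiles of the total error give threshold bounds for a target reliability of 0.95.

```python
import numpy as np

# Sketch: combine error samples from several sources (invented values) and
# take quantiles of the total as threshold bounds at 95% reliability.
rng = np.random.default_rng(0)
n = 100_000
meas_var = rng.normal(0.0, 0.02, n)        # measurement variability (Gaussian)
avm      = rng.uniform(-0.02, 0.02, n)     # AVM epistemic variation
simplif  = rng.uniform(-0.08, -0.01, n)    # model simplifications (one mode)
total = meas_var + avm + simplif           # combined relative error

lo, hi = np.quantile(total, [0.025, 0.975])  # threshold bounds
```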
Mode-shape accordance
Predicted and measured mode shapes are compared using the MAC test to verify that the comparison of natural frequencies is performed over the right modes. Only modes 1, 3, 4 and 6 passed the test (MAC value > 0.8). Modes 2 and 5 give a MAC value below 0.6 and are therefore not used to falsify models. As expected, the template model could not predict torsional behavior (mode number 5). Poor correspondence between mode shapes may also be attributed to predictions that give unexpectedly rigid behavior (see Figure 5.50b). In any case, not including these two modes in the model-falsification process remains conservative.
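The MAC value used in this test follows the standard definition; the vector values below are purely illustrative.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors (0 to 1)."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

phi = np.array([0.1, 0.5, 1.0, 0.5, 0.1])       # illustrative mode shape
m1 = mac(phi, -2.0 * phi)                       # = 1: scale- and sign-invariant
m2 = mac(phi, np.array([1.0, 0, 0, 0, -1.0]))   # = 0: orthogonal shapes
```

Because MAC is invariant to scaling and sign, it isolates the shape correspondence from amplitude normalization.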
Model falsification
Candidate and falsified models for modes 1, 3, 4 and 6 are presented in Figure 5.53. This figure shows the comparison between the measured natural frequencies and the predictions from all model instances. Model instances are represented on the horizontal axis, and the predicted and measured values are shown on the vertical axis. The position of each dot corresponds to a prediction given by a model instance. The continuous line is the measured natural frequency and the two dashed lines are the threshold bounds that separate candidate model instances from falsified ones. The circles are instances that remain in the candidate model set.
[Figure 5.53 shows four panels (modes 1, 3, 4 and 6), each plotting the predictions of all model instances against the measured natural frequency; rejected models, candidate models, threshold bounds and the measured frequency are indicated.]
Figure 5.53: Comparison between the measured and predicted frequencies for modes 1, 3, 4 and 6 for the Grand-Mere Bridge.
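The falsification rule shown in this figure can be sketched numerically: a model instance stays in the candidate set only if its prediction lies inside the threshold bounds for every mode used. All numbers below are illustrative, not the actual thresholds.

```python
import numpy as np

# Toy falsification over three modes (illustrative numbers)
rng = np.random.default_rng(1)
n_models = 2500
measured = np.array([1.04, 2.12, 3.71])            # Hz: modes 1, 3, 4
predictions = measured * rng.uniform(0.8, 1.3, (n_models, 3))
lo, hi = 0.90 * measured, 1.12 * measured          # threshold bounds

inside = (predictions >= lo) & (predictions <= hi) # per-mode test
candidates = np.flatnonzero(inside.all(axis=1))    # must survive every mode
falsified = n_models - candidates.size
```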
[Figure 5.54 shows pairwise scatter plots of the candidate values for parameters E0, E1, E2, E3, L1 and L2.]
Figure 5.54: Pairwise representation of the candidate-model-set parameter values for Grand-Mere Bridge.
Figure 5.55: Model and accelerometer layout for Tamar Bridge.
In order to be interpreted, recorded accelerations need to be synthesized into natural frequencies and mode shapes. The technique used here is the Frequency Domain Decomposition (FDD) method [38]. The conversion of the bridge time-domain signals gave the averaged singular values of the power spectral density shown in Figure 5.56.
Eighteen modes were identified in total. The natural frequencies of these modes are presented in Table 5.18 and plots corresponding to each mode shape are given in Figure 5.57. Note that the mode shapes of the torsional modes (3, 6, 7, 10, 12, 14, 15, 17) only include two measurement points in the transverse direction. This explains why the torsional mode shapes are only partially represented.
Measured mode shapes are compared with predicted ones using MAC values. A summary of
this comparison is presented in Figure 5.58. In this figure, the relative frequency of the MAC values for each measured mode is presented. Relative frequency is used because measured mode shapes are compared with 3 125 model instances (see 5.5.2). Here, 13 modes have an acceptable correspondence between predicted and measured values (MAC >= 0.8). Modes having a MAC value below 0.8 are not used for further comparison.
[Figure 5.57 shows the plots of the eighteen measured mode shapes.]
Figure 5.57: Measured mode shapes and frequencies for Tamar Bridge.
[Figure 5.58 plots, for each of the 18 measured modes, the relative frequency of the MAC values obtained over the model instances.]
Figure 5.58: MAC-value relative frequency quantifying the correspondence between predicted and measured mode shapes for Tamar Bridge.
Table 5.18: Measured modes for Tamar Bridge.

Mode  Label        Frequency (Hz)  MAC >= 0.8
1     Vertical 1   0.39            yes
2     Vertical 2   0.59            yes
3     Torsion 1    0.72            yes
4     Vertical 3   0.97            yes
5     Vertical 4   1.06            yes
6     Torsion 2    1.22            no
7     Torsion 3    1.31            no
8     Vertical 5   1.52            yes
9     Vertical 6   1.70            no
10    Torsion 4    1.86            yes
11    Vertical 7   2.17            yes
12    Torsion 5    3.06            yes
13    Vertical 8   3.47            yes
14    Torsion 6    3.63            yes
15    Torsion 7    4.16            no
16    Vertical 9   4.52            yes
17    Torsion 8    5.09            no
18    Vertical 10  5.27            yes
[Figure: Tamar Bridge model annotated with the primary parameters: main-cable initial strains, sidespan-cable initial strains, the Saltash tower, and the deck expansion joints on the Saltash and Plymouth sides.]
Relative importance
The relative importance of each primary parameter is shown in Figure 5.60 for each mode having passed the MAC correspondence test. This figure shows that the contribution of each parameter varies according to the mode studied. For the first modes, the main-cable initial strain dominates the structure's behavior. For higher modes, the relative importance tends to be more evenly distributed.
Figure 5.60: Primary-parameter relative importance for each mode of the Tamar Bridge.
Uncertainties
The first set of uncertainties described here is associated with secondary model parameters that are not intended to be identified. Table 5.19 presents each uncertainty source and the properties of the Gaussian distribution used to describe them.
The second set of uncertainties is described in Table 5.20. These other sources describe uncertainties associated with the measurements and the template model itself. Since these uncertainty evaluations are based on experience and heuristics, the extended uniform distribution is used to represent the inaccurate knowledge of the uncertainty bounds (see Appendix A).
Based on values reported in 1.3.2, an extended uniform distribution with bounds at 2% of the frequency values is used to describe measurement uncertainty due to the epistemic variability associated with ambient vibration monitoring. Measured-frequency variability is represented by a Gaussian distribution having a mean of zero and a standard deviation of 2% of the measured frequency. This uncertainty is an upper bound based on other experiments
Table 5.19: Secondary-parameter uncertainty sources for Tamar Bridge (Gaussian distributions).

Uncertainty source                 Unit    Mean     Standard deviation
Steel density                      kg/m3   7850     2%
Concrete density                   kg/m3   2400     5%
Deck density                       kg/m3   7850     5%
Steel Young's modulus              GPa     202      6
Concrete Young's modulus           GPa     30       4.5
Cable Young's modulus              GPa     155      6
Steel Poisson's ratio              -       0.29     3%
Concrete Poisson's ratio           -       0.23     3%
Concrete tower thickness           %       0        2
Orthotropic steel deck thickness   %       0        1
Concrete deck thickness            %       0        2
Main-cable area                    m2      0.088    1%
Hanger-cable area                  m2      0.0024   1%
reported in 5.3.4.
Uncertainty in model simplifications and the finite element method (FEM) includes bias caused by omissions in the model, along with simplifying hypotheses and numerical errors made during the model resolution. An upper bound of the model uncertainty is evaluated based on experience gained during the identification of previous civil structures. Mesh-refinement uncertainty represents an upper bound for the effect of the approximation made by using a finite number of elements to model the structure. Additional uncertainties are conservative provisions for other negligible uncertainty sources that may add to the model and measurement uncertainties.
Table 5.20: Other uncertainty sources for Tamar Bridge.

Extended uniform distribution:
Uncertainty source              Min    Max    Fraction
AVM epistemic variation         -2%    +2%    0.3
Model simplifications & FEM     -4%    +1%    0.5
Mesh refinement                 0%     +2%    0.3
Additional uncertainties        -1%    +1%    0.3

Gaussian distribution:
Uncertainty source              Mean   Standard deviation
Measurement variability         0%     2%
All uncertainty sources are combined together to describe the total uncertainty associated with each mode. The relative importance of the uncertainty sources, averaged over all modes, is presented in Figure 5.61. Measurement variability, ambient-vibration-monitoring (AVM) epistemic variation and model simplifications are the dominant uncertainty sources. When taken individually, secondary parameters have a marginal influence on the total uncertainty.
[Figure 5.61: Relative importance of uncertainty sources for Tamar Bridge, averaged over all modes. Measurement variability, AVM epistemic variation and model simplifications dominate; the secondary parameters (Young's moduli, densities, thicknesses, cable areas and Poisson's ratios) each contribute marginally.]
[Figure 5.62 shows, for each mode, the scatter of predicted/measured values over all model instances, together with the measured frequency and the uncertainty distribution.]
Figure 5.62: Comparison of model-prediction scatters with measured values for global and torsional modes (modes number 1, 2, 3, 10, 12 and 14).
137
1.15
1.25
1.1
1.2
Predicted/measured values
Predicted/measured values
1.05
1
0.95
0.9
0.85
1.1
1.05
1
0.95
0.8
0.75
0
1.15
500
1000
1500
2000
Models
2500
3000
3500
0.9
4000
500
1500
2000
Models
2500
3000
3500
4000
3500
4000
3500
4000
1.75
1.7
2.4
Predicted/measured values
Predicted/measured values
1000
1.65
1.6
1.55
1.5
1.45
1.4
2.3
2.2
2.1
2
1.35
1.9
1.3
1.25
0
500
1000
1500
2000
Models
2500
3000
3500
1.8
0
4000
500
5.2
3.8
3.6
3.4
3.2
2.8
0
1500
2000
Models
2500
3000
Predicted/measured values
Predicted/measured values
1000
4.8
4.6
4.4
4.2
4
500
1000
1500
2000
Models
2500
3000
3500
3.8
0
4000
500
1000
1500
2000
Models
2500
3000
Predicted/measured values
5.5
4.5
0
500
1000
1500
2000
Models
2500
3000
3500
4000
Figure 5.63: Comparison of model prediction scatters with measured values for vertical bending deck modes (modes number 4, 5, 8, 11, 13, 16 and 18).
138
Identification results
Using mode numbers 1, 2 & 3 falsifies 2 601 model instances out of 3 125. This leads to a limited number of candidate models (524), significantly reducing the number of possible combinations of physical parameters that are able to explain the observed frequencies.
Parameter abbreviations used in Figure 5.64:
Main_s: main-cable initial strain
Sidespan_s: sidespan-cable initial strains
Support_ply: Plymouth-side support longitudinal stiffness
Support_sal: Saltash-side support longitudinal stiffness
Deck_exp: deck-expansion-joint longitudinal stiffness
Figure 5.64 presents the candidate-model-set parameter values using pairwise parameter graphs. The range of each plot corresponds to the range of each parameter. In order to better represent the results, the axes do not have a linear scale. These plots show that the parameter range is reduced for the main-cable initial strain, where all higher values are discarded. Additionally, the number of possible permutations of the other parameters is also reduced.
Figure 5.64: Pairwise representation of the candidate-model-set parameter values for Tamar Bridge.
The candidate models found serve as a baseline for following the evolution of the bridge condition. These models can be used to compare the actual behavior with results from future monitoring campaigns in order to detect changes in the candidate model set. Such changes could indicate that the state of the structure has changed and thus allow for preventive interventions. A study is presented in the next section where the accelerometer configuration used for this study is optimized for future monitoring activities.
The cumulative distribution function describing the expected size of the candidate model set is presented in Figure 5.65. Expected identifiability indicates that there was a probability of approximately 15% of obtaining a candidate model set containing 524 models or less. This observation supports the validity of the expected-identifiability metric because the result is well within the expected variability. Note that for high cumulative probabilities (> 0.9), the expected identifiability varies significantly for small changes in cumulative probability. This variability is due to the wide and non-uniform dispersion in the predicted values for the first two modes (see Figure 5.62 a&b). Therefore, it is not possible to provide robust predictions of the expected size of the candidate model set for such high probabilities.
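Expected identifiability can be sketched as follows: simulate plausible measurements from the combined uncertainty, falsify the model set against each simulated measurement, and build the empirical CDF of the resulting candidate-set sizes. All numbers below are illustrative stand-ins.

```python
import numpy as np

# Sketch of expected identifiability (illustrative stand-in numbers)
rng = np.random.default_rng(0)
n_models, n_sim = 3125, 500
predictions = rng.uniform(0.8, 1.3, n_models)   # one-mode predictions (Hz)
true_value = 1.0

sizes = np.empty(n_sim, dtype=int)
for k in range(n_sim):
    simulated = true_value * (1.0 + rng.normal(0.0, 0.02))  # simulated measurement
    lo, hi = 0.90 * simulated, 1.12 * simulated             # threshold bounds
    sizes[k] = np.count_nonzero((predictions >= lo) & (predictions <= hi))

def cdf_at(s):
    """Empirical probability of ending with at most s candidate models."""
    return np.mean(sizes <= s)
```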
[Figure 5.65: Cumulative distribution function of the maximal number of candidate models, expressed as a percentage of the initial model set.]
Even if the predicted decrease in performance when using several modes to falsify model instances is negligible, it is desirable to obtain data for more than one mode. Selecting several modes is intended to increase the robustness of the identification. Here, the measurement-system design methodology presented in 4.4 is used to find which accelerometers in the layout presented in Figure 5.55b can be removed while keeping the same interpretation performance. The modes of interest are modes 1, 2 & 3: the first two vertical deck-bending modes and the first torsional mode.
Results are reported in Figure 5.67, showing the mode-match criterion value reached for modes 1, 2 & 3 depending on the number of locations monitored. The mode-match criterion quantifies the capacity to find correspondence between predicted and measured mode shapes. Using only sixteen modal points leads to a negligible loss in the mode-match criterion compared with
Expected number of candidate models (CM) for each number of modes used:

Number of modes  Mode added  Expected CM
1                FQ-1        986
2                FQ-2        991
3                FQ-3        1032
4                FQ-4        1034
5                FQ-5        1045
6                FQ-8        1068
7                FQ-10       1082
8                FQ-11       1103
9                FQ-12       1121
10               FQ-13       1131
11               FQ-14       1153
12               FQ-16       1172
13               FQ-18       1209
using all 37 modal points. Using fewer modal points significantly diminishes the capability to link predicted and measured mode shapes. This drop is more important for modes #2 and #3. The target mode-match criterion is set to 0.99. Therefore, a configuration with at least 16 sensors is necessary to satisfy the mode-match criterion for all three modes.
Figure 5.67: The effect of the number of acceleration sensors on the mode-match criterion used to associate predicted and measured mode shapes. A minimum of 16 sensors is necessary to satisfy a mode match of 0.99 for modes #1, #2 and #3.
Figure 5.68 presents the optimized accelerometer layout using the sixteen acceleration sensors. The approach detected that it is necessary to keep several measurement locations transversally in order to separate bending and torsional mode shapes. Given initial parameters, reference mode shapes and the features to be identified, the approach can find optimized accelerometer layouts. In this case, the measurement-system optimization methodology reduced the number of accelerometers required by more than 55% while only allowing for a negligible loss in interpretation capacity.
Limitations
During the optimization of the accelerometer layout, a reference mode shape was available from the measurements made on the bridge. In situations where such reference mode shapes are not available, the capacity to perform the placement of accelerometers using simulated
Figure 5.68: Optimized accelerometer layout for Tamar Bridge obtained using existing mode-shape data. This configuration with 16 sensors corresponds to the layout identified in Figure 5.67.
1. Structural identification found an upper bound for the main-cable initial strain. It also reduced by 55% the number of possible parameter permutations compared with the initial model set.
2. The model of the structure adequately represents the bridge's global and torsional behavior. Higher flexural modes in the deck appear to be systematically biased; for these modes, predicted frequencies are underestimated by approximately 15%.
3. Given initial parameters, reference mode shapes and the features to be identified, optimized accelerometer layouts can be determined. For Tamar Bridge, the optimized measurement system reduced the number of accelerometers required by more than 55% while only causing a negligible loss in interpretation capacity.
[Figure 5.69 sketches the framework: flow measurements at several locations are interpreted against leak scenarios (leak at location #1, #2, #3, ...) to infer the leak location.]
Figure 5.69: General framework for the detection of leaks in pressurized pipe networks.
[Figure: layout of the water distribution network, showing pipes, network nodes, tanks, pumps and the reservoir.]
too large to differentiate between what is attributed to consumption and what is attributed to leaks. Therefore, every time the total consumption in the network goes below the minimal hourly flow for one day, the flow velocities in the pipes are recorded.
Monitoring devices
Data can be acquired on water distribution networks using ultrasonic flow meters. These devices are chosen for their non-invasive characteristics and their high accuracy. Ultrasonic flow meters measure the difference in travel time between pulses. The accuracy of commercially available devices is 2% of the measured flow. Considering that a flow sensor can be installed on each pipe, there are 295 possible sensor locations. Note that in this proof-of-concept study, no real data are available. Therefore, simulated measurements are generated.
Uncertainties
There are several sources of uncertainty associated with the flow-velocity model, the measurements and the water network itself. All uncertainties are represented by random variables described as follows.
Parameter uncertainties are those having a direct influence on the network flow-velocity model. In water distribution networks, common secondary-parameter uncertainties are: node elevations, pipe diameters, minor losses, roughness coefficients and water demand. The uncertainty in the elevation of nodes is described using a Gaussian distribution with a mean of 0 and a standard deviation of 50 mm. Such a level of accuracy on the node position can be obtained using local non-invasive measurements, such as ground-penetrating radar, to accurately locate pipe depth [32, 102]. Uncertainties in pipe diameters are described by a
[Figure 5.71 plots the average hourly consumption (m3/h) over one day, with the minimum hourly consumption indicated.]
Figure 5.71: Typical hourly averaged water consumption measured over one day.
These uncertainty sources correspond to uncertain parameters of the model. Their influence on predicted flow velocities is found by propagating the uncertainties on parameter values through the model to obtain the uncertainties on the predicted velocities for each pipe.
The uncertainty on sensor resolution is taken as a uniform distribution having lower and upper bounds of 2% of the measured value. The uncertainty associated with model simplifications is represented by an extended uniform distribution (EUD, see Appendix A) having lower and upper bounds of 20% of the predicted value and a factor of 0.3. The EUD represents the uncertainty of the bound positions as a fraction (between 0 and 1) of the initial interval width. Additional uncertainties attributed to other minor sources are also modeled by an extended uniform distribution; the lower and upper bounds have a value of 1% of the predicted value and a parameter equal to 0.3. For the purpose of generating simulated measurements, uncertainties are assumed to be independent.
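The propagation step above can be sketched as follows; `flow_model` is a hypothetical stand-in for the actual hydraulic solver of the network, and the parameter sensitivities are invented for illustration.

```python
import numpy as np

# Monte Carlo propagation of secondary-parameter uncertainty to predicted
# flow velocities. `flow_model` is a hypothetical stand-in for the real
# hydraulic solver of the network.
rng = np.random.default_rng(0)
n_samples = 1000
base = np.array([0.30, 0.55, 0.20, 0.41, 0.08])   # nominal velocities (m/s)

def flow_model(elev_shift, diam_factor):
    # Invented sensitivities: velocity scales with diameter^-2, plus a small
    # elevation effect; a real model would solve the network hydraulics.
    return base * diam_factor ** -2 + 0.01 * elev_shift

elev = rng.normal(0.0, 0.05, n_samples)   # node elevation error (m)
diam = rng.normal(1.0, 0.01, n_samples)   # relative pipe-diameter error
velocities = np.array([flow_model(e, d) for e, d in zip(elev, diam)])
spread = velocities.std(axis=0)           # per-pipe prediction uncertainty
```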
For this network, it is not feasible to perform an exhaustive search to find the optimal measurement system, since more than 10^88 combinations of sensors are possible. The inverse greedy algorithm is used to find optimized configurations of sensors. Figure 5.73 presents the expected number of candidate leak scenarios for the optimized sensor configurations found. In this figure, the horizontal axis corresponds to the number of sensors used and the expected number of candidate leak scenarios is plotted on the vertical axis. The dashed line shows the boundary of the feasible domain.
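The inverse (backward) greedy search can be sketched as follows: start from all candidate locations and repeatedly drop the sensor whose removal degrades the objective the least. The objective below is a toy stand-in for the expected-identifiability score, not the actual one.

```python
import numpy as np

def inverse_greedy(n_locations, objective, n_keep):
    """Backward greedy search: drop one sensor at a time, always the one
    whose removal leaves the objective highest."""
    kept = list(range(n_locations))
    while len(kept) > n_keep:
        scores = [objective([s for s in kept if s != r]) for r in kept]
        kept.pop(int(np.argmax(scores)))   # remove the least useful sensor
    return kept

# Toy objective: diminishing-returns score over per-location information values
rng = np.random.default_rng(0)
info = rng.uniform(0.1, 1.0, 20)
kept = inverse_greedy(20, lambda S: np.sqrt(sum(info[s] for s in S)), 14)
print(len(kept))   # 14
```

With this separable objective the search simply keeps the most informative locations; the real objective couples sensors through the falsification performance, which is why the greedy search is needed.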
In Figure 5.74, the expected radius including all leak scenarios is presented for the sensor
Figure 5.73: Relation between the expected number of candidate leak scenarios and the number of flow measurements used.
configurations tested. In this graph, the vertical axis corresponds to the expected radius within which all potential leaks are included. In both cases, the expected performance increases rapidly with the number of sensors used until it reaches an asymptote. For engineering purposes, a good tradeoff between the expected performance and the number of sensors used is reached with 14 sensors. The measurement configuration is presented in Figure 5.75, where the 14 sensors chosen are represented by squares.
Figure 5.74: Relation between the radius including all leak scenarios and the number of flow measurements used.
Using the optimized sensor configuration found, the expected number of leak scenarios is studied for several levels of leak flow. This expected identifiability of the monitoring system is studied for leak levels of 100, 75, 50 and 25 L/min; the cumulative distribution function for each leak level is shown in Figure 5.76.
For high probability levels (0.95), the expected number of leak scenarios identified remains low for leaks under 75 L/min. For lower probability levels (0.50), good results can be expected up to 50 L/min. The sensor-placement optimization procedure is repeated for a leak level of 25 L/min to test whether the performance can be increased by increasing the number of measurements. Results are presented in Figure 5.77. In this case, when monitoring 100 pipes,
[Figure 5.75: The optimized measurement configuration, with the 14 flow sensors marked on the network.]
Figure 5.76: Expected number of candidate leak scenarios identified for several leak intensities.
the number of expected leak scenarios can be reduced by half compared with the situation where only 14 sensors are used. Therefore, efficiently locating leaks in the water distribution network for a leak of 25 L/min is feasible if a sufficient number of pipes can be instrumented.
Figure 5.77: Relation between the expected number of candidate leak scenarios and the number of flow measurement points used, for a leak level of 25 L/min.
If the identification of lower leak-flow levels is required, an option could be to reduce the uncertainties associated with the model and the measurements. The most important uncertainty sources are the water consumption at network nodes and the model simplifications. If these uncertainties can be reduced, better performance can be expected for lower leak levels.
6 Conclusions
tion in prediction ranges, quantify probabilistically the utility of measuring.
7 Future work
In addition to the points reported in Section 6.5 that need to be addressed, this chapter identifies promising paths for tackling current issues related to the management of infrastructure and, more generally, to the diagnosis of complex systems.
Figure 7.1: Schematic representation of the relationship between the number of measurements
used for data interpretation and the probability of committing a Type-I diagnosis error, in case
of misevaluation of uncertainties.
Figure 7.2: Schematic representation of the relationship between the number of non-redundant measurements used for data interpretation and the probability of a Type-II diagnosis error, in case of misevaluation of uncertainties.
[Figure: probability of a diagnosis error (Type-I and Type-II curves) versus the number of measurements used for data interpretation; an intermediate number of measurements minimizes the probability of either diagnosis error due to uncertainty misevaluations.]
Figure 7.3: Schematic representation of the relationship between the number of measurements
used during data interpretation and the probability of either a Type-I or a Type-II error, in case
of misevaluation of uncertainties.
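The Type-I trend sketched in Figure 7.1 can be illustrated with a simple calculation: if residuals are treated as independent and each measurement has a fixed probability of falling inside its (too-narrow) threshold bounds, the chance that at least one measurement wrongly falsifies the correct model compounds with the number of measurements. Both independence and the fixed per-measurement acceptance probability are simplifying assumptions made only for this sketch.

```python
def type1_probability(n_measurements, per_meas_accept=0.99):
    """Probability that the correct model is wrongly falsified when each of n
    independent measurements has a 1% chance of exceeding its threshold bounds."""
    return 1.0 - per_meas_accept ** n_measurements

for n in (1, 10, 50, 200):
    print(n, round(type1_probability(n), 3))
# 1 0.01
# 10 0.096
# 50 0.395
# 200 0.866
```

Even a small per-measurement misevaluation of uncertainty therefore dominates the diagnosis once many measurements are combined, which is the behaviour the schematic describes.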
7.1.3 Uncertainties due to the interactions between primary and secondary parameters
The methodology presented in Chapter 2 separates primary parameters, which are to be identified, from secondary parameters, which contribute to prediction uncertainties. Techniques presented in Section 1.5.1 could be used to quantify the effect of the interactions between primary and secondary parameters.
Figure 7.4: Two-dimensional stochastic fields representing the spatial variability of the Young's modulus in concrete bridge decks.
Identifying the Young's modulus patterns precisely using system identification is not foreseeable, for two reasons. First, the influence of such local variability is likely to be insufficient to be distinguished from other sources of uncertainties. Second, it would involve a large number of parameters to be identified, undermining the ability to correctly explore the space of possible solutions.
An alternative is to include the spatial variability of materials as a secondary-parameter uncertainty (see Section 2.4.1). This additional source of uncertainty would have the effect of widening threshold bounds. The mean value of the material property would be the parameter to be identified.
Figure 7.5: Two-dimensional likelihood function (Equation 7.1), used to generate parameter samples on the limit separating candidate and falsified models. The likelihood is maximal when the observed residuals ε_o,i are equal to the threshold bounds [T_low,i, T_high,i]. This example was created for shape parameters β = 10 and γ = 0.8.
Figure 7.5 is generated using the likelihood function presented in Equation 7.1, where T_low,i and T_high,i are computed using Equations 2.12 and 2.13. The shape of the likelihood function (Equation 7.1) is controlled by the parameter β, which affects the sharpness of the peaks, and γ, which affects their symmetry. This function is based on the subtraction of two β-order generalized Gaussian distributions (see Section 1.2.2) scaled by the factor γ. Note that these equations can be extended to any number of comparison points.
L_T(\epsilon_{c,1}, \epsilon_{c,2}) = \frac{\beta \, 2^{1-1/\beta}}{2 \, T_1 T_2 \, \Gamma(1/\beta)} \; e^{-\left( \frac{\left| \epsilon_{c,1} - T_1 \right|}{\gamma_1} \right)^{\beta}} \, e^{-\left( \frac{\left| \epsilon_{c,2} + T_2 \right|}{\gamma_2} \right)^{\beta}} \qquad (7.1)
Figure 7.6 presents a comparison between the random-walk grid-based sampling proposed in Section 2.5.3 and the methodology presented above. The example is based on the composite beam presented in Section 2.6.1, where the shaded area is the candidate model set.
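A one-dimensional sketch of the falsification-limit sampling idea follows: a likelihood that peaks near the threshold bound can be built as the difference of a wide and a narrow generalized Gaussian kernel, and sampled with a random-walk Metropolis algorithm so that samples concentrate near the falsification limit. This stand-in uses the shape values β = 10 and γ = 0.8 quoted for Figure 7.5, but it is not the thesis's exact Equation 7.1, and the threshold T = 1 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

def boundary_likelihood(x, T=1.0, beta=10.0, gamma=0.8):
    """Unnormalized likelihood peaking near |x| = T: the wide generalized
    Gaussian kernel minus a narrower one leaves mass near the bound."""
    return np.exp(-(abs(x) / T) ** beta) - np.exp(-(abs(x) / (gamma * T)) ** beta)

def metropolis(f, x0, n_steps, step_sd, rng):
    """Random-walk Metropolis sampling of an unnormalized density f."""
    x, fx = x0, f(x0)
    samples = []
    for _ in range(n_steps):
        y = x + rng.normal(0.0, step_sd)
        fy = f(y)
        if fy > 0.0 and rng.random() < fy / fx:   # accept with prob min(1, fy/fx)
            x, fx = y, fy
        samples.append(x)
    return np.array(samples)

samples = metropolis(boundary_likelihood, x0=0.9, n_steps=5000, step_sd=0.2, rng=rng)
print(np.mean(np.abs(samples)))   # samples cluster near the bound T = 1
```

Because the likelihood vanishes well inside and well outside the bound, the chain spends its time on the falsification limit, which is what subtracts one dimension from the exploration problem.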
[Figure: two scatter panels of model-instance samples; left panel titled "Grid-based random-walk sampling", right panel showing falsification-limit sampling (per caption). Horizontal axes: steel Young's modulus (MPa, x10^5).]
Figure 7.6: Comparison of model instance space exploration using grid-based random-walk and falsification-limit sampling. The vertices between samples correspond to the path followed by the random walk. The shaded area is the candidate model set identified in the example presented in Section 2.6.1.
This procedure subtracts one dimension from the space of possible solutions because the boundary of the n_p-dimensional candidate model set has n_p - 1 dimensions. Even if this approach is not intended to overcome all difficulties associated with high-dimensional sampling, it could reduce exploration time for practical applications.
Figure 7.7: Sensor-interaction relative importance, quantifying the contribution of single-sensor removal compared with multiple-sensor removal.
In this example, the effect of single-sensor removal on the number of candidate models is significantly higher (in absolute magnitude) than the effect of multiple-sensor removal. Therefore, greedy algorithms would be likely to outperform other heuristic-based stochastic sampling techniques.
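A hedged sketch of the greedy strategy this suggests: backward elimination that repeatedly drops the sensor whose removal degrades expected identifiability the least. The prediction matrix, error level, and falsification threshold are arbitrary stand-ins, and the performance measure (expected candidate-model count estimated by simulation) is a simplification of the procedures described earlier.

```python
import numpy as np

rng = np.random.default_rng(3)
preds = rng.normal(0.0, 10.0, size=(40, 10))   # 40 model instances, 10 sensors

def expected_candidates(sensors):
    """Average number of unfalsified model instances over simulated true models."""
    total = 0
    for true_idx in range(preds.shape[0]):
        meas = preds[true_idx, sensors] + rng.normal(0.0, 2.0, len(sensors))
        keep = (np.abs(preds[:, sensors] - meas) <= 6.0).all(axis=1)
        total += keep.sum()
    return total / preds.shape[0]

# Greedy backward elimination: at each step, remove the sensor whose
# absence yields the smallest expected candidate-model count.
sensors = list(range(preds.shape[1]))
while len(sensors) > 3:
    scores = {s: expected_candidates([t for t in sensors if t != s]) for s in sensors}
    sensors.remove(min(scores, key=scores.get))
print(sorted(sensors))
```

When single-sensor effects dominate interactions, as in Figure 7.7, each greedy step is close to optimal, which is why such a procedure can compete with stochastic search at a fraction of the cost.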
[Flowchart: limit-state verification of an existing structure leads to an "adequate performance?" decision; if NO, a site investigation is performed, followed by another "adequate performance?" decision; if NO again, the reliability analysis is refined and performance is re-checked; a final NO means interventions are required, while any YES leads to no interventions.]
Figure 7.8: Future work in relation to the general framework for the structural evaluation of existing structures presented in Figure 3.
[Figure: front of optimized measurement systems plotted against investments in structural performance monitoring.]
Figure 7.9: Future perspectives for measurement system and test setup design, where the objective functions are money invested versus return on investment in terms of savings on maintenance.
[Flowchart: START leads to simulated measurements, candidate models, and prediction ranges for critical scenarios (the scope of future work), then to evaluation of serviceability or safety criteria and a choice among interventions (Choice 1: no intervention; Choice 2: minor intervention; ...; Choice X: major intervention); results are stored and the loop repeats until enough samples are collected, after which the expected return on investments is plotted against investments in structural performance monitoring.]
Figure 7.10: Framework representing the steps leading to an overall measurement-system cost optimization.
[Figure: combination of uniform distributions of orders n = 0, 1, 2, 3, ..., the zero-order uncertainty of width (B - A) combined with multiple orders of uncertainty (uncertainty of uncertainty), yielding the Extended Uniform Distribution (EUD).]
Figure A.1: Extended uniform distribution including several orders of uncertainty.
Each distribution associated with any order of uncertainty can be defined independently. However, for practical applications, such a level of refinement is often not available. A simplified method for defining multiple orders of uncertainty is to provide a fraction γ that can take values between 0 and 1. This fraction defines the width of the n-th order of uncertainty using the relation γ^n (B - A), where A and B are the bounds of the zero-order uncertainty.
L = \sum_{n=0} \frac{\gamma^{n} (B - A)}{2} \qquad (A.1)
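Since γ < 1, the sum in Equation A.1 is geometric; summed over all orders it has the closed form L = (B - A) / (2(1 - γ)). A small sketch follows; note that treating the upper limit of the sum as infinity is an assumption, since the extraction of Equation A.1 lost it.

```python
def eud_half_width(A, B, gamma, n_orders=None):
    """Half-width L of the extended uniform distribution per Equation A.1.
    n_orders=None sums the geometric series to infinity (closed form);
    otherwise the series is truncated after n_orders + 1 terms."""
    if n_orders is None:
        return (B - A) / (2.0 * (1.0 - gamma))
    return sum(gamma ** n * (B - A) / 2.0 for n in range(n_orders + 1))

print(eud_half_width(0.0, 1.0, 0.5))      # -> 1.0
print(eud_half_width(0.0, 1.0, 0.5, 3))   # 0.5 * (1 + 0.5 + 0.25 + 0.125) -> 0.9375
```

A larger γ thus widens the EUD support relative to the zero-order interval [A, B], reflecting greater uncertainty about the uncertainty bounds themselves.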
Bibliography
[1] The AASHO road test, report 2, materials and construction. Technical Report Special Report 61b, pp.151-154,
1962.
[2] A nite element primer. NAFEMS, Glasgow, UK, 2003.
[3] H. Abdi. Encyclopedia of measurement and statistics, chapter The Bonferonni and idk corrections for
multiple comparisons. Sage, 2007.
[4] H. Ahmadian, J.E. Mottershead, and MI Friswell. Physical realization of generic-element parameters in
model updating. Journal of vibration and acoustics, 124:628, 2002.
[5] H. Akcay, H. Hjalmarsson, and L. Ljung. On the choice of norms in system identication. In IEEE Transactions
on Automatic Control, volume 41, pages 13671372. IEEE, 1996.
[6] R.J. Allemang and D.L. Brown. A correlation coefcient for modal vector analysis. In Proceedings of the 1st
international modal analysis conference, volume 1, pages 110116, Schenectady, NY, USA, 1982. Union Coll.
[7] K.F. Alvin. Finite element model update via Bayesian estimation and minimization of dynamic residuals.
Technical report, Sandia National Labs, Albuquerque, NM, 1996.
[8] I.G. Araujo, E. Maldonado, and G.C. Cho. Ambient vibration testing and updating of the nite element
model of a simply supported beam bridge. Frontiers of Architecture and Civil Engineering in China, 5(3):
344354, 2011.
[9] D. Arroyo and M. Ordaz. Multivariate Bayesian regression analysis applied to ground-motion prediction
equations, part 1: Theory and synthetic example. Bulletin of the Seismological Society of America, 100(4):
15511567, 2010.
[10] ASCE. 2009 report card for Americas infrastructure. Technical report, American Society of Civil Engineers,
Washington, 2009.
[11] ASME. Guide for verication and validation in computational solid mechanics. ASME, 2006.
[12] S.F. Bailey, A. Radojicic, and E. Brhwiler. Case studies in optimal design and maintenance planning of civil
infrastructure systems, chapter Structural Safety Assessment of the Dornaz Bridge, pages 112. ASCE, 1999.
[13] O. Balci and R.G. Sargent. Validation of simulation models via simultaneous condence intervals. American
Journal of Mathematical and Management Sciences, 4(3):375406, 1984.
[14] M.Y.H. Bangash. Manual of numerical methods in concrete: modelling and applications validated by
experimental and site-monitoring data. Thomas Telford, London, 2001. ISBN 0727729462.
[15] Z.P. Bazant and S. Baweja. Creep and shrinkage prediction model for analysis and design of concrete
structures: Model b3. ACI Special publications, 194:184, 2000.
171
Bibliography
[16] Z.P. Bazant, G.H. Li, Q. Yu, G. Klein, and V. Krstek. Explanation of excessive long-time deections of
collapsed record-span box girder bridge in Palau. In Proceedings of the 8th Int. Conf. on Creep, Shrinkage
and Durability of Concrete and Concrete Structures, T. Tanabe et al., eds., The Maeda Engineering Foundation,
Ise-Shima, Japan, pages 131, 2008.
[17] Z.P. Bazant, Q. Yu, G.-H. Li, G.J. Klein, and V. Kristek. Excessive deections of record-span prestressed box
girder: Lessons learned from the collapse of the koror-babeldaob bridge in palau. ACI Concrete International,
32(6):4452, 2010.
[18] M.A. Beaumont, W. Zhang, and D.J. Balding. Approximate Bayesian computation in population genetics.
Genetics, 162(4):20252035, 2002.
[19] J.L. Beck and L.S. Katafygiotis. Updating models and their uncertainties. i: Bayesian statistical framework.
Journal of Engineering Mechanics, 124(4):455461, 1998.
[20] J.L. Beck and K.V. Yuen. Model selection using response measurements: Bayesian probabilistic approach.
Journal of Engineering Mechanics, 130(2):192203, 2004.
[21] M. Beer. Engineering quantication of inconsistent information. International Journal of Reliability and
Safety, 3(1):174200, 2009.
[22] R. Bellman and K.J. strm. On structural identiability. Mathematical Biosciences, 7(3-4):329339, 1970.
ISSN 0025-5564.
[23] Y. Ben-Haim and F.M. Hemez. Robustness, delity and prediction-looseness of models. Proceedings of the
Royal Society A: Mathematical, Physical and Engineering Science, 468:227244, 2011.
[24] J.O. Berger and L.R. Pericchi. The intrinsic Bayes factor for model selection and prediction. Journal of the
American Statistical Association, 91(433):109122, 1996.
[25] K. J. Beven. Towards a coherent philosophy for modelling the environment. Proceedings of the Royal Society
of London. Series A: Mathematical, Physical and Engineering Sciences, 458:120, 2002.
[26] K. J. Beven. A manifesto for the equinality thesis. Journal of Hydrology, 320(1-2):1836, 2006.
[27] K. J. Beven. Environmental modelling: an uncertain future? Routledge, New-York, 2009.
[28] K. J. Beven and A. Binley. Future of distributed models: Model calibration and uncertainty prediction.
Hydrological processes, 6(3):279298, 1992.
[29] K.J. Beven. Uniqueness of place and process representations in hydrological modelling. Hydrology and
Earth System Sciences, 4(2):203213, 2000.
[30] K.J. Beven, P.J. Smith, and J.E. Freer. So just why would a modeller choose to be incoherent? Journal of
hydrology, 354(1-4):1532, 2008.
[31] C.E. Bonferroni. Teoria statistica delle classi e calcolo delle probabilita. Libreria internazionale Seeber, 1936.
[32] U. Boniger and J. Tronicke. Improving the interpretability of 3D GPR data using targetspecic attributes:
application to tomb detection. Journal of Archaeological Science, 37:360367, 2010.
[33] G.E.P. Box and D.W. Behnken. Some new three level designs for the study of quantitative variables. Technometrics, 2(4):455475, 1960.
[34] G.E.P. Box and N.R. Draper. A basis for the selection of a response surface design. Journal of the American
Statistical Association, 54(287):622654, 1959.
172
Bibliography
[35] G.E.P. Box and N.R. Draper. Empirical model-building and response surfaces. John Wiley & Sons, New-York,
1987.
[36] G.E.P. Box and G.C. Tiao. Bayesian inference in statistical analysis. Wiley, New-York, 1992.
[37] S. Brenner. Sequences and consequences. Philosophical Transactions of the Royal Society B: Biological
Sciences, 365:207212, 2010.
[38] R. Brincker, L. Zhang, and P. Andersen. Modal identication of output-only systems using frequency domain
decomposition. Smart Materials and Structures, 10:441445, 2001.
[39] J.M.W. Brownjohn. Structural health monitoring of civil infrastructure. Philosophical Transactions of the
Royal Society A-Mathematical Physical and Engineering Sciences, 365(1851):589622, 2007.
[40] J.M.W. Brownjohn, A. Pavic, P. Carden, and C. Middleton. Modal testing of Tamar suspension bridge. In
Proceedings of the IMAC XXV conference, pages 1922, Orlando, USA, 2007.
[41] R. Cantieni. Langensandbrucke neubau bruckenhalfte seite pilatus - identikation der eigenschwingungen dynamische belastungsversuche. Technical Report Bericht Nr.081231, RCI Dynamics, Dubendorf,
Switzerland, 2008.
[42] D.S. Carder. Observed vibrations of bridges. Bulletin of the Seismological Society of America, 27(4):267303,
1937.
[43] F.N. Catbas, S.K. Ciloglu, O. Hasancebi, K. Grimmelsman, and A.E. Aktan. Limitations in structural identication of large constructed structures. Journal of Structural Engineering, 133(8):10511066, 2007.
[44] F.N. Catbas, T. Kijewski-Correa, and A.E. Aktan, editors. Structural Identication of Constructed Facilities.
Approaches, Methods and Technologies for Effective Practice of St-Id. American Society of Civil Engineers
(ASCE), in press, 2012.
[45] D. Chapelle and K.J. Bathe. Fundamental considerations for the nite element analysis of shell structures.
Computers & Structures, 66(1):1936, 1998.
[46] Y. Chen, M.Q. Feng, and C.A. Tan. Bridge structural condition assessment based on vibration and trafc
monitoring. Journal of Engineering Mechanics, 135(8):747758, 2009.
[47] S.H. Cheung and J.L. Beck. Bayesian model updating using hybrid Monte Carlo simulation with application
to structural dynamic models with many uncertain parameters. Journal of Engineering Mechanics, 135(4):
243255, 2009.
[48] S.H. Cheung and J.L. Beck. Calculation of posterior probabilities for Bayesian model class assessment and
averaging from posterior samples based on dynamic system data. Computer-Aided Civil and Infrastructure
Engineering, 25(5):304321, 2010.
[49] S.H. Cheung, T.A. Oliver, E.E. Prudencio, S. Prudhomme, and R.D. Moser. Bayesian uncertainty analysis
with applications to turbulence modeling. Reliability Engineering & System Safety, 96(9):11371149, 2011.
[50] T.H. Cormen. Introduction to algorithms. The MIT press, Cambridge, MA, 2001.
[51] M.K. Cowles and B.P. Carlin. Markov chain Monte Carlo convergence diagnostics: a comparative review.
Journal of the American Statistical Association, 91:883904, 1996.
[52] M.G. Cox and B.R.L. Siebert. The use of a Monte Carlo method for evaluating uncertainty and expanded
uncertainty. Metrologia, 43:178188, 2006.
173
Bibliography
[53] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II.
In IEEE Transactions on Evolutionary Computation, volume 6, pages 182 197, 2002.
[54] R. Dellacroce, P.A. Schieb, and B.Stevens. Pension funds investment in infrastructure - A survey. International
Futures Programme. OECD, 2011.
[55] A.P. Dempster. A generalization of Bayesian inference. Journal of the Royal Statistical Society. Series B
(Methodological), 30(2):205247, 1968.
[56] Y. Dodge. An introduction to statistical data analysis l1-norm based. Statistical data analysis based on the L,
1:122, 1987.
[57] P.J. Dossantos-Uzarralde and A. Guittet. A polynomial chaos approach for nuclear data uncertainties
evaluations. Nuclear Data Sheets, 109(12):2894 2899, 2008.
[58] C.D. Eamon and A.S. Nowak. Effects of edge-stiffening elements and diaphragms on bridge resistance and
load distribution. Journal of Bridge Engineering, 7(5):258266, 2002.
[59] C.D. Eamon and A.S. Nowak. Effect of secondary elements on bridge structural system reliability considering
moment capacity. Structural Safety, 26(1):2947, 2004.
[60] E.N. Economou. The observable universe. In A Short Journey from Quarks to the Universe, volume 1 of
SpringerBriefs in Physics, chapter 13, pages 109121. Springer, 2011.
[61] B. Ellingwood, T.V. Galambos, J.G. MacGregor, and CA Cornel. Development of a probability based load
criterion for American National Standard A58: Building code requirements for minimum design loads in
buildings and other structures. US Dept. of Commerce, National Bureau of Standards, Washington, 1980.
[62] M.P. Enright and D.M. Frangopol. Condition prediction of deteriorating concrete bridges using Bayesian
updating. Journal of Structural Engineering, 125(10):11181125, 1999.
[63] H.P. Fagagnini. De la valeur et la non-valeur de linfrastructure suisse de transport. Technical report, LITRA Service dinformation pour les transports publics, Bern, Switzerland, 2011.
[64] H. Fang, M. Rais-Rohani, Z. Liu, and M.F. Horstemeyer. A comparative study of metamodeling methods for
multiobjective crashworthiness optimization. Computers & structures, 83(25):21212136, 2005.
[65] C.R. Farrar, G. Park, D.W. Allen, and M.D. Todd. Sensor network paradigms for structural health monitoring.
Structural control and health monitoring, 13(1):210225, 2006.
[66] S. Ferson and L.R. Ginzburg. Different methods are needed to propagate ignorance and variability. Reliability
Engineering & System Safety, 54(2-3):133144, 1996. ISSN 0951-8320.
[67] S. Ferson and W.L. Oberkampf. Validation of imprecise probability models. International Journal of
Reliability and Safety, 3(1):322, 2009. ISSN 1479-389X.
[68] S. Ferson, J. Hajagos, D. Berleant, J. Zhang, W.T. Tucker, L. Ginzburg, and W. Oberkampf. Dependence
in Dempster-Shafer theory and probability bounds analysis. Technical Report SAND2004-3072, Sandia
National Laboratories, Albyquerque, NM, 2004.
[69] S. Ferson, R. Nelsen, J. Hajagos, D. Berleant, J. Zhang, W.T. Tucker, L. Ginzburg, and W.L. Oberkampf. Myths
about correlations and dependencies and their implications for risk analysis. Submitted to Human and
Ecological Risk Assessment, 2008.
[70] S.E. Fienberg. When did Bayesian inference become bayesian. Bayesian analysis, 1(1):140, 2006.
[71] R.A. Fisher. Applications of "Students" distribution. Metron, 5:90104, 1925.
174
Bibliography
[72] R.A. Fisher. The design of experiments. Oliver & Boyd., Oxford, 1935.
[73] D.M. Frangopol, A. Strauss, and S. Kim. Bridge reliability assessment based on monitoring. Journal of Bridge
Engineering, 13(3):258270, 2008.
[74] D.M. Frangopol, A. Strauss, and S. Kim. Use of monitoring extreme data for the performance prediction of
structures: General approach. Engineering structures, 30(12):36443653, 2008.
[75] M. Frchet. Gnralisations du thorme des probabilits totales. Fundamenta Mathematica, 25:379387,
1935.
[76] J.H. Friedman. Multivariate adaptive regression splines. The annals of statistics, 19(1):167, 1991.
[77] M.I. Friswell. Damage identication using inverse methods. Philosophical Transactions of the Royal Society
A-Mathematical Physical and Engineering Sciences, 365(1851):393410, 2007.
[78] T.V. Galambos and M.K. Ravindra. Properties of steel for use in LRFD. Journal of the Structural Division, 104
(9):14591468, 1978.
[79] A. Gelman and C. Shalizi. Oxford handbook of the philosophy of the social sciences, chapter Philosophy and
the practice of Bayesian statistics in the social sciences. Oxford University Press, Oxford, UK, 2010.
[80] P.E. Gill, W. Murray, and M.H. Wright. Practical optimization. Academic press, London, 1981.
[81] M. Gilli, D. Maringer, and E. Schumann. Numerical methods and optimization in nance. Academic Press,
2011.
[82] GML Gladwell and H. Ahmadian. Generic element matrices suitable for nite element model updating.
Mechanical Systems and Signal Processing, 9(6):601614, 1995.
[83] B. Goller and G.I. Schueller. Investigation of model uncertainties in Bayesian structural model updating.
Journal of Sound and Vibration, 25(5):61226136, 2011.
[84] B. Goller, J.L. Beck, and G.I. Schuller. Evidence-based identication of weighting factors in Bayesian model
updating using modal data. Journal of Engineering Mechanics, in press, 2012.
[85] J. Gordon and E.H. Shortliffe. Rule-based expert systems: The MYCIN experiments of the stanford heuristic
programming project, chapter 13 - The Dempster-Shafer theory of evidence, pages 272292. Reading,
Massachusetts: Addison-Wesley, 1984.
[86] J.A. Goulet and I.F.C. Smith. Extended uniorm distribution accounting for uncertainty of uncertainty. In
International Conference on Vulnerability and Risk Analysis and Management/Fifth International Symposium
on Uncertainty Modeling and Analysis, pages 7885, Maryland, USA, 2011.
[87] J.A. Goulet and I.F.C. Smith. Predicting the usefulness of monitoring for identifying the behaviour of
structures. Journal of Structural Engineering, In press, 2012.
[88] J.A. Goulet, P. Kripakaran, and I.F.C. Smith. Multimodel structural performance monitoring. Journal of
Structural Engineering, 136(10):13091318, 2010.
[89] J.A. Goulet, C. Michel, and I.F.C. Smith. Hybrid probabilities and error-domain structural identication
using ambient vibration monitoring. Mechanical Systems and Signal Processing, In press, 2012.
[90] S. Greenland. Induction versus Popper: substance versus semantics. International journal of epidemiology,
27(4):543548, 1998.
[91] R. Hadidi and N. Gucunski. Probabilistic approach to the solution of inverse problems in civil engineering.
Journal of Computing In Civil Engineering, 22(6):338347, 2008.
175
Bibliography
[92] W.K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57
(1):97, 1970.
[93] T. Haukaas and P. Gardoni. Model uncertainty in nite-element analysis: Bayesian nite elements. Journal
of Engineering Mechanics, 137(8):519526, 2011.
[94] T. Hellen. How to- Use elements effectively. NAFEMS, 2003.
[95] T. Hellen. How to- Use beam, plates and shell elements. NAFEMS, 2007.
[96] J.C. Helton. Quantication of margins and uncertainties: Conceptual and computational basis. Reliability
Engineering & System Safety, 96(9):9761013, 2011.
[97] J.C. Helton and J.D. Johnson. Quantication of margins and uncertainties: alternative representations of
epistemic uncertainty. Reliability Engineering & System Safety, 96(9):10341052, 2011.
[98] J.C. Helton and W.L. Oberkampf. Alternative representations of epistemic uncertainty. Reliability Engineering
& System Safety, 85(1-3):110, 2004. ISSN 0951-8320.
[99] M. Hiatt, A. Mathiasson, J. Okwori, S.S. Jin, S. Shang, G.J. Yun, J. Caicedo, R. Christenson, C.B. Yun, and
H. Sohn. Finite element model updating of a PSC box girder bridge using ambient vibration test. Advanced
Materials Research, 168:22632270, 2011.
[100] D. Hitchings. A nite element dynamics primer. NAFEMS, Glasgow, 1992.
[101] J.W. Hollenbach. Verication, validation, and accreditation (VV&A) recommended practices guide. Technical
report, Department of Defense, USA, 1996.
[102] J. Hugenschmidt and A. Kalogeropoulos. The inspection of retaining walls using GPR. Journal of Applied
Geophysics, 67(4):335344, 2009.
[103] R. Jafarkhani and S.F. Masri. Finite element model updating using evolutionary strategy for damage detection.
Computer-Aided Civil and Infrastructure Engineering, 2011.
[104] E.T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620630, 1957.
[105] JCGM. Evaluation of measurement data Guide to the expression of uncertainty in measurement. Number
ISO/IEC Guide 98-3:2008. JCGM Working Group of the Expression of Uncertainty in Measurement, 2008.
[106] JCGM. Guide to the expression of uncertainty in measurement supplement 1: Numerical methods for the
propagation of distributions. Number ISO/IEC Guide 98-3:2008/Suppl 1:2008. JCGM Working Group of the
Expression of Uncertainty in Measurement, 2008.
[107] JCGM. Evaluation of measurement data Supplement 2 to the Guide to the expression of uncertainty in
measurement Extension to any number of output quantities, volume JCGM 102:2011. JCGM Working
Group of the Expression of Uncertainty in Measurement, 2011.
[108] JCSS. Probabilistic model code, part 3, 2007. URL http://www.jcss.ethz.ch.
[109] W.H. Jefferys and J.O. Berger. Ockhams razor and Bayesian analysis. American Scientist, 80:6472, 1992.
[110] H. Jeffreys. Theory of probability. Oxford University Press, Oxford, third edition, 1998.
[111] X. Jiang and S. Mahadevan. Bayesian validation assessment of multivariate computational models. Journal
of Applied Statistics, 35(1):4965, 2008.
176
Bibliography
[112] L.O. Jimenez and D.A. Landgrebe. Supervised classication in high-dimensional space: Geometrical,
statistical, and asymptotical properties of multivariate data. In IEEE Transactions on Systems, Man, and
Cybernetics, volume 28, pages 3954, 1998.
[113] A.I. Johnson. Strength, safety and economic dimensions of structures. Statens Kommittee for Byggnadsforskning Meddelanden, Stockholm, Sweden, 1953.
[114] R.N. Kacker and J.F. Lawrence. Rectangular distribution whose end points are not exactly known: curvilinear
trapezoidal distribution. Metrologia, 47(3):120126, 2010.
[115] F. Kang, J. Li, and Q. Xu. Virus coevolution partheno-genetic algorithms for optimal sensor placement.
Advanced Engineering Informatics, 22(3):362370, 2008.
[116] L.S. Katafygiotis and J.L. Beck. Updating models and their uncertainties. II: Model identiability. Journal of
Engineering Mechanics, 124(4):463467, 1998.
[117] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of IEEE International Conference
on Neural Networks, volume 4, pages 19421948. IEEE, 1995.
[118] M.C. Kennedy and A. OHagan. Bayesian calibration of computer models. Journal of the Royal Statistical
Society: Series B (Statistical Methodology), 63(3):425464, 2001.
[119] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. Optimization by simulated annealing. science, 220(4598):
671679, 1983.
[120] J.M. Ko and Y.Q. Ni. Technology developments in structural health monitoring of large-scale bridges.
Engineering structures, 27(12):17151725, 2005.
[121] K.Y. Koo, J.M.W. Brownjohn, D.I. List, and R. Cole. Structural health monitoring of the Tamar suspension
bridge. Structural Control and Health Monitoring, In Press, 2012.
[122] R.O. Kuehl. Design of experiments: statistical principles of research design and analysis. Duxbury/Thomson
Learning, Pacic Grove, CA, 2000.
[123] C.P. Lamarche, P. Paultre, J. Proulx, and S. Mousseau. Assessment of the frequency domain decomposition technique by forced-vibration tests of a full-scale structure. Earthquake Engineering and Structural
Dynamics, 37:487494, 2008.
[124] I. Laory, T.N. Trinh, and I.F.C. Smith. Evaluating two model-free data interpretation methods for measurements that are inuenced by temperature. Advanced Engineering Informatics, 2011.
[125] P.S. Laplace. Essai philosophique sur les probabilits. M. Courcier, Paris, France, 1814.
[126] E.L. Lehmann and J.P. Romano. Testing statistical hypotheses. Springer, third edition, 2005.
[127] I. Lira. The generalized maximum entropy trapezoidal probability density function. Metrologia, 45(4):
L17L20, 2008.
[128] M. Liu, D.M. Frangopol, and S. Kim. Bridge safety evaluation based on monitored live load effects. Journal
of Bridge Engineering, 14(4):257259, 2009.
[129] Y. Liu, J. Freer, K. J. Beven, and P. Matgen. Towards a limits of acceptability approach to the calibration of
hydrological models: Extending observation error. Journal of Hydrology, 367(1-2):93103, 2009.
[130] L. Ljung. System identication: theory for the user. Prentice-Hall, Englewood Cliffs, NJ, 1987.
[131] L. Ljung and T. Glad. On global identiability for arbitrary model parametrizations. Automatica, 30(2):
265276, 1994.
177
Bibliography
[132] Y.H. Loo and G.D. Base. Variation of creep Poissons ratio with stress in concrete under short-term uniaxial
compression. Magazine of Concrete Research, 42(151):6773, 1990.
[133] T.J. Loredo. Maximum entropy and Bayesian methods, chapter From Laplace to supernova SN 1987 A:
Bayesian inference in astrophysics, pages 81142. Kluwer Academic Publishers, Dordrecth, Netherland,
1990.
[134] H. Ludescher and E. Brhwiler. Dynamic amplication of trafc loads on road bridges. Structural Engineering International, 19(2):190197, 2009.
[135] J.P. Lynch and K.J. Loh. A summary review of wireless sensors and sensor networks for structural health
monitoring. Shock and Vibration Digest, 38(2):91130, 2006.
[136] R.H. MacNeal. Finite elements: their design and performance. CRC, New-York, 1994.
[137] R.H. MacNeal and R.L. Harder. A proposed standard set of problems to test nite element accuracy. Finite
Elements in Analysis and Design, 1(1):320, 1985.
[138] P. Mantovan and E. Todini. Hydrological forecasting uncertainty assessment: Incoherence of the glue
methodology. Journal of Hydrology, 330(1-2):368381, 2006.
[139] Jean-Michel Marin, Pierre Pudlo, Christian Robert, and Robin Ryder. Approximate Bayesian computational
methods. Statistics and Computing, in press.
[140] P. Marjoram, J. Molitor, V. Plagnol, and S. Tavar. Markov chain Monte Carlo without likelihoods. Proceedings
of the National Academy of Sciences of the United States of America, 100(26):1532415328, 2003.
[141] B. Massicotte and A. Picard. Monitoring of a prestressed segmental box girder bridge during strengthening.
PCI Journal, 39(3):6680, 1994.
[142] B. Massicotte, A. Picard, Y. Gaumond, and C. Ouellet. Strengthening of a long span prestressed segmental
box girder bridge. PCI Journal, 39(3):5265, 1994.
[143] E. Matta and A. De Stefano. Generating alternatives from multiple models: How to increase robustness
in parametric system identication. In 5th International Conference on Structural Health Monitoring on
Intelligent Infrastructure (SHMII-5), page 83, Cancun, Mexico, 2011.
[144] J. McFarland. Uncertainty analysis for computer simulations through validation and calibration. PhD thesis,
Vanderbilt University, Nashville, Te, 2008.
[145] J. McFarland and S. Mahadevan. Error and variability characterization in structural dynamics modeling.
Computer Methods In Applied Mechanics and Engineering, 197(29-32):26212631, 2008.
[146] J. McFarland and S. Mahadevan. Multivariate signicance testing and model calibration under uncertainty.
Computer Methods in Applied Mechanics and Engineering, 197(29-32):24672479, 2008.
[147] M.D. McKay, R.J. Beckman, and W.J. Conover. A comparison of three methods for selecting values of input
variables in the analysis of output from a computer code. Technometrics, 21(2):239245, 1979.
[148] M. Meo and G. Zumpano. On the optimal sensor placement techniques for a bridge structure. Engineering
Structures, 27(10):14881497, 2005.
[149] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, et al. Equation of state calculations
by fast computing machines. The journal of chemical physics, 21(6):10871092, 1953.
[150] C. Michel, P. Guguen, and P.-Y. Bard. Dynamic parameters of structures extracted from ambient vibration
measurements: An aid for the seismic vulnerability assessment of existing buildings in moderate seismic
hazard regions. Soil Dynamics and Earthquake Engineering, 28(8):593604, 2008.
178
Bibliography
[151] S.A. Mirza and J.G. MacGregor. Variations in dimensions of reinforced concrete members. Journal of the Structural Division, 105(4):751–766, 1979.
[152] B. Möller and M. Beer. Engineering computation under uncertainty – capabilities of non-traditional models. Computers & Structures, 86(10):1024–1041, 2008.
[153] T. Most. Assessment of structural simulation models by estimating uncertainties due to model selection and model simplification. Computers & Structures, 89(17-18):1664–1672, 2011.
[154] J.E. Mottershead and M.I. Friswell. Model updating in structural dynamics: a survey. Journal of Sound and Vibration, 167(2):347–375, 1993.
[155] J.E. Mottershead, M. Link, and M.I. Friswell. The sensitivity method in finite element model updating: A tutorial. Mechanical Systems and Signal Processing, 25(7):2275–2296, 2011.
[156] A.S. Nowak and R.I. Carr. Sensitivity analysis for structural errors. Journal of Structural Engineering, 111(8):1734–1746, 1985.
[157] A.S. Nowak and M.M. Szerszen. Bridge load and resistance models. Engineering Structures, 20(11):985–990, 1998.
[158] W.L. Oberkampf and M.F. Barone. Measures of agreement between computation and experiment: validation metrics. Journal of Computational Physics, 217(1):5–36, 2006.
[159] W.L. Oberkampf and T.G. Trucano. Verification and validation benchmarks. Nuclear Engineering and Design, 238(3):716–743, 2008.
[160] W.L. Oberkampf, S.M. DeLand, B.M. Rutherford, K.V. Diegert, and K.F. Alvin. Error and uncertainty in modeling and simulation. Reliability Engineering & System Safety, 75(3):333–357, 2002.
[161] W.L. Oberkampf, J.C. Helton, C.A. Joslyn, S.F. Wojtkiewicz, and S. Ferson. Challenge problems: uncertainty in system response given uncertain parameters. Reliability Engineering & System Safety, 85(1-3):11–19, 2004.
[162] W.L. Oberkampf, T.G. Trucano, and C. Hirsch. Verification, validation, and predictive capability in computational engineering and physics. Applied Mechanics Reviews, 57(5):345–385, 2004.
[163] OECD. Infrastructure to 2030 – Mapping policy for electricity, water and transport, volume 2. Paris, France, 2007.
[164] OECD. Policy brief – Infrastructure to 2030. OECD Observer, 2008.
[165] N.M. Okasha, D.M. Frangopol, and A.D. Orcesi. Automated finite element updating using strain data for the lifetime reliability assessment of bridges. Reliability Engineering & System Safety, 99:139–150, 2012.
[166] Q. Pan, K. Grimmelsman, F. Moon, and E. Aktan. Mitigating epistemic uncertainty in structural identification – a case study for a long-span steel arch bridge. Journal of Structural Engineering, 137(1):1–13, 2010.
[167] C. Papadimitriou. Optimal sensor placement methodology for parametric identification of structural systems. Journal of Sound and Vibration, 278(4-5):923–947, 2004.
[168] C. Papadimitriou. Pareto optimal sensor locations for structural identification. Computer Methods in Applied Mechanics and Engineering, 194(12-16):1655–1673, 2005.
[169] C. Papadimitriou and G. Lombaert. The effect of prediction error correlation on optimal sensor placement in structural dynamics. Mechanical Systems and Signal Processing, 28:105–127, 2012.
[170] B. Peeters and C. Ventura. Comparative study of modal analysis techniques for bridge dynamic characteristics. Mechanical Systems and Signal Processing, 17(5):965–988, 2003.
[171] J. Perret. Déformations des couches bitumineuses au passage d'une charge de trafic. PhD thesis, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, 2003.
[172] V. Plagnol and S. Tavaré. Monte Carlo and quasi-Monte Carlo methods, chapter Approximate Bayesian computation and MCMC, pages 99–114. Springer, Berlin, 2004.
[173] K.R. Popper. The logic of scientific discovery. Routledge, New York, third edition, 2002.
[174] D. Posenato, F. Lanata, D. Inaudi, and I.F.C. Smith. Model-free data interpretation for continuous monitoring of complex structures. Advanced Engineering Informatics, 22(1):135–144, 2008.
[175] D. Posenato, P. Kripakaran, D. Inaudi, and I.F.C. Smith. Methodologies for model-free data interpretation of civil engineering structures. Computers & Structures, 88(7-8):467–482, 2010.
[176] M. Pozzi and A. Der Kiureghian. Assessing the value of information for long-term structural health monitoring. In Proceedings of SPIE, volume 7984, 2011.
[177] F. Press. Earth models obtained by Monte Carlo inversion. Journal of Geophysical Research, 73(16):5223–5234, 1968.
[178] J.K. Pritchard, M.T. Seielstad, A. Perez-Lezaun, and M.W. Feldman. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Molecular Biology and Evolution, 16(12):1791–1798, 1999.
[179] B. Raphael and I.F.C. Smith. Finding the right model for bridge diagnosis. In Artificial Intelligence in Structural Engineering, Computer Science, LNAI 1454, pages 308–319. Springer, 1998.
[180] B. Raphael and I.F.C. Smith. A direct stochastic algorithm for global search. Applied Mathematics and Computation, 146(2-3):729–758, 2003.
[181] S. Ravindran, P. Kripakaran, and I.F.C. Smith. Evaluating reliability of multiple-model system identification. In 14th EG-ICE Workshop, Maribor, Slovenia, 2007.
[182] R. Rebba and S. Mahadevan. Model predictive capability assessment under uncertainty. AIAA Journal, 44(10):2376–2384, 2006.
[183] R. Rebba and S. Mahadevan. Validation of models with multivariate output. Reliability Engineering & System Safety, 91(8):861–871, 2006.
[184] R. Rebba, S. Mahadevan, and S. Huang. Validation and error estimation of computational models. Reliability Engineering & System Safety, 91(10-11):1390–1397, 2006.
[185] J.A. Rice. Mathematical statistics and data analysis. Thomson Learning, Belmont, CA, 2006.
[186] P.J. Roache. Perspective: Validation – what does it mean? Journal of Fluids Engineering, Transactions of the ASME, 131(3), 2009.
[187] C.P. Robert, J.M. Cornuet, J.M. Marin, and N.S. Pillai. Lack of confidence in approximate Bayesian computation model choice. Proceedings of the National Academy of Sciences, 108(37):15112–15117, 2011.
[188] Y. Robert-Nicoud, B. Raphael, O. Burdet, and I.F.C. Smith. Model identification of bridges using measurement data. Computer-Aided Civil and Infrastructure Engineering, 20(2):118–131, 2005.
[189] Y. Robert-Nicoud, B. Raphael, and I.F.C. Smith. Configuration of measurement systems using Shannon's entropy function. Computers & Structures, 83(8-9):599–612, 2005.
[190] Y. Robert-Nicoud, B. Raphael, and I.F.C. Smith. System identification through model composition and stochastic search. Journal of Computing in Civil Engineering, 19(3):239–247, 2005.
[191] M. Roš. Pont Adolphe – résultat des essais de surcharge. Technical report, Laboratoire fédéral d'essai des matériaux, Zurich, Switzerland, 1933.
[192] M. Roš. La mesure directe des contraintes dans les ouvrages construits. In Centre d'études supérieures: Séance du 18 janvier 1939. Institut technique du bâtiment et des travaux publics, 1939.
[193] M. Roš. Robert Maillart 1872-1940: Ingenieur. Schweizerischer Verband für die Materialprüfungen der Technik, 1940.
[194] M. Roš. Essais et expériences sur des constructions métalliques en Suisse, 1925-1950. Quaderni della costruzione metallica. Associazione fra i costruttori in acciaio italiani, 1954.
[195] L.A. Rossman. EPANET 2: users manual. US Environmental Protection Agency, Cincinnati, OH, 2000.
[196] B.M. Rutherford, L.P. Swiler, T.L. Paez, and A. Urbina. Response surface (meta-model) methods and applications. In Proc. 24th Int. Modal Analysis Conf., pages 184–197, St. Louis, MO, 2006.
[197] T. Saito and J.L. Beck. Bayesian model selection for ARX models and its application to structural health monitoring. Earthquake Engineering & Structural Dynamics, 2010. ISSN 1096-9845.
[198] S. Saitta, B. Raphael, and I.F.C. Smith. Data mining techniques for improving the reliability of system identification. Advanced Engineering Informatics, 19(4):289–298, 2005.
[199] S. Saitta, B. Raphael, and I.F.C. Smith. Combining two data mining methods for system identification. Intelligent Computing in Engineering and Architecture, 4200:606–614, 2006.
[200] S. Saitta, B. Raphael, and I.F.C. Smith. A comprehensive validity index for clustering. Intelligent Data Analysis, 12(6):529–548, 2008.
[201] S. Saitta, P. Kripakaran, B. Raphael, and I.F.C. Smith. Feature selection using stochastic search: An application to system identification. Journal of Computing in Civil Engineering, 24(1):3–10, 2010.
[202] M. Sanayei, E.S. Bell, C.N. Javdekar, J.L. Edelmann, and E. Slavsky. Damage localization and finite-element model updating using multiresponse NDT data. Journal of Bridge Engineering, 11(6):688–698, 2006.
[203] H.R. Schalcher, H.J. Boesch, K. Bertschy, H. Sommer, D. Matter, J. Gerum, and M. Jakob. Quels seront les coûts futurs des bâtiments et des infrastructures suisses et qui les paiera ? Technical report, VDF Zürich, Zurich, Switzerland, 2011.
[204] H. Schlune and M. Plos. Bridge assessment and maintenance based on finite element structural models and field measurements. Technical Report 2008:5, Chalmers University of Technology, Sweden, 2008.
[205] M.B. Seasholtz and B. Kowalski. The parsimony principle applied to multivariate calibration. Analytica Chimica Acta, 277(2):165–177, 1993.
[206] R.G. Selby, F.J. Vecchio, and M.P. Collins. The failure of an offshore platform. Concrete International, 19(8):28–35, 1997.
[207] K. Sentz and S. Ferson. Probabilistic bounding analysis in the quantification of margins and uncertainties. Reliability Engineering & System Safety, 96(9):1126–1136, 2011.
[208] G. Shafer. A theory of statistical evidence. Foundations of probability theory, statistical inference, and statistical theories of science, 2:365–436, 1976.
[209] J.P. Shaffer. Multiple hypothesis testing. Annual Review of Psychology, 46(1):561–584, 1995.
[210] C.E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:379–423, 1948.
[211] Z. Šidák. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association, 62:626–633, 1967.
[212] L. Simões da Silva, C. Rebelo, D. Nethercot, L. Marques, R. Simões, and P.M.M. Vila Real. Statistical evaluation of the lateral-torsional buckling resistance of steel I-beams, part 2: Variability of steel properties. Journal of Constructional Steel Research, 65(4):832–849, 2009.
[213] T.W. Simpson, T.M. Mauery, J.J. Korte, and F. Mistree. Kriging models for global approximation in simulation-based multidisciplinary design optimization. AIAA Journal, 39(12):2233–2241, 2001.
[214] J.A. Snyman. Practical mathematical optimization: an introduction to basic optimization theory and classical and new gradient-based algorithms, volume 97. Springer, 2005.
[215] H.W. Sorenson. Least-squares estimation: from Gauss to Kalman. IEEE Spectrum, 7(7):63–68, 1970.
[216] B.F. Spencer, M.E. Ruiz-Sandoval, and N. Kurata. Smart sensing technology: opportunities and challenges. Structural Control and Health Monitoring, 11(4):349–368, 2004.
[217] M.S. Srivastava. Methods of multivariate statistics. Wiley, New York, 2002.
[218] P.B. Stark and L. Tenorio. Large-scale inverse problems and quantification of uncertainty, chapter A Primer of Frequentist and Bayesian Inference in Inverse Problems, pages 9–32. Wiley, 2010.
[219] C. Stephan. Sensor placement for modal identification. Mechanical Systems and Signal Processing, 27:461–470, 2012.
[220] G.W. Stewart. Gauss, statistics, and Gaussian elimination. Journal of Computational and Graphical Statistics, 4(1):1–11, 1995.
[221] A. Strauss, D.M. Frangopol, and S. Kim. Use of monitoring extreme data for the performance prediction of structures: Bayesian updating. Engineering Structures, 30(12):3654–3666, 2008.
[222] A. Strauss, D.M. Frangopol, and K. Bergmeister. Assessment of existing structures based on identification. Journal of Structural Engineering, 136(1):86–97, 2010.
[223] B. Sudret and A. Der Kiureghian. Stochastic finite element methods and reliability: a state-of-the-art report. Technical report, University of California Berkeley, Dept. of Civil and Environmental Engineering, Berkeley, CA, 2000.
[224] B. Sudret and A. Der Kiureghian. Comparison of finite element reliability methods. Probabilistic Engineering Mechanics, 17(4):337–348, 2002.
[225] A. Tarantola. Inverse problem theory: Methods for data fitting and model parameter estimation. SIAM, Philadelphia, PA, USA, 2005.
[226] A. Tarantola. Popper, Bayes and the inverse problem. Nature Physics, 2(8):492–494, 2006.
[227] B.H. Thacker, S.W. Doebling, F.M. Hemez, M.C. Anderson, J.E. Pepin, and E.A. Rodriguez. Concepts of model verification and validation. Technical report, Los Alamos National Lab., Los Alamos, NM, 2004.
[228] S. Thöns, M.H. Faber, and W. Rücker. Ultimate limit state model basis for assessment of offshore wind energy converters. Journal of Offshore Mechanics and Arctic Engineering, 134(3):031904, 2012.
[229] T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M.P.H. Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface, 6(31):187–202, 2009.
[230] C. Topkaya, A.S. Kalayci, and E.B. Williamson. Solver and shell element performances for curved bridge analysis. Journal of Bridge Engineering, 13(4):418–424, 2008.
[231] M. Verleysen, D. Francois, G. Simon, and V. Wertz. On the effects of dimensionality on data analysis with neural networks. Artificial Neural Nets Problem Solving Methods, pages 1044–1044, 2003.
[232] N. Wang, B.R. Ellingwood, and A.H. Zureick. Bridge rating using system reliability assessment. II: Improvements to bridge rating practices. Journal of Bridge Engineering, 16(6):863–871, 2011.
[233] R.J. Westgate and J.M.W. Brownjohn. Development of a Tamar Bridge finite element model. In Conference Proceedings of the Society for Experimental Mechanics Series 3, volume 5, pages 13–20. Springer, 2011.
[234] B.C. Williams and J. de Kleer. Qualitative reasoning about physical systems: a return to roots. Artificial Intelligence, 51(1-3):1–9, 1991.
[235] K.Y. Wong, K.W.Y. Chan, Y.Q. Ni, and C.L. Ng. Advanced finite element model of Tsing Ma Bridge for structural health monitoring. International Journal of Structural Stability and Dynamics, 11(2):313–344, 2011.
[236] K. Worden and A.P. Burrows. Optimal sensor placement for fault detection. Engineering Structures, 23(8):885–901, 2001.
[237] K. Worden, C.R. Farrar, G.E. Manson, and G. Park. The fundamental axioms of structural health monitoring. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 463(2082):1639–1664, 2007.
[238] F. Xie and D. Levinson. Evaluating the effects of the I-35W bridge collapse on road-users in the Twin Cities metropolitan region. Transportation Planning and Technology, 34(7):691–703, 2011.
[239] B.F. Yan, A. Miyamoto, and E. Brühwiler. Wavelet transform-based modal parameter identification considering uncertainty. Journal of Sound and Vibration, 291(1-2):285–301, 2006.
[240] I. Yeo, S. Shin, H.S. Lee, and S.P. Chang. Statistical damage assessment of framed structures from static responses. Journal of Engineering Mechanics, 126(4):414–421, 2000.
[241] K.-V. Yuen, J.L. Beck, and L.S. Katafygiotis. Efficient model updating and health monitoring methodology using incomplete modal data without mode matching. Structural Control & Health Monitoring, 13(1):91–107, 2006.
[242] K.V. Yuen. Bayesian methods for structural dynamics and civil engineering. Wiley, 2010.
[243] K.V. Yuen, S.K. Au, and J.L. Beck. Two-stage structural health monitoring approach for phase I benchmark studies. Journal of Engineering Mechanics, 130(1):16–33, 2004.
[244] K.V. Yuen, J.L. Beck, and L.S. Katafygiotis. Unified probabilistic approach for model updating and damage detection. Journal of Applied Mechanics, Transactions of the ASME, 73(4):555–564, 2006.
[245] E.L. Zhang, P. Feissel, and J. Antoni. A comprehensive Bayesian approach for model updating and quantification of modeling errors. Probabilistic Engineering Mechanics, 26(4):550–560, 2011.
[246] A. Ziegler. Schwingungsmessungen auf der Langensandbrücke in Luzern. Technical Report no. 1621, Ziegler Consultants, Zurich, Switzerland, 2009.
Academic Curriculum
Born : 1984
Nationality : Canadian
Email : james.a.goulet@gmail.com
Extracurricular activities
Responsible for the EDCE PhD student organization [2010 - 2012]
Swiss National Science Foundation
Fellowship for prospective researcher (Postdoctoral research)
FQRNT Quebec National Funds for Research in Natural Sciences and Technologies
Graduate research scholarship
Personal interests
Skiing, sailing, kayaking, surfing, data visualization, international politics & economy
List of Publications
Publications produced during the doctoral studies are listed below.
Journal papers
J.-A. Goulet, C. Michel, and I. F. C. Smith. Hybrid probabilities and error-domain structural
identication using ambient vibration monitoring. Mechanical Systems and Signal Processing,
In press.
J.-A. Goulet and I. F. C. Smith. Predicting the usefulness of monitoring for identifying the
behaviour of structures. Journal of Structural Engineering, In press.
J.-A. Goulet, P. Kripakaran, and I. F. C. Smith. Multimodel structural performance monitoring.
Journal of Structural Engineering, 136(10):1309-1318, Oct. 2010.
Book chapter
J.-A. Goulet and I. F. C. Smith. Structural Identification of Constructed Facilities: Approaches,
Methods and Technologies for Effective Practice of St-Id, Chapter 8.8. American Society of
Civil Engineers (ASCE), 2012.
Conference proceedings
R. Pasquier*, J.-A. Goulet, and I. F. C. Smith. Reducing uncertainties regarding remaining lives
of structures using computer-aided data interpretation. Proceedings of the 19th International
Workshop: Intelligent Computing in Engineering, Munich, Germany, July 2012.
J.-A. Goulet, I. F. C. Smith*, M. Texier, and L. Chouinard. The effects of simplifications on
model predictions and consequences for model-based data interpretation. Proceedings of
ASCE Structures Congress, Chicago, USA, March 2012.
J.-A. Goulet* and I. F. C. Smith. Prevention of over-instrumentation during the design of a monitoring system for static load tests. Proceedings of the 5th International Conference on Structural
Health Monitoring on Intelligent Infrastructure (SHMII-5), Cancun, Mexico, December 2011.
J.-A. Goulet* and I. F. C. Smith. Uncertainty correlation in structural performance assessment.
Proceedings of the 11th International Conference on Applications of Statistics and Probability
in Civil Engineering, Zurich, Switzerland, August 2011.
J.-A. Goulet* and I. F. C. Smith. Extended uniform distribution accounting for uncertainty of
uncertainty. Proceedings of the International Conference on Vulnerability and Risk Analysis
and Management/Fifth International Symposium on Uncertainty Modeling and Analysis,
pages 78-85, Maryland, USA, April 2011.
J.-A. Goulet* and I. F. C. Smith. Overcoming the limitations of traditional model-updating
approaches. Proceedings of the International Conference on Vulnerability and Risk Analysis
and Management/Fifth International Symposium on Uncertainty Modeling and Analysis,
pages 905-913, Maryland, USA, April 2011.
J.-A. Goulet* and I. F. C. Smith. CMS4SI structural identification approach for interpreting
measurements. Proceedings of the 34th IABSE symposium, Venice, Italy, September 2010.
J.-A. Goulet* and I. F. C. Smith. Evaluating structural identification capability. Proceedings of
the Structural Faults & Repair, Edinburgh, UK, June 2010.
J.-A. Goulet*, P. Kripakaran, and I. F. C. Smith. Structural identication to improve bridge
management. Proceedings of the 33rd IABSE symposium, Bangkok, Thailand, September
2009.
J.-A. Goulet*, P. Kripakaran, and I. F. C. Smith. Estimation of modelling errors in structural
system identification. Proceedings of the 4th International Conference on Structural Health
Monitoring on Intelligent Infrastructure (SHMII-4), Zurich, Switzerland, July 2009.
J.-A. Goulet, P. Kripakaran, and I. F. C. Smith*. Considering sensor characteristics during
measurement-system design for structural system identication. Proceedings of the 2009
ASCE International Workshop on Computing in Civil Engineering, p.74, Austin, Texas, June
2009.