
Probabilistic Model Falsification for Infrastructure Diagnosis

THÈSE NO 5417 (2012)


PRÉSENTÉE le 16 juillet 2012
À LA FACULTÉ DE L'ENVIRONNEMENT NATUREL, ARCHITECTURAL ET CONSTRUIT

LABORATOIRE D'INFORMATIQUE ET DE MÉCANIQUE APPLIQUÉES À LA CONSTRUCTION


PROGRAMME DOCTORAL EN STRUCTURES

ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE


POUR L'OBTENTION DU GRADE DE DOCTEUR ÈS SCIENCES

PAR

James-Alexandre Goulet

acceptée sur proposition du jury:


Prof. M. Bierlaire, président du jury
Prof. I. Smith, directeur de thèse
Prof. J. M. H. Brownjohn, rapporteur
Prof. E. Brühwiler, rapporteur
Prof. A. Strauss, rapporteur

Suisse
2012

Acknowledgements
I would like to acknowledge the contribution of all collaborators I had the opportunity to
work with during my doctoral studies. I also thank my family, friends and colleagues for their
continuous support over the years. I would like to give special thanks to:

Close collaborators
Prof. Ian F. C. Smith (EPFL, Switzerland)
Dr. Clotaire Michel (ETHZ, Switzerland)
Sylvain Coutu (Ph.D. candidate, EPFL, Switzerland)

External collaborations
Prof. James M. H. Brownjohn (University of Sheffield, UK)
Prof. Luc Chouinard (McGill University, Canada)
Prof. William O'Brien (University of Texas at Austin, USA)
Prof. Alain Nussbaumer (EPFL, Switzerland)
Prof. Franklin Moon (Drexel University, USA)
Prof. Branko Glisic (Princeton, USA)
Prof. Benny Raphael (NUS, Singapore)

Key discussions
Prof. Michael Faber (DTU, Denmark)
Prof. James L. Beck (Caltech, USA)
Prof. Eugen Brühwiler (EPFL, Switzerland)
Prof. Prakash Kripakaran (University of Exeter, UK)
Prof. Nizar Bel Hadj Ali (University of Gabès, Tunisia)
Dr. Sandro Saitta (External lecturer, EPFL, Switzerland)

Data and experimental help


Dr. Reto Cantieni (RCI dynamic, Switzerland)
Dr. Armin Ziegler (Ziegler Consultants, Switzerland)
Robert Westgate (Ph.D. candidate, University of Sheffield, UK)
Dr. Claudio Pirazzi (INGENI SA., Switzerland)
Dr. Samuel Vurpillot (External Lecturer, EPFL, Switzerland)
Irwanda Laory (Ph.D. Candidate) (EPFL, Switzerland)
Joerg Hartmann (City of Lucerne, Switzerland)
Hansruedi Berchtold (Berchtold+Eicher Bauingenieure AG, Switzerland)
Dr. Sreenivas Alampali (NY DOT, USA)
Sébastien Apothéloz (City of Lausanne, Switzerland)
Aitor Ibarrola (City of Lausanne, Switzerland)
Jean-François Laflamme (Quebec transportation ministry, MTQ, Canada)

Master students
Romain Pasquier (Ph.D. student, M.Sc., EPFL, Drexel¹)
Marie Texier (M.Sc., EPFL, McGill²)
Olivier Egger (M.Sc. candidate, EPFL, Princeton³)
Alban Nguyen (M.Sc. candidate, EPFL)

Interns
Emma Hoffman
Youssef Mezdani (M.Sc., EPFL)

Proofread
Ian F. C. Smith
Romain Pasquier
Austin Ivey
Belinda Bates
¹ Master project conducted in collaboration with Drexel University, USA
² Master project conducted in collaboration with McGill University, Canada
³ Master project conducted in collaboration with Princeton University, USA


Support
My family
Dr. Alexis Kalogeropoulos
Daphné Dethier
IMAC-laboratory and EPFL colleagues

Software
Part of the work done in this thesis used an academic license of the ANSYS software
from ANSYS Inc.

Thesis project funding


Swiss National Science Foundation (SNF) under contract no. 200020-117670/1

Abstract
Most infrastructure in the western world was built in the second half of the 20th century.
Transportation structures, water-distribution networks and energy-production systems are
now aging, and this leads to safety and serviceability issues. In situations where conservative
routine assessments fail to justify adequate safety and serviceability, advanced structural-evaluation
methodologies are attractive. These advanced methodologies employ measurements
to help understand structural behavior more accurately. A better understanding typically
results in more accurate reserve-capacity evaluations, along with other advantages. Many of
the available approaches originate from the fields of statistics, signal processing and control
engineering, where it is common to assume that modeling errors can be treated as Gaussian
noise. Such an assumption is not generally applicable to civil infrastructure because, in these
systems, systematic biases in models can be significant and their effects often vary with
location. Most importantly, little is known about the dependencies between the errors.
This thesis includes a proposal for a model-based data-interpretation methodology that builds
on the concept of probabilistic falsification. This approach can identify the properties of
structures in situations where there are aleatory and systematic errors, without requiring
definitions of dependencies between uncertainties. Prior knowledge is used to bound the
ranges of the parameters to be identified and to build an initial set of possible model instances.
Predictions from each model are then compared with measurements gathered on-site so that
inadequate model instances and model classes are falsified using threshold bounds. These
bounds are defined using measurement and modeling uncertainties. The probability of
discarding a valid model instance is regulated using the Šidák correction to account for
multiple measurements.
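The falsification logic summarized above can be sketched in a few lines. The following is a minimal illustration, not the thesis implementation: it assumes independent, zero-mean Gaussian combined uncertainties so that a closed-form quantile can be used, and all function names, predictions and measurement values are hypothetical.

```python
from statistics import NormalDist

def sidak_thresholds(sigmas, phi=0.95):
    """Per-measurement threshold bounds such that a valid model instance is
    kept with joint probability at least phi (Sidak correction), assuming
    independent zero-mean Gaussian combined uncertainties (illustrative)."""
    p = phi ** (1.0 / len(sigmas))             # Sidak-corrected individual probability
    z = NormalDist().inv_cdf((1.0 + p) / 2.0)  # two-sided Gaussian quantile
    return [(-z * s, z * s) for s in sigmas]

def is_falsified(predictions, measurements, bounds):
    """A model instance is falsified if ANY residual exceeds its bounds."""
    residuals = [g - y for g, y in zip(predictions, measurements)]
    return any(r < lo or r > hi for r, (lo, hi) in zip(residuals, bounds))

# Hypothetical predictions of three model instances at two sensor locations
models = [[1.0, 2.1], [1.4, 2.0], [3.0, 0.5]]
y = [1.2, 2.0]                                  # hypothetical measured values
bounds = sidak_thresholds([0.3, 0.3], phi=0.95)

# Candidate-model set: the instances that the data cannot falsify
candidates = [m for m in models if not is_falsified(m, y, bounds)]
```

The per-measurement rectangular bounds mirror the description above: thresholds are defined from combined modeling and measurement uncertainties and widened by the Šidák correction, so no joint error density needs to be specified.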
A new metric, called expected identifiability, probabilistically quantifies the utility of
monitoring interventions. Expected identifiability quantifies the effect of hypotheses and choices
such as the uncertainty level, model-class refinement, measurement locations, measurement
types and sensor accuracy. Results show that using too many measurements may decrease
data-interpretation performance.
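The computation behind such a metric can be sketched as a Monte Carlo loop: draw an assumed true model, simulate measurements by adding error realizations, run the falsification step, and summarize the distribution of surviving candidate-model counts. This is a hedged illustration under the same simplifying Gaussian-independence assumption as above; all names and numbers are hypothetical.

```python
import random
from statistics import NormalDist

random.seed(0)

def sidak_bounds(sigmas, phi=0.95):
    # Per-measurement bounds with Sidak correction (illustrative Gaussian case)
    p = phi ** (1.0 / len(sigmas))
    z = NormalDist().inv_cdf((1.0 + p) / 2.0)
    return [(-z * s, z * s) for s in sigmas]

def n_candidates(models, y, bounds):
    # Count model instances whose residuals all fall within the bounds
    return sum(
        all(lo <= gi - yi <= hi for gi, yi, (lo, hi) in zip(g, y, bounds))
        for g in models
    )

# Hypothetical model-instance predictions at two sensor locations
models = [[1.0 + 0.1 * i, 2.0 - 0.05 * i] for i in range(20)]
sigmas = [0.3, 0.3]
bounds = sidak_bounds(sigmas)

# Monte Carlo over simulated measurements: pick an assumed true instance,
# add an error realization, and record the surviving candidate-set size
counts = []
for _ in range(500):
    true_model = random.choice(models)
    y_sim = [t + random.gauss(0.0, s) for t, s in zip(true_model, sigmas)]
    counts.append(n_candidates(models, y_sim, bounds))

# One possible summary: the candidate-set size not exceeded with probability 0.95
counts.sort()
q95 = counts[int(0.95 * len(counts)) - 1]
```

Repeating this for alternative sensor layouts or uncertainty levels gives the kind of comparison the metric is meant to support: a layout is preferable when its distribution of candidate-set sizes is concentrated on smaller values.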
Probabilistic model falsification, expected identifiability and measurement-system design
methodologies are applied to several full-scale case studies. The work shows that data
interpretation is limited by factors such as the robustness of results with respect to inaccurate
uncertainty definitions and the exponential complexity of exploring high-dimensional solution
spaces. Paths for tackling these issues are proposed as guidance for future research.

Keywords
System identification, falsification, uncertainties, infrastructure diagnosis, data interpretation,
measurement-system design, bridge monitoring, performance evaluation


Résumé
La majorité des infrastructures des pays industrialisés ont été construites dans la seconde
partie du 20ème siècle. Ces infrastructures de transport, de distribution d'eau et de production
d'énergie vieillissent et montrent aujourd'hui des signes de détérioration. Durant les dernières
années, ces signes ont remis en cause l'utilisation et la sécurité de ces ouvrages. Lorsque les
contrôles traditionnels ne suffisent pas à justifier la sécurité et la fonctionnalité des
infrastructures, des méthodes avancées basées sur la mesure du comportement structural des ouvrages
peuvent permettre de mieux comprendre leur fonctionnement et leur capacité réelle. La majorité
de ces méthodes avancées vise la minimisation des différences entre les valeurs mesurées
in situ et celles prédites à l'aide de modèles. Plusieurs de ces techniques d'interprétation des
données proviennent du domaine de la statistique, du traitement de signal et du contrôle
des systèmes, où il est courant de traiter les erreurs de modélisation comme des variables
aléatoires, gaussiennes et indépendantes. Cette hypothèse sur les erreurs n'est pas toujours
satisfaite lorsqu'elle est appliquée aux ouvrages du génie civil car les relations de dépendance
entre les incertitudes sont souvent inconnues.
Cette thèse présente une méthode d'interprétation probabiliste basée sur la falsification de
modèles. L'approche permet l'identification des propriétés des structures, sans avoir à faire
d'hypothèse concernant la dépendance entre les incertitudes. Les connaissances à disposition
sont utilisées afin de générer un ensemble de modèles pouvant expliquer le comportement
de l'ouvrage étudié. Les valeurs prédites par chacun des modèles sont comparées avec les
mesures afin d'écarter les modèles dépassant des valeurs seuils. Les seuils de falsification sont
définis par la combinaison des incertitudes provenant des mesures ainsi que des modèles et
la probabilité d'écarter un modèle adéquat est contrôlée en utilisant la correction de Šidák.
Une méthode complémentaire est proposée afin de quantifier l'utilité de mesurer le
comportement des structures en fonction des hypothèses et choix tels que les incertitudes, le niveau
de détail des modèles ainsi que les emplacements et types de mesures. Les résultats montrent
que l'utilisation d'un nombre trop important de mesures peut diminuer la performance de
l'interprétation des données.
Le potentiel des méthodes proposées est illustré par plusieurs cas d'études. L'ensemble de
ces travaux ont permis de déterminer certains facteurs limitant l'interprétation des données,
soit la robustesse des résultats face à une mauvaise évaluation des incertitudes ainsi que la
complexité de l'exploration des espaces multidimensionnels. Des pistes de solutions sont
proposées afin de guider les recherches futures.

Mots-clés
Identification des systèmes, falsification, incertitudes, diagnostic des infrastructures,
interprétation des données, systèmes de mesures, mesure des structures, évaluation de performance

Contents
Acknowledgements  iii
Abstract (English/Français)  vii
List of figures  xiv
List of tables  xxii
Notation  xxv
Terms and Definitions  xxix
Introduction
1 Literature review
  1.1 Infrastructure monitoring
  1.2 System identification
    1.2.1 Residual minimization
    1.2.2 Bayesian inference  11
    1.2.3 Model falsification  14
    1.2.4 Hypothesis testing  15
    1.2.5 Multiple-model approaches  19
  1.3 Uncertainties involved in data interpretation  19
    1.3.1 Uncertainty representation  19
    1.3.2 Common sources of uncertainties in structural identification  21
    1.3.3 Uncertainty combination  24
  1.4 Measurement-system design  25
  1.5 Exploration of high-dimensional solution spaces  26
    1.5.1 Surrogate models  26
    1.5.2 Space exploration approaches  27
  1.6 Conclusions  28
2 Error-domain model falsification  31
  2.1 Introduction  31
  2.2 Methodology  32
  2.3 Uncertainty dependencies  36
  2.4 Combination of uncertainties for system identification  36
    2.4.1 Secondary-parameter uncertainty  37
    2.4.2 Uncertainty combination  38
  2.5 Generation of model instances  38
    2.5.1 Grid-based sampling  38
    2.5.2 Surrogate models  40
    2.5.3 Random walk  40
  2.6 Model falsification summary  42
    2.6.1 Illustrative example  42
  2.7 Model falsification using time-domain data  47
  2.8 Compatibility between error-domain model falsification and Bayesian inference  49
    2.8.1 Illustrative example  50
  2.9 Conclusions  51
3 Expected identifiability - Predicting the usefulness of measurements  53
  3.1 Introduction  53
  3.2 Generation of simulated measurements  54
    3.2.1 Correlations between uncertainties  55
  3.3 Computation of expected identifiability  57
    3.3.1 Expected reduction in the number of candidate models  57
    3.3.2 Expected reduction in the prediction ranges  58
    3.3.3 Expected identifiability summary & general framework  58
  3.4 Conclusions  59
4 Measurement-system design  61
  4.1 Introduction  61
  4.2 Measurement systems and over-instrumentation  62
  4.3 Optimization method for measurement-system design  63
    4.3.1 Methodology  64
  4.4 Measurement-system design for time-domain data  66
    4.4.1 Methodology  66
    4.4.2 Summary of time-domain monitoring-system optimization  69
  4.5 Conclusions  69
5 Case studies  71
  5.1 Introduction  71
  5.2 Cantilever beam example  73
    5.2.1 Comparison of system-identification approaches  74
    5.2.2 Summary of results  80
    5.2.3 Discussion  80
    5.2.4 Case study conclusions  81
  5.3 Langensand Bridge  82
    5.3.1 Structure description  82
    5.3.2 Structural identification using static data  83
    5.3.3 Usage of surrogate models  88
    5.3.4 Structural identification using time-domain data  90
    5.3.5 Prediction of the usefulness of monitoring  100
    5.3.6 Optimization of measurement-system configurations  105
    5.3.7 Case-study conclusions  110
  5.4 Grand-Mère Bridge  111
    5.4.1 Structure description  111
    5.4.2 Model-class descriptions  112
    5.4.3 Quantification of the effect of model simplifications on prediction errors  114
    5.4.4 Quantification of the effect of omitting secondary structural elements  117
    5.4.5 Structural identification using time-domain data  121
    5.4.6 Case study conclusions  126
  5.5 Tamar Bridge  130
    5.5.1 Structure and tests description  130
    5.5.2 Structural identification using time-domain data  131
    5.5.3 Optimization of measurement-system configurations  140
    5.5.4 Case study conclusions  143
  5.6 Leak detection in pressurized pipe networks  144
    5.6.1 Leak-detection methodology  144
    5.6.2 Network description  145
    5.6.3 Optimization of measurement-system configurations  147
    5.6.4 Case study conclusions  151
  5.7 General conclusions of the case studies  152
6 Conclusions  153
  6.1 Error-domain model falsification  153
  6.2 Expected identifiability  153
  6.3 Measurement-system design  154
  6.4 Case studies  154
  6.5 Discussion and limitations  154
    6.5.1 Definition of an upper bound for uncertainty  154
    6.5.2 Redundancy in measurement systems  155
    6.5.3 Data interpretation and high-dimensional solution spaces  155
    6.5.4 Reserve capacity evaluation of existing structures  155
7 Future work  157
  7.1 Diagnosis robustness toward inaccurate uncertainty definitions  157
    7.1.1 Measurement-system design and diagnosis errors  157
    7.1.2 Benchmarks for quantifying modeling uncertainties  160
    7.1.3 Uncertainties due to the interactions between primary and secondary parameters  160
    7.1.4 Imprecise probabilities  161
  7.2 Material properties spatial variability and stochastic fields  161
  7.3 Sampling high-dimensional solution spaces  162
    7.3.1 Falsification-limit sampling  162
    7.3.2 Greedy algorithm applicability test  163
  7.4 Model-class validity exploratory tool  164
  7.5 Infrastructure management perspectives  164
    7.5.1 Fatigue remaining-life analysis supported by structural performance monitoring  165
    7.5.2 Measurement system overall cost optimization  165
A Extended uniform distribution  169
Bibliography  183
Academic Curriculum  185
List of Publications  187

List of Figures
1
2
3
4

Average infrastructure investments in OECD countries. Adapted from [54]. .


World infrastructure maturity. Adapted from [163]. . . . . . . . . . . . . . . .
General framework for evaluation of existing structures. Adapted from [12].
Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

1.1 A comparison between the state of knowledge in the eld of sensing technologies and
in data interpretation. The relative size of each circle describes the extent of the current
state of knowledge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.2 Likelihood function based on a 2 -norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.3 Generalized Gaussian distribution for {1, 2, 10, }. Adapted from [225]. . . . . . . . . .
1.4 Comparison of the probability contained in rectangular and ellipsoidal condence regions when varying the correlation between two random variables. In all three cases, the
correlation used to compute the ellipsoidal region is set to 0.9. However, the correlation
used to generate realizations of X is a) 0.9, b) 0.4 & c) -0.9. Only the rectangular regions
include in all situations a proportion of the sample at least equal to the target (0.95). . . .
1.5 Comparison of the area enclosed in rectangular (a) and ellipsoidal (b) condence regions
including a target probability content of 0.95. The correlation used to compute the
ellipsoidal region is set to 0. However, the correlation used to generate realizations of X is
0.9. The area enclosed in the ellipsoidal region is 15% larger than the region dened by
rectangular bounds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.6 pdf of a random variable X , where a condence interval bounded by Tl ow and Thi g h
contains a probability ]0, 1]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.7 Uncertainty on a variable X can be represented by probability bounds (Tl ow and Thi g h )
without dening a probability density function. . . . . . . . . . . . . . . . . . . . . . . . . .
1.8 Propagation of model-parameter uncertainties using Monte Carlo sampling. Adapted
from [106] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.9 Curvilinear distribution used to describe a uniform distribution where bounds are inexactly dened. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.10 Central-composite design for three parameters where each axis represents the normalized
parameter range. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.11 Conceptual illustration of random walk sampling. . . . . . . . . . . . . . . . . . . . . . . .
1.12 Timeline of key contributions made in scientic elds related to data-interpretation.
These contributions are a non-exhaustive survey of the vast literature available in each
eld. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.13 Schematic comparison of current data-interpretation techniques. . . . . . . . . . . . . . .

1
2
3
4

9
12
12

17

18
20
21
24
24
27
28

29
29

2.1 The combined probability density function describes the outcome of the random variable
Uc,i , i.e. the combination of modeling and measurement uncertainties. . . . . . . . . . . . 33

xv

List of Figures

2.2 Threshold denition. Threshold bounds Tl ow,i and Thi g h,i are found separately for each
combined uncertainty for a probability 1/2 . When these threshold bounds are projected on the bi-variate pdf, they dene a rectangular boundary that is used to separate
candidate and falsied models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3 Error dependency between degrees of freedom in a nite element beam model. If errors
are random (a), predicted displacements at any location are independent of each other
and appear to vary around the real displacement. On the other hand, systematic effects
introduce dependencies in the error structure (b). For example, if the boundary condition
is not adequately modeled, the displacement may be biased at several location. . . . . . . 36
2.4 Propagation of secondary-parameter uncertainty in a physical model to obtain prediction
uncertainty. Uncertainties in secondary model-parameter values are propagated through
]) while primary parameters are kept to their most likely
the template model g([,
Several thousand evaluations of the template model are made to compute the
values .
prediction uncertainty due to uncertain secondary-parameter values. . . . . . . . . . . . . 37
2.5 An initial set of model instances is generated based on a grid, where the template model is
evaluated using each parameter set. The grid is dened using the minimal and maximal
bounds for parameter ranges, dened based on engineering judgment. Discretization
intervals are also provided to specify the sampling density. . . . . . . . . . . . . . . . . . . 39
2.6 Initial model set organized in a n p -parameter grid used to explore the space of possible
solutions. inadequate model instances are falsied using Equation 2.8 and models that
are not falsied are classied as candidate models. . . . . . . . . . . . . . . . . . . . . . . . 39
2.7 Two-dimensional likelihood function used to generate parameter samples. The likelihood L cm is maximal when the observed residuals o,i are within threshold bounds
[Tl ow,i , Thi g h,i ]. This example was created using Equation 2.11 with a shape function
parameter = 10. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.8 Flowchart describing error-domain model falsication . . . . . . . . . . . . . . . . . . . . . 42
2.9 Composite beam cross-section. The structure studied is a simply supported, ten meter
long, composite beam. The beam is modeled by shell elements using the nite-element
method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.10 Combined uncertainty probability density function for mid-span and quarter-span comparison points. For each comparison point, uncertainties are separated in modeling
Umod el ,i and measurement Umeasur e,i uncertainties. These are subtracted from each
other to obtain the combined uncertainties Uc,1 and Uc,2 . . . . . . . . . . . . . . . . . . . 44
2.11 The two combined uncertainty pdf s are presented in a bivariate probability density
function. This bivariate pdf is used to dene the threshold bounds (Tl ow,i , Thi g h,i ) including a target probability = 0.95. Minimal and maximal bounds for each location are
found numerically by satisfying Equation 2.14 while minimizing the area enclosed in the
projection of the threshold bounds. Model instances are falsied if, for any comparison
point, the difference between predicted and measured values (gi () y i ) are outside the
rectangular threshold bounds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.12 Representation of the initial model set with candidate and falsied models. By comparing
the difference between model predictions and measurements with threshold bounds, 58
model instances out of 100 are falsied. The 42 candidate models are outlined the shaded
region. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

xvi

List of Figures

2.13 Exploration of the model instance space using random walk MCMC. The same 42 candidate models were found however it required 22% less samples than with an exhaustive
grid sampling. The vertices between samples corresponds to the path followed by the
random walk. The starting point is highlighted by a circle. . . . . . . . . . . . . . . . . . . . 47
2.14 An illustration of the effects of model simplications and omissions on time-domain
structural identication. Graph a) represents the true displacement of a structure over
time. When monitoring a system, noise is usually recorded in addition to the system
behavior (b). Simulations of the system behavior are inevitably inexact (c). When comparing the time-domain measured and predicted signals (d), a bias is present. In such a
situation, the residual cannot be described using stationary Gaussian noise. . . . . . . . . 48
2.15 Two-dimensional  -norm likelihood function generated from Equation 2.13 using
= 100. The projection of vertical walls on the horizontal plane corresponds to threshold
bounds. Large values of can be used to approximate the  -norm likelihood function.
This can be used as an alternative to 2 -norm Bayesian inference. . . . . . . . . . . . . . . 50
2.16 Posterior pdf for the illustrative example presented in 2.6.1. This posterior pdf is
computed using the  -norm likelihood function presented in Figure 2.15. The prior
distribution is set as uniform over the range of 190-212 GPa for steel and 15-45 GPa for
concrete Youngs modulus. The candidate model region found corresponds to the set
found using error-domain model falsication. . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 Schematic representation of the inclusion of modeling and measurement errors in the
generation of simulated measurements. Modeling error is added to the predicted behavior
of a model instance to obtain the assumed true behavior. Simulated measurements (y s,i )
are obtained by adding a measurement error to the assumed true behavior obtained
previously. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2 Illustration of the process of simulating measurements based on the predictions of model
instances and on uncertainties. Note that the generation of simulated measurements
include the correlation between expected residual pdf s. . . . . . . . . . . . . . . . . . . . . 55
3.3 Qualitative reasoning description used to dene the uncertainty correlation. The correlation value is presented on the horizontal axis. The vertical axis corresponds to the
probability of occurrence a given correlation value depending on its qualitative description, "low", "moderate", "high", "positive" and "negative". . . . . . . . . . . . . . . . . . . 56
3.4 a) Example of empirical cumulative distribution function (cdf ), F C M (nC M ), used to compute the expected size of the candidate model set. F C M (nC M ) depends on {,Uc , n m , },
the target identication reliability, uncertainties, the measurement conguration used
and uncertainty dependencies. In this example, there is a probability = 0.95 of falsifying at least 60% of the models (F 1C M (0.95) = 40%) and a probability = 0.50 of falsifying
at least 75% of the models (F 1C M (0.50) = 25%). b) Effect of uncertainties, dependencies and target identication reliability on F C M (nC M ). There is no unidirectional trend
associated with the choice of measurement congurations. . . . . . . . . . . . . . . . . . . 58
3.5 Flowchart representing the steps involved in the computation of the expected identiability. These metrics quantify the utility of monitoring for better understanding the behavior
of a system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

xvii

List of Figures

4.1 Schematic representation of the phenomena involved in the design of measurement systems. The total number of candidate models decreases as the number of measurements
increases until the point where additional observations are not useful (solid curve). Over-instrumentation is due to the combined effects of the increased amount of information
and threshold adjustments (dashed curves). . . . 62
4.2 Conceptual example used to illustrate the situation where additional measurements can
lead to over-instrumentation. . . . 63
4.3 Flowchart summarizing the steps involved in the optimization of measurement-system performance. . . . 64
4.4 An example of the growth in the number of iterations required by the Greedy algorithm
compared with the solution-space growth. . . . 65
4.5 Schematic representation of the effect of the number of measurement locations on the
mode-match criterion. The initial configuration using all sensors results in a 100% mode
match (a), the same match may be possible with a configuration using fewer sensors
(b), and a tradeoff between information and costs may be fixed with fewer sensors (c).
The mode-match criterion quantifies the capacity to find a correspondence between
predicted and measured mode shapes. . . . 68
4.6 Histograms representing the relative frequency of usage of each sensor. a) This figure
represents the result obtained after the first greedy optimization loop. Measurement
locations used with a frequency less than q are removed during the subsequent greedy
optimization loop. b) This figure represents the final sensors removed after two or more
optimization loops. . . . 69
4.7 General framework describing the measurement-system optimization methodology. . . . 70
5.1 True and idealized cantilever beams. Parameters to be identified using the idealized
beam are the Young's modulus (E) and the value of the vertical force applied (F). . . . 73
5.2 Comparison of parameter values identified using least-squares parameter identification
and Bayesian inference with the correct parameter value for Young's modulus (E) and
vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this
scenario, there are no systematic errors and uncertainties are rightfully assumed to be
independent. The labels correct and biased identification apply to Bayesian inference. . 76
5.3 Comparison of the candidate model set found using error-domain model falsification
with the true parameter value for Young's modulus (E) and vertical force (F). The
number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding
uncertainty dependencies. The shaded area represents the candidate model set. . . . 76
5.4 Comparison of parameter values identified using least-squares parameter identification
and Bayesian inference with the true parameter value for Young's modulus (E) and
vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this
scenario, uncertainties are wrongfully assumed to be independent. The labels correct
and biased identification apply to Bayesian inference. . . . 78
5.5 Comparison of parameter values identified using Bayesian inference with the true parameter value for Young's modulus (E) and vertical force (F). The number of measurements
varies from 1 (a) to 50 (d). The shaded area represents the region including the 95%
credible regions obtained when varying the correlation from 0 to 0.99, for all covariance
terms simultaneously. . . . 78


5.6 Comparison of the candidate model set found using error-domain model falsification
with the true parameter value for Young's modulus (E) and vertical force (F). The
number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding
uncertainty dependencies. The shaded area represents the candidate model set. . . . 79
5.7 Comparison of parameter values identified using least-squares and Bayesian inference
with the true parameter value for Young's modulus (E) and vertical force (F). The
number of measurements varies from 1 (a) to 50 (d). The systematic bias in model
predictions and in measurement-error estimation are not recognized. The labels correct
and biased identification apply to Bayesian inference. . . . 79
5.8 Comparison of the candidate model set found using error-domain model falsification
with the true parameter value for Young's modulus (E) and vertical force (F). The
number of measurements varies from 1 (a) to 50 (d). The systematic bias in model
predictions and in measurement-error estimation are not recognized. . . . 80
5.9 Langensand Bridge elevation representation. . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.10 Langensand Bridge cross section. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.11 Test-truck layout for load cases 1 to 5 (Phase 1) for the Langensand Bridge . . . . . . . . . 83
5.12 Langensand Bridge finite-element model (Phase 1). . . . 84
5.13 Relative importance of each primary parameter on the model predictions of the Langensand Bridge. . . . 85
5.14 Correlation between predictions for Langensand Bridge due to uncertainties in secondary-parameter values. Results obtained from the Langensand Bridge model do not reflect the
common assumption of independence. . . . 87
5.15 Example of uncertainty relative importance for the Langensand Bridge. Except for
strain, the dominant uncertainty sources are model simplification, secondary-parameter
uncertainty and measurement repeatability. . . . 88
5.16 Pairwise comparison of parameters found in the candidate model set for the Langensand
Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.17 Example of uncertainty relative importance when using a surrogate model to evaluate the
initial model set. The contribution of the surrogate model approximation (uncertainty
source no.7) is small compared with other sources of uncertainties. . . . . . . . . . . . . . 90
5.18 Accelerometer layout for the Langensand Bridge. Each triangle represents a recording
point. Labels Ref. represent reference sensors. . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.19 Average power spectral density function (PSD) of the recordings made on the Langensand
Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.20 Mode shapes of the Langensand Bridge computed from ambient vibration monitoring.
Large oscillations on the left side of the structure correspond to walkway vibration modes. 92
5.21 Comparison of two recordings taken in the centre of the Langensand Bridge along the 3
axes, with and without traffic. . . . 94
5.22 Relative importance of uncertainty sources for the Langensand Bridge. The dominant
component of the combined uncertainty is the measurement variability. . . . . . . . . . . 96
5.23 Correlation between the predicted frequencies of different natural modes. Predictions
are obtained by varying secondary-parameter values . . . . . . . . . . . . . . . . . . . . . . 97
5.24 Mode shapes computed from the Langensand Bridge model and used for the identification 98
5.25 Comparison of the model prediction scatter and measured value for the first two frequencies for Langensand Bridge. . . . 98
5.26 Pairwise comparison of parameters found in the candidate model set for the Langensand
Bridge. Candidate models were found using dynamic data. . . . . . . . . . . . . . . . . . . 99


5.27 Cumulative distribution function (F_CM) representing the probability of obtaining a
maximum candidate model set size. The polygonal sign corresponds to the number of
candidate models obtained after falsifying inadequate models using on-site measurements (see Section 5.3.2). . . . 101
5.28 Cumulative distribution functions (F_PR) representing the probability of obtaining a
maximum prediction range for the first five natural frequencies of the structure. Note
that the width of each graph corresponds to the prediction range from the initial model
set. The polygonal signs correspond to the prediction ranges obtained after falsifying
inadequate models using on-site measurements. . . . 103
5.29 The correlation choice proposed in the qualitative reasoning scheme (Section 3.2.1) is varied
by ±0.2. This variation is used to test the robustness of the expected identifiability with
respect to uncertainty dependency choice. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.30 Comparison of the cumulative distribution function of the candidate model set expected
size obtained for several assumptions of correlation. The vertical dashed line corresponds
to the number of candidate models obtained using on-site measurements (see Section 5.3.2).
Assuming that uncertainties are independent does not lead to conservative predictions. . 104
5.31 Langensand Bridge cross-section details for construction phase 2. . . . . . . . . . . . . . . 105
5.32 Langensand Bridge cross-section and potential sensor layout to be used for future monitoring of the Langensand Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.33 Load-case layout for the Langensand Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.34 Measurement-system design multi-objective optimization results for the Langensand
Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.35 Grand-Mere Bridge elevation view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.36 Grand-Mere Bridge cross-section detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.37 Grand-Mere Bridge finite-element model general overview . . . 112
5.38 Grand-Mere Bridge cross-section and isometric view of the simplified shell-based model 113
5.39 Grand-Mere Bridge cross-section and isometric view of the shell-solid model . . . . . . . 113
5.40 Grand-Mere Bridge cross-section and isometric view of the solid-based model . . . . . . 114
5.41 Load-case description for Grand-Mere Bridge . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.42 Relative error for vertical displacement predictions due to model simplifications. . . . 116
5.43 Relative error for rotation predictions around Z-axis due to model simplifications. . . . 116
5.44 Relative error in strain prediction along X-axis due to model simplifications. . . . 116
5.45 Relative error for vertical displacement prediction due to secondary-elements omission. 118
5.46 Relative error for rotation prediction around Z-axis due to secondary-elements omission. 119
5.47 Relative error for strain prediction along X-axis due to secondary-elements omission. . . 119
5.48 Accelerometer layout for Grand-Mere Bridge monitoring system. . . . . . . . . . . . . . . 121
5.49 Averaged singular values of the power spectral density matrices for Grand-Mere Bridge. . 122
5.50 Measured mode shapes and frequencies for Grand-Mere Bridge . . . . . . . . . . . . . . . 122
5.51 Primary parameters to be identied for the Grand-Mere Bridge. . . . . . . . . . . . . . . . 123
5.52 Uncertainty sources relative importance for Grand-Mere Bridge . . . . . . . . . . . . . . . 125
5.53 Comparison between the measured and predicted frequencies for modes 1, 3, 4 and 6 for
the Grand-Mere Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.54 Pairwise representation of the candidate model set parameter values for Grand-Mere
Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.55 Tamar Bridge model and accelerometer layout for Tamar Bridge. . . . . . . . . . . . . . . . 130
5.56 Power spectral density showing the modes extracted. . . . . . . . . . . . . . . . . . . . . . . 131
5.57 Measured mode shapes and frequencies for Tamar Bridge. . . . . . . . . . . . . . . . . . . 132


5.58 MAC value relative frequency quantifying the correspondence between predicted and
measured mode shapes for Tamar Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.59 Schematic representation of parameters to be identied for Tamar Bridge. . . . . . . . . . 133
5.60 Primary-parameter relative importance for each mode of the Tamar Bridge . . . . . . . . 134
5.61 Tamar Bridge uncertainty relative importance. . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.62 Comparison of model prediction scatters with measured values for global and torsional
modes (modes number 1, 2, 3, 10, 12 and 14). . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.63 Comparison of model prediction scatters with measured values for vertical bending deck
modes (modes number 4, 5, 8, 11, 13, 16 and 18). . . . . . . . . . . . . . . . . . . . . . . . . 138
5.64 Pairwise representation of the candidate model set parameter values for Tamar Bridge. . 139
5.65 Cumulative distribution function (F_CM) showing the probability of a maximum candidate model set size for Tamar Bridge. The polygonal sign corresponds to the number of
candidate models obtained using on-site measurements. . . . 140
5.66 Measurement-system design multi-objective optimization results for Tamar Bridge. The
expected number of candidate models is reported for a probability of 0.50 (F^-1_CM(50%)). 141
5.67 The effect of the number of acceleration sensors on the mode-match criterion used to associate predicted and measured mode shapes. A minimum of 16 sensors is necessary to
satisfy the mode-match criterion with a target value of 0.99. . . . 142
5.68 Optimized accelerometer layout Q_opt for Tamar Bridge obtained using existing mode-shape data. This configuration with 16 sensors corresponds to the layout identified in
Figure 5.67. . . . 143
5.69 General framework for the detection of leaks in pressurized pipe networks. . . . . . . . . . 144
5.70 Schematic representation of the water distribution network studied . . . . . . . . . . . . . 146
5.71 Typical hourly averaged water consumption measured over one day . . . . . . . . . . . . . 147
5.72 Examples of simulated measurements for Lausanne fresh-water distribution network. . . 148
5.73 Relation between the expected number of candidate leak scenarios and the number of
flow measurements used. . . . 149
5.74 Relation between the radius including all leak scenarios and the number of flow measurements used. . . . 149
5.75 Optimized sensor configuration using 14 flow velocity measurements. . . . 150
5.76 Expected number of candidate leak scenarios identified for several leak intensities. . . . 150
5.77 Relation between the expected number of candidate leak scenarios and the number of
flow measurement points used, for a leak level of 25 L/min. . . . 151
7.1 Schematic representation of the relationship between the number of measurements used
for data interpretation and the probability of committing a Type-I diagnosis error, in case
of misevaluation of uncertainties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.2 Schematic representation of the relationship between the number of non-redundant
measurements used for data interpretation and the probability of a Type-II diagnosis
error, in case of misevaluation of uncertainties. . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.3 Schematic representation of the relationship between the number of measurements used
during data interpretation and the probability of either a Type-I or a Type-II error, in case
of misevaluation of uncertainties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.4 Two-dimensional stochastic fields representing the Young's modulus spatial variability in
concrete bridge decks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


7.5 Two-dimensional likelihood function (Equation 7.1), used to generate parameter samples
on the limit separating candidate and falsified models. The likelihood is maximal when
the observed residuals are equal to the threshold bounds [T_low,i, T_high,i]. This example
was created for shape-function parameter values of 10 and 0.8. . . . 162
7.6 Comparison of model instance space exploration using grid-based random walk and
falsification-limit sampling. The vertices between samples correspond to the path followed by the random walk. The shaded area is the candidate model set identified in the
example presented in Section 2.6.1. . . . 163
7.7 Sensor interaction relative importance quantifying the contribution of single sensor
removal compared to multiple sensors removal. . . . . . . . . . . . . . . . . . . . . . . . . . 164
7.8 Future work in relation to the general framework for structural evaluation of existing
structures presented in Figure 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
7.9 Future perspectives for measurement system and test setup design where the objective functions are money invested versus return on investments in terms of savings on
maintenance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
7.10 Framework representing steps leading to a measurement system overall cost optimization. 167
A.1 Extended uniform distribution that included several orders of uncertainty . . . . . . . . . 169


List of Tables
1.1 Relations between right and wrong diagnosis in hypothesis testing. . . . . . . . . . . . . . 15
2.1 Modeling and measurement uncertainty sources for the beam example . . . . . . . . . . . 43
5.1 Summary of aspects covered by each case-study . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2 Summary of the comparison of identification methodologies on the basis of their capacity
to provide correct identification for the cantilever beam example. . . . 81
5.3 Static measurements taken on the Langensand Bridge. Mean and standard deviation
represent the measurement variability obtained by repeating each load case three times. 83
5.4 Ranges and discretization intervals for parameters to be identified on the Langensand
Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.5 Secondary-parameter uncertainties for Langensand Bridge . . . . . . . . . . . . . . . . . . 85
5.6 Other uncertainty sources for Langensand Bridge . . . . . . . . . . . . . . . . . . . . . . . . 86
5.7 Comparison of frequency prediction ranges computed using the initial and the candidate
model sets. For these five modes, predictions made using the candidate model set lead to
reductions in prediction ranges from 55% up to 82% compared with predictions
made using the initial model set. . . . 89
5.8 Observed frequencies (f, in Hz) with their standard deviations (σ, same unit) for the Langensand Bridge. . . . 93
5.9 Secondary-parameter uncertainties for the identification of the Langensand Bridge using
dynamic data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.10 Other uncertainty sources for the Langensand Bridge. . . . . . . . . . . . . . . . . . . . . . 96
5.11 Qualitative evaluation of uncertainty correlation between comparison points for each
uncertainty source. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.12 Optimized measurement configurations are shown by a vertical set of symbols for a
given sensor type and location. The cost of the load-test along with the expected number
of candidate models computed for a probability of 95% are reported for each configuration. 109
5.13 Relative error in predicted natural frequencies (%) and losses in the MAC criterion due to
model simplifications. . . . 117
5.14 Relative error in natural frequencies (%) due to the exclusion of secondary structural elements. . . . 120
5.15 Values for parameters (3 for each parameter) used to create the initial model set for the
Grand-Mere Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.16 Secondary-parameter uncertainties for Grand-Mere Bridge. . . . . . . . . . . . . . . . . . . 124
5.17 Other uncertainty sources for Grand-Mere Bridge. . . . . . . . . . . . . . . . . . . . . . . . 124
5.18 Summary of observed frequencies for Tamar Bridge. . . . . . . . . . . . . . . . . . . . . . . 133
5.19 Secondary-parameter uncertainties for Tamar Bridge. . . . . . . . . . . . . . . . . . . . . . 135


5.20 Other uncertainty sources for Tamar Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . 135


5.21 Qualitative labels describing uncertainty correlation for the generation of simulated
measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.22 Optimized mode selections are shown by a vertical set of symbols. The expected number
of candidate models is computed for a probability of 0.50 (F^-1_CM(0.50)). . . . 142


Notation
Latin capital letters
B     Bayes factor
D     Data
H0    Null hypothesis
H1    Alternative hypothesis
I     Vector containing discretization intervals
L     Subset of simulated mode shapes
L     Likelihood function
N     Number of loops
N     Gaussian distribution
P     Probability
Q     Subset of sensors
Q     The true value for a quantity
T     Threshold lower and upper bounds
T     Multidimensional domain where threshold bounds are defined
U     Uncertainty source described by a random variable
X     Random variable
W     Weighting matrix

Latin lowercase letters


a               Time-domain data
g(), h()        Model class
l               Set of simulated mode shapes
                Norm
n_m, n_k, ...   Number
r               Matrix containing predicted values for several comparison points
r               Predicted value returned by a model
s               Vector containing dummy variables used for optimizing sensor configurations
t               Time
u               Matrix containing error realizations for several comparison points
v               Random integer number
y               Measured value
Greek capital letters

Parameter domain
Error domain
Covariance matrix
Vector containing quantities used during the computation of expected identifiability metrics
Random variable describing quantities used during the computation of expected identifiability metrics
Mode shape
Mode shape correspondence matrix

Greek lowercase letters

Probability of committing a Type-I diagnostic error [0, 1]
Parameter of the extended uniform distribution [0, 1]
Error instance
Order of a generalized Gaussian distribution
Physical parameter of a model
Standard deviation
Linear correlation coefficient [-1, 1]
Target probability content ]0, 1]
Target MAC value used to test the correspondence of mode shapes ]0, 1]
Target certainty used as a metric to quantify the performance of measurements ]0, 1]
Target mode-match criterion ]0, 1]
Target relative frequency of usage for sensors [0, 1]
Chi-squared distribution
Mode-match criterion

Indices
0           Refers to the number of evaluations of a template model
a           Refers to the number of dynamic data recording points
c           Refers to the combination of all uncertainty sources
CM          Refers to the number of candidate models
CR          Refers to the radius including candidate leak scenarios
CS          Refers to the number of candidate leak scenarios
i           Refers to prediction/measurement location
k           Refers to the number of instances in the initial model set
l           Refers to predicted modes
L           Refers to the number of terms in the subset of simulated mode shapes
lc          Refers to the number of load-cases
low, high   Refers to threshold lower and upper bounds
m           Refers to the number of measurements

o     Refers to observed values
opt   Refers to an optimal quantity
p     Refers to the number of primary parameters
PR    Refers to prediction range
q     Refers to the time-domain recording location
s     Refers to simulated measurements
sp    Refers to the number of secondary parameters
y     Refers to secondary parameters

Acronyms
AVM   Ambient vibration monitoring
cdf   Cumulative distribution function
COV   Coefficient of variation
DOF   Degree of freedom
EUD   Extended uniform distribution
FDD   Frequency domain decomposition
FEM   Finite-element method
MAC   Modal accordance criterion
pdf   Probability density function
PSD   Power spectral density
QR    Qualitative reasoning

Mathematical functions and symbols


diag()         Diagonal matrix
∀              For all
               Such that
Δ              Variation
∝              Proportional to
∈              An element of
∼              Probability distribution
e^x            Exponential function
(·)^T          Transpose
(·)^H          Complex conjugate transpose
f_X            Probability density function (pdf) of a random variable X
F_X            Cumulative distribution function (cdf) of a random variable X
F_X^-1         Inverse cumulative distribution function of a random variable X
Γ              Gamma function
arg min y(x)   Argument x that returns the minimum value for y(x)

∩        Intersection of sets
⊂        Subset
{ , }    A set
[ , ]    A vector, a matrix or an interval
] , [    An interval excluding bounds
|...|    Absolute value
|        Conditional probability
x̄        Mean of x
x̂        Best estimate of x
x        True or correct value for x
xEy      x × 10^y
R        Set of real numbers
N        Set of natural numbers
O        Big O notation

Terms and denitions

Candidate model set: Set of models that are compatible with observations, taking into account uncertainties involved in the model and measurements.
Comparison point: Quantity common to a model (prediction location and type) and a system (measurement location and type).
Confidence interval / region: Interval defined by T1 and T2 within which a given proportion in ]0, 1] of the realizations of a random variable X should lie, i.e. P(T1 ≤ X ≤ T2) equals that proportion. When X is a multi-variate random variable, the term interval is replaced by region.
Correlation: Degree of linear relationship between either quantities or random variables.
Credible interval / region: Refers to the interval defined by T1 and T2 for a Bayesian posterior pdf, within which the true quantity for a variable X should lie with a given confidence P(T1 ≤ X ≤ T2). When X is a vector of variables, the term interval is replaced by region.
Dependency: Relationship between either quantities or random variables. Dependencies can occur over time, space and for several quantities studied simultaneously at a same location.
Error: Difference between a quantity and a reference quantity.
Error structure: Set of relationships describing the magnitude and dependence between errors for several comparison points in a system.
Expected identifiability: Metric quantifying the performance of system identification for falsifying inadequate models.
Expected residual: Probabilistic description of the expected differences between predicted and measured values.
Falsification: Process of discarding hypotheses and models using empirical evidence.


Initial model set: Set of model instances generated prior to interpreting data.
Inference: Process of reaching logical conclusions from evidence.
Likelihood function: Function describing the likelihood of one or several parameters given observed data.
Model class: See template model.
Model instance: Set of parameters describing the state of the system.
Mode match: Metric describing the number of model instances that have mode shapes corresponding to reference mode shapes.
Posterior pdf: Result of a Bayesian inference where prior knowledge describing parameter values is updated using evidence.
Primary parameter: Parameter of a model that is the object of system identification.
Observed residual: Difference between predicted and measured values.
Secondary parameter: Parameter of a model that contributes to the prediction uncertainty.
System identification: Task of finding descriptions of systems that are compatible with observations.
Systematic error: A bias error that either remains constant or that varies in a predictable manner.
Template model: Physics-based model used to generate model instances through assigning values to its primary parameters (e.g. a representation of a civil structure using the finite-element method).
Threshold bounds: Bounds delimiting a confidence interval and used as criteria to falsify models.
Uncertainty: Description of the possible values that an error can take.
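The interplay between several of these terms (initial model set, observed residual, threshold bounds, candidate model set) can be illustrated with a minimal sketch of model falsification. This is an illustrative example only, not the implementation used in this thesis; all values and function names are hypothetical:

```python
# Minimal sketch of model falsification (illustrative only).
# A model instance is falsified when, at any comparison point, the observed
# residual (predicted - measured) falls outside the threshold bounds.

def falsify(initial_model_set, measurements, t_low, t_high):
    """Return the candidate model set: instances compatible with all measurements.

    initial_model_set: list of prediction vectors, one per model instance
    measurements:      measured values, one per comparison point
    t_low, t_high:     threshold bounds, one pair per comparison point
    """
    candidates = []
    for predictions in initial_model_set:
        residuals = [g - y for g, y in zip(predictions, measurements)]
        if all(lo <= r <= hi for r, lo, hi in zip(residuals, t_low, t_high)):
            candidates.append(predictions)  # instance survives falsification
    return candidates

# Hypothetical example: three model instances, two comparison points.
initial_model_set = [[10.2, 4.9], [11.8, 5.6], [9.1, 3.0]]
measurements = [10.0, 5.0]
t_low, t_high = [-1.0, -0.5], [1.0, 0.5]
cms = falsify(initial_model_set, measurements, t_low, t_high)
# Only the first instance survives: its residuals (0.2, -0.1) lie within bounds.
```

The second instance is falsified at the first comparison point and the third at the second comparison point, so neither enters the candidate model set.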

Introduction
System identification involves comparing models with measurements to identify the properties of systems. It plays a crucial role in the diagnosis, evaluation,
repair and replacement of civil infrastructure and other complex systems.

General situation and challenges


Most infrastructure in the western world was built in the second half of the 20th century.
Transportation infrastructure, water distribution networks and energy production systems are
now aging and this leads to serviceability and security issues. In the context of Switzerland, the
need for maintenance and replacement of public infrastructure represents annually 3.5% of
the gross domestic product (GDP) [203]. This figure matches the 2030 investment target fixed
by the Organization for Economic Co-Operation and Development (OECD) [164]. However,
maintaining current infrastructure quality, including planned development, is not expected
to be economically sustainable [203]. A recent study concluded that Switzerland needs
massive investments to maintain and improve its infrastructure [63].
For the Americas, the perspective is much worse. In the United States alone, ASCE estimated
that US$ 2.2 trillion are required to raise the condition of infrastructure to an acceptable level
[10]. This corresponds to a yearly investment of 3% of GDP. Figure 1 presents the average GDP
percentage of OECD countries going into infrastructure since 1980. This reflects a general
global trend of declining resources dedicated to infrastructure.

Figure 1: Average infrastructure investments in OECD countries, shown as the percent of GDP going to infrastructure from 1980 to 2005. Adapted from [54].

Figure 2 reports the results of a recent study on the maturity of infrastructure in several countries
[163]. It shows that most western countries have infrastructure in an advanced stage of maturity.
The OECD noted that by 2030 "...a larger effort will need to be directed towards maintenance
and upgrading of existing infrastructure and to getting infrastructure to work more efficiently"
[164]. This challenge involves prioritizing budget expenditures and investments by improving
the way structures are currently being evaluated.
Figure 2: World infrastructure maturity. Countries are ranked by their degree of maturity, from emerging to mature. Adapted from [163].

A general framework for evaluation of existing structures is presented in Figure 3. The first
step is to perform limit-state verifications using simplified conservative models. If at this level
the performance is adequate, no intervention is required because the structure conservatively
meets requirements. Otherwise, conservative provisions of simplified models can be reduced
using on-site investigations. Monitoring data can be used to improve knowledge related to
the behavior of structures (i.e. system identification). If these refined models lead to the
satisfaction of requirements, again, no intervention is required. Otherwise, decision makers
may either opt for interventions on structures, perform additional site investigations
or further refine limit-state verifications using reliability analyses. When the path of the
reliability analysis is chosen, the outcome provides indications about the necessity of interventions. An alternative is again to perform further site investigations in order to reduce the
conservatism in behavior models.
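The decision flow of this framework can be sketched as follows; the predicate names are hypothetical placeholders for the engineering verifications described above, and the loops back to further site investigation are omitted for brevity:

```python
# Sketch of the evaluation flow for existing structures (names are hypothetical).

def evaluate_structure(code_check_ok, refined_check_ok, reliability_ok):
    """Return the outcome of the structural evaluation framework.

    code_check_ok:    limit-state verification using simple conservative models
    refined_check_ok: verification using monitoring-refined behavior models
    reliability_ok:   outcome of a refined reliability analysis
    """
    if code_check_ok:
        return "no intervention"      # conservative models already satisfy requirements
    if refined_check_ok:
        return "no intervention"      # site investigation removed enough conservatism
    if reliability_ok:
        return "no intervention"      # reliability analysis shows adequate performance
    return "intervention required"    # all verification levels failed

# A structure failing the code check but passing after monitoring-based refinement:
outcome = evaluate_structure(False, True, False)  # -> "no intervention"
```

Each successive level is invoked only when the previous, more conservative one fails, which is what allows unnecessary interventions to be avoided.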
The goal of this framework is to provide a safe way to evaluate the condition of structures
while avoiding performing unnecessary interventions. This is justied not only by the direct
cost of strengthening interventions and replacements; it is also supported by the indirect
societal costs due to the unavailability of infrastructure. For instance, Xie and Levinson [238]
estimated that the indirect costs of re-routing trafc during the reconstruction of the I-35W
Bridge (USA) after its collapse in 2007, ranged from US$ 71 000 to US$ 220 000 per day.
When structural verications made using conservative practices do not meet requirements,
[Flowchart: limit-state verification of an existing structure. Apply code procedures on simple conservative models; if performance is adequate, no interventions. Otherwise, perform site investigation and use in-situ monitoring to improve behavior models; if performance is adequate, no interventions. Otherwise, refine the reliability analysis; if performance is still inadequate, interventions are required. The scope of this thesis covers the monitoring and model-improvement steps.]
Figure 3: General framework for evaluation of existing structures. Adapted from [12].

refined approaches can use structural behavior measurements to reduce conservatism. As outlined in Figure 3, the scope of this thesis includes identifying properties of complex systems as well as measurement-system analysis and design.

Traditional physics-based models of infrastructure behavior are usually conservative and may have poor predictive capabilities. Monitoring the displacement, rotation and dynamic properties of structures often helps to understand their behavior and to provide better prediction capabilities. Sensing technologies enable analyses to go beyond traditional engineering practice. This can also lead to better support for intervention prioritization and replacement avoidance. However, the bottleneck lies in data interpretation because the amount of data involved is often so large that it is financially and practically infeasible to process it manually. Furthermore, existing system-identification methodologies often rely on assumptions that may not be fulfilled when dealing with models of complex systems affected by systematic errors due to omissions and simplifications. In many cases, little is known about the dependence of errors over space, over time and between several quantities studied simultaneously at a same location.

These issues lead to the following scientific questions and objectives:
What is an appropriate probabilistic framework for structural identification and how does this framework influence the analysis and design of monitoring strategies?

What are the consequences on system identification of having incomplete information about uncertainty dependencies?

Objectives
1. Propose a system identification methodology that can be used in situations where the dependencies between uncertainties are only partially defined.
2. Propose a metric to quantify the utility of measurements for obtaining new knowledge about a system's physical properties.
3. Propose a methodology compatible with objectives 1 & 2 to analyze and design measurement-system configurations.
4. Test methodologies with data obtained on full-scale civil structures and systems.

Outline
The scientific questions and the objectives derived from them are addressed in five chapters. The first presents a literature survey of fields related to system identification. It shows that concepts issued from hypothesis testing can be used to identify probabilistically the properties of systems when uncertainty dependencies are incompletely defined. Hypothesis falsification is thus the core of the work presented here. The scientific contribution of the thesis builds around this central idea as illustrated conceptually in Figure 4, where each new chapter builds on all previous ones in a nested configuration.
[Figure content: nested boxes — Chapter 1: Literature review; Chapter 2: Error-domain model falsification; Chapter 3: Expected identifiability; Chapter 4: Measurement-system design; Chapter 5: Applications; surrounded by Chapter 6: Conclusion & limitations and Chapter 7: Future work.]
Figure 4: Thesis outline

Chapter 2 presents the error-domain model falsification approach. Chapter 3 describes how this methodology is used to propose a new quantitative metric (expected identifiability) predicting whether measuring is likely to improve our understanding of a system. Chapter 4 builds on the expected-identifiability metric to provide a methodology to evaluate the performance of measurement systems and to design efficient ones, including detection of over-instrumentation. Chapter 5 presents applications of the methodologies on an illustrative example and on full-scale civil systems. Finally, Chapter 6 contains conclusions and discusses the limitations of the approaches proposed. Promising concepts requiring further research are presented in Chapter 7.

1 Literature review

Summary
This chapter presents a literature review covering aspects of system identification and diagnostics related to civil infrastructure. Research made in other closely related domains is also presented. This includes fields such as probability interpretations, measurement-system design and high-dimensional solution-space sampling. This review shows that current system-identification methodologies either rely on assumptions that are seldom fulfilled for civil structures or rely on subjective choices not tied to a systematic methodology.

1.1 Infrastructure monitoring


Infrastructure monitoring has been present for decades in civil construction. Examples of static and dynamic monitoring applications using modern technologies can be found as early as the 1910s [42, 191, 192]. Roš [192] reported several monitoring applications made between 1910 and 1938 on bridges, dams and buildings. Several of them were signature structures such as the Schwandbach and Arve Bridges designed by the Swiss engineer Robert Maillart [193]. Another publication by Roš [194] reported monitoring applications on steel structures between 1925 and 1950. These early applications were dedicated to design-proofing tasks regarding the static and dynamic behavior of structures. Also, long-term monitoring was performed on several structures to better understand their behavior.

In the 1970s, monitoring was already extensively used to calibrate models. Several of these applications were reported by Mottershead and Friswell [154]. More recently, interest has shifted toward damage detection in structures [77, 202, 237, 244]. A branch of the research in damage detection uses model-free data-interpretation tools to look at trends in data and to identify anomalies in behavior [39, 124, 174, 175, 239]. Model-based data interpretation uses physics-based models to find what phenomenon can explain observations. This second type of interpretation technique has a larger explanatory capability; however, the costs associated with it are usually higher. A comprehensive comparison of model-free and model-based data interpretation is presented in [44].

Due to its greater potential for infrastructure-management support, this thesis focuses on model-based data interpretation. Other goals of monitoring include quantifying the performance of structures to identify reserve capacity [88, 134], quantifying the reliability of structures [73, 74, 221, 222], evaluating the condition of structures [46, 62, 228] and load-rating [128, 232].

During the last decades, measuring instruments evolved to a point where cheap and reliable sensing technologies are now commercially available [65, 120, 135, 216]. In spite of the large number of applications in civil engineering, our capacity to measure structures has outgrown our capacity to interpret data. Figure 1.1 schematically illustrates this concept, where the relative size of each circle corresponds to the state of knowledge in each discipline. The intersection between sensing technologies and data interpretation represents practical applications. Data interpretation was reported to be a limiting factor in successful infrastructure monitoring [43, 44]. One reason is that assumptions made by current system-identification methodologies are seldom met during practical applications. These assumptions and hypotheses are presented in Section 1.2 and concepts related to uncertainties are presented in Section 1.3. A literature survey related to measurement-system design is performed in Section 1.4 and another focusing on sampling techniques is presented in Section 1.5.

1.2 System identification

System identification (SI) is the task of finding descriptions of systems that are compatible with observations. Model-based SI uses physics-based models to infer parameter values. Several approaches used in the context of large-scale structures are presented in this section.

[Figure content: two overlapping circles labeled "Sensing technologies" and "Data-interpretation"; their intersection is labeled "Practical applications".]

Figure 1.1: A comparison between the state of knowledge in the field of sensing technologies and in data interpretation. The relative size of each circle describes the extent of the current state of knowledge.

1.2.1 Residual minimization

Residual minimization, also known as model calibration and as model updating, consists of adjusting the parameters of a model to minimize the difference between predicted and measured values. This approach is the most common data-interpretation technique and has been a research topic for several decades. Furthermore, it is still widely used in recent applications [8, 99, 103, 165, 166, 235].

A literature review covering applications of residual minimization made prior to 2009 was presented by Schlune and Plos [204]. Mottershead and Friswell [154] made a survey of model-updating techniques prior to 1993. In addition to the scientific publications reported in these papers, model calibration is commonly used by practicing engineers.

An assumption made by residual-minimization approaches is that the difference between predicted and measured values is governed by the choice of parameter values [155]. Furthermore, most approaches cited above are based on the minimization of the sum of the squares of the differences (errors) between predicted (g(θ)) and measured (y) values. The parameter values θ = [θ₁, ..., θ_np] found using a least-squares fit correspond to the most likely ones (θ̂). The variable n_p refers to the number of parameters to be identified. The simplest form of least-squares fit is presented in Equation 1.1, where n_m is the number of measurements. This methodology was developed in parallel by Gauss and Legendre in the 19th century [215, 220]. Other weighted versions of this methodology are also used to equalize the effect of amplitude [155].

θ̂ = arg min_θ Σ_{i=1}^{n_m} ( g_i(θ) − y_i )²    (1.1)
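As an illustrative sketch of Equation 1.1, the hypothetical two-parameter linear model g(θ), the parameter grid and the synthetic measurements below are chosen only for demonstration; the minimization is done by brute force over the grid rather than by any method advocated in this thesis.

```python
def g(theta, x):
    """Hypothetical two-parameter model: prediction at position x."""
    return theta[0] + theta[1] * x

def sum_of_squares(theta, xs, ys):
    """Objective of Equation 1.1, summed over the n_m measurements."""
    return sum((g(theta, x) - y) ** 2 for x, y in zip(xs, ys))

def fit(xs, ys, grid):
    """Return theta-hat, the grid point minimizing the sum of squared residuals."""
    return min(grid, key=lambda theta: sum_of_squares(theta, xs, ys))

# Synthetic, noise-free measurements generated with theta = (1.0, 2.0).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
grid = [(a / 10, b / 10) for a in range(31) for b in range(31)]
theta_hat = fit(xs, ys, grid)
print(theta_hat)  # (1.0, 2.0)
```

With noise-free data the residuals vanish at the generating parameter values, so the fit recovers them exactly; with real measurements the minimum is only the most likely point, as discussed below.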

Limits of applicability

The optimality of the parameter values identified holds if the residuals of the differences between predicted and measured values are distributed as a zero-mean independent Gaussian random variable. For civil structures, this assumption is seldom met because a model (g(θ)) is by definition an approximation of reality. Thus, as noted by Mahadevan et al. [111, 145, 146, 183], the assumption of independence may not be fulfilled due to the systematic bias present in models. McFarland [144] mentioned that "While the assumption that the errors are independent is the simplest and most convenient, it does not necessarily take proper account of the amount of information that the calibration data bring to bear on the calibration parameters."

Furthermore, as stated in the ASME guideline for verification and validation [11], the domain of validity of a model calibration is limited to the data for which it is calibrated. Beven [29] also argued that values of parameters calculated by the calibration of models have been recognized as being effective values that may not have a physical interpretation outside of the model structure within which they were calibrated. While calibrated parameter values may be useful for interpolation, they are usually inappropriate for extrapolation and even less for use in other models. Mottershead et al. [155] reported this limitation for the calibration of models that include idealization errors. Also, Ben-Haim and Hemez [23] demonstrated analytically that increasing the fidelity to test data will decrease the robustness to imperfect understanding of the process.

Model verification and validation

Verification is the task of comparing model predictions with known analytical solutions. In order to assure a good agreement with the mathematical model and convergence of the solution, patch tests have been developed [137, 159] and are nowadays commonly applied to commercial finite element (FE) codes. Element and code verifications are usually performed by software developers. Many elements which are in good agreement with mathematical models (i.e. error ≤ 1%) are now widely available for linear elastic analysis [136].

The U.S. Department of Defense [101] defined validation as "the process of determining the degree to which a model (and its associated data) is an accurate representation of the real world from the perspective of the intended uses of the model". Calibration (see Section 1.2.1) is not equivalent to validation [186]. A calibrated model is not intended to provide reliable predictions in situations other than the one for which it was fitted [186, 227]. Validation is specific to a class of tasks. Consequently, the use of the model for any other class of tasks would require the validation process to be repeated.

Model validation cannot, in most cases, be performed using traditional experiments [227]. These experiments often show a lack of control over parameters and involve assumptions that are not compatible with validation. Ideally, the experiments for validation processes



should be designed especially for validation and contain as few assumptions as possible. Every uncertainty involved in the experiment should be quantified. Uncertainties may arise from boundary conditions, installations, environmental conditions, design tolerances, residual stresses, etc. Many frameworks for model validation have been proposed by researchers in the field of mechanical engineering [158, 160, 162]. Several studies were performed to validate models in the field of structural engineering [96, 97, 145, 183, 184].

A given model may be appropriate (i.e. validated) for a particular task such as design and inappropriate for another such as system identification. As mentioned above, this is because validation is specific to a class of tasks. Simplified modeling approaches have been validated by practitioners for design purposes, which favor conservatism over accuracy. As opposed to products designed by mechanical engineers, which are often produced many times in indoor environments during industrial processes, structures built by civil engineers are one-of-a-kind products built outside. Therefore, there is usually no benchmark to systematically validate the model other than the structure itself. This underlines why model validation is not best suited to understand the behavior of civil structures in the context of system identification.

1.2.2 Bayesian inference

Bayesian inference uses Bayesian conditional probability to update the prior knowledge of model parameters using evidence (e.g. measurements and observations). This conditional probability formulation was first proposed by Bayes in the 18th century [133]. Equation 1.2 presents the general updating framework, where the prior probability of physical parameters P(θ) is updated using a likelihood function P(y|θ) and measured data y. The posterior probability P(θ|y) is obtained using the normalization constant P(y).

P(θ|y) = P(y|θ) P(θ) / P(y)    (1.2)

Likelihood functions

In most formulations reported in the literature, the likelihood function P(y|θ) is based on an ℓ₂-norm criterion (see Equation 1.3). Tarantola [225] mentioned that "Because of its simplicity, the least-squares criterion (ℓ₂-norm criterion) is widely used for the resolution of inverse problems, even if its basic underlying hypothesis (Gaussian uncertainties) is not always satisfied". Furthermore, the popularity of likelihood functions based on the ℓ₂-norm is in part due to the large number of observable phenomena that actually follow this distribution. Equation 1.3 presents the simple form of a likelihood function P(y|θ) that is based on an ℓ₂-norm criterion. In this equation, g(θ) is a vector containing model predictions and y a vector containing measurements. Σ is a covariance matrix containing uncertainties and correlation coefficients for each location where predicted and measured values are compared. Such an ℓ₂-norm likelihood function is thus proportional to a Gaussian probability density function, as presented in Figure 1.2.

P(y|θ) = const · exp( −½ (g(θ) − y)ᵀ Σ⁻¹ (g(θ) − y) )    (1.3)
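A minimal sketch of Equations 1.2 and 1.3 for a single parameter: the scalar model g(θ) = 2θ, the uniform prior bounds, the measurement y and the standard deviation σ are all hypothetical, and the evidence P(y) is obtained by normalizing the likelihood over a grid.

```python
import math

def likelihood(theta, y, sigma):
    """P(y|theta): l2-norm (Gaussian) likelihood for the model g(theta) = 2*theta."""
    r = 2.0 * theta - y  # residual between prediction and measurement
    return math.exp(-0.5 * (r / sigma) ** 2)

def posterior(thetas, y, sigma):
    """Equation 1.2 on a grid: uniform prior, evidence P(y) as normalization."""
    unnorm = [likelihood(t, y, sigma) for t in thetas]  # P(y|theta) * P(theta)
    z = sum(unnorm)                                     # evidence P(y)
    return [u / z for u in unnorm]

thetas = [i / 100 for i in range(201)]   # grid over the prior range [0, 2]
post = posterior(thetas, y=2.0, sigma=0.2)
best = thetas[post.index(max(post))]
print(best)  # 1.0, the value whose prediction matches the measurement
```

The posterior mode sits where the prediction equals the measurement; with a biased model, however, this mode can be systematically displaced, which is the concern raised below.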

Figure 1.2: Likelihood function based on an ℓ₂-norm.


2 -norm is not the only option available to represent likelihood functions [5]. The  -norm
can be expressed as a Generalized Gaussian distribution of order . When = 2, it leads to
the normal distribution (i.e. Equation 1.3). When = 1, it leads to the least-absolute-value
distribution rst used by Laplace [56]. When = , it leads to a boxcar function also known
as the Chebyshev distance named in honor to the Russian mathematician. The Generalized
Gaussian distribution is expressed in Equation 1.4 where denotes the Gamma function [225].
This distribution is presented in Figure 1.3 for {1, 2, 10, }.

|xx 0 |
( )

(1.4)

Probability

11/
e
f (x, ) =
2 (1/)

Figure 1.3: Generalized Gaussian distribution for {1, 2, 10, }. Adapted from [225].
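Equation 1.4 can be evaluated directly. The sketch below (with x₀ = 0 and σ_p = 1 chosen only for readability) checks that p = 2 recovers the standard normal density and p = 1 the Laplace (least-absolute-value) density:

```python
import math

def generalized_gaussian(x, p, x0=0.0, sigma=1.0):
    """Density of Equation 1.4 with dispersion parameter sigma_p = sigma."""
    const = p ** (1.0 - 1.0 / p) / (2.0 * sigma * math.gamma(1.0 / p))
    return const * math.exp(-abs(x - x0) ** p / (p * sigma ** p))

# p = 2: const = sqrt(2)/(2*sqrt(pi)) = 1/sqrt(2*pi), the N(0,1) constant.
normal_at_1 = math.exp(-0.5) / math.sqrt(2.0 * math.pi)
print(abs(generalized_gaussian(1.0, p=2) - normal_at_1) < 1e-12)  # True

# p = 1: const = 1/2 and exp(-|x|), the Laplace density with unit scale.
print(abs(generalized_gaussian(0.0, p=1) - 0.5) < 1e-12)  # True
```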

The modern Bayesian inference scheme presented here was popularized in the 1960s [36, 55, 70, 104, 110]. Its first applications to structural identification date from the 1990s [7, 19, 116]. Since these pioneering works, an extensive number of applications and extensions can be found in the literature [47–49, 83, 84, 91, 93, 118, 153, 182, 197, 221, 241, 243, 245]. In most applications on civil structures, authors assumed that the error structure for the modeling and measurement processes can be represented by independent Gaussian noise centered on zero. The term error structure refers to the set of relationships describing the magnitude of errors and the dependence between errors for several quantities. This term is borrowed from the system-identification community. Examples where the error structure is not represented by Gaussian noise are rare; for instance, Cheung et al. [49] included spatially correlated uncertainties in the Bayesian identification of turbulent-flow models. In the field of geophysics, Arroyo and Ordaz [9] also included spatial dependencies in multivariate Bayesian regression analyses.

Model-class selection

Bayesian methods have been extended for the selection of model classes. Such an approach compares the relative credibility of several model classes (g(θ), h(θ), ...) based on observed data y. For instance, when comparing two model classes g(θ) and h(θ), the Bayes factor can be computed as the ratio of the likelihood of each model, B = P(y|g(θ))/P(y|h(θ)) [24, 110]. When B > 1, data favor the model class g(θ) over h(θ). Otherwise, h(θ) is favored over g(θ). With this methodology, model classes having more parameters are automatically penalized because their posterior pdf is spread over more dimensions. Several authors [20, 109] drew parallels between this intrinsic penalizing factor and Ockham's razor, also known as the parsimony principle [205]. However, such a methodology only provides relative information about model-class plausibility. Therefore, regardless of whether the models compared are right or wrong, one model class is shown to be either superior or equal to others. This is a direct consequence of using the law of total probability¹ with a finite number of model classes. Therefore, if wrong choices of model classes are initially provided, this type of approach may not be able to detect it. Further formal descriptions of Bayesian inference and model-class selection are presented in references [130, 225].
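A Bayes factor of the kind described above can be sketched as follows. The two model classes, the data and the uncertainty σ are hypothetical, and the evidence of each class is approximated here simply by averaging its Gaussian likelihood over a uniform prior grid:

```python
import math

def evidence(model, thetas, y, sigma):
    """P(y|model class): likelihood averaged over a uniform prior P(theta)."""
    lik = [math.exp(-0.5 * ((model(t) - y) / sigma) ** 2) for t in thetas]
    return sum(lik) / len(lik)

def g(theta):
    """Model class g(theta)."""
    return 2.0 * theta

def h(theta):
    """Model class h(theta), biased by a constant offset."""
    return 2.0 * theta + 5.0

thetas = [i / 100 for i in range(101)]  # uniform prior on [0, 1]
B = evidence(g, thetas, y=1.0, sigma=0.5) / evidence(h, thetas, y=1.0, sigma=0.5)
print(B > 1.0)  # True: the data favor g over h
```

Note that B > 1 only says g is more plausible than h; as pointed out above, it would remain so even if both classes were wrong.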

Identifiability

Ljung [131] described identifiability as a criterion that determines if an identification procedure leads to unique values for parameters. Katafygiotis and Beck [116] also applied the concepts of identifiability for structural identification. Their approach builds on the work of Bellman and Åström [22]. They proposed that a system is locally identifiable if there are maxima in the posterior pdf based on the difference between predicted and measured values. A system is defined to be globally identifiable if there is a single maximum. More details on Bayesian inference and on identifiability can be found in [242].

Optimality of Bayesian inference

Stark and Tenorio [218] mentioned that "If the state of the world is a random variable with a known distribution, Bayesian methods are in some sense optimal". However, for practical applications, the error structure is seldom accurately known, especially due to the model simplifications involved (see Section 1.3.2). Beven [26] mentioned that "It is commonly forgotten that statistical inference methods were originally developed for fitting distributions to data in which the modeling errors can be treated as measurement errors, assuming that the chosen distributional form is correct". Therefore, when models are biased representations of reality, estimating parameters using traditional inference approaches might lead to biased results if model simplifications are not recognized.

¹ The sum of all probabilities must equal one: P(g(θ)) + P(h(θ)) + ... = 1

Approximate Bayesian Computation

Approximate Bayesian Computation (ABC) was proposed in the field of biology [18, 178] to perform Bayesian inference when no likelihood function is available. The basic principle is the following:

1. Generate a set of parameters θ from a prior distribution π(θ).
2. Compute predictions from a model g(θ).
3. Calculate the distance d(g(θ), y) between predicted (g(θ)) and measured (y) data.
4. Accept θ if d ≤ ε; return to 1.

In this framework, ε is the tolerance to reject a model. If ε → ∞, all parameter sets are accepted. If ε = 0, the algorithm only accepts models predicting exactly the measured data. In the case of high-dimensional problems, predicted (g(θ)) and measured (y) values are replaced by summary statistics. Sampling methodologies based on Markov chain Monte Carlo are presented by Marjoram et al. [140]. The method has also been extended for model-class selection [229]. However, Robert et al. [187] recently argued that these methodologies may not yet provide trustworthy posterior probabilities of models. Approximate Bayesian Computation methodologies have high potential because they do not require a likelihood function. However, they are sensitive to the choice of summary statistics [139, 172].
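The four ABC steps above can be sketched as a rejection sampler. The model g(θ) = 2θ, the uniform prior and the tolerance ε are hypothetical choices for illustration:

```python
import random

def abc_rejection(y, epsilon, n_samples, seed=0):
    """Keep the parameter values whose predictions fall within epsilon of y."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 2.0)   # 1. draw theta from the prior
        prediction = 2.0 * theta        # 2. forward model g(theta)
        distance = abs(prediction - y)  # 3. distance d(g(theta), y)
        if distance <= epsilon:         # 4. accept theta if d <= epsilon
            accepted.append(theta)
    return accepted

candidates = abc_rejection(y=2.0, epsilon=0.1, n_samples=10000)
# Every accepted theta lies in [0.95, 1.05], around the value theta = 1.0
# that reproduces the measurement exactly.
```

Shrinking ε tightens the accepted set toward the exact solution but lowers the acceptance rate, which is the practical trade-off of the method.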

1.2.3 Model falsification

The concept of hypothesis falsification has been well known in science for centuries. However, it was only in the 1930s that it was formalized by Karl Popper in The Logic of Scientific Discovery [173]. Popper asserted that in science, models cannot be fully validated by data; they can only be falsified. This philosophical perspective was reused in several fields related to data interpretation [37, 79, 90]. This section presents the work of two of the most prominent authors who have proposed data-interpretation methods based on falsification.

Inverse problems and falsification

A pioneer in the field is Albert Tarantola, who worked on statistical inference applied to geophysical data interpretation [225]. At the end of his career, Tarantola [226] suggested solving inverse problems as follows: "use all available a priori information to sequentially create models of the system, potentially an infinite number of them. For each model, solve the forward modeling problem, compare the predictions to the actual observations and use some criterion to decide if the fit is acceptable or unacceptable, given the uncertainties in the observations and, perhaps, in the physical theory being used. The unacceptable models have been falsified, and must be dropped. The collection of all the models that have not been falsified represent the solution of the inverse problem." However, in this paper he did not provide a general criterion to separate candidate from falsified models.

Generalized likelihood uncertainty estimation

The Generalized Likelihood Uncertainty Estimation (GLUE) approach was developed by Beven and Binley [28] in the early 1990s. The approach is based on the principle that the error structure is not necessarily known. Therefore, they proposed to use subjective criteria for defining a limit between candidate and falsified models. Beven argued that in inverse analyses, there can be several adequate explanations of the observed data (the equifinality concept). His work has been extensively applied in the field of environmental modeling, where uncertainties are especially large and difficult to quantify [25, 27, 30, 129]. Some have criticized the GLUE approach for being too subjective [138]. In the context of civil infrastructure, more information is usually available to quantify uncertainties than in environmental modeling.

1.2.4 Hypothesis testing

Hypothesis testing is a statistical inference technique used to test a null hypothesis H₀ against one or several alternatives H₁. When a single alternative hypothesis is used, P(H₀) + P(H₁) = 1. Falsifying H₀ is performed for a target confidence φ ∈ ]0, 1]. Therefore, H₀ is rejected if P(H₀) < 1 − φ. Table 1.1 describes the possible outcomes of hypothesis testing depending on whether or not H₀ is true. A Type-I error corresponds to falsely rejecting H₀. There is a probability α = 1 − φ of committing a Type-I error. A Type-II error corresponds to falsely accepting H₀. More details regarding the practical implementation of hypothesis testing can be found in [126]. Generally, Type-I errors are considered to be more critical than Type-II errors.

Table 1.1: Relations between right and wrong diagnoses in hypothesis testing.

                      | H₀ is true      | H₀ is false
  Reject H₀           | Type-I error    | Right diagnosis
  Fail to reject H₀   | Right diagnosis | Type-II error

Methodologies related to hypothesis testing have been used by Mahadevan et al. [111, 146, 183]


to test the validity of models and to calibrate them. One of the techniques reported is based on multiple significance tests (Hotelling's T² [13]). These techniques compare a validation metric based on the Mahalanobis distance with a critical value issued from either a χ² or an F-distribution. The validation metrics include model and measurement uncertainties. For measurement uncertainties, they estimated the covariance between several quantities based on experimental data. For model uncertainties, the correlation is evaluated based on the model output covariance. In these studies, uncertainty dependencies due to model simplifications and omissions were not included.

Confidence intervals & multiple hypothesis tests

Hypothesis testing is related to the concept of confidence intervals. Confidence intervals represent bounds that include the results of multiple hypothesis tests. When testing if a model is valid, a confidence interval (for a confidence level φ) defines the bounds that should include the difference between predicted and measured values when the null hypothesis, i.e. the model tested, cannot be discarded. These can be helpful to visualize how close or how far the hypothesis is from being falsified. Another aspect arises when testing a hypothesis several times (i.e. multiple hypothesis testing [209]). If all these tests are independent, for each test there is a probability α = 1 − φ of having a Type-I error. Therefore, over N tests, this corresponds to a probability of 1 − φ^N. For instance, when testing the same hypothesis ten times (without appropriate corrections) with a target φ = 0.95, there is a 40% chance of committing a Type-I error. This phenomenon is known as the inflation of the alpha level [3]. Many approaches are available to counteract this effect [209], such as the Bonferroni [31] and Šidák [211] corrections. Corrected alpha values α′ that can be used to test hypotheses are presented in Equation 1.5, where the Bonferroni correction is the first term of the Taylor expansion of the Šidák correction. The Bonferroni correction is thus a conservative approximation of the Šidák correction.

α′_Šidák = 1 − (1 − α)^(1/N)        α′_Bonferroni = α / N    (1.5)
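The inflation of the alpha level and the two corrections of Equation 1.5 can be verified numerically; the values reproduce the ten-test example given above:

```python
def inflated_alpha(alpha, n):
    """Probability of at least one Type-I error over n independent tests."""
    return 1.0 - (1.0 - alpha) ** n

def sidak(alpha, n):
    """Sidak-corrected per-test alpha (Equation 1.5)."""
    return 1.0 - (1.0 - alpha) ** (1.0 / n)

def bonferroni(alpha, n):
    """Bonferroni-corrected per-test alpha, conservative w.r.t. Sidak."""
    return alpha / n

# Ten tests at phi = 0.95 (alpha = 0.05) inflate the family-wise error
# rate to about 40%, as stated above.
print(round(inflated_alpha(0.05, 10), 2))             # 0.4
# Applying the Sidak correction restores the target alpha.
print(round(inflated_alpha(sidak(0.05, 10), 10), 2))  # 0.05
```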

Comparison of ellipsoidal and rectangular coverage regions

Coverage regions are confidence intervals defined for multidimensional applications. As presented by the Working Group of the Expression of Uncertainty in Measurement [107], the Šidák correction can be used to define coverage regions for a target φ. Figure 1.4 compares the probability contained in rectangular and ellipsoidal coverage regions commonly used in multivariate hypothesis testing [146, 183, 217]. In this example, realizations of a bivariate Gaussian random variable X ∼ N(μ, Σ) are generated for a mean μ = [0, 0]ᵀ and a variance σ² = [1, 1]ᵀ. In the plots presented in Figure 1.4 a), b) and c), the correlation coefficient defining this random variable is set respectively to 0.9, 0.4 and −0.9.


Figure 1.4: Comparison of the probability contained in rectangular and ellipsoidal confidence regions when varying the correlation between two random variables. In all three cases, the correlation used to compute the ellipsoidal region is set to 0.9. However, the correlation used to generate realizations of X is a) 0.9, b) 0.4 & c) −0.9. Only the rectangular regions include in all situations a proportion of the sample at least equal to the target φ (0.95).

For a Gaussian random variable, the smallest region possible including a probability φ is bounded by the Mahalanobis distance D_M(x) defined in Equation 1.6, where x is a vector containing a realization of X. Even if the size of the region defined by the Mahalanobis distance is minimal, its computation requires the definition of the correlation coefficients in the covariance matrix Σ. In order to calculate the Mahalanobis distance, the correlation is set to 0.9 for all three scenarios a), b) and c).

D_M(x)² = (x − μ)ᵀ Σ⁻¹ (x − μ)    (1.6)

The probabilities that realizations of X are included in the ellipsoidal and rectangular regions, P_M and P_T respectively, are expressed in Equations 1.7 and 1.8. In Equation 1.7, χ²_φ(n_m) is the value of a chi-squared distribution having n_m degrees of freedom, found for a target cumulative probability φ = 1 − α = 0.95. The rectangular coverage region defined by threshold bounds T_low,i and T_high,i is found using the Šidák correction and a target reliability φ = 0.95.

P_M = P( D_M(x)² ≤ χ²_φ(n_m) )    (1.7)

P_T = P( ∩_{i=1}^{n_m} { T_low,i ≤ x_i − μ_i ≤ T_high,i } )    (1.8)

For each scenario a), b) and c), 1000 realizations x = [x₁, x₂]ᵀ of X are generated. The ellipsoidal region bounded by the solid line (i.e. the Mahalanobis distance) includes a proportion P_M = 0.95 of the realizations of X only when the correlation is correctly evaluated (i.e. scenario a)). In Figure 1.4 b) & c), the ellipsoidal region (supposed to include 95% of the realizations of X) only includes 64% and 42% respectively. For all three scenarios, the rectangular regions bounded by dashed lines contain a proportion P_T of the realizations of X at least equal to the target φ. Rectangular regions lead to less precise results than the ellipsoidal bound [107]. Nonetheless, defining rectangular confidence intervals using the Šidák correction is conservative with respect to the target probability φ, without having to make assumptions related to the dependence between uncertainties.
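A Monte Carlo sketch of the comparison made in Figure 1.4, scenario b): when the ellipsoidal region is built with a misspecified correlation (0.9 instead of the true 0.4), its coverage drops well below the target, while per-axis Šidák bounds remain conservative. The sample size is arbitrary, and the two critical values (χ² at cumulative probability 0.95 with 2 degrees of freedom ≈ 5.991; per-axis Gaussian bound for α′ = 1 − 0.95^(1/2) ≈ 2.236) are standard tabulated values:

```python
import math
import random

def sample_bivariate(rho, n, seed=0):
    """Draw n realizations of X ~ N(0, [[1, rho], [rho, 1]])."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        samples.append((z1, rho * z1 + math.sqrt(1.0 - rho ** 2) * z2))
    return samples

def mahalanobis_sq(x, rho):
    """Equation 1.6 for a standardized bivariate Gaussian with correlation rho."""
    x1, x2 = x
    return (x1 ** 2 - 2.0 * rho * x1 * x2 + x2 ** 2) / (1.0 - rho ** 2)

chi2_95_2dof = 5.991  # chi-squared critical value, 2 dof, cumulative prob. 0.95
t_sidak = 2.236       # per-axis bound for alpha' = 1 - 0.95**(1/2)

samples = sample_bivariate(rho=0.4, n=20000)  # true correlation: 0.4
in_ellipse = sum(mahalanobis_sq(x, 0.9) <= chi2_95_2dof for x in samples)
in_box = sum(abs(x1) <= t_sidak and abs(x2) <= t_sidak for x1, x2 in samples)

print(in_ellipse / len(samples))  # well below 0.95: correlation misspecified
print(in_box / len(samples))      # at least 0.95: conservative
```

The rectangular bounds do not use the correlation at all, which is precisely why they remain reliable when the error dependence is unknown.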
Figure 1.5 compares the areas enclosed in a rectangular and an ellipsoidal region, both including a proportion of the sample equal to the target φ = 0.95. The ellipsoidal region, computed for a correlation value of zero, encloses an area of 22.9 units and the rectangular region encloses an area of 20.0 units. Thus, in cases where the dependency between random variables is unknown, using a rectangular confidence region is both more reliable and more efficient than using a confidence region based on the Mahalanobis distance.


Figure 1.5: Comparison of the area enclosed in rectangular (a) and ellipsoidal (b) confidence regions including a target probability content of 0.95. The correlation used to compute the ellipsoidal region is set to 0; however, the correlation used to generate realizations of X is 0.9. The area enclosed in the ellipsoidal region is 15% larger than the region defined by rectangular bounds.



1.2.5 Multiple-model approaches


Multiple-model approaches are solutions to inverse problems that involve more than one model that may explain observed behavior. Even if this concept is intuitively known by scientists, it was formalized a decade ago by Raphael and Smith for applications involving diagnosis of civil structures [179]. This work uses evaluations of errors to separate unlikely models from candidate model instances. Evaluations of uncertainties were used by Robert-Nicoud et al. [188] to define threshold bounds within which the root mean square of errors should lie in order for models to be accepted as possible explanations of observations. Model composition techniques using stochastic search were also proposed [180, 188, 190]. Previous work [88, 181] used probabilistically determined threshold values to accept candidate models. However, uncertainty combination was not included in a robust and systematic methodology. Furthermore, the approach, based on the root mean square of errors, did not account for uncertainty dependencies between prediction locations. For the purposes of interpretation, Saitta et al. [198–201] used data mining to find clusters within candidate model sets and to improve visualization and interpretation of results.
Matta and De Stefano [143] also proposed a multiple-model identification approach to overcome limitations of traditional model updating. In this approach, several sub-optimal models coexist in a group of potential explanations of the measured system. In fields other than structural engineering, the value of having a population of models as the result of system identification was recognized long ago. For instance, in the 1960s, Press [177] generated five million model instances to identify the properties of the Earth's mantle. From this population, he identified six as candidate models.

1.3 Uncertainties involved in data interpretation


Uncertainties are an inherent part of science and data interpretation is no exception. Some concepts associated with uncertainty representation, definition and combination are reviewed.

1.3.1 Uncertainty representation


In practical applications, the values of errors are always unknown to some extent. Therefore, uncertainty is used to describe the possible outcome of an error. Uncertainty descriptions can be based on several approaches. The main ones are: frequentist probabilities, Bayesian probabilities² and imprecise probabilities.

² Also known as subjective probabilities.




Frequentist probabilities
The frequentist perspective deduces probabilities from a series of repeated random events. For instance, if a random event has a probability of occurrence of 0.8, it means that over an infinite number of trials this event will have occurred 80% of the time. This long-run relative-frequency definition makes it hard to validate frequentist approaches for civil-engineering applications, where usually only one, or sometimes a few, trials are available. Therefore, this probability interpretation is not best suited to represent deterministic events where the uncertainty is associated with epistemic³ causes [98]. Frequentist probabilities are expressed as probability density functions (pdfs). Figure 1.6 presents a pdf for a random variable X, where a confidence interval bounded by T_low and T_high contains a probability φ ∈ ]0, 1]. When dealing with multidimensional pdfs, confidence intervals are called confidence regions.
Figure 1.6: pdf of a random variable X, where a confidence interval bounded by T_low and T_high contains a probability φ ∈ ]0, 1].

Bayesian probabilities
Bayesian probabilities assign degrees of belief to events that can be either random or epistemic. Bayesian probabilities were mainly popularized by Laplace, who stated that probability theory is "nothing but common sense reduced to calculation" [125]. In the 1950s, Jaynes [104] also widely contributed to their development with the maximum-entropy principle, used to define pdfs when only partial knowledge is available. This principle says that the best pdf to represent our knowledge is the one with the largest entropy [210]. Shafer [208] describes Bayesian probabilities as "...a special case of the theory of evidence where all beliefs can be expressed in the form of probabilities".
The same notation is used for both frequentist and Bayesian estimators because both approaches are compatible and can be used together: Bayesian estimators can be evaluated from a frequentist perspective, and vice versa [218]. ISO guidelines [105] mention that using Bayesian probability can be as reliable as a frequentist evaluation, especially in situations where a frequentist evaluation is based on a small number of statistically independent observations.
Even if this interpretation of probabilities is not limited to it, it is closely related to Bayesian conditional probability, where prior knowledge is updated using evidence. Bayesian conditional probability [55] is well known and widely applied in almost every field of science (see 1.2.2). When referring to the posterior pdf of a Bayesian inference, the term credible interval (or region) is used to describe bounds including a given probability content.

³ Epistemic uncertainties are associated with a lack of knowledge and simplifications.

Imprecise probabilities
The development of imprecise probabilities was motivated by a desire to separate the ways random and epistemic uncertainties are represented. Several authors [21, 67–69, 75, 152, 161, 207] suggested that when dealing with epistemic uncertainties, no information is available to define a unique probability density function. Researchers such as Ferson [66] argued that the maximum-entropy principle cannot be justified in real-life problems. For instance, if one only knew the upper and lower bounds that a variable X can take, it is not right to suppose that any value of X between these bounds has an equal probability of occurrence (i.e. a uniform distribution). Figure 1.7 presents how the uncertainty on a variable X can be described by probability bounds (T_low and T_high). Note that no probability distribution is defined.
Figure 1.7: Uncertainty on a variable X can be represented by probability bounds (T_low and T_high) without defining a probability density function.

Other approaches, such as the Dempster-Shafer theory [68, 85, 208], can be used to describe incomplete knowledge without using frequentist or Bayesian probabilities. There is no consensus within the scientific community regarding which probability interpretation is best.

1.3.2 Common sources of uncertainties in structural identication


Structural identification involves imprecision in both the model and the measurements. These imprecisions can be described by uncertainties using the approaches presented in 1.3.1. This section presents a non-exhaustive list of uncertainty sources that can be used to guide researchers and practitioners when identifying the behavior of structures.

Sensor resolution
Sensor resolution describes the irreducible variations that one can expect when using a measurement device. Note that measurement devices often have to be corrected for systematic errors such as temperature effects and long-term drift. This type of uncertainty is one of the most extensively studied. Often, probability density functions describing sensor-resolution uncertainty are directly provided by the manufacturer. Additional provisions might have to be made to account for sensor installation in uncertain conditions, contact losses, as well as specific site conditions presenting, for instance, high electromagnetic noise. Also, some types of sensors have a resolution that depends either on the frequency measured or on the absolute value measured.

Ambient vibration data-processing


Acceleration data recorded on structures are often processed to obtain natural frequencies and mode shapes. Part of the uncertainty in these experimental parameters is related to the variability of the frequencies themselves and another part is related to errors in their estimation. Errors in the acceleration measurements include digitization errors (usually negligible), the assumption that the system is excited by white noise, and the precision of the modal-analysis method used. These epistemic uncertainties associated with ambient-vibration monitoring were reported to be lower than 1%, even for a short energetic signal [150]. Moreover, the uncertainty due to the signal-processing method itself is on the order of 1% according to Peeters and Ventura [170]. As a comparison, Lamarche et al. [123] found that the overall uncertainty related to the ambient-vibration technique is on the order of 3% when compared with forced vibrations.

Model simplification
When modeling complex systems such as civil structures, omissions and simplifications are inevitable. The scale of elements contained in a structure can vary by several orders of magnitude. Due to limited computing resources, geometrical complexity and engineering costs, only a limited degree of model refinement is currently possible. In most models, assembly details, secondary structural elements and boundary conditions are either simplified or omitted. Usually, model simplifications of civil structures result in underestimates of the stiffness of real structures because of omitted elements. NAFEMS publications provide guidance on how to avoid modeling errors and inconsistencies in finite-element models [2, 94, 95, 100]. Studies have also addressed the influence of secondary structural elements on the resistance and transverse load distribution of bridges [58, 59, 156, 157]. Results indicated that for these structures, secondary elements may affect internal forces by up to 40% [58].

Mesh refinement
When using finite-element models, the number of degrees of freedom is always less than in the real system studied (which contains an infinite number). As the number of elements increases, the model-prediction discretization error converges asymptotically [94]. Due to limited computing resources, the number of elements used to obtain solutions is often several orders of magnitude less than the number necessary to be negligibly close to the asymptote. Without careful attention to mesh refinement, large errors are possible, especially for strain predictions. A notable example is the failure in 1991 of an oil platform due to an underestimation of the shear strains by almost 50% [206]. In the field of civil engineering, studies on mesh refinement have been performed by Topkaya et al. [230]. The authors concluded that for medium-span bridges (60-120 m), the mesh size should be no more than 1.5 m (for a rectangular mesh) for 9-node shell elements. At that level of refinement, the error on strain predictions due to discretization is reported to be 5%. Their results must be used with care because the accuracy of shell elements is dependent on geometry, boundary conditions and loading [45, 136].

Model-parameter uncertainties
When studying physics-based models, not all parameters are exactly known. Parameters such as geometry, some material properties, and solicitations can contribute to modeling uncertainty. For instance, the thickness of concrete decks for girder bridges in the USA was found to have a coefficient of variation (COV) between 0.02 [1] and 0.07 [113]. These values were reported in a study by Mirza and MacGregor [151]. Regarding material properties, the mean value of the steel Young's modulus is reported to vary between 200 and 206 GPa with a COV varying between 0.02 and 0.06 [61, 78, 108, 212]. The steel Poisson's ratio is reported to have a COV of 0.03 [61, 78]. Based on the work of Loo and Base [132], the concrete Poisson's ratio COV is estimated to be 0.02. Uncertainties on many other parameters, such as material densities, concrete Young's modulus and geometrical dimensions, are case-specific. Therefore, in the absence of statistical surveys, these uncertainties can be based on engineering heuristics.
The influence of parameter uncertainties on predicted values is found by propagating parameter uncertainties (θ = [θ₁, θ₂, . . . , θ_np]) through the model g(θ) using numerical sampling techniques. Each parameter value can be described by a random variable Θ_i having a probability density function f_Θ,i. Figure 1.8 illustrates the process of parameter-uncertainty propagation, where parameter-value samples are drawn from their respective distributions and provided to the model. Propagating uncertainty in this way returns the model-prediction variability (r_i) due to the uncertainty in parameter values.
Theoretically, the mean value of the combined uncertainty converges as 1/n_samples^(1/2) [185]. In practical applications, the number of samples (n_samples) required to obtain a stable solution is problem-dependent and may vary according to the input pdfs. A method for detecting that the uncertainty propagation has converged is to monitor the standard deviations of the model responses and to stop sampling when these values reach steady states. Further details regarding the numerical implementation of such a procedure are presented by Cox and Siebert [52].
An alternative to numerical sampling is to use Polynomial Chaos Expansion or Stochastic Galerkin methods to propagate uncertainties in a more efficient way than random sampling [57, 223, 224]. These approaches can be several orders of magnitude more efficient than Monte Carlo methods. However, they lose generality since they require additional assumptions.



Figure 1.8: Propagation of model-parameter uncertainties using Monte Carlo sampling. Adapted from [106].

Uncertainty of uncertainty definitions
When subjective knowledge and heuristics are used to define a pdf describing a phenomenon, it is common to have an uncertainty associated with this knowledge. A uniform distribution is often used to represent subjective knowledge. However, the sharp bounds of the uniform distribution may not reflect the subjective process involved in the definition of uncertainty bounds. For these situations, the Curvilinear and Iso-curvilinear distributions have been proposed [114, 127]. When uniform distributions are used to describe uncertainties, an uncertainty can also be associated with bound positions. When combined, these distributions form the Curvilinear distribution presented in Figure 1.9.

Figure 1.9: Curvilinear distribution used to describe a uniform distribution whose bounds are inexactly defined. The main uncertainty (uniform) is combined with the uncertainty related to the bound definition to form the curvilinear distribution.
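One simple way to realize such a combined distribution numerically is two-stage sampling: first draw the bound positions from their own uncertainty, then draw the error value between the sampled bounds. The sketch below (pure Python; the nominal bounds ±1 and the bound-uncertainty width 0.2 are arbitrary assumptions) produces samples whose histogram would display the rounded shoulders of Figure 1.9; it only approximates the Curvilinear distribution of [114, 127].

```python
import random

def sample_uncertain_uniform(low=-1.0, high=1.0, bound_width=0.2, n=100_000, seed=2):
    """Two-stage sampling of a uniform distribution with uncertain bounds:
    each bound is first drawn uniformly within +/- bound_width/2 of its
    nominal position, then the error value is drawn between those bounds."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        lo = rng.uniform(low - bound_width / 2.0, low + bound_width / 2.0)
        hi = rng.uniform(high - bound_width / 2.0, high + bound_width / 2.0)
        samples.append(rng.uniform(lo, hi))  # value drawn between sampled bounds
    return samples

s = sample_uncertain_uniform()
```

A small fraction of the samples falls outside the nominal bounds, reflecting the uncertainty in the bound positions.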

1.3.3 Uncertainty combination


In some circumstances, it is desirable to aggregate several uncertainty sources into a single pdf. This can be done using numerical sampling. As described by Cox and Siebert [52], error samples can be drawn from the probability density functions of the several uncertainty sources and then added together. Once normalized, the distribution of outcomes corresponds to the combined-uncertainty probability density function. In order to define either a confidence or a credible interval including a target probability φ, ISO guidelines [106] recommend drawing at least 10⁴ × 1/(1 − φ) samples. For example, when φ = 0.95, this corresponds to 200 000 samples.
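A minimal sketch of this sampling-based combination, assuming two arbitrary error sources (one Gaussian, one uniform); the interval is read directly from the sorted empirical distribution, using the ISO-recommended sample count.

```python
import random

def combined_interval(phi=0.95, seed=3):
    """Combine two error sources by sampling their sum, then read a symmetric
    interval containing a probability phi from the sorted empirical values."""
    n = round(1e4 / (1.0 - phi))     # ISO-recommended count: 200 000 for phi = 0.95
    rng = random.Random(seed)
    # invented sources: Gaussian model error + uniform sensor error
    combined = sorted(rng.gauss(0.0, 1.0) + rng.uniform(-0.5, 0.5)
                      for _ in range(n))
    lo = combined[int(n * (1.0 - phi) / 2.0)]       # lower (1-phi)/2 quantile
    hi = combined[int(n * (1.0 + phi) / 2.0) - 1]   # upper (1+phi)/2 quantile
    return lo, hi

t_low, t_high = combined_interval()
```

For these two sources, the resulting 95% interval is roughly symmetric about zero, slightly wider than that of the Gaussian source alone.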

1.4 Measurement-system design


Measurement-system design refers to the task of selecting sensor types, measurement locations and test configurations in order to measure efficiently with limited resources. While sensor availability is increasing with the development of new technologies, the cost of interpreting the data acquired has not changed. Indeed, in many practical situations, the cost of making sense of the data exceeds the initial cost of sensors many times over. Brownjohn [39] noted that currently there is a tendency toward the over-instrumentation of monitored structures. He proposed that for a given budget, emphasis should be put on designing a robust and reliable measurement system instead of trying to maximize the number of sensors used. This is particularly relevant since the number of measurements is often not proportional to the amount of information obtained [219].

Review of applications
Many authors have developed measurement-system design techniques based on the principle that the quality of a measurement system depends upon the independence of the information acquired. For example, several sensors measuring perfectly dependent quantities would not provide more information than a single sensor. Yeo et al. [240] used parameter-grouping techniques to find independent measurement locations for damage localization using static tests. Kang et al. [115] proposed a similar approach using a genetic algorithm to position dynamic measurements. Another approach, by Worden and Burrows [236], uses neural networks to find optimized sensor placement for dynamic measurements. Stephan [219] proposed a methodology that places sensors at locations where the Fisher information matrix is maximized while minimizing the amount of redundancy in the acquired data.
Papadimitriou [167] and Robert-Nicoud et al. [189] used entropy to select optimal sensor configurations. These methods find the configuration of sensors that maximizes the disorder in the predicted values for model parameters. In another proposal, Papadimitriou [168] used an evolutionary algorithm to place sensors. More recently, Papadimitriou and Lombaert [169] studied the effect of prediction-error dependencies on sensor placement. The method described by Meo and Zumpano [148] concentrates sensor positions in high-energy-content regions (for dynamic measurements). This approach favors sensor locations having a high signal-to-noise ratio. The main limitation of these approaches is that even if they can maximize the performance of a measurement system with respect to some criteria, they do not quantify in absolute terms the utility of measurements for interpreting data.

Value of information
Pozzi and Der Kiureghian [176] observed that the value of a piece of information depends on its ability to guide our decisions. This supports the idea that measurement systems should be designed according to interpretation goals. Currently, there is a lack of systematic methodologies to perform such a task. Furthermore, no literature was found where systematic modeling errors were included in measurement-system design methodologies.

1.5 Exploration of high dimensional solution spaces


Sampling in high dimensional solution spaces add the complexity challenge to system identication. Tarantola [226] underlined this aspect by saying that: "nding a needle in a haystack is
hard if the haystack has hundreds of dimensions". This summarizes well challenges associated
with high dimensional space system identication. Several alternatives exist to get around
complexity issues.

1.5.1 Surrogate models


Surrogate models are substitutes for complex models that capture the essential behavior while being solved more quickly. The most common approach is to build response surfaces based on polynomial functions [64], as presented in Equation 1.9 [34, 35]. In this equation, y is a vector containing measured values, θ a vector containing the parameters of the original system used to construct the model matrix M, β a vector containing the parameters of the response surface and ε a vector containing errors. The number of columns in the matrix M depends on the degree of the polynomial function chosen.

y = Mβ + ε    (1.9)

β contains the parameters that minimize, in a least-squares sense, the difference between the response surface and the model response (Equation 1.10).

β = (MᵀM)⁻¹Mᵀy    (1.10)

Design-of-experiments theory provides efficient ways to decide where samples should be taken in the parameter space in order to best fit a polynomial function [72]. For instance, using the Box-Behnken design [33], a second-order polynomial function can be built for 16 initial parameters using only 385 evaluations of the physics-based model (e.g. a finite-element model). Other designs, such as the central-composite design and full and fractional factorial designs, can be used as well. Note that each requires a specific number and configuration of samples. Figure 1.10 presents the sample configuration of a central-composite design for three parameters, where each axis is the normalized parameter range of the original system.
Details regarding sample arrangements can be found in [122]. Even when more advanced techniques can outperform polynomial regression, this method remains a good tradeoff between performance and simplicity. The performance of other response-surface approaches such as multivariate adaptive regression splines (MARS) [76] and Kriging estimates [213] was reviewed by Rutherford et al. [196].

Figure 1.10: Central-composite design for three parameters where each axis represents the normalized parameter range.

1.5.2 Space exploration approaches


Techniques have been developed to explore solution spaces when looking for optimal solutions. Several heuristic-based stochastic search techniques are available, for instance: genetic algorithms [53], particle-swarm optimization [117], simulated annealing [119], PGSL [180], etc. Compared with gradient-based techniques [214], global search reduces the risk of being trapped in local minima when objective functions have complex topologies. Many of these methodologies were developed to find the small regions containing optimal solutions.

Greedy algorithm
One well-known heuristic-based optimization technique uses a greedy algorithm [50]. Greedy algorithms break an optimization task into a sequence of steps where, at each step, a locally optimal choice is made without revisiting previous choices. This type of approach is not guaranteed to find the global optimum. It is, however, well suited to a wide range of problems.

Random walk
Several approaches are based on random walks to guide parameter sampling depending on the response of a likelihood function. In these approaches, samples are drawn with respect to a likelihood function P(y|θ) defined for the residual ε(θ) of the difference between predicted (g(θ)) and measured (y) values. Figure 1.11 presents a conceptual example where a parameter space (a) is explored using a random walk. First, a starting point θ₁ = [θ₁, θ₂]₁ is chosen randomly in the domain and its likelihood P(y|θ₁) is computed (b). ε(θ₁) denotes the residual of the difference between predicted values obtained using the parameter set θ₁ and measured values y. Then a random step, obtained from a normal distribution N(θ₁, [σ₁, σ₂]), leads to the point θ₂ where the likelihood P(y|θ₂) is computed. If P(y|θ₂) > P(y|θ₁), this step is accepted and a new one is generated, leading to θ₃. If P(y|θ₃) < P(y|θ₂), this point has a probability of being accepted corresponding to the ratio P(y|θ₃)/P(y|θ₂). If the new point θ₃ is rejected, the following step is generated from the point θ₂.

Figure 1.11: Conceptual illustration of random-walk sampling: (a) parameter space; (b) likelihood function.

This process is a Markov chain in the sense that the generation of a new step is only based on the current step [92]. This sampling technique ensures that the samples [θ₁, θ₂] are taken such that their sampling density is proportional to the likelihood function P(y|θ). Note that the random steps can be obtained using any probability distribution. The size of the steps [σ₁, σ₂] influences the acceptance rate of the random walk. Good-practice rules recommend fixing the step size so that the acceptance rate lies between 30% and 50% [225]. Further details regarding the convergence of Markov chains can be found in [51]. This concept of random-walk sampling is behind several techniques used to explore high-dimensional solution spaces. The first was proposed by Metropolis et al. [149]. Later, Hastings generalized their formulation in the Metropolis-Hastings algorithm [92]. Random-walk sampling techniques are extensively used in Bayesian inference applications (see 1.2.2). For instance, Cheung and Beck [47] used a methodology called Hybrid Monte Carlo simulation to infer the properties of structures using dynamic data. A Metropolis-Hastings algorithm is used by Most [153] for similar purposes. In contexts other than Bayesian inference, this technique offers the possibility to explore the space without looking for a single optimal solution.
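The acceptance rule described above is the Metropolis algorithm. The sketch below implements it for a single parameter in pure Python, working with log-likelihoods for numerical stability; the Gaussian likelihood, the "measured" value y = 2.0 and the step size are invented for illustration.

```python
import math
import random

def metropolis(log_likelihood, theta0, step=0.5, n=50_000, seed=5):
    """Random-walk Metropolis sampling for a single parameter: propose a
    Gaussian step, accept with probability min(1, L(new)/L(current))."""
    rng = random.Random(seed)
    theta = theta0
    ll = log_likelihood(theta)
    samples = []
    for _ in range(n):
        proposal = theta + rng.gauss(0.0, step)
        ll_new = log_likelihood(proposal)
        if rng.random() < math.exp(min(0.0, ll_new - ll)):  # acceptance test
            theta, ll = proposal, ll_new
        samples.append(theta)   # rejected proposals repeat the current point
    return samples

# Invented Gaussian likelihood: prediction g(theta) = theta, "measured" y = 2.0,
# residual assumed N(0, 0.5); the sampling density over theta is then N(2.0, 0.5).
y, sigma = 2.0, 0.5
samples = metropolis(lambda t: -0.5 * ((y - t) / sigma) ** 2, theta0=0.0)
```

After discarding an initial burn-in, the sample mean approaches 2.0, as expected for this likelihood.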

1.6 Conclusions
This chapter covered several concepts related to system identification. Figure 1.12 presents a timeline of contributions reported in this literature survey. It illustrates the temporal relationships between existing contributions on which this thesis builds.
Figure 1.12: Timeline of key contributions made in scientific fields related to data interpretation: Bayes' theorem (Bayes, 1763); least squares (Legendre, 1806; Gauss, 1809); Bayesian probability (Laplace, 1812; Jaynes, 1957); falsification (Popper, 1934); entropy (Shannon, 1949); random walk (Metropolis, 1953; Hastings, 1970); hypothesis testing (Šidák, 1967); identifiability (Bellman, 1970); GLUE (Beven & Binley, 1992); approximate Bayesian computation (Pritchard & Seielstad, 1999). These contributions are a non-exhaustive survey of the vast literature available in each field.

An aspect discussed in this chapter is that our capacity to install measurement systems is much greater than our capacity to interpret data. This happened, in part, because interpretation methods are based on approaches that were originally developed in the fields of statistics, signal processing and control engineering. Researchers in these fields commonly assume that modeling errors can be treated as Gaussian noise (i.e. independent random variables). Such an assumption is not generally applicable to civil infrastructure because in these systems, systematic biases are present and their effects on the uncertainty dependencies involved in the error structure are often difficult to quantify.
Figure 1.13 presents a schematic comparison of current data-interpretation techniques with respect to the subjectivity involved and to the difficulty of meeting their hypotheses. In the context of structural identification, available system-identification methodologies either rely on restrictive hypotheses that are seldom fulfilled in practice for complex systems or they sometimes rely on subjective choices not tied to a systematic methodology. Therefore, there is a need for new approaches that offer tradeoffs between subjectivity and the difficulty of satisfying hypotheses.

Figure 1.13: Schematic comparison of current data-interpretation techniques. Residual minimization and Bayesian inference rely on restrictive hypotheses (of decreasing relevance to real situations), while approximate Bayesian computation and generalized likelihood (GLUE) involve more subjectivity; there is a need for new approaches offering a compromise between subjectivity and restrictivity of hypotheses.



In current applications, few methodologies include explicit estimations of model simplifications, mesh refinement and other model-related uncertainties. Model falsification has strong foundations in science. However, few approaches use its potential in system identification. Model falsification can be used as the fundamental principle behind a diagnostic methodology specifically suited to complex systems such as civil infrastructure. Nonetheless, a systematic methodology is needed to determine which models can be falsified in situations where uncertainty dependencies are unknown. A summary of the literature supporting the originality of the objectives is presented below.
1. Current system-identification methodologies either rely on assumptions regarding the error structure that are seldom fulfilled for civil structures or sometimes rely on subjective choices not tied to a systematic framework. An approach is proposed in Chapter 2 to perform diagnosis without making assumptions about uncertainty dependencies. This approach is based on the concept of hypothesis falsification and uses methods such as surrogate models and random-walk sampling techniques to efficiently explore high-dimensional solution spaces.
2. No systematic approach was found to predict quantitatively the capability of measurements to provide useful knowledge of complex systems. Chapter 3 presents such a metric, applicable in situations where there are systematic biases in models.
3. It is not common practice to design measurement systems specifically for the interpretation goals. Furthermore, no literature was found where systematic modeling errors were included during the design of measurement systems. Chapter 4 presents a methodology that is able to include model biases during the analysis of measurement-system performance.


2 Error-domain model falsication

Summary
This chapter describes the error-domain model falsification approach: a model-falsification methodology that correctly identifies parameter values of complex systems without requiring definitions of dependencies between uncertainties. Prior knowledge is used to bound the ranges of parameters to be identified and to build an initial set of model instances. Predictions from each model are compared with measurements so that inadequate model instances and model classes are falsified using threshold bounds. The probability content included in each set of threshold bounds is adjusted using the Šidák correction to account for multiple measurements being used simultaneously to falsify model instances.

2.1 Introduction
The literature review reported that many system-identification methodologies require the definition of uncertainties and dependencies for comparison points¹ i where predictions and measurements are available. This requirement can be fulfilled when working with models that exactly capture the physical phenomenon studied. In such a case, the discrepancy between model predictions and measurements is due to the choice of parameter values, and the error structure can be represented by independent Gaussian noise associated with measurements. When identifying the behavior of civil structures, the error-structure definition associated with model predictions is often incomplete.

¹ A comparison point is a quantity common to a model (prediction location and type) and a system (measurement location and type).
Section 2.2 presents a methodology to identify the properties of systems where uncertainty dependencies are unknown. The proposed approach is named error-domain model falsification. Unlike the approaches² presented in 1.2, this approach falsifies models in absolute terms based on error values (i.e. the difference between predictions and measurements).
In this chapter, Section 2.3 presents the effect of uncertainty dependencies on the error structure. Section 2.4 presents how several sources of uncertainties can be combined for the purpose of system identification. Section 2.5 proposes exploration techniques to generate model instances. A summary of the data-interpretation methodology is presented in Section 2.6 along with an illustrative example. Section 2.7 shows how model falsification can be extended to use dynamic data as input. Finally, Section 2.8 draws parallels between error-domain model falsification and Bayesian inference.

2.2 Methodology
When trying to identify the behavior of a system, there may be several potentially adequate model classes to represent it (g(·), h(·), etc.). In the context of structural engineering, a structure could be represented by, for example, several different finite-element models. Note that a model class is a synonym for template model. Model classes take n_p physical parameters, θ = [θ_1, θ_2, ..., θ_{n_p}]^T, as arguments, which correspond to system properties such as geometry, material characteristics, boundary conditions, loading, etc. Each combination of model class and parameter set leads to n_m predictions g_i(θ) obtained at each location i ∈ {1, ..., n_m}. When taking a model class g(·) and the right values θ* for the parameters θ, the value corresponding to the difference between a prediction g_i(θ*) and its modeling error (ε_model,i) is equal to the true value Q_i of the real system. The true value Q_i is also equal to the difference between the measured value y_i and the measurement error (ε_measure,i). This relation is presented in Equations 2.1 & 2.2.
g_i(θ*) − ε_model,i = Q_i = y_i − ε_measure,i    (2.1)

g_i(θ*) − y_i = ε_model,i − ε_measure,i    (2.2)

² Methods such as Bayesian inference can be referred to as parameter-domain system identification because the plausibility of models is based on a posterior probability function defined in the parameter domain.


In practice, neither the true value Q_i nor the error values ε_i can be known. Only uncertainties described by a probability density function (pdf) of the errors ε_i can be estimated. Uncertainties are defined, prior to computing threshold bounds, using either statistical data or engineering heuristics. Such a pdf, f_Ui(ε_i), represents the distribution of a continuous random variable U_i. The pdf f_Uc,i(ε_c,i) describes the probability of the residual of the difference between predicted and measured values (ε_c,i). U_c,i is obtained using Equation 2.2 by subtracting the measurement (U_measure,i) uncertainty from the modeling (U_model,i) uncertainty. The pdf of U_c,i is presented in Figure 2.1. Note that random variables are used here to describe the outcome of either stochastic or deterministic processes.


Figure 2.1: The combined probability density function describes the outcome of the random variable U_c,i, i.e. the combination of modeling and measurement uncertainties.

Determination of threshold bounds


A model instance is falsified if the difference between its predicted and measured values is outside the interval defined by the threshold bounds [T_low,i, T_high,i] for any comparison point i. Thresholds correspond to the bounds of the smallest domain T (see Equation 2.3) that satisfies the relation expressed in Equation 2.4, where φ ∈ ]0, 1] is the target identification probability defined by the user. For example, when a single comparison point is used (n_m = 1), T = [T_low,1, T_high,1] (i.e. the domain T is reduced to the confidence interval defined by the threshold bounds).

T = [T_low,1, T_high,1] × [T_low,2, T_high,2] × ··· × [T_low,n_m, T_high,n_m] ⊆ ℝ^n_m    (2.3)

φ ≤ ∫ ··· ∫_T f_Uc(ε_c,1, ..., ε_c,n_m) dε_c,1 ··· dε_c,n_m    (2.4)

When no information is available to quantify dependencies between the residuals ε_c,i, threshold bounds can be computed for each residual ε_c,i as the shortest set of threshold bounds {T_low,i, T_high,i} that satisfies the following equation:

φ^(1/n_m) = ∫_{T_low,i}^{T_high,i} f_Uc,i(ε_c,i) dε_c,i    ∀ i ∈ {1, ..., n_m}    (2.5)

As in the previous case, for each comparison point i, T_low,i and T_high,i define the shortest interval including a probability content equal to φ^(1/n_m) for the combined uncertainty pdf of U_c,i. In Equation 2.5, the target probability is adjusted using the Šidák correction (§1.2.4) to account for multiple measurements being used simultaneously to falsify model instances.
When U_c,i is described by a unimodal symmetric pdf, threshold bounds can be computed using Equations 2.6. In these equations, F_X⁻¹(x) : x ∈ [0, 1] → ℝ represents the inverse cumulative distribution function of a random variable X.


T_low,i = F⁻¹_{U_c,i}((1 − φ^(1/n_m)) / 2)

T_high,i = F⁻¹_{U_c,i}(1 − (1 − φ^(1/n_m)) / 2)    (2.6)

Threshold bounds define the limits of a hyper-rectangular domain T that has a probability larger than or equal to φ of containing the correct residuals between predicted and measured values. This relation is expressed in Equation 2.7, where U_c,i is the random variable describing the possible residual outcomes. With this methodology, the adequacy of the identification depends on the correct definition of the uncertainties associated with the model and measurements. When using threshold bounds to falsify inadequate model instances, there is a probability larger than or equal to φ of not discarding valid model instances, regardless of the values of dependencies between residuals ε_c,i and regardless of the number of measurements (n_m) used. Threshold bounds are defined once and are then used to evaluate which initial model instances can be falsified.

P( ⋂_{i=1}^{n_m} { T_low,i ≤ U_c,i ≤ T_high,i } ) ≥ φ    (2.7)
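For a combined uncertainty that is well described by a zero-mean Gaussian, Equations 2.5 and 2.6 reduce to one inverse-CDF evaluation per bound. The sketch below is illustrative only; the function name and the Gaussian assumption for U_c,i are not part of the methodology itself.

```python
from statistics import NormalDist

def threshold_bounds(sigma_c, n_m, phi=0.95):
    """Sidak-adjusted threshold bounds (Equations 2.5-2.6) for one
    comparison point whose combined uncertainty U_c,i is assumed to be
    a zero-mean Gaussian with standard deviation sigma_c."""
    p = phi ** (1.0 / n_m)            # per-location probability content
    dist = NormalDist(mu=0.0, sigma=sigma_c)
    t_low = dist.inv_cdf((1.0 - p) / 2.0)
    t_high = dist.inv_cdf(1.0 - (1.0 - p) / 2.0)
    return t_low, t_high
```

For n_m = 1 and φ = 0.95 this returns the familiar ±1.96σ interval; for n_m = 10 each interval must contain 0.95^(1/10) ≈ 0.995 of the probability content, so the bounds widen.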

Model instances are falsified if they do not satisfy the inequalities presented in Equation 2.8. Model instances that are not falsified are candidate models and are all considered equal in the sense that they are all possible explanations of the observed behavior. A model class g(·) is falsified if all possible sets of parameter values are falsified by observations. When a whole model class is falsified, it is generally an indication that there are flaws in the assumptions related to model adequacy.

∀ i ∈ {1, ..., n_m} : T_low,i ≤ g_i(θ) − y_i ≤ T_high,i    (2.8)
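Equation 2.8 can be applied to a whole model set at once with vectorized comparisons. A minimal sketch; the predictions, measurements and bounds below are invented for illustration.

```python
import numpy as np

def candidate_mask(predictions, y, t_low, t_high):
    """Equation 2.8 applied to a whole model set at once: an instance
    survives only if every residual g_i(theta) - y_i lies inside
    [T_low,i, T_high,i]."""
    residuals = predictions - y                     # shape (n_k, n_m)
    inside = (residuals >= t_low) & (residuals <= t_high)
    return inside.all(axis=1)                       # True = candidate model

# Hypothetical numbers: three instances, two comparison points.
preds = np.array([[1.0, 2.0], [1.3, 2.1], [0.2, 2.0]])
y = np.array([1.1, 2.05])
mask = candidate_mask(preds, y,
                      t_low=np.array([-0.2, -0.2]),
                      t_high=np.array([0.2, 0.2]))  # -> [True, True, False]
```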

Figure 2.2 illustrates the concept of threshold definition using multiple measurements. In this figure, threshold bounds T_low,i and T_high,i are found separately for each combined uncertainty, for a probability φ^(1/2). When these bounds are projected on the bivariate pdf, they define a rectangular boundary used to separate candidate and falsified models. This criterion for falsifying models does not require knowledge of dependencies between uncertainties. Also, the probability of falsifying an adequate model instance does not increase with the number of measurements used. This makes the approach suitable for situations where dependencies cannot be evaluated, such as when model simplifications introduce systematic bias in several model predictions. Examples are shown in §5.2.


Figure 2.2: Threshold definition. Threshold bounds T_low,i and T_high,i are found separately for each combined uncertainty for a probability φ^(1/2). When these threshold bounds are projected on the bivariate pdf, they define a rectangular boundary that is used to separate candidate and falsified models.

When dependencies can be evaluated, it is possible to obtain rectangular threshold bounds narrower than those obtained using the Šidák correction by numerically solving Equation 2.4. The width of the threshold bounds used to separate candidate and falsified models depends upon the number of comparison points. In general, the width of the threshold bounds at each location i increases every time a new measurement is considered, since the term φ^(1/n_m) becomes larger. Therefore, there is a tradeoff between the utility of the information brought by new measurements and the loss of accuracy due to the increased width of threshold bounds. This aspect is discussed further in Chapter 4.

2.3 Uncertainty dependencies


As mentioned previously, it is common that little information is available to quantify uncertainty dependencies between predictions. Uncertainty sources such as sensor resolution are usually independent. However, for modeling errors such as simplifications and omissions, assumptions of independence are often invalid. For example, omitting a concrete road barrier within a numerical model of a bridge would systematically affect displacement predictions at several degrees of freedom. These systematic effects can affect dependencies between uncertainties (U_i) at several prediction locations i. Here, such dependencies are represented by uncertainty correlations.
Figure 2.3 presents what could be the error in displacements between prediction locations for a finite-element beam model. If errors are random, displacements predicted at any location are independent of each other and appear to vary around the real displacement. On the other hand, systematic effects can introduce dependencies in the error structure. For example, if the boundary conditions are not adequately modeled, displacements at several degrees of freedom may be systematically affected. The main challenge lies in the representation and quantification of these dependencies. In the case of linear dependencies, they can be represented using correlation coefficients. Nonetheless, for civil structures, quantifying correlation coefficients is usually difficult. This aspect is illustrated on a full-scale example in §5.3.2.

Figure 2.3: Error dependency between degrees of freedom in a finite-element beam model. If errors are random (a), predicted displacements at any location are independent of each other and appear to vary around the real displacement. On the other hand, systematic effects introduce dependencies in the error structure (b). For example, if the boundary conditions are not adequately modeled, the displacements may be biased at several locations.

2.4 Combination of uncertainties for system identification

For the purpose of system identification, each uncertainty source (§1.3.1) is described by a probability density function. In general, several uncertainty sources are associated with measurements and model predictions. Furthermore, in most cases, not all parameters of a model need to be identified. Section 2.4.1 describes how to propagate uncertainties associated with secondary parameters of a model, and §2.4.2 presents how to combine several sources of uncertainty to obtain the combined probability distributions of U_c,i.

2.4.1 Secondary-parameter uncertainty


Parameters taken as input by models are separated into two categories. Physical model parameters to be identified, θ = [θ_1, θ_2, ..., θ_{n_p}], are named primary model parameters. Parameters having a lesser influence on model predictions and that are not intended to be identified are named secondary model parameters, θ_s = [θ_s,1, θ_s,2, ..., θ_s,n_sp]. The contribution of secondary parameters to prediction uncertainty has to be included in the combined uncertainty U_c. For that, the uncertainty in secondary model-parameter values is propagated through the model g([θ̂, θ_s]) while primary parameters are kept at their most likely values θ̂. When a primary parameter has several most-likely values, θ̂ is taken to be the mean value. n_o evaluations of the template model are created in order to obtain stable pdfs describing prediction uncertainties (see §1.3.2). This process is illustrated in Figure 2.4. The mean of each pdf is subtracted from the predicted values to obtain the prediction variability. This is presented in Equation 2.9. Note that there is one vector of samples u_θs,i per measurement location i. The matrix U_θs contains n_o error samples for n_m predictions (see Equation 2.10).


Figure 2.4: Propagation of secondary-parameter uncertainty in a physical model to obtain prediction uncertainty. Uncertainties in secondary model-parameter values are propagated through the template model g([θ̂, θ_s]) while primary parameters are kept at their most likely values θ̂. Several thousand evaluations of the template model are made to compute the prediction uncertainty due to uncertain secondary-parameter values.


u_θs,i = [r_i,1, r_i,2, ..., r_i,n_o]^T − r̄_i    (2.9)

U_θs = [u_θs,1, u_θs,2, ..., u_θs,n_m]    (2.10)
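The propagation of Equations 2.9 and 2.10 can be sketched with a toy two-location model standing in for the template model; the model g and its coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-location template model: the primary parameter is held
# at its most likely value theta_hat while one secondary parameter varies.
def g(theta_hat, theta_s):
    return np.array([theta_hat * (1.0 + 0.10 * theta_s),
                     theta_hat * (2.0 + 0.05 * theta_s)])

theta_hat = 1.0
n_o = 1000
theta_s_samples = rng.normal(0.0, 1.0, n_o)        # secondary-parameter draws
R = np.array([g(theta_hat, ts) for ts in theta_s_samples])  # (n_o, n_m)
U_theta_s = R - R.mean(axis=0)                     # Equations 2.9-2.10
```

Each column of U_theta_s is one vector u_θs,i of zero-mean prediction-variability samples for location i.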

2.4.2 Uncertainty combination


When combining all uncertainty sources together to obtain U_c,i, Monte Carlo techniques (§1.3.3) are used to draw n_c samples from each uncertainty distribution U_i. Samples from all measurement uncertainty sources are summed together, resulting in a matrix U_measure of n_c rows by n_m columns. The same is done for the modeling matrix U_model by summing samples generated from all modeling uncertainty sources. If the secondary model-parameter error sample matrix U_θs does not contain the same number of samples as U_model, a new matrix U_θs containing n_c rows is created by resampling from the original U_θs. The combined uncertainty matrix U_c, containing n_c realizations of the random variable U_c, is obtained by subtracting the matrix U_measure from U_model.
Note that when propagating secondary-parameter uncertainties through a model for several predicted quantities r_i, uncertainty dependencies are automatically included in the combination process. For other sources of uncertainty, such as model simplifications and mesh refinements, very little information is often available to quantify dependencies between uncertainties. This justifies the need for approaches such as error-domain model falsification that can be used without making assumptions about the dependencies involved in the error structure.
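The combination procedure above can be sketched as follows; all distributions and sample sizes are placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_c, n_m = 5000, 2

# Hypothetical sources: one measurement source, one modeling source, and a
# smaller secondary-parameter error matrix left over from Section 2.4.1.
U_measure = rng.uniform(-0.025, 0.025, (n_c, n_m))  # sensor resolution
U_model = rng.uniform(0.0, 0.05, (n_c, n_m))        # model simplifications
U_theta_s = rng.normal(0.0, 0.01, (300, n_m))       # only 300 samples

# Resample the secondary-parameter errors up to n_c rows, then combine.
rows = rng.integers(0, U_theta_s.shape[0], n_c)
U_c = (U_model + U_theta_s[rows]) - U_measure       # n_c realizations of U_c
```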

2.5 Generation of model instances


With the error-domain model falsification approach, the goal is neither to quantify the likelihood of models nor to find the best ones. As pointed out in §2.3, such model ranking requires information that is often not available. The goal is to falsify inadequate model instances. Consequently, current optimization and search methodologies (see §1.5) are not fully adapted to find the populations of candidate and falsified models. This section proposes ways to explore the model-instance solution space. Note that the sampling procedure presented in this section can be extended to include several model classes simultaneously. In this section, three sampling strategies are proposed: grid-based sampling, surrogate modeling and random-walk exploration.

2.5.1 Grid-based sampling


In order to discover which combinations of the n_p primary parameters can be falsified by measurements, model instances are generated according to an n_p-dimensional grid. The minimal and maximal bounds for parameter ranges are defined based on engineering heuristics. Discretization intervals I = [I_1, I_2, ..., I_{n_p}] are also provided to specify the sampling density for each parameter. Note that it is preferable to set conservative bounds for parameter ranges because inadequate solutions will be discarded anyway. Figure 2.5 illustrates the sampling process, where a grid of parameter combinations is generated. When the model is evaluated, secondary parameters are kept at their most likely values.


Figure 2.5: An initial set of model instances is generated based on a grid, where the template model is evaluated using each parameter set. The grid is defined using the minimal and maximal bounds for parameter ranges, defined based on engineering judgment. Discretization intervals are also provided to specify the sampling density.

The template model is evaluated for each parameter combination. This entire set of parameter combinations is named the initial model set. Predictions from each model instance are stored in a matrix having n_k = ∏_{j=1}^{n_p} I_j rows and n_m columns. Inadequate instances from the initial model set are falsified using Equation 2.8. Instances that are not falsified are classified as candidate models. This model-falsification operation is illustrated in Figure 2.6.
The size of the initial model set increases exponentially with the number of parameters to identify (n_p). Considering that the space of possible model instances can hardly be represented by fewer than three subdivisions per parameter, if ten parameters have to be identified, 3^10 ≈ 60 000 samples are required. Such a number of evaluations is already difficult to achieve with current computing capacity when dealing with finite-element models. For 20 parameters, the number of evaluations required increases to more than three billion.
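The grid generation can be sketched as follows, borrowing the parameter bounds of the illustrative example in §2.6.1 with ten subdivisions per parameter; the dictionary keys are hypothetical names.

```python
import itertools
import numpy as np

# Parameter bounds and discretization intervals (values borrowed from the
# illustrative example of Section 2.6.1, ten subdivisions per parameter).
bounds = {"E_concrete_GPa": (15.0, 45.0), "E_steel_GPa": (190.0, 212.0)}
intervals = [10, 10]                               # I = [I_1, I_2]

axes = [np.linspace(lo, hi, n)
        for (lo, hi), n in zip(bounds.values(), intervals)]
initial_model_set = np.array(list(itertools.product(*axes)))
# n_k = 10 * 10 = 100 rows, one per model instance; the template model is
# then evaluated on each row to obtain its n_m predictions.
```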

Figure 2.6: The initial model set, organized in an n_p-parameter grid, is used to explore the space of possible solutions. Inadequate model instances are falsified using Equation 2.8 and models that are not falsified are classified as candidate models.

Instead of creating samples along a grid, the space of possible model instances can be explored using techniques such as Monte Carlo and Latin-hypercube random sampling [147]. However, by proceeding in such a way, users may lose the sense of how densely the space is sampled. Several researchers have already underlined that high-dimensional spaces tend to be extremely empty [112, 231]. For instance, when sampling over 20 parameters, one can be misled into thinking that a million samples is sufficient when the space is in fact only sparsely explored.

2.5.2 Surrogate models


Surrogate models are substitutes for complex models that capture the essential behavior while being solved more quickly. The most common approach is to build response surfaces based on a polynomial function (see §1.5.1). When using surrogate models, approximation errors are inevitable. Therefore, in addition to obtaining predictions for the set of parameters used to construct the surrogate model, predictions are computed for randomly selected parameter samples. These are generated to evaluate the accuracy of the surrogate-model predictions. For instance, by taking a number of random samples equal to the number of design points, the surrogate-model prediction uncertainty is computed based on a Student's t-distribution [71]. This uncertainty represents the approximation error made by using a response surface instead of the real model. It is included in the identification process as a source of modeling uncertainty (U_model).
Surrogate models do not overcome the complexity challenges related to the number of parameters to identify. They do, however, drastically reduce the computation time required to evaluate each solution in the grid presented in §2.5.1, thereby making more samples feasible. Also, surrogate models may not be adequate when dealing with models having highly non-linear responses. Such a situation can be detected by comparing the relative importance of the response-surface uncertainty component with other sources of uncertainty. This methodology is applied to a case study in §5.3.3.
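The accuracy check described above can be sketched with a one-parameter stand-in for the expensive model; the model, design points and polynomial degree are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-parameter stand-in for an expensive finite-element
# model; the surrogate is a quadratic response surface fitted to it.
def expensive_model(theta):
    return 2.0 * theta + 0.3 * theta ** 2

design = np.linspace(0.0, 1.0, 20)                 # design points
coeffs = np.polyfit(design, expensive_model(design), deg=2)

# As many random check points as design points, used to estimate the
# surrogate approximation error (fed back as extra modeling uncertainty).
check = rng.uniform(0.0, 1.0, design.size)
approx_error = np.polyval(coeffs, check) - expensive_model(check)
```

The spread of approx_error is what would be described statistically and added to U_model in the identification process.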

2.5.3 Random walk


Random walk is a sampling technique based on Markov Chain Monte Carlo (MCMC) methods (see §1.5.2). For the purpose of finding candidate models in the solution space, a likelihood function L can be defined to generate samples belonging to the candidate model set. Such a likelihood function is presented in Figure 2.7 for a case using two measurements, n_m = 2. In this figure, the horizontal axes correspond to the residuals of the differences between predicted and measured values for two comparison points.
Figure 2.7 is generated using the likelihood function presented in Equation 2.11, where σ_T,i and T̄_i are computed using Equations 2.12 and 2.13. The shape of the likelihood function (Equation 2.11) is controlled by the parameter β, which affects the sharpness of the function. When β → ∞, the likelihood function is a box whose lateral sides correspond to the threshold bounds T_low,i and T_high,i. This function is based on the β-order generalized Gaussian distribution (see §1.2.2). Note that these equations can be extended to accommodate any number of comparison points n_m. When using a random walk, the model space is explored so that the distribution of observed residuals ε_o corresponds to the likelihood function presented in Equation 2.11.

Figure 2.7: Two-dimensional likelihood function used to generate parameter samples. The likelihood L_cm is maximal when the observed residuals ε_o,i are within the threshold bounds [T_low,i, T_high,i]. This example was created using Equation 2.11 with a shape-function parameter β = 10.

L_cm(ε_o,1, ε_o,2) = [β^(1−1/β) / (2 σ_T,1 σ_T,2 Γ(1/β))] · exp{ −(1/β) [ (|ε_o,1 − T̄_1| / σ_T,1)^β + (|ε_o,2 − T̄_2| / σ_T,2)^β ] }    (2.11)

T̄_i = (T_high,i + T_low,i) / 2    (2.12)

σ_T,i = |T_low,i − T̄_i|    (2.13)

Reducing the number of model evaluations by using random-walk grid sampling

Traditional random-walk methodologies can create steps leading to any location in the parameter domain ℝ^n_p. The number of model evaluations can be reduced by performing the random walk over points aligned on a grid, as presented in §2.5.1. If the random walk leads to a point in the parameter domain that has already been evaluated, it can re-use the previously computed result. The spacing and limits of the grid are defined to cover all plausible combinations of parameters, with a density sufficient so that solutions that might be considered equivalent are not sampled. An illustrative example of this random-walk sampling technique is presented in §2.6.1.
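The grid random walk with the box-shaped likelihood of Equation 2.11 can be sketched as follows. The residual function, thresholds and step distribution are illustrative assumptions; the cache is what avoids re-evaluating previously visited grid points.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def box_likelihood(residuals, t_low, t_high, beta=100.0):
    """Unnormalized Equation 2.11: nearly constant inside the threshold
    box and dropping sharply to ~0 outside (large beta)."""
    t_bar = (t_high + t_low) / 2.0                  # Equation 2.12
    sigma_t = (t_high - t_low) / 2.0                # Equation 2.13
    z = np.minimum(np.abs(residuals - t_bar) / sigma_t, 8.0)  # avoid overflow
    return math.exp(-float(np.sum(z ** beta)) / beta)

def grid_random_walk(start, residuals_of, t_low, t_high, n_steps=500):
    """Metropolis walk over integer grid indices; evaluations are cached so
    that revisited grid points cost no additional model evaluation."""
    cache = {}
    def lik(idx):
        if idx not in cache:
            cache[idx] = box_likelihood(residuals_of(idx), t_low, t_high)
        return cache[idx]
    x = tuple(start)
    path = [x]
    for _ in range(n_steps):
        step = tuple(int(round(s)) for s in rng.normal(0.0, 1.0, len(x)))
        y = tuple(a + b for a, b in zip(x, step))
        if rng.random() < min(1.0, lik(y) / max(lik(x), 1e-300)):
            x = y
        path.append(x)
    return path, len(cache)   # visited path and number of model evaluations

# Toy residual function: each grid index contributes 0.1 mm of residual
# per unit index at each of two comparison points (hypothetical).
path, n_eval = grid_random_walk(
    start=(0, 0),
    residuals_of=lambda idx: np.array([0.1 * idx[0], 0.1 * idx[1]]),
    t_low=np.array([-0.35, -0.35]),
    t_high=np.array([0.35, 0.35]),
)
```

Because steps to points with near-zero likelihood are almost never accepted, the chain stays inside the candidate region while the cache counts how many distinct model evaluations were actually needed.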

2.6 Model falsification summary


Figure 2.8 summarizes the steps leading to the falsification of inadequate models. The first step is to define the goal of the identification and to convert it into parameters to identify (θ) using physics-based models (g(θ)) and measurements (y). Uncertainties associated with both the model and measurements are combined and used to define threshold bounds (T_low, T_high) including a target probability φ. Model instances are generated, and each predicted value is compared with the measured value. If, for any comparison point, the difference (g_i(θ) − y_i) is outside the interval defined by the threshold bounds, the model instance is falsified. This process is repeated for all model instances, and those that are not falsified are part of the candidate model set. If all model instances are rejected, it indicates that the model class is also falsified by the measured data. In such a case, initial objectives and inputs can be reviewed. Otherwise, when candidate models are obtained, they represent the plausible explanations of the observed behavior with respect to the initial model set, measurements and uncertainties provided.
Figure 2.8: Flowchart describing error-domain model falsification.

2.6.1 Illustrative example


An illustrative example describes the main steps of error-domain model falsification. The structure is a simply supported, ten-meter-long composite beam. Cross-sectional dimensions and details are presented in Figure 2.9. The beam is modeled using the finite-element method and is made entirely of shell elements. Predicted and measured vertical displacements are compared at two locations. The first is located at quarter-span and the second at mid-span. The structure is loaded by a 50 kN point load located at mid-span.

Figure 2.9: Composite beam cross-section. The structure studied is a simply supported, ten-meter-long composite beam. The beam is modeled by shell elements using the finite-element method.

In this illustrative example, the values of two primary parameters have to be identified: the concrete and steel Young's moduli. The parameter ranges are respectively [15-45] GPa and [190-212] GPa. Each parameter range is subdivided into ten parts to generate an initial model set containing 100 model instances.

Uncertainties
For this example, only three uncertainty sources are considered. The first two are sensor resolution and model simplifications. They are described by uniform distributions whose minimal and maximal bounds are presented in Table 2.1. These values were chosen arbitrarily for illustration.
Table 2.1: Modeling and measurement uncertainty sources for the beam example

                          Vertical displacement
Uncertainty source        min          max
Sensor resolution         -0.025 mm    0.025 mm
Model simplifications     0 %          5 %

The third source of uncertainty is due to secondary parameters of the model. Three secondary parameters contribute to model-prediction uncertainty: the inaccuracy in the thickness of the slab and of the two steel elements (web and flange). These inaccuracies are represented by Gaussian distributions having a mean of zero and a standard deviation of 1 mm for the concrete and 0.05 mm for the steel elements. The uncertainty in model predictions is obtained by taking 1000 combinations of these three secondary parameters and then evaluating the template model for each set. During these simulations, primary model parameters are kept at their mean values. The distribution of secondary-parameter model uncertainty is combined with the two other uncertainty sources. Figure 2.10 illustrates this combination. The outcome is two uncertainty pdfs, U_c,i, representing the distribution of the combined uncertainty for each comparison point.

Figure 2.10: Combined uncertainty probability density functions for the mid-span and quarter-span comparison points. For each comparison point, uncertainties are separated into modeling (U_model,i) and measurement (U_measure,i) uncertainties. These are subtracted from each other to obtain the combined uncertainties U_c,1 and U_c,2.

These two pdfs are presented in Figure 2.11 as a bivariate probability density function. This bivariate pdf is used to define the threshold bounds (T_low,i, T_high,i) including a target probability φ = 0.95. Minimal and maximal bounds for each location are found numerically by satisfying Equation 2.14 while minimizing the area enclosed in the projection of the threshold bounds on the bivariate pdf.

φ ≤ ∫_{T_low,2}^{T_high,2} ∫_{T_low,1}^{T_high,1} f_Uc(ε_c,1, ε_c,2) dε_c,1 dε_c,2    (2.14)

Model falsification

For the first measurement location (quarter-span), a model instance is falsified if its residual between predicted and measured values (ε_o,i) is either lower than -0.09 mm or higher than 0.03 mm (i.e. the threshold bounds). For the second measurement location (mid-span), the threshold bounds are set at -0.14 mm and 0.04 mm. These bounds define a rectangular coverage region that has a probability at least equal to φ of not wrongly discarding the right model instance. In order to falsify inadequate models, simulated measurements are generated by randomly selecting the predictions from one model instance and then adding errors randomly drawn from the combined uncertainty pdfs. The simulated measurements obtained are -1.3 mm and -2.0 mm for the quarter-span and mid-span vertical displacements.
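The falsification test can be checked numerically against these bounds; only the threshold bounds and simulated measurements below come from the example, while the two sets of instance predictions are hypothetical.

```python
import numpy as np

# Threshold bounds and simulated measurements quoted above (mm).
t_low = np.array([-0.09, -0.14])    # quarter-span, mid-span
t_high = np.array([0.03, 0.04])
y = np.array([-1.3, -2.0])

# Hypothetical predictions from two model instances (mm).
preds = np.array([[-1.32, -2.05],   # residuals fall inside both intervals
                  [-1.00, -1.60]])  # residuals fall far outside -> falsified
residuals = preds - y
is_candidate = ((residuals >= t_low) & (residuals <= t_high)).all(axis=1)
```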
The observed residuals ε_o,1 and ε_o,2 are computed by subtracting the measured values from the predictions of each model instance. Then, by comparing these observed residuals to



Figure 2.11: The two combined uncertainty pdfs are presented as a bivariate probability density function. This bivariate pdf is used to define the threshold bounds (T_low,i, T_high,i) including a target probability φ = 0.95. Minimal and maximal bounds for each location are found numerically by satisfying Equation 2.14 while minimizing the area enclosed in the projection of the threshold bounds. Model instances are falsified if, for any comparison point, the difference between predicted and measured values (g_i(θ) − y_i) is outside the rectangular threshold bounds.



threshold bounds, 58 model instances out of 100 can be falsified. The 42 remaining candidate models are all possible explanations of the observations. These candidate models (outlined by the shaded region) are presented in Figure 2.12. In this figure, each axis corresponds to the possible range of a parameter value; candidate models are represented by diamonds, falsified models by circles, and the correct model used to generate the simulated measurements by the symbol *.

Figure 2.12: Representation of the initial model set with candidate and falsified models. By comparing the difference between model predictions and measurements with threshold bounds, 58 model instances out of 100 are falsified. The 42 candidate models are outlined by the shaded region.

The shaded region corresponding to the candidate model set includes the correct parameter set. In this case, parameter compensation leads to more than one model that can explain the measurements when uncertainties are included. If new predictions have to be made using the template model, all combinations of parameters defined by the candidate model set should be used to compute uncertainties for these new predictions.

Random-walk sampling

The random-walk grid sampling technique proposed in §2.5.3 can be used to reduce the number of evaluations required for defining the candidate model set. The likelihood function used for sampling is based on Equation 2.11 with β = 100. The random walk is based on the generation of samples from a Gaussian distribution having a mean of 0 and a standard deviation of 1, multiplied by the grid spacing. The value obtained is rounded to obtain an integer corresponding to a point on the grid. As mentioned in §2.5.3, a step toward a new position is accepted in two circumstances. The first is if its likelihood is larger than the likelihood obtained at the previous position. Secondly, if its likelihood is lower than the likelihood obtained at the previous position, the new position has a probability of being accepted corresponding to the ratio of these two likelihoods (see §1.5.2).


Using 2000 random-walk samples over the model instance space leads to the same 42 candidate models as observed in the previous example. The random-walk grid sampling technique only required 78 evaluations of the model instead of 100 with grid-based sampling. Figure 2.13 presents the model instances evaluated. The vertices between samples correspond to the path followed by the random walk. The starting point is highlighted by a circle.
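The acceptance rule described above can be sketched as a grid-based Metropolis random walk. The likelihood, grid dimensions and plateau values below are illustrative stand-ins, not the thesis' Equation 2.11:

```python
import numpy as np

rng = np.random.default_rng(0)

def grid_random_walk(likelihood, start, grid_shape, n_steps):
    """Metropolis-style random walk on integer grid positions: steps are
    rounded samples from N(0, 1) (in grid units); a step is accepted if its
    likelihood is higher, or with probability equal to the likelihood ratio
    otherwise. Returns the set of grid points actually evaluated."""
    pos = np.array(start)
    visited = {tuple(pos)}
    like = likelihood(pos)
    for _ in range(n_steps):
        step = np.rint(rng.normal(0.0, 1.0, size=len(grid_shape))).astype(int)
        cand = np.clip(pos + step, 0, np.array(grid_shape) - 1)
        cand_like = likelihood(cand)
        if cand_like >= like or rng.random() < cand_like / max(like, 1e-300):
            pos, like = cand, cand_like
        visited.add(tuple(cand))
    return visited

# Toy likelihood: a nearly flat plateau (the candidate region) and ~0
# elsewhere, mimicking a sharp likelihood with a large exponent.
plateau = lambda p: 1.0 if (3 <= p[0] <= 6 and 3 <= p[1] <= 6) else 1e-12
visited = grid_random_walk(plateau, start=(5, 5), grid_shape=(10, 10), n_steps=200)
```

Because revisited grid points need no new model run, the number of distinct positions in `visited` corresponds to the number of model evaluations actually required.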
[Figure: random-walk exploration of the grid of model instances; candidate models, falsified models, the correct model and the starting point are plotted over steel Young's modulus (x10^5 MPa) and concrete Young's modulus (x10^4 MPa)]

Figure 2.13: Exploration of the model instance space using random-walk MCMC. The same 42 candidate models were found while requiring 22% fewer model evaluations than an exhaustive grid sampling. The vertices between samples correspond to the path followed by the random walk. The starting point is highlighted by a circle.

This figure shows that the grid-based random-walk technique can efficiently create samples in the candidate model set. This technique can reduce the number of evaluations of inadequate model instances compared with the systematic evaluation of all possible parameter combinations. The performance of the approach depends on the size of the candidate model set compared with the initial model set size.

2.7 Model falsification using time-domain data


Dynamic monitoring data is commonly presented in two forms: the time domain and the frequency domain. The time-domain form is, for instance, the transient signal directly recorded on a system. Figure 2.14 presents the effect of model simplifications on a time-domain signal. Graph a) represents the true time-domain displacement of a structure, Q(t). When monitoring such a system, noise (U_measure) is usually recorded in addition to the system signal (graph b). Here, U_measure is a Gaussian random variable. Simulations of the system behavior are inevitably inaccurate. Therefore, even when using the same parameter values as the true system, a time-dependent bias (U_model(t)) is present (graph c). When comparing the time-domain measured and predicted signals, the residual is not only Gaussian noise (graph d). Falsifying models based on such a signal is hardly achievable for civil structures because model-simplification errors are also time-dependent. This aspect undermines our capacity to define the error structure and to quantify uncertainties.

[Figure: four panels of displacement (mm) versus time (s): a) true system; b) measured system; c) time-domain simulated behavior using the same parameter values as in the true system; d) residual of the difference between the measured and predicted signals]

Figure 2.14: An illustration of the effects of model simplifications and omissions on time-domain structural identification. Graph a) represents the true displacement of a structure over time. When monitoring a system, noise is usually recorded in addition to the system behavior (b). Simulations of the system behavior are inevitably inexact (c). When comparing the time-domain measured and predicted signals (d), a bias is present. In such a situation, the residual cannot be described using stationary Gaussian noise.

The particularity of frequency-domain dynamic measurements is that, for the purpose of comparison, the quantities measured are not the same as the quantities predicted by the model. Experimental modal analysis is a signal-processing method allowing the modal parameters of a structure to be derived, including resonance frequencies, mode shapes and damping ratios. For this reason, the time-domain acceleration data must first be transformed into the frequency domain to compare the measured and predicted natural frequencies of the structure. This operation transforms the vectors a_q containing time-domain data recorded at locations q ∈ {1, ..., n_a} into n_y frequencies and mode shapes φ_yj, where j ∈ {1, ..., n_y}.

Due to the limitations mentioned above, model falsification is performed in the frequency domain using either forced or ambient vibration monitoring (AVM). The first step is to perform a correspondence check between the measured (φ_yj, j ∈ {1, ..., n_y}) and predicted (φ_gl(θ), l ∈ {1, ..., n_l}) mode shapes to ensure that only corresponding modes are compared together. When dealing with large finite-element models, the number of predicted modes (n_l) is in most situations much larger than the number measured (n_y), i.e. n_l >> n_y. Moreover, the arrangement of these modes may differ from one model instance (θ_k) to another. The Modal Assurance Criterion (MAC) [6] (Equation 2.15, where H denotes the complex conjugate transpose) is used to perform mode-shape correspondence checks. When the measured mode shapes corresponding to each predicted mode shape are found for all model instances, model falsification is performed based on observed and predicted frequencies.

MAC(φ_yj, φ_gl(θ_k)) = |φ_yj^H φ_gl(θ_k)|^2 / ( |φ_yj^H φ_yj| |φ_gl(θ_k)^H φ_gl(θ_k)| )     (2.15)

In order to identify the behavior of a structure, a set containing n_k model instances θ_k, k ∈ {1, ..., n_k}, is generated to explore the domain of possible solutions. Only sets {j, k, l} satisfying Equation 2.16 for all j ∈ {1, ..., n_y}, k ∈ {1, ..., n_k}, l ∈ {1, ..., n_l} are used to falsify model instances. λ_MAC ∈ ]0, 1] is a target MAC value used to determine if two mode shapes are similar.

{(j, k, l) ∈ N^3 : MAC(φ_yj, φ_gl(θ_k)) ≥ λ_MAC}     (2.16)

The result of this correspondence check is a matrix of size n_k by n_y where each line corresponds to a model instance and each column to a measured mode. The matrix is filled with indexes l mapping, for each model instance, which predicted mode corresponds to which measured mode. When a set {j, k, l} does not satisfy Equation 2.16, the index l is set to zero. The matrix is used during the falsification process to indicate which predicted frequency is compared with which measured frequency, based on the method presented in Section 2.2.
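Equation 2.15 and the correspondence check can be sketched as follows. The matching strategy shown (taking the first predicted mode that exceeds the target MAC) is a simplification made for illustration, and the function names and target value are assumptions:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors, following
    Equation 2.15 (np.vdot applies the conjugate transpose)."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = abs(np.vdot(phi_a, phi_a)) * abs(np.vdot(phi_b, phi_b))
    return num / den

def correspondence_matrix(measured, predicted, mac_target):
    """One row of the n_k-by-n_y correspondence matrix (one model instance):
    for each measured mode, store the 1-based index l of the first predicted
    mode with MAC >= target, or 0 if no predicted mode matches."""
    row = []
    for phi_y in measured:
        match = 0
        for l, phi_g in enumerate(predicted, start=1):
            if mac(phi_y, phi_g) >= mac_target:
                match = l
                break
        row.append(match)
    return row
```

Stacking one such row per model instance gives the full matrix used during falsification.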

2.8 Compatibility between error-domain model falsification and Bayesian inference

Error-domain model falsification is compatible with Bayesian inference approaches. In the case where prior knowledge is available to describe the plausibility of primary parameters, this prior knowledge can be updated using observations. Traditional Bayesian inference methods mostly rely on the ℓ2-norm to represent the likelihood of models conditional on observations (see Section 1.2.2). This implies that the error structure follows a Gaussian distribution and that dependencies between uncertainties are known.

In order to overcome this restrictive requirement, Bayesian inference can use a likelihood function based on the ℓ∞-norm. With the ℓ∞-norm, the outcome of the likelihood function is either a constant for candidate models or 0 for falsified models. Therefore, the only thing that can differentiate candidate model instances is prior knowledge. When prior knowledge only describes the lower and upper bounds of each parameter using a uniform distribution, Bayesian inference using the ℓ∞-norm leads to the same result as error-domain model falsification (i.e. the same candidate model set).
Numerical techniques developed for Bayesian inference can also be used with a likelihood function based on an ℓ∞-norm. In this case, only upper bounds for uncertainties need to be provided. When the error structure is described by unbounded probability distribution functions (such as the Gaussian distribution) with unknown dependencies, it is approximated by threshold bounds T_low,i and T_high,i computed as proposed in Section 2.2. For practical applications, Equation 2.13 can be used to approximate the ℓ∞-norm likelihood function when its parameter tends to infinity. A graphical representation of this likelihood function is presented in Figure 2.15. When uncertainty dependencies cannot be completely defined, this approach can be used as an alternative to the ℓ2-norm likelihood functions commonly used in Bayesian inference.
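A likelihood of this kind can be sketched with a generic super-Gaussian surrogate that is approximately constant inside the threshold bounds and near zero outside. This is an illustrative stand-in, not the exact form of Equation 2.13:

```python
import numpy as np

def box_likelihood(residuals, t_low, t_high, n=100):
    """Surrogate for an infinity-norm likelihood: approximately constant when
    every residual lies within its threshold bounds [t_low, t_high], and ~0
    otherwise. The exponent n controls the sharpness of the walls; large n
    produces the box shape shown in Figure 2.15.
    NOTE: this super-Gaussian form is an illustrative assumption."""
    residuals = np.asarray(residuals, dtype=float)
    center = (np.asarray(t_high) + np.asarray(t_low)) / 2.0
    half_width = (np.asarray(t_high) - np.asarray(t_low)) / 2.0
    z = np.abs((residuals - center) / half_width)
    return float(np.exp(-np.sum(z ** n)))
```

For a residual well inside the bounds the value is essentially 1; just outside, it collapses to 0, which reproduces the constant-or-zero behavior described above.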

[Figure: surface plot of the likelihood function; the vertical walls project onto a rectangular threshold region in the parameter plane]

Figure 2.15: Two-dimensional ℓ∞-norm likelihood function generated from Equation 2.13 using a parameter value of 100. The projection of the vertical walls on the horizontal plane corresponds to the threshold bounds. Large parameter values can be used to approximate the ℓ∞-norm likelihood function. This can be used as an alternative to ℓ2-norm Bayesian inference.

2.8.1 Illustrative example

The illustrative example presented in Section 2.6.1 is solved with Bayesian inference using an ℓ∞-norm likelihood function. Prior information related to Young's modulus values is represented by a uniform distribution over the range 190-212 GPa for steel and 15-45 GPa for concrete. The likelihood function is based on Equation 2.13 using a parameter value of 100. The posterior pdf is approximated by evaluating several combinations of parameters arranged on a grid, as presented in Figure 2.16. In this figure, the horizontal axes represent possible parameter values and the vertical axis the probability. The contour plot of the posterior pdf is projected on the bottom plane. Falsified models are those having a probability equal to zero. Candidate models are those having a non-zero probability. The candidate model set found is equivalent to the set found in Section 2.6.1. Also, note that all candidate model instances obtained using ℓ∞-norm Bayesian inference are equally probable. This example shows that the threshold bounds used to falsify models can also be used by Bayesian inference methodologies to update the prior knowledge of the parameters to be identified.
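The grid evaluation can be sketched as follows. The toy residual model, threshold bounds and grid resolutions are assumptions made for illustration; only the structure of the computation (uniform prior times box-shaped likelihood, normalized over a grid) follows the text:

```python
import numpy as np

# Parameter grids over the uniform prior supports (MPa).
e_steel = np.linspace(1.90e5, 2.10e5, 21)
e_concrete = np.linspace(1.5e4, 4.5e4, 31)

def likelihood(theta_s, theta_c):
    """Box-shaped likelihood: 1 when the residual of a hypothetical
    deflection prediction lies inside assumed threshold bounds, else 0."""
    residual = 1.0e6 / theta_s + 2.0e5 / theta_c - 12.0
    return 1.0 if -0.5 <= residual <= 0.5 else 0.0

# With a uniform prior, the posterior is proportional to the likelihood;
# candidate instances end up equally probable, falsified ones at zero.
posterior = np.array([[likelihood(s, c) for c in e_concrete] for s in e_steel])
posterior /= posterior.sum()
```

The non-zero cells of `posterior` form the candidate region, matching the constant-or-zero behavior of the ℓ∞-norm likelihood.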
[Figure: surface plot of the posterior probability over steel Young's modulus (x10^5 MPa) and concrete Young's modulus (x10^4 MPa); candidate models have non-zero probability and falsified models have zero probability]

Figure 2.16: Posterior pdf for the illustrative example presented in Section 2.6.1. This posterior pdf is computed using the ℓ∞-norm likelihood function presented in Figure 2.15. The prior distribution is set as uniform over the range 190-212 GPa for steel and 15-45 GPa for concrete Young's modulus. The candidate model region found corresponds to the set found using error-domain model falsification.

2.9 Conclusions

This chapter introduces the error-domain model-falsification approach. Concepts related to hypothesis falsification are used to discard inadequate model instances based on the residual of the difference between predicted and measured values. Specific conclusions are:

1. Error-domain model falsification does not require knowledge of the dependence between uncertainties to identify possible parameter values of models. With this methodology, the probability of falsifying a correct model instance does not increase with the number of measurements used.

2. Several sampling procedures can explore the space of possible solutions satisfactorily. Different procedures are best suited to different types of problems. Nonetheless, computational complexity is a challenge when dealing with high-dimensional model spaces.

3. Error-domain model falsification is compatible with Bayesian inference that uses a likelihood function based on the ℓ∞-norm. An example shows that results from the two approaches are equivalent when the same initial assumptions are made.

3 Expected identifiability - Predicting the usefulness of measurements

Summary

This chapter describes the expected identifiability metrics. These metrics predict probabilistically to what extent measuring a system can be useful for falsifying models from an initial set and for reducing future prediction ranges (through the inverse cumulative distribution functions F_CM^-1 and F_PR^-1, respectively). The expected identifiability quantifies the effects on data interpretation of a-priori choices, such as model class, measurement location, measurement type and sensor accuracy, and of constraints, such as uncertainty level and dependencies.

3.1 Introduction

Our capacity to interpret data depends on aspects such as the choice of model classes, model parameters (and their range of possible values) and the extent of uncertainties influencing models and measurements. This chapter presents a methodology to predict probabilistically to what degree measurements are useful for reducing the number of candidate models and their prediction ranges, with respect to the aspects mentioned above. These metrics were introduced by Goulet and Smith as the expected identifiability [87]. The approach is based on the generation of simulated measurements to quantify probabilistically how many candidate models should be expected and what their prediction ranges should be. The second section of this chapter describes how simulated measurements can be generated while including the variability and dependencies that can be present in real systems. The third section describes the two expected identifiability metrics: the expected reduction in the number of candidate models and the expected reduction in future prediction ranges.

3.2 Generation of simulated measurements

Expected identifiability metrics use simulated measurements to emulate the model falsification process. Several instances of simulated measurements are used to obtain statistical distributions (i.e. cumulative distribution functions) of the number of expected candidate models and of future prediction ranges.

Simulated measurements y_s,i are obtained by combining predictions g_i(θ) = r_i from instances of the initial model set (see Section 2.5) with uncertainties U_i. Figure 3.1 presents how modeling uncertainties are added to the predicted behavior of a model to obtain the assumed true behavior, Q_s,i. Simulated measurements (y_s,i) are obtained by adding a measurement error to the assumed true behavior obtained previously. This procedure for generating simulated measurements is based on Equation 2.1. For practical purposes, generating a simulated measurement can be done in a single step by subtracting a sample of the combined uncertainty U_c,i from the predicted behavior of a randomly selected model instance.

[Figure: predicted behavior + model uncertainty -> assumed true behavior; assumed true behavior + measurement uncertainty -> simulated measurement]

Figure 3.1: Schematic representation of the inclusion of modeling and measurement errors in the generation of simulated measurements. Modeling error is added to the predicted behavior of a model instance to obtain the assumed true behavior. Simulated measurements (y_s,i) are obtained by adding a measurement error to the assumed true behavior obtained previously.
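The single-step procedure can be sketched directly; the response values and uncertainty parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_measurement(predicted, combined_uncertainty_sampler):
    """Generate one simulated measurement instance by subtracting a sample
    of the combined (model + measurement) uncertainty from the predicted
    responses of a randomly chosen model instance: y_s,i = r_i - eps_c,i."""
    return predicted - combined_uncertainty_sampler()

# Hypothetical example: predictions of one model instance at two
# measurement locations, with a Gaussian combined uncertainty per location
# (non-zero mean represents systematic bias).
r = np.array([12.3, 8.7])                              # predicted responses (mm)
sampler = lambda: rng.normal([0.5, 0.2], [0.4, 0.3])   # bias + noise per location
y_s = simulate_measurement(r, sampler)
```

Repeating this for many randomly selected model instances produces the population of simulated measurements used by the expected identifiability metrics.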

The process by which simulated measurements are generated is illustrated in Figure 3.2. Before making observations on a structure, any model instance from the initial model set can be an adequate explanation of the system behavior. Therefore, any model can be randomly chosen to generate simulated measurements. Each model instance in the initial model set is made from a combination of parameters (θ_k = [θ_1, θ_2, ..., θ_np]_k, k ∈ {1, ..., n_k}) used in a template model g_i(θ_k)¹. n_k is the number of instances in the initial model set. Even if a parameter set (θ_k) happens to have the true values, the predicted values from the template model [r_1, r_2, ..., r_nm] rarely match those measured on a system [y_1, y_2, ..., y_nm]. This is because aleatory and epistemic errors (systematic bias) are present in both model predictions and measurements. For each simulated measurement instance, errors (ε_c,i) are generated for each measurement location (i ∈ {1, ..., n_m}) from the combined uncertainty pdf (f_Uc,i). As discussed in Section 2.3, error dependencies are likely to occur when performing predictions for several quantities. Therefore, the uncertainty correlation (ρ) has to be evaluated and included in the error-generation process. The next section describes how to proceed.
[Figure: a model instance (parameter set) is randomly selected from the parameter space and evaluated with the template model to obtain predicted responses r_i; correlated error instances [ε_c,1, ε_c,2, ..., ε_c,nm] are drawn from the expected residual pdfs (U_c,i) of each measurement location, including their correlation; simulated measurements are computed as [y_s,i] = [r_i] - [ε_c,i]]

Figure 3.2: Illustration of the process of simulating measurements based on the predictions of model instances and on uncertainties. Note that the generation of simulated measurements includes the correlation between expected residual pdfs.

3.2.1 Correlations between uncertainties

When using the expected identifiability metrics, the goal is to quantify the usefulness of measuring a structure. Assuming that there is no dependency between model predictions would lead to simulated measurements that are not representative of civil-system behaviors (see Section 2.3). In Chapter 2 it was shown that when measurements are available, it is possible to conservatively falsify models while making no assumptions regarding uncertainty dependencies. However, when generating simulated measurements, this is no longer possible. Assuming that uncertainties are perfectly dependent would lead to an underestimation of the usefulness of measurements, and assuming that they are independent would lead to an overestimation of usefulness. A way to overcome this challenge is to provide evaluations of these dependencies via imprecise definitions of uncertainty correlations.

¹ The notation associated with secondary parameters is dropped to simplify equations.

Evaluating uncertainty correlations between model predictions remains a cumbersome task since limited quantitative information is available. For the purpose of generating simulated measurements, a method based on qualitative reasoning is proposed to estimate and use uncertainty correlations in a stochastic process. Qualitative Reasoning (QR) [234] uses common-sense reasoning to support complex decisions. QR is often used in modeling and control, and to support decisions when limited information is available.
Figure 3.3 shows the qualitative reasoning scheme proposed to qualify the uncertainty correlation between several predictions. In this figure, the correlation value is presented on the horizontal axis. The vertical axis corresponds to the probability of occurrence of correlation values depending on their qualitative description. It is difficult for users to provide accurate estimates of correlation values. Indicating whether the correlation between uncertainties is "low", "moderate" or "high", and whether it is "positive" or "negative", is more representative than providing discrete, deterministic values. Qualitative reasoning methods are used to represent incomplete knowledge, where the definition of correlation labels is specific to each application. Section 5.3.5 presents a study of the robustness of the expected identifiability metric with respect to the uncertainty-correlation definition used in the qualitative reasoning scheme.

[Figure: overlapping triangular probability distributions over the correlation axis from -1.00 to 1.00, labelled "High -", "Moderate -", "Low -", "Independent", "Low +", "Moderate +" and "High +"]

Figure 3.3: Qualitative reasoning description used to define the uncertainty correlation. The correlation value is presented on the horizontal axis. The vertical axis corresponds to the probability of occurrence of a given correlation value depending on its qualitative description: "low", "moderate", "high", "positive" and "negative".

Uncertainty correlations are included in the process of simulating measurements by generating correlated error samples from the combined uncertainty pdf (U_c). Details regarding sampling of multivariate correlated random variables can be found in the references [80, 81]. Each realization of errors uses a different correlation value distributed according to the density functions provided in the qualitative reasoning scheme.
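This generation step can be sketched as follows. The triangular supports attached to each qualitative label are assumed values, since the text notes that label definitions are application-specific:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed triangular supports (min, mode, max) for three of the qualitative
# correlation labels of Figure 3.3; actual definitions are per-application.
QR_LABELS = {
    "low +": (0.0, 0.25, 0.5),
    "moderate +": (0.25, 0.5, 0.75),
    "high +": (0.5, 0.75, 1.0),
}

def correlated_errors(sigmas, label, n_samples):
    """Generate correlated Gaussian error samples for two measurement
    locations; each realization draws its own correlation value from the
    triangular pdf attached to the qualitative label."""
    lo, mode, hi = QR_LABELS[label]
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        rho = rng.triangular(lo, mode, hi)
        cov = np.array([[sigmas[0] ** 2, rho * sigmas[0] * sigmas[1]],
                        [rho * sigmas[0] * sigmas[1], sigmas[1] ** 2]])
        out[i] = rng.multivariate_normal([0.0, 0.0], cov)
    return out

errors = correlated_errors((1.0, 2.0), "high +", 4000)
```

Drawing a new correlation value per realization spreads the simulated measurements over the whole imprecisely defined dependency range rather than committing to one deterministic correlation.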

3.3 Computation of expected identifiability

The expected identifiability metrics are the expected number of candidate models (F_CM^-1) and the expected prediction ranges (F_PR^-1). In the process of computing the expected identifiability, several thousand simulated measurement instances are generated from randomly selected model instances. Each simulated measurement instance is used to falsify models from the initial model set using the falsification methodology presented in Chapter 2. Each time an instance of simulated measurements is used, a set of candidate models is obtained; the number of candidate models n_CM and its prediction range n_PR are stored in vectors CM and PR to be analyzed statistically later. Vectors CM and PR each contain n_s terms corresponding to the number of simulated measurement instances generated. This procedure is intended to be used when dealing with a population of initial model instances. For civil engineering applications it is common to have an initial model set containing several thousand individuals (several examples are presented in Chapter 5). In order to determine whether the number of simulated measurements n_s is sufficient to obtain stable statistical representations of results, sets of a thousand simulated measurements are generated. If the cumulative distribution functions (cdfs) obtained from successive generations differ, additional sets of simulated measurements are required.
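The loop described above can be sketched with a toy one-parameter model; all numerical values are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: 100-instance initial model set, one measurement location,
# fixed threshold bounds on the residual (assumed values throughout).
thetas = np.linspace(1.0, 3.0, 100)            # initial model set
predict = lambda theta: 4.0 * theta            # toy template model
t_low, t_high = -1.0, 1.0                      # threshold bounds on residuals

def candidate_count(y_sim):
    """Falsify the whole initial set against one simulated measurement and
    return the candidate-set size n_CM."""
    residuals = predict(thetas) - y_sim
    return int(np.sum((residuals >= t_low) & (residuals <= t_high)))

CM = []
for _ in range(1000):
    true_theta = rng.choice(thetas)            # random instance as "truth"
    eps_c = rng.normal(0.2, 0.4)               # combined uncertainty sample
    CM.append(candidate_count(predict(true_theta) - eps_c))

# Sorted CM is the empirical cdf F_CM; read off an upper quantile.
CM = np.sort(CM)
n_cm_95 = CM[int(0.95 * len(CM)) - 1]
```

Here `n_cm_95` is the empirical analogue of F_CM^-1(0.95): the candidate-set size that should not be exceeded with probability 0.95.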

3.3.1 Expected reduction in the number of candidate models

The first metric is the number of expected candidate models. The number of candidate models obtained from each instance of simulated measurements is summarized in an empirical cumulative distribution function (cdf), F_CM(n_CM). This cdf describes the probability of obtaining a maximal number of expected candidate models (n_CM) if measurements are taken on the structure. The quantity extracted from the cdf is the maximal number of expected candidate models that should be obtained for a given probability in ]0, 1] (provided by the inverse cdf, F_CM^-1). An example of F_CM is presented in Figure 3.4a), where the horizontal axis corresponds to the expected size of the candidate model set and the vertical axis to the probability of obtaining a maximum candidate model set size. In this example, there is a probability of 0.95 of falsifying at least 60% of the models (F_CM^-1(0.95) = 40%) and a probability of 0.50 of falsifying at least 75% of the models (F_CM^-1(0.50) = 25%).

Figure 3.4b) presents the effect of uncertainties, dependencies and target identification reliability on F_CM(n_CM). When the standard deviation σ_c of the combined uncertainties (U_c), the target identification reliability, or the absolute value of the uncertainty correlation (ρ) is reduced, the cdf is shifted to the left. Thus, a smaller candidate model set is expected for the same probability. Conversely, larger uncertainties, larger absolute values of correlation coefficients or a larger target identification reliability increase the expected size of the candidate model set for the same probability (i.e. they shift the cdf to the right). Note that there is no unidirectional trend associated with the choice of measurement configurations.


[Figure: a) empirical cdf of the maximal candidate model set size (as a percentage of the initial model set), with the 0.95 and 0.50 probability levels marked; b) the same cdf shifted left or right by changes in uncertainties, dependencies and target identification reliability]

Figure 3.4: a) Example of an empirical cumulative distribution function (cdf), F_CM(n_CM), used to compute the expected size of the candidate model set. F_CM(n_CM) depends on the target identification reliability, the uncertainties U_c, the measurement configuration (n_m) and the uncertainty dependencies. In this example, there is a probability of 0.95 of falsifying at least 60% of the models (F_CM^-1(0.95) = 40%) and a probability of 0.50 of falsifying at least 75% of the models (F_CM^-1(0.50) = 25%). b) Effect of uncertainties, dependencies and target identification reliability on F_CM(n_CM). There is no unidirectional trend associated with the choice of measurement configurations.

3.3.2 Expected reduction in the prediction ranges

The goal of structural identification, aside from identifying model classes and physical parameter values, is to be able to make predictions related to the behavior of complex systems, such as stress ranges, natural frequencies and reaction forces. Therefore, the second quantities of interest are the prediction ranges of unobserved quantities (i.e. quantities other than those used during identification). As described in the previous paragraph, the number of candidate models varies for each instance of simulated measurements. Consequently, the ranges of predictions obtained from each candidate model set also vary. The prediction ranges are stored and presented as a cdf, F_PR(n_PR), showing the probability of obtaining any prediction range if measurements are taken on the structure. The expected prediction ranges are extracted from the cdf for a given probability (via the inverse cdf, F_PR^-1).
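The computation of prediction ranges over candidate sets can be sketched as follows, using hypothetical candidate sets and an illustrative unobserved-quantity model:

```python
import numpy as np

def prediction_range_for_candidates(candidate_thetas, predict_unobserved):
    """Range of an unobserved prediction over one candidate model set:
    the spread of the predicted values across all candidate parameters."""
    values = np.array([predict_unobserved(t) for t in candidate_thetas])
    return values.max() - values.min()

# Each toy candidate set stands for the outcome of one simulated-measurement
# instance; collecting their ranges builds the empirical cdf F_PR.
ranges = []
for candidate_set in ([1.0, 1.2, 1.4], [2.0, 2.1], [1.5]):
    ranges.append(prediction_range_for_candidates(candidate_set,
                                                  lambda t: t ** 2))
ranges = np.sort(ranges)
```

A single surviving candidate gives a range of zero; larger candidate sets generally give wider ranges, and the sorted values form the empirical F_PR.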

3.3.3 Expected identifiability summary & general framework

A general framework summarizing the steps described previously is presented in Figure 3.5. The first step is to define the inputs used to predict the expected identifiability: an initial model set that could represent the behavior of the system studied, a measurement-system configuration and estimations of uncertainties. Based on the model instances contained in the initial model set, simulated measurements are generated and used to falsify model instances. For each set of simulated measurements, the number of candidate models n_CM and the prediction ranges n_PR obtained are stored. When the number of simulated measurements generated reaches 1000 instances, the cumulative distribution functions F_CM(n_CM) and F_PR(n_PR) are computed. These steps are repeated to verify whether these cdfs have converged to stable results. If not, more simulated measurements are generated. Otherwise, the answer is returned to users, who decide whether the expected performance is sufficient. If it is, proceeding with in-situ measurements is justified. Otherwise, users can choose to revise their initial assumptions, for instance by using more accurate sensors and better model classes to reduce uncertainties. If no improvement in the initial assumptions is possible, the expected identifiability justifies not performing a monitoring intervention on the structure.

Reductions in the expected number of candidate models and in prediction ranges are indicators of the usefulness of measurements for obtaining new knowledge and for making better prognoses. In the case where threshold bounds are large in comparison with model-prediction variability, monitoring the system is unlikely to provide useful decision support because model instances cannot be falsified in significant numbers. Computing the expected identifiability using the number of candidate models and prediction ranges as metrics can help determine whether or not measuring is likely to reveal new knowledge about the system behavior.

3.4 Conclusions

In this chapter, a methodology is proposed to predict probabilistically to what extent measuring would be useful for falsifying model instances and for reducing future prediction ranges. This tool can be used to support decision-making regarding monitoring interventions. Specific conclusions are:

1. The effects on data interpretation of a-priori choices, such as model classes, measurement locations, measurement types and sensor accuracy, and of constraints, such as uncertainty levels and dependencies, can be quantified by the expected identifiability metrics. This quantification is compatible with the falsification framework described in Chapter 2.

2. Two metrics, the expected number of falsified models and the expected prediction ranges, are useful to predict in absolute terms the usefulness of measurements.

3. The methodology is able to generate simulated measurements that include location-specific correlated and systematic uncertainties.



[Flowchart: START -> define the aspects influencing data interpretation (template model, monitoring system, model and measurement uncertainties, primary parameters to be identified) -> randomly select a model instance from the initial model set -> generate simulated measurements -> compare simulated measurements with model instance predictions to separate falsified and candidate models -> store the number of candidate models and the candidate-model-set prediction range -> once 1000 instances are reached, check whether the cdfs have varied compared with the previous loop -> if the expected performance is sufficient, proceed with monitoring; otherwise revise the aspects influencing data interpretation or, if no other choices remain, perform no monitoring intervention]

Figure 3.5: Flowchart representing the steps involved in the computation of the expected identifiability. These metrics quantify the utility of monitoring for better understanding the behavior of a system.

4 Measurement-system design

Summary

This chapter describes the measurement-system design methodology proposed to maximize the usefulness of measurements for data interpretation. This approach uses the expected identifiability metrics to evaluate the performance of several measurement and test configurations. It shows that over instrumentation is possible and that too much data may hinder interpretation.

4.1 Introduction

The expected identifiability described in Chapter 3 is used as a performance metric to optimize the efficiency of measurement systems for falsifying model instances. In common monitoring interventions, a balance is sought between performance and cost. Thus, the methodology uses cost as a second objective for optimizing measurement systems. The cost of a measurement system is computed as the sum of sensor costs and the expenses related to testing equipment, such as trucks in the case of static-load tests.

In the absence of computational support, engineers usually measure structures where the largest response is expected. Understandably, this ensures that the ratio between the measured value and the sensor resolution is the largest. However, in the case of structural identification, sensor resolution, and more generally measurement uncertainties, are often not the dominant source of uncertainty (see Section 5.3.2). Furthermore, these locations may not be the best for separating candidate and falsified model instances. In order to maximize the utility of monitoring interventions, the performance of several measurement and load configurations is compared based on their expected identifiability (see Chapter 3).
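The comparison of configurations can be sketched as follows. The surrogate scoring function is a deliberately simple stand-in for the full expected-identifiability computation, shaped only to reproduce the qualitative trade-off discussed in this chapter:

```python
# Hypothetical sketch: rank sensor configurations by their expected
# candidate-set size (smaller is better). evaluate_configuration stands in
# for the simulated-measurement procedure of Chapter 3.
import math

def evaluate_configuration(n_sensors):
    """Toy surrogate score: the expected candidate-set size first decreases
    as sensors add information, then worsens once the widening of threshold
    bounds (multiple-measurement correction) dominates."""
    information_gain = 100.0 * math.exp(-0.6 * n_sensors)
    threshold_penalty = 2.0 * n_sensors
    return information_gain + threshold_penalty

configs = range(1, 13)
scores = {n: evaluate_configuration(n) for n in configs}
best_n = min(scores, key=scores.get)
```

In a real application, `evaluate_configuration` would run the full expected-identifiability simulation for each candidate configuration; the search over `configs` and the selection of the minimum remain the same.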

4.2 Measurement systems and over instrumentation

Figure 4.1 schematically presents the trends of two competing factors involved in the design of measurement systems. The number of measurements used is on the horizontal axis and the expected number of candidate models is on the vertical axis. This last quantity is expressed as a percentage of the initial model set size expected for a given certainty (see Section 3.3.1). Note that the number of measurements used is in many cases proportional to the monitoring cost. These curves describe the relationships between the expected number of candidate models (see Section 3.3.1) and the number of sensors used. Instead of the number of candidate models, the expected prediction range (see Section 3.3.2) can also be used as the performance metric.

[Figure 4.1: expected number of candidate models (expressed as a percentage of the initial
model set) versus the number of measurements (proportional to cost); regions of useful
measurements and of over instrumentation are indicated.]

Figure 4.1: Schematic representation of the phenomena involved in the design of measurement
systems. The total number of candidate models decreases as the number of measurements
increases until the point where additional observations are not useful (solid curve). Over
instrumentation is due to the combined effects of the increased amount of information and
threshold adjustments (dashed curves).

In Figure 4.1 the total number of candidate models (solid line) decreases as the number of
measurements increases, until the point where additional observations are not useful. Beyond
this point, additional measurements may decrease the efficiency of the identification by
increasing the number of candidate models (i.e. reducing the number of falsified models).
Over instrumentation is due to the combined effects of two competing trends: an increase in
performance due to additional information brought by new observations and a decrease in
performance due to threshold adjustments (dashed lines).
In order to avoid over-falsification, threshold bounds are conservatively adjusted using the
Šidák correction (see 2.2). Threshold corrections ensure that the reliability of the
identification meets the target when multiple measurements are used simultaneously to falsify
model instances. In other words, the criteria used to falsify models (threshold bounds) depend
upon the number of measurements used. Over instrumentation occurs when including a new
measurement falsifies fewer model instances than the number of additional instances accepted
due to threshold-bound adjustments. Such a situation is likely to happen
when the information contained in several measurements is not independent. Furthermore,
poor identification performance can be expected when modeling and measurement uncertainties
are large in comparison with the prediction variability within the initial model-instance set.
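As a rough illustration of how threshold bounds widen with the number of measurements, the following sketch applies the Šidák correction to a zero-mean Gaussian combined uncertainty; the function name, its arguments and the Gaussian assumption are illustrative, not the implementation used in the thesis:

```python
from statistics import NormalDist

def sidak_bounds(phi_target, n_m, sigma_c):
    """Sidak-corrected falsification threshold (sketch): when n_m measurements
    are used jointly, each individual test is assigned probability
    phi_target**(1/n_m) so that the joint reliability still meets phi_target.
    A zero-mean Gaussian combined uncertainty with standard deviation sigma_c
    is an assumption made for this illustration."""
    phi_i = phi_target ** (1.0 / n_m)            # per-measurement probability
    z = NormalDist().inv_cdf((1.0 + phi_i) / 2)  # two-sided interval half-width
    return z * sigma_c                           # bound widens as n_m grows

# With more measurements, each bound must be wider to keep the joint reliability
print(sidak_bounds(0.95, 1, 1.0), sidak_bounds(0.95, 10, 1.0))
```

The widening of individual bounds with `n_m` is what allows additional, weakly informative measurements to accept model instances that a smaller measurement set would have falsified.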
Figure 4.2 presents a conceptual example where the Young's modulus (E) of a beam is sought
using a static-load test and vertical displacement measurements. It is intuitively known that a
vertical displacement measured right over the support will not be able to distinguish between
any values of E. Nevertheless, using this measurement would require widening the threshold
bounds for the two other comparison points, thereby decreasing the overall interpretation
capacity.

Figure 4.2: Conceptual example used to illustrate the situation where additional measurements
can lead to over instrumentation.

4.3 Optimization method for measurement-system design


There are complexity issues involved with optimizing the performance of measurement
systems. Equation 4.1 gives the number of possible sensor configurations when chosen from
n_m potential measurements. When selecting configurations involving 20 measurements there
are more than a million possibilities. For 300 measurements, the number of possibilities
(about 10^90) is larger than the number of particles contained in the observable universe
(about 10^80, [60]). This illustrates the exponential growth of the solution space with the
number of potential measurements. In order to obtain optimized solutions efficiently, advanced
search algorithms are necessary. The methodology used to support measurement-system design is
based on the greedy algorithm (see 1.5.2). Nonetheless, note that the approach presented is not
limited to the optimization algorithm chosen to explore measurement-system configurations.

    Σ_{k=1}^{n_m}  n_m! / ( k! (n_m − k)! )  =  2^{n_m} − 1                    (4.1)
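Equation 4.1 can be checked numerically; `n_configurations` is an illustrative name used only here:

```python
from math import comb

def n_configurations(n_m: int) -> int:
    """Number of non-empty sensor configurations that can be chosen from
    n_m potential measurements (left-hand side of Equation 4.1)."""
    return sum(comb(n_m, k) for k in range(1, n_m + 1))

# The sum matches the closed form 2**n_m - 1 from Equation 4.1
assert n_configurations(20) == 2**20 - 1     # > 1e6 for 20 measurements
assert n_configurations(300) == 2**300 - 1   # ~1e90 for 300 measurements
```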


4.3.1 Methodology
The methodology used to design efficient measurement systems is based on a greedy algorithm.
It identifies which single measurement can be removed from an initial configuration
containing n_m measurements while minimizing the change in the value of an objective
function. This process is repeated until a single measurement is left.
Figure 4.3 presents a flowchart describing the steps involved in optimizing the performance
of measurement systems. In this flowchart, the vector s contains n_m dummy variables
indicating whether or not a potential measurement location is used.
[Flowchart: identify initial potential measurement locations; compute the expected number
of candidate models using the current sensor configuration and for all permutations of N−1
sensors; remove the measurement location minimizing the loss of performance; repeat with
N = N − 1 until N = 1; compute measurement-configuration costs and remove dominated
solutions; if the expected performance is sufficient, select a monitoring system; otherwise
evaluate other potential measurement locations and types, or perform no monitoring
intervention.]

Figure 4.3: Flowchart summarizing the steps involved in the optimization of
measurement-system performance.

The greedy algorithm first evaluates the expected identifiability when using all possible
measurement types and locations simultaneously. Once the expected identifiability (F^-1_CM(φ))
is computed using all sensors, each sensor is removed successively from the set of N = n_m
selected sensors, s(i) = 0 for all i ∈ {1, . . . , N}. The sensor whose removal leads to the best
performance is removed permanently from the set of selected sensors. This process is repeated
iteratively with N = N − 1 sensors until a single sensor is left. In the case of static
measurements, it is useful to optimize simultaneously the number of sensors and load cases used.
Therefore, the algorithm stops when a single sensor and load case are obtained. The cost
associated with each solution is computed and non-optimal solutions (with respect to cost) are
removed. The results from measurement-system optimization are returned in a two-objective graph
as presented in Figure 4.3 and in a table containing the details of each measurement-system
configuration.
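The elimination loop described above can be sketched as follows; `objective` stands in for the expected-identifiability evaluation (smaller is better) and its exact form, like all names here, is an assumption for illustration:

```python
def greedy_backward(all_sensors, objective):
    """Backward greedy elimination: starting from every sensor selected,
    repeatedly drop the single sensor whose removal yields the best
    (smallest) objective value, until one sensor is left. Returns one
    configuration per size, from n_m sensors down to 1, paired with its
    objective value."""
    selected = set(all_sensors)
    history = [(frozenset(selected), objective(selected))]
    while len(selected) > 1:
        # evaluate all single-sensor removals and keep the best one
        best = min(selected, key=lambda s: objective(selected - {s}))
        selected.remove(best)
        history.append((frozenset(selected), objective(selected)))
    return history
```

Cost evaluation and the removal of dominated solutions would then be applied to the configurations collected in `history`.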
Users may then choose the measurement system that offers the best tradeoff between expected
performance and available resources. If the expected performance cannot justify monitoring
interventions, new measurement locations and types may be evaluated. If such possibilities
do not exist, this operation could lead to a justification for performing no monitoring on the
structure, thereby redirecting monitoring resources to other structures where data would be
more useful for understanding system behavior. Applied examples using this methodology are
provided in Chapter 5.

Complexity
For static monitoring, if one load case is possible, the greedy algorithm performs the
measurement-system optimization in fewer than n_m^2/2 iterations, where n_m is the maximal
number of measurements. Figure 4.4 compares the number of iterations required with the number
of possible sensor combinations. It shows that the greedy algorithm complexity (O(n^2)) leads
to a number of sensor combinations to test that is significantly smaller than the number of
possible combinations (O(2^n)).
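The contrast between the two growth rates can be tabulated; the counting convention below (N single-removal trials while N sensors remain selected) is one possible convention, on the order of n_m^2/2:

```python
def greedy_evaluations(n_m: int) -> int:
    """Objective-function evaluations used by backward greedy elimination,
    counting N single-removal trials while N sensors remain selected
    (one possible counting convention; on the order of n_m**2 / 2)."""
    return sum(N for N in range(2, n_m + 1))

# Polynomial growth versus the exponential 2**n - 1 of an exhaustive search
for n in (10, 20, 300):
    print(n, greedy_evaluations(n), 2**n - 1)
```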

[Figure 4.4: number of combinations versus the number of possible measurement locations;
the number of possible combinations grows as O(2^n) while the number of iterations required
by the greedy algorithm grows polynomially, O(n^2).]

Figure 4.4: An example of the growth in the number of iterations required by the Greedy
algorithm compared with the solution space growth.


4.4 Measurement-system design for time-domain data


Designing optimized measurement systems using dynamic data differs from the previous
methodology because, for dynamic data, the number of observations available to compare with
model predictions is not directly related to the number of locations monitored. As mentioned
in 2.7, measuring either accelerations or velocities at one location can provide information
for more than one mode. Conversely, only a few modes may be observed while monitoring
several hundred locations.

4.4.1 Methodology
As is done for static analyses, prior to acquiring data, simulated measurements can be used
to quantify the utility of measurements for falsifying models. The process is divided into two
steps: the determination of the modes of interest and the determination of where to measure
to obtain information about these modes.

Quantifying the usefulness of modes for falsifying models

In a first step, a subset of modes L ⊆ {1, . . . , n_l} containing n_L simulated mode shapes is
chosen from the n_l modes available for the randomly selected model instance θ_v, where
v ~ U(1, n_k) ∈ N is a discrete uniform random variable defined between 1 and n_k, the number
of model instances. These modes are used to generate simulated mode shapes, y_{s,l} = g_l(θ_v),
where l ∈ L. The subset of modes is selected to include only global modes because the number
of simulated modes n_l is usually much larger than the number of measured modes n_y. This
happens because not all simulated modes are relevant for the identification of parameters and
because models can predict several local modes that are not captured by measurements.
A vector of simulated natural frequencies y_s for n_L frequencies is generated by subtracting
a vector U_c = [U_{c,1}, . . . , U_{c,n_L}]^T containing realizations of the combined
uncertainties from the vector r_v containing the frequency predictions (see Equation 4.2).
These predictions are obtained from a randomly selected model instance. Realizations of U_c
take into account the possible correlation between uncertainties. As defined in 3.2.1, for
situations where little information is available, a qualitative reasoning scheme is used to
define the correlation values.

    y_s = r_v − U_c                                                            (4.2)

Instances of simulated natural frequencies are used to emulate the model-falsification process.
Each time simulated natural frequencies are generated, the number of candidate models
obtained, n_CM, is stored in a vector CM. When this process is repeated a number of times
sufficient to obtain a stable statistical distribution, an empirical cumulative distribution
function (cdf, F_CM(n_CM)) is plotted as shown in Figure 3.4. The inverse cdf F^-1_CM(φ) is
used as a metric to quantify the performance of mode combinations for falsifying model
instances, where φ ∈ ]0, 1] is the target certainty.
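The simulation loop described above can be sketched as follows, with an independent Gaussian U_c as a simplifying assumption (the text allows correlated uncertainties); all names and the threshold-based falsification rule are illustrative:

```python
import numpy as np

def expected_identifiability(predictions, thresholds, phi=0.95, n_sim=2000, seed=0):
    """Monte-Carlo sketch of the mode-selection metric: simulated frequencies
    y_s = r_v - U_c are drawn from a randomly selected model instance
    (Equation 4.2), instances whose residuals exceed the falsification
    thresholds are rejected, and the phi-quantile of the candidate-model
    count approximates F^-1_CM(phi)."""
    rng = np.random.default_rng(seed)
    n_inst, n_obs = predictions.shape
    counts = np.empty(n_sim, dtype=int)
    for i in range(n_sim):
        v = rng.integers(n_inst)                 # random "true" instance
        u_c = rng.normal(0.0, thresholds / 2.0)  # combined-uncertainty draw
        y_s = predictions[v] - u_c               # simulated frequencies
        residuals = predictions - y_s            # shape (n_inst, n_obs)
        candidates = np.all(np.abs(residuals) <= thresholds, axis=1)
        counts[i] = candidates.sum()
    return int(np.quantile(counts, phi))         # empirical inverse cdf
```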
In order to explore which combination of modes leads to the smallest number of candidate
models, search algorithms are used to explore the space of solutions. For each combination
of modes tested, F^-1_CM(φ) is computed using the methodology described above. Results are
reported in a graph describing the relation between the number of modes used and F^-1_CM(φ).
At this point, a subset L_opt ⊆ L of n_{L_opt} modes is selected. This set of modes is
expected to contain the most useful information for identifying the model physical parameters
θ. The next step consists in selecting where to measure in order to obtain the information
required to represent these mode shapes adequately and to build the matrix Φ (see 2.7).

Optimization of measurement locations for time-domain monitoring

The objective function used to quantify the performance of measurement systems is the
mode-match criterion. The mode-match criterion determines the correspondence between
the matrix Φ obtained using the initial configuration of all possible sensors {1, . . . , n_a}
and the matrix Φ̂ obtained using a subset of sensors Q ⊆ {1, . . . , n_a}. The mode-match is
expressed as a vector of length n_{L_opt} obtained by subtracting Φ and Φ̂ and counting, for
each column, the number of terms either equal to 0 or equal to the terms of the matrix Φ.
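One way to read the criterion above is sketched below, where removing a sensor is represented by zeroing its row of Φ; this zeroing convention and the names are assumptions made for illustration:

```python
import numpy as np

def mode_match(phi_full, sensor_mask):
    """Sketch of the mode-match criterion: phi_full is the mode-shape matrix
    built from all n_a sensors (rows = locations, columns = modes); removing
    a sensor is represented here by zeroing its row. For each mode (column),
    the entries left unchanged by the sensor subset are counted."""
    phi_subset = phi_full * sensor_mask[:, None]        # matrix from subset Q
    unchanged = np.isclose(phi_full - phi_subset, 0.0)  # terms equal in both
    return unchanged.sum(axis=0)                        # one count per mode
```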
The performance of several measurement configurations Q is tested. Measurement locations
containing redundant information can be discarded while still keeping sufficient information
to compare measured and predicted mode shapes for the subset of l ∈ L_opt modes identified
in the previous steps. Figure 4.5 schematically represents the effect of the number of
measurement locations on the mode-match criterion. The horizontal axis represents the number
of locations that are monitored. The right end of the curve (a) corresponds to the total
number of locations n_a initially provided. The vertical axis is the mode match for all modes
l ∈ L_opt, expressed as a percentage of the match obtained using all sensors initially
provided. When all sensors are used, the mode match is not affected by sensor removal until a
point (b) where it decreases as measurement locations are removed. This graphical
representation can be used to select either the number of sensors that contain the same amount
of information as the initial set of sensors (b) or a configuration of sensors that represents
a tradeoff between the amount of information lost and monitoring cost (c). This point
represents the configuration with the least number of sensors that meets a target mode match
in ]0, 1]. Note that with this metric, it could be possible to obtain a mode match larger
than 1. This would correspond to the situation where some sensors degrade the MAC value
instead of improving it.
Figure 4.5: Schematic representation of the effect of the number of measurement locations on
the mode-match criterion. The initial configuration using all sensors results in a 100% mode
match (a); the same match may be possible with a configuration using fewer sensors (b); and a
tradeoff between information lost and monitoring costs may be fixed with fewer sensors (c).
The mode-match criterion quantifies the capacity to find a correspondence between predicted
and measured mode-shapes.

In order to minimize the effect of the choice of reference mode shapes, the process of
optimizing measurement-system configurations has to be repeated several times using randomly
selected reference mode-shapes g_l(θ_v). This ensures that the selection of the reference
mode-shapes does not influence the measurement configurations obtained. The measurement
configuration having the smallest number of sensors while reaching a target mode-match
criterion is stored each time a reference mode-shape is tested. The results obtained using
several reference mode-shapes are summarized in a histogram describing the frequency of
use of each sensor contained in the initial configuration. Sensors that are used with a
relative frequency less than a target q ∈ [0, 1] are discarded. Stricter or looser target
values can be used depending on the application. The set of optimal sensors found is denoted
Q_opt.
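The frequency-based filtering step might look as follows; the representation of configurations (one boolean row per greedy run) and the default value of q are assumptions:

```python
import numpy as np

def frequent_sensors(configurations, q=0.5):
    """Sketch of the histogram-based filtering step: `configurations` holds
    one boolean row per reference mode-shape, marking the sensors retained
    by that optimization run. Sensors whose relative frequency of use falls
    below the target q are discarded; the remaining indices approximate the
    optimal sensor set Q_opt."""
    usage = np.mean(np.asarray(configurations, dtype=float), axis=0)
    return np.flatnonzero(usage >= q)
```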

Optimization strategy
Each methodology presented in 4.4.1 involves a number of combinations that grows
exponentially with both the number of sensors and the number of mode shapes. In most
practical applications, an exhaustive search of the solution space is not possible. In order
to find optimized sets of mode shapes and sensors efficiently, search algorithms are
necessary. Many choices are available in the literature (see 1.5.2). A methodology suited to
this type of problem is the inverse greedy algorithm proposed in 4.3.1. This heuristic-based
optimization technique is used recursively to look for either the sensor or the mode that can
be removed from a set while leading to the best performance. This methodology can provide
optimized sets of either mode shapes or sensors in fewer than n^2 iterations.
In order to find optimized sets of sensors (4.4.1), multiple loops of greedy optimization may
increase the performance. For instance, when no measured data are available to describe
mode shapes, the configuration returned using a single loop of the greedy algorithm can be
over-conservative (see Figure 4.6a). A solution to this challenge is to run the greedy
algorithm several times and to remove the sensors used with a frequency less than q from the
subsequent optimization loops until no additional sensor can be removed. Figure 4.6b presents
the sensors removed after several optimization iterations.

Figure 4.6: Histograms representing the relative frequency of usage of each sensor. a) Result
obtained after the first greedy optimization loop; measurement locations used with a frequency
less than q are removed during the subsequent greedy optimization loop. b) Final sensors
removed after two or more optimization loops.

4.4.2 Summary of time-domain monitoring-system optimization

Figure 4.7 presents a summary of the measurement-system design methodology presented in
this section. This methodology provides a systematic framework to evaluate the performance
of measurement systems for a specified monitoring goal represented by the parameters to be
identified. This approach can be used to guide the design of new measurement systems and
to optimize existing configurations. This methodology is applied to a full-scale structure in
5.5.3.

4.5 Conclusions
This chapter describes a methodology to analyze the performance of measurement systems and to
design optimized ones. Too many redundant measurements may decrease data-interpretation
performance. This aspect is incorporated in a systematic and quantitative framework that is
able to help prevent over instrumentation in measurement systems. Specific conclusions are:

1. The criteria used to falsify models (threshold bounds) depend upon the number of
measurements. If the error structure is incompletely defined and too many
measurements are used, data interpretation can be hindered by over instrumentation.
2. Optimizing measurement-system configurations involves the treatment of large amounts of
data that would be unreasonable to analyze manually. The measurement-system design
methodology can be used to determine good tradeoffs with respect to interpretation
goals and available resources.


3. The approach may prevent potentially costly over instrumentation. Moreover, it indicates situations when measuring a structure is not likely to be useful.
[Flowchart: define an initial set of sensors from the possible measurement locations; find an
optimized set of modes that maximizes the expected identifiability; remove measurement points
that are used less than a target percentage of the time; select an optimized set of sensors.]

Figure 4.7: General framework describing the measurement-system optimization methodology.


5 Case studies

Summary
This chapter describes the validation tests and applications using the error-domain model
falsification approach as well as the complementary methodologies presented in Chapters 2-4.
A first illustrative example demonstrates the capacity of the approach to perform correct
diagnoses in situations where there are systematic errors and where the error structure is
unknown. Four additional case studies are drawn from performance-evaluation applications
to show the potential of the approaches for understanding the behavior of full-scale systems.

5.1 Introduction
Validation tests and applications of the error-domain model falsification approach are
presented in this chapter. Note that the term validation refers here to the comparison of
predicted quantities with measurements. As exposed in 1.2.1, complete validation of theories
and hypotheses is often not possible. Therefore, the applicability of the solutions proposed
is demonstrated for five case studies covering the identification of the behavior of
structures and the identification of leaks in a pressurized pipe network. Aspects covered by
each case study are summarized in Table 5.1.
Table 5.1: Summary of aspects covered by each case study

Case study                          Description
Cantilever beam example (5.2)       Evaluate the diagnoses returned by several methodologies
                                    in situations where there are systematic errors
Langensand Bridge (5.3)             Illustrate the applicability of error-domain model
                                    falsification, expected identifiability and
                                    measurement-system design (static and dynamic data)
Grand-Mere Bridge (5.4)             Compare the effects of model simplifications and
                                    omissions on prediction errors
Tamar Bridge (5.5)                  Illustrate the applicability of error-domain model
                                    falsification and measurement-system design using
                                    dynamic data
Lausanne fresh-water                Illustrate how error-domain model falsification can be
distribution network (5.6)          used for the detection of leaks in pressurized pipe
                                    networks


5.2 Cantilever beam example


An illustrative example is presented to compare diagnoses obtained using error-domain model
falsification with results of residual minimization (1.2.1) and Bayesian inference (1.2.2).
This example is tailored to demonstrate the effect on data interpretation of using models that
are idealized representations of reality for identifying the physical properties of
structures. The system studied here is a cantilever beam. The true representation of the
structure is shown in Figure 5.1a, where the semi-rigid cantilever connection is modeled using
a rotational spring having a stiffness parameter K. This beam has a Young's modulus
E* = 70 × 10^3 MPa and the vertical force applied at its end is F* = 5 × 10^3 N.
In order to be representative of full-scale situations where it is not possible to capture
reality in a model, an idealized beam is used as a model of the true system (see Figure 5.1b).
This idealized structure does not include the partial rigidity of the cantilever connection.
For this structure, the parameters to be identified are the Young's modulus E and the value of
the force applied F. These parameters have possible ranges of [20, 100] × 10^3 MPa and
[1, 10] × 10^3 N respectively. These ranges define the parameter domain Θ ⊂ R^2. The set of
parameters to be identified is denoted θ = [E, F]. The beam is 3000 mm long and has a square
cross-section of 300 mm × 300 mm. Its inertia I is 6.75 × 10^8 mm^4.

(a) True cantilever beam

(b) Idealized cantilever beam

Figure 5.1: True and idealized cantilever beams. Parameters to be identified using the
idealized beam are the Young's modulus (E) and the value of the vertical force applied (F).
The vertical displacement v(x) of the beam at any location x ∈ [0, l] (l = 3000 mm) is
described by Equation 5.1. For any location x, the error introduced by the idealized model is
ε(x) = F l x / K.

    v(x) = F x^2 (3l − x) / (6 E I)                                            (5.1)

Simulated measured values, y(x), are obtained according to Equation 5.2, where v*(x) is the
model displacement computed with the correct parameter values E* and F*. u_meas is a
realization of U_meas ~ N(μ_meas, σ_meas), a Gaussian random variable describing
sensor-resolution uncertainty. The mean of this random variable is 0 and its standard
deviation is 0.02 mm. Sensor-resolution errors are independent of the measured locations. The
combined-uncertainty variance is obtained by summing the variances of the model-simplification
and sensor-resolution uncertainties (σ_c^2 = σ_model^2 + σ_meas^2). The combined uncertainty
is represented by a random variable denoted U_c ~ N(μ_c, σ_c).

    y(x) = v*(x) − ε(x) + u_meas                                               (5.2)
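Equations 5.1 and 5.2 can be combined into a small simulator; the K value shown is the one used in the second scenario later in this section, and the function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Values from Section 5.2 (units: MPa, N, mm, mm^4, N*mm/rad)
E_TRUE, F_TRUE = 70e3, 5e3
L, I = 3000.0, 6.75e8
K = 12e10           # rotational stiffness of the semi-rigid connection

def v(x, E, F):
    """Idealized-model deflection (Equation 5.1)."""
    return F * x**2 * (3 * L - x) / (6 * E * I)

def eps(x, F):
    """Error introduced by ignoring the connection's partial rigidity."""
    return F * L * x / K

def y(x, sigma_meas=0.02):
    """Simulated measurement (Equation 5.2)."""
    return v(x, E_TRUE, F_TRUE) - eps(x, F_TRUE) + rng.normal(0.0, sigma_meas)
```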

5.2.1 Comparison of system-identification approaches

The effect of using an idealized model is studied for three identification methodologies:
residual minimization using weighted least-squares regression, Bayesian inference and
error-domain model falsification. These three approaches are compared for several cases using
different numbers of measurements, n_m ∈ {1, 2, 10, 50}. For each value of n_m, the
displacement is evaluated at x = x_start + i(l − x_start)/n_m, i ∈ {1, . . . , n_m}, using
Equation 5.2. x_start is the minimal distance from the cantilever support where measurements
are taken. In this example, x_start = 1000 mm. These approaches are also compared for three
scenarios: first for a case where there is no systematic error, second for a case where there
are systematic errors and these are recognized, and third for a case where there are
unrecognized errors. For the first two scenarios, an identification is deemed correct if it
includes the correct values E* and F*. For the third scenario, the identification is correct
either if it includes the correct values for E* and F* or if it returns an empty set
indicating that the entire model class is falsified.
For each approach, the domain of possible solutions is explored by solving the model g(·)
for parameter sets at the intersections of an equally spaced grid having 100 × 100 divisions.
For each model instance, a vector o(θ) contains the observed residuals of the differences
between predicted and measured values for all predicted locations (see Equation 5.3).

    o(θ) = g(θ) − y                                                            (5.3)

Residual minimization
The first identification approach finds parameters θ̂ that are optimal in a least-squares
sense (see Equation 5.4). The weighting matrix W is set to [diag(y)]^2. In this approach, the
goal is to calibrate model parameters to obtain the smallest weighted sum of the squares of
the residuals. Uncertainties are assumed to be Gaussian and independent.

    θ̂ = arg min_θ  o(θ)^T W o(θ)                                              (5.4)
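A grid-based arg-min for Equation 5.4 can be sketched as follows; the grid representation matches the 100 × 100 exploration described above, while the names are illustrative:

```python
import numpy as np

def residual_minimization(grid_predictions, y, weights):
    """Weighted least-squares identification over a parameter grid
    (Equation 5.4): returns the index of the grid instance minimizing
    o(theta)^T W o(theta), with a diagonal W = diag(weights)."""
    o = grid_predictions - y               # residual vectors, (n_inst, n_m)
    cost = np.sum(weights * o**2, axis=1)  # quadratic form with diagonal W
    return int(np.argmin(cost))
```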



Bayesian inference
The second identification approach infers credible regions where the correct model parameters
should lie. Here, it is assumed that the only prior knowledge available is the minimal and
maximal bounds for the parameters to be identified. Thus, prior knowledge is represented by a
uniform distribution as described in Equation 5.5.

    P(θ) = constant, if θ ∈ Θ;   P(θ) = 0, if θ ∉ Θ                            (5.5)

The function mapping the residual values to the likelihood of parameter values is presented
in Equation 5.6, where μ_Uc is a vector containing the mean of the combined-uncertainty pdf
for each location i. Equation 5.6 is the multivariate Gaussian distribution. The normalization
constant P(y) (see Equation 1.2) is computed by integrating ∫ P(y|θ)P(θ) dθ. This integral
can be evaluated in the domain Θ only, because the prior knowledge assigns a credibility of 0
outside this domain.

    P(y|θ) = (2π)^(−n_m/2) |Σ|^(−1/2) exp( −(1/2) (o(θ) − μ_Uc)^T Σ^(−1) (o(θ) − μ_Uc) )   (5.6)

For this method, uncertainties are assumed to be independent. Thus, the covariance matrix Σ
is a diagonal matrix containing the variance σ_c^2 for each comparison point where measured
and predicted values are available.
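For the independent case used here, the log of Equation 5.6 reduces to a diagonal-covariance Gaussian; a minimal sketch, with illustrative names:

```python
import numpy as np

def log_likelihood(residuals, mu_c, sigma_c):
    """Log of Equation 5.6 with a diagonal covariance: variance sigma_c**2 at
    each of the n_m comparison points and mean vector mu_c for the combined
    uncertainty."""
    z = (np.asarray(residuals) - mu_c) / sigma_c
    n_m = z.shape[-1]
    return -0.5 * np.sum(z**2, axis=-1) - 0.5 * n_m * np.log(2 * np.pi * sigma_c**2)
```

The posterior over the 100 × 100 grid is then proportional to the likelihood times the uniform prior, with P(y) obtained by summing over the grid cells inside Θ.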

Error-domain model falsification

The third approach compared is error-domain model falsification. Here, the target reliability
is set to φ = 0.95 and no assumption is made regarding uncertainty dependencies. Models are
either accepted as candidates or falsified based on Equation 2.8.
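The acceptance test can be sketched as follows; Equation 2.8 itself is defined in Chapter 2 and not reproduced here, so the interval form below is a stand-in with illustrative names:

```python
import numpy as np

def candidate_mask(residuals, t_low, t_high):
    """Sketch of the falsification rule: a model instance remains a candidate
    only if every one of its residuals lies inside the (Sidak-corrected)
    threshold bounds; otherwise it is falsified."""
    inside = (residuals >= t_low) & (residuals <= t_high)
    return np.all(inside, axis=-1)   # True -> candidate, False -> falsified
```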

First scenario: No systematic errors

In the first scenario, the true value of the rotational stiffness K* corresponds to a fully
rigid connection, so there is no systematic error between the simulated measurements and model
predictions. The only uncertainty is due to sensor resolution and the choice of parameter
values. Figure 5.2 compares the optimal parameter values and the credible regions found with
the true parameter values E* and F*.
In Figure 5.2, each graph represents the least-squares optimal solution and the Bayesian
credible region for the domain Θ for specific values of n_m. For any value of
n_m ∈ {1, 2, 10, 50},


the least-squares optimal solution does not match the true solution, even if in one case (c)
the results are close. This is because there is more than one optimal solution and the
least-squares optimal solution is trapped in a local minimum created by measurement noise.
Therefore, selecting a single best parameter set can lead to a biased identification. For
Bayesian inference, the true parameter values are in the 95% credible region for all values of
n_m. Note that the size of the 95% credible region is reduced as n_m increases. This indicates
that the more measurements are used, the more precise the identification is, because each
new measurement brings additional information. For this scenario, Bayesian inference leads
to a correct identification for any n_m.


Figure 5.2: Comparison of parameter values identified using least-squares parameter
identification and Bayesian inference with the correct parameter values for Young's modulus
(E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this
scenario, there are no systematic errors and uncertainties are rightfully assumed to be
independent. The labels "correct" and "biased" identification apply to Bayesian inference.


Figure 5.3 presents the identification results obtained using error-domain model
falsification. The shaded region represents the candidate models and the white region the
falsified model instances. Analogously to Bayesian inference, the identification is correct
for any value of n_m. However, with this method, the size of the candidate model set is larger
than the Bayesian 95% credible region.

Figure 5.3: Comparison of the candidate model set found using error-domain model falsification
with the true parameter values for Young's modulus (E) and vertical force (F). The number of
measurements varies from 1 (a) to 50 (d). No assumptions are made regarding uncertainty
dependencies. The shaded area represents the candidate model set.
For the first scenario, where no systematic errors are present, both the Bayesian inference
and error-domain model falsification approaches lead to correct identifications. On the other
hand, least-squares residual minimization leads to biased identifications because it chooses a
single optimal solution. Due to parameter compensation, the predictions from more than one
model instance match measured values. In this case, parameter sets found using residual
minimization do not match the right model because measurement errors (noise) create local
minima.

Second scenario: With recognized systematic errors


In the second scenario, systematic errors are introduced by setting the true value of the rotational stiffness K to 12×10^10 Nmm/rad. The magnitude of this systematic bias is chosen to represent situations governed by omission and simplification errors rather than by measurement errors. For the purpose of this illustrative example, the effect of model simplifications is described by a Gaussian distribution having a mean corresponding to -10% of the measured values and a coefficient of variation of 15%. This estimation of modeling uncertainties is intended to be a conservative assumption where the correct residual values lie within the two-standard-deviation interval around the mean. Here, the model-simplification uncertainty is not centered on zero because it is known that the simplification related to the beam connection is likely to cause an underestimation of the predicted displacement (in absolute terms). In order to represent the situations that civil engineers face during the identification of full-scale structures, it is assumed that no information is available to quantify the effect of spatial dependencies of prediction errors.
Figure 5.4 compares the identification results with the true parameter values E and F for a number of measurements n_m ∈ {1, 2, 10, 50}. Least-squares residual minimization fails to provide correct identifications for any value of n_m. For Bayesian inference, a correct identification is obtained only for n_m ∈ {1, 2}. For larger numbers of measurements, the correct parameter set is excluded from the 95% credible region. Furthermore, the higher the number of measurements used, the smaller this credible region becomes. This could lead to the belief that the identification is correct because of its high precision. The determination of the uncertainty correlations requires a high level of knowledge about the error structure. Thus, even if the magnitudes of uncertainties are adequately estimated, wrong assumptions regarding uncertainty dependencies can bias the posterior pdf obtained using Bayesian inference. Furthermore, the importance of the identification error increases with the number of measurements used.
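The mechanism behind this overconfidence can be reproduced with a small toy construction (my own, not the thesis cantilever model; all numbers are assumptions): observations of a scalar parameter are polluted by one shared systematic error, and the posterior is computed under two different assumed correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_m = 20
theta_true, bias_std, noise_std = 5.0, 1.0, 0.2
bias = 0.8                                   # one realized systematic error
y = theta_true + bias + noise_std * rng.normal(size=n_m)

theta_grid = np.linspace(2.0, 9.0, 701)      # flat prior over this grid

def posterior_std(rho):
    # Equicorrelated Gaussian likelihood: the total error std is fixed;
    # only the assumed correlation rho between measurements changes.
    s2 = noise_std**2 + bias_std**2
    C = s2 * ((1 - rho) * np.eye(n_m) + rho * np.ones((n_m, n_m)))
    Ci = np.linalg.inv(C)
    logp = np.array([-0.5 * (y - t) @ Ci @ (y - t) for t in theta_grid])
    p = np.exp(logp - logp.max())
    p /= p.sum()
    m = (p * theta_grid).sum()
    return np.sqrt((p * (theta_grid - m)**2).sum())

std_indep = posterior_std(0.0)    # independence wrongly assumed: overconfident
std_corr = posterior_std(0.95)    # correlation acknowledged: much wider
```

Assuming independence shrinks the posterior roughly as 1/√n_m even though the shared bias never averages out, which is exactly the misleading precision described above.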
Figure 5.5 presents the envelope containing the 95% credible regions obtained when varying the correlation from 0 to 0.99 for all covariance terms simultaneously. When covariance terms are varied during the computation of the posterior pdf, Bayesian inference leads to a correct identification. However, the size of the envelope containing all credible regions grows with the number of measurements used.
Figure 5.6 presents the identification results obtained using error-domain model falsification. Here, correct identifications are achieved for all values of n_m. The size of the candidate model
[Figure 5.4: panels (a)-(d) plotting vertical force (×1000 N) against Young's modulus (×1000 MPa); legend: true solution, least-squares optimal solution, 50% and 95% credible regions; outcomes marked as correct or biased identification.]
Figure 5.4: Comparison of parameter values identified using least-squares parameter identification and Bayesian inference with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). In this scenario, uncertainties are wrongfully assumed to be independent. The labels "correct" and "biased identification" apply to Bayesian inference.


Figure 5.5: Comparison of parameter values identified using Bayesian inference with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The shaded area represents the envelope containing the 95% credible regions obtained when varying the correlation from 0 to 0.99 for all covariance terms simultaneously.

set increases with the number of measurements used because errors are strongly correlated. Therefore, each new measurement brings almost no new information to further discard model instances. When more measurements are included, threshold bounds are widened to include the effect of unknown dependencies between uncertainties, according to Equation 2.5. With this approach, the increase in the size of the candidate model set is smaller than for the sets found when varying the correlation coefficient in Bayesian inference.
This example illustrates that using wrong values of uncertainty correlation with Bayesian inference may lead to biased identifications. It also shows that error-domain model falsification can achieve correct identifications without having to define uncertainty dependencies. Finally, over-instrumentation is possible when the error structure is incompletely defined, since for both Bayesian inference and error-domain model falsification the precision of the identification decreases when measurements are added (e.g. the size of the candidate model set increases).

[Figure 5.6: panels plotting vertical force (×1000 N) against Young's modulus (×1000 MPa); legend: true solution, candidate model set; outcomes marked as correct or biased identification.]

Figure 5.6: Comparison of the candidate model set found using error-domain model falsification with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). No assumptions are made regarding uncertainty dependencies. The shaded area represents the candidate model set.

Third scenario: With unrecognized errors


The last scenario compares the identification performance when systematic and aleatory errors are present and unrecognized. Here, the true value of the rotational stiffness K is set to 12×10^10 Nmm/rad, as in the previous scenario. However, in this case, modeling uncertainties are set to 0 to represent unrecognized systematic errors. Additionally, measurement errors here have a standard deviation of 0.05 mm while, as in the previous scenarios, this standard deviation is assumed to be 0.02 mm.

[Figure 5.7: panels plotting vertical force (×1000 N) against Young's modulus (×1000 MPa); legend: true solution, least-squares optimal solution, 50% and 95% credible regions; outcomes marked as correct or biased identification.]

Figure 5.7 compares identification results for Bayesian inference and weighted least-squares residual minimization with the true parameter values E and F, for a number of measurements n_m ∈ {1, 2, 10, 50}. It shows that for any n_m, these methods lead to biased identifications. As in the previous case, the sizes of the credible regions decrease with the number of measurements used. Again, this can lead to the belief that the identification is correct because the identification results are restricted to a small region. These approaches are unable to signal systematically that initial assumptions regarding model adequacy were wrong.

Figure 5.7: Comparison of parameter values identified using least-squares and Bayesian inference with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The systematic bias in model predictions and in measurement-error estimation is not recognized. The labels "correct" and "biased identification" apply to Bayesian inference.

[Figure 5.8: panels plotting vertical force (×1000 N) against Young's modulus (×1000 MPa); legend: true solution, candidate model set; outcomes marked as correct or biased identification.]

Figure 5.8 presents the identification results obtained using error-domain model falsification. When one or two measurement locations are used, the approach leads to biased identifications because it finds candidate model sets that do not include the correct solution. For any value n_m > 2, the approach identified that the assumptions made regarding model adequacy were wrong. It leads to a correct identification by returning no candidate model. Therefore, given a sufficient number of measurements, error-domain model falsification can perform correct identifications even in the presence of unrecognized errors. For this third scenario, the correct identification was to detect that the initial assumptions from which the entire model class was created were flawed with respect to the uncertainties defined.

Figure 5.8: Comparison of the candidate model set found using error-domain model falsification with the true parameter values for Young's modulus (E) and vertical force (F). The number of measurements varies from 1 (a) to 50 (d). The systematic bias in model predictions and in measurement-error estimation is not recognized.

5.2.2 Summary of results


Table 5.2 summarizes the comparison of identification methodologies. Only error-domain model falsification leads to a correct identification for all scenarios tested, provided that a sufficient number of measurements is used. Bayesian inference using a likelihood function based on the ℓ2-norm leads to correct identifications when systematic effects can be either fully described or removed and when all possible uncertainty-correlation values can be tested. When Bayesian inference is used with biased uncertainty-dependency values, the approach may return a biased identification as well. For all scenarios, considering more than one solution as a candidate model is mandatory to obtain an unbiased identification.

5.2.3 Discussion
For the previous example, it is trivial to either quantify the dependencies introduced by model simplifications or to parametrize the boundary conditions, as has already been proposed in several approaches [4, 82]. However, for full-scale civil structures, model simplifications are inevitable and uncertainty dependencies are in most cases unquantifiable. Thus, most full-scale identification tasks correspond to either the second or third scenario, where systematic bias is introduced by model simplifications and may or may not be recognized.


Table 5.2: Summary of the identification methodology comparison on the basis of their capacity to provide correct identification for the cantilever beam example.

                                  Residual        Bayesian     Error-domain
                                  minimization    inference    model falsification
  No systematic errors            no              yes          yes
  Recognized systematic errors    no              yes (1)      yes
  Unrecognized errors             no              no           yes (2)

(1) Provided that dependencies are either known or that all possible values are tested.
(2) Provided that a sufficient number of measurements are used.

When using Bayesian inference to identify properties of complex structures, varying all covariance terms simultaneously may not correctly capture the effect of systematic errors. Moreover, this could lead to computational-complexity issues because the number of covariance terms is defined by n_m²/2 − n_m. Thus, when using 10 measurements, if three correlation values, say -0.9, 0 and +0.9, are tested for each covariance term, the number of evaluations required is 3^40 ≈ 10^19. One alternative is to combine the potential of both error-domain model falsification and Bayesian inference, as proposed in §2.8.
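The counting in this paragraph can be checked directly (the term count below follows the text's expression n_m²/2 − n_m):

```python
# Number of covariance terms for n_m measurements, and the combinatorial
# cost of testing three correlation values per term.
n_m = 10
n_cov_terms = n_m**2 // 2 - n_m      # = 40 for n_m = 10
n_evaluations = 3 ** n_cov_terms     # 3^40, about 1.2e19
```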

5.2.4 Case study conclusions


1. Error-domain model falsification provides correct identifications in situations where there are aleatory and systematic errors, without requiring definitions of dependencies between uncertainties. Furthermore, it may detect flaws in assumptions related to model adequacy through model-class falsification.
2. With Bayesian inference, when there are aleatory and systematic errors, assuming that uncertainties are independent can bias the posterior pdf. Correct identifications can be obtained when the posterior pdf is evaluated for several values of uncertainty correlations. However, credible regions become large and computational complexity increases.
3. When taken out of the scope of model calibration, residual minimization might lead to biased identifications, especially when multiple possible solutions are not included.
4. For both error-domain model falsification and Bayesian inference, the example revealed that over-instrumentation is possible in the sense that the precision of the identification decreased when measurements were added.


5.3 Langensand Bridge


In this case study, several methodologies proposed in Chapters 2-4 are applied using data acquired during static and dynamic structural testing. The aspects illustrated are:

- Model falsification using static and dynamic data (§2.2 & §2.7)
- Usage of surrogate models (§2.5.2)
- Expected-identifiability computation (§3.3)
- Load-test configuration optimization (§4.2)
5.3.1 Structure description

This case study involves the Langensand Bridge in Lucerne, Switzerland. The structure was under construction (half of its width launched) when tested. The bridge is approximately 80 m long and has a slender profile (see Figure 5.9).
[Figure 5.9: elevation with girder-profile inset (depths 2.2 m and 1.1 m), clearance = 0.15 m, total length 79.6 m.]

Figure 5.9: Langensand Bridge elevation representation.

Figure 5.10 represents the parts of the bridge in place during the first and second construction phases. The bridge consists of a concrete deck acting in composite action with a steel girder. The central part of the bridge is used as a roadway and the external parts are sidewalks.
[Figure 5.10: cross-section with axes 112, 114 and 116, roadway and sidewalk areas, and the Phase 1 / Phase 2 parts.]
Figure 5.10: Langensand Bridge cross section.


5.3.2 Structural identification using static data


The five load cases performed are presented in Figure 5.11. Each load case uses two test trucks weighing 35 t each. The sensor configuration used during the identification is composed of three displacement measurements (at the intersections of axes S7-112, S12-112 and S17-112; UY), two rotations (at axes A1 and S7; RZ) and two strain measurements (on the top (L1) and bottom (L2) chord of the concrete slab at section S13; EX), recorded for the five load cases. A total of 35 measurements is available for comparison with model-instance predictions (see Table 5.3). The finite-element (FE) model cross-section is presented in Figure 5.12.
[Figure 5.11: plan views of load cases #1-#5 showing test trucks T1 and T2 positioned relative to axes 112-116 and A1, S7, S12, S13, S17.]
Figure 5.11: Test-truck layout for load cases 1 to 5 (Phase 1) for the Langensand Bridge.

Table 5.3: Static measurements taken on the Langensand Bridge. Mean and standard deviation
represent the measurement variability obtained by repeating each load case three times.

Load   UY-S7-112      UY-S12-112     UY-S17-112     RZ-A1-113     RZ-S7-113    EX-L1          EX-L2
case   mm             mm             mm             µrad          µrad         µm/m           µm/m
1      -17.9 (0.09)   -22.5 (0.27)   -17.6 (0.33)   -1113 (5.5)   -644 (1.7)   -18.5 (0.81)   -14.5 (0.81)
2      -17.1 (0.09)   -26.5 (0.21)   -23.7 (0.42)   -1020 (3.8)   -730 (2.1)   -31.9 (1.01)   -21.3 (0.40)
3      -18.9 (0.07)   -28.0 (0.17)   -23.1 (0.40)   -1126 (11.5)  -734 (5.0)   -30.3 (0.23)   -20.6 (0.69)
4      -9.4 (0.06)    -14.6 (0.14)   -12.7 (0.18)   -553 (1.9)    -382 (1.3)   -17.0 (0.40)   -11.1 (0.31)
5      -8.5 (0.13)    -12.9 (0.30)   -11.6 (0.08)   -506 (2.9)    -377 (2.0)   -14.3 (0.12)   -9.7 (0.70)

Entries are mean values with the standard deviation in parentheses.


[Figure 5.12: cross-section showing axes 112, 114 and 116, the roadway, road surface, concrete reinforcement, concrete barrier, sidewalk, transverse girder stiffeners and orthotropic deck stiffeners.]
Figure 5.12: Langensand Bridge finite-element model (Phase 1).

Parameters to be identified and model-instance generation


Four parameters (primary parameters) need to be identified: the concrete Young's modulus, the pavement Young's modulus, the steel-girder Young's modulus and the stiffness of the horizontal support created by the bearing devices. The Young's modulus sought is an average over the structure. The plausible range and discretization intervals for each parameter are presented in Table 5.4. The initial model set is built according to a hyper-grid containing 10 010 model instances. The range of each parameter is chosen so that if no model falsification is performed, the initial model set covers all plausible combinations of parameters.
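The hyper-grid can be generated directly from the values of Table 5.4; reading each interval count as the number of grid values reproduces the 10 010 instances:

```python
import itertools
import numpy as np

# Four primary parameters of Table 5.4; 13 * 10 * 7 * 11 = 10 010 instances.
E_concrete = np.linspace(20.0, 44.0, 13)      # GPa
E_pavement = np.linspace(2.0, 20.0, 10)       # GPa
E_steel    = np.linspace(200.0, 212.0, 7)     # GPa
k_support  = np.linspace(0.0, 1000.0, 11)     # kN/m

initial_model_set = list(itertools.product(E_concrete, E_pavement, E_steel, k_support))
```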

Table 5.4: Ranges and discretization intervals for the parameters to be identified on the Langensand Bridge.
Parameter to be identified             Units    Range      Number of discretization intervals
Concrete Young's modulus               GPa      20-44      13
Pavement Young's modulus               GPa      2-20       10
Steel girder Young's modulus           GPa      200-212    7
Stiffness of the horizontal support    kN/m     0-1000     11

Within the initial model set, the concrete Young's modulus has the highest influence on model predictions. Figure 5.13 presents the relative importance of each parameter. The high importance of the concrete Young's modulus is due to the combination of prediction sensitivity and possible parameter range.
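One way to read "relative importance" as this combination of sensitivity and parameter range is a one-at-a-time sweep. The prediction function and its coefficients below are invented for illustration; only the parameter ranges come from Table 5.4:

```python
import numpy as np

ranges = {
    "concrete Young's modulus": (20.0, 44.0),        # GPa
    "pavement Young's modulus": (2.0, 20.0),         # GPa
    "steel Young's modulus": (200.0, 212.0),         # GPa
    "horizontal support stiffness": (0.0, 1000.0),   # kN/m
}
# Invented linear sensitivities of one displacement prediction (assumption):
coeff = {
    "concrete Young's modulus": -0.50,
    "pavement Young's modulus": -0.10,
    "steel Young's modulus": -0.20,
    "horizontal support stiffness": -0.001,
}

def predict(p):
    # Toy prediction; stands in for one finite-element model output
    return sum(coeff[k] * p[k] for k in p)

mid = {k: 0.5 * (lo + hi) for k, (lo, hi) in ranges.items()}
spread = {}
for k, (lo, hi) in ranges.items():
    # Vary one parameter across its range, others held at mid-range
    preds = [predict({**mid, k: v}) for v in np.linspace(lo, hi, 11)]
    spread[k] = max(preds) - min(preds)

total = sum(spread.values())
importance = {k: s / total for k, s in spread.items()}
```

With these assumed sensitivities, the wide 20-44 GPa range makes the concrete Young's modulus dominate, mirroring Figure 5.13.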
[Figure 5.13: bar chart of relative importance (0-0.6) for the stiffness of the horizontal support, the steel Young's modulus, the concrete Young's modulus and the pavement Young's modulus.]

Figure 5.13: Relative importance of each primary parameter on the model predictions of the
Langensand Bridge.

Uncertainties

Several secondary parameters (not to be identified) contribute to model-prediction uncertainty (see Sections 1.3.2 and 2.4.1): for instance, the uncertainties related to the geometry of the structure (variation in the thickness of the elements, t), the Poisson's ratio of concrete (ν), the truck weight and the variation of strain-sensor positioning. All of these uncertainties are represented by normal distributions; the details of each are summarized in Table 5.5. Estimations of these uncertainties are based on values found in the literature (see §1.3.2) and on site observations. Variations in temperature during the tests affected the properties of the concrete and pavement materials. The uncertainty in the change in ambient temperature during the tests is represented as a uniform distribution defined between 0 and 5 degrees Celsius. The uncertainty associated with temperature is the maximal variation of temperature measured during the tests. Based on the relationship proposed by Bangash and England [14], the variation in percentage of the concrete Young's modulus is equal to the variation of temperature divided by 137. For the road surface, the relation between temperature and Young's modulus is taken to be the temperature variation divided by 30. This last relationship is based on the experimental work conducted by Perret [171] on similar materials.
Table 5.5: Secondary-parameter uncertainties for the Langensand Bridge.

Uncertainty source           Unit    Mean    Standard deviation
ν concrete                   -       0       0.025
t steel plates               %       0       1
t pavement                   %       0       5
t concrete                   %       0       2.5
Truck weight                 Ton     35      0.125
Strain-sensor positioning    mm      0       5

For the quantification of the other uncertainty sources (except sensor resolution), no information other than engineering heuristics is available. In all cases, the values provided are intended to be conservative evaluations of the minimal and maximal bounds that should include the true error. Details regarding the other uncertainty sources are presented in Table 5.6. In this
case, the model simplifications decrease the stiffness of the model with respect to the real structure. For any cross-section of the bridge, the effect of these simplifications combined with the finite-element-method (FEM) approximation is assumed to be under seven percent for rotation and displacement predictions. Since strains are interpolated from the degrees of freedom and are more sensitive to local imperfections, the maximal error is estimated to be up to 20%. Due to the subjective definition of these bounds, an extended uniform distribution (EUD) is used to account for the uncertain positions of the pdf bounds. When using this distribution, an uncertainty is defined for the position of the minimal and maximal bounds. This uncertainty is expressed as a fraction, between zero and one, of the bound width initially defined. Details regarding the EUD distribution are presented in Appendix A. In this case, the uncertainty bounds could be either over- or underestimated by 30% (fraction = 0.3) for displacements and rotations and 50% (fraction = 0.5) for strains.
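The exact EUD definition is given in Appendix A of the thesis; a two-stage sampler consistent with the description above (first draw uncertain bound positions, then draw uniformly between them) can be sketched as an assumption:

```python
import numpy as np

def sample_eud(a, b, delta, n, rng):
    # Bounds a and b are themselves uncertain by the fraction delta of the
    # initially defined width (assumed interpretation of the EUD).
    w = b - a
    lo = rng.uniform(a - delta * w, a + delta * w, n)   # uncertain lower bound
    hi = rng.uniform(b - delta * w, b + delta * w, n)   # uncertain upper bound
    lo, hi = np.minimum(lo, hi), np.maximum(lo, hi)
    return rng.uniform(lo, hi)

rng = np.random.default_rng(0)
# e.g. the 0-7% model-simplification bounds with delta = 0.3
s = sample_eud(0.0, 7.0, 0.3, 100_000, rng)
```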
Uncertainties associated with sensor resolution are represented by a uniform distribution and are taken to be twice the manufacturer's specifications to account for site conditions. The instruments used were not sensitive to cable losses. A mesh-refinement analysis was conducted to determine the maximal plausible discretization error for each type of prediction. The measurement-repeatability uncertainty is obtained for each measurement location by computing the standard deviation of three successive measurements during load tests (see Table 5.3). Additional uncertainties are upper-bound provisions for the combination of all phenomena that, when taken individually, have a negligible impact (under 1%).
Table 5.6: Other uncertainty sources for the Langensand Bridge.

Uncertainty source             Displacement        Rotation            Strains
                               min      max        min      max        min       max
Sensor resolution              -0.2 mm  0.2 mm     -4 µrad  4 µrad     -4 µm/m   4 µm/m
Model simplifications & FEM    0%       7%         0%       7%         0%        20%
Mesh refinement                -1%      0%         -1%      0%         -2%       0%
Additional uncertainties       -1%      1%         -1%      1%         -1%       1%

Uncertainty correlation

Evaluating dependencies between uncertainty sources is a difficult task since such information is rarely available. The dependence between secondary-parameter uncertainties does not require a direct evaluation since these are obtained through their propagation in the finite-element model. Figure 5.14 shows the correlations between prediction types and locations for secondary-parameter uncertainties. These correlations can be compared with the assumption of independence commonly used by other identification approaches (see §1.2.2). In this figure, correlation matrices are presented where each axis on the horizontal plane represents a prediction type and the height of each bar represents the absolute correlation level between two prediction types. The predictions resulting from uncertainties in secondary-parameter values are highly correlated for static behavior.

[Figure 5.14: two bar plots of uncertainty correlation (0-1) between prediction types (displacements, rotations, strains); left: results obtained from static simulations; right: idealized independent prediction uncertainty.]

Figure 5.14: Correlation between predictions for the Langensand Bridge due to uncertainties in secondary-parameter values. Results obtained from the Langensand Bridge model do not reflect the common assumption of independence.

Except for the secondary-parameter uncertainties, no information is available to help quantify uncertainty dependencies between predicted and measured values. In such circumstances, it is conservative to make no assumptions related to uncertainty dependency and to determine threshold bounds using the Šidák correction (which raises the target probability to the power 1/n_m), as proposed in §2.2. Nevertheless, the correlation introduced by secondary-parameter uncertainties is used during the computation of threshold bounds, as described in Equation 2.4.
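As a sketch of that correction, with the 35 static measurements of this test as n_m and a joint target of 0.95:

```python
# Sidak correction: to reach a joint target probability of 0.95 over n_m
# measurements, each individual threshold uses probability 0.95**(1/n_m).
target, n_m = 0.95, 35
phi_individual = target ** (1.0 / n_m)   # about 0.99854 per measurement
```

Each individual interval therefore becomes wider (closer to certainty) as measurements are added, which is how the method remains conservative without assuming any dependency structure.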

Uncertainty combination

The uncertainty sources mentioned above are combined and threshold bounds are computed for each measurement location based on a target reliability of 0.95. The relative importance of each uncertainty source is presented in Figure 5.15. Labels associated with predictions and measurements refer to their type and location as presented in Figure 5.11. For displacement and rotation quantities, the dominant uncertainty sources are model simplifications, secondary-parameter uncertainty and measurement repeatability. Other uncertainty sources have a less important contribution to the total uncertainty. Note that for rotations, the contribution of sensor resolution is negligible. For strain measurements, the dominant uncertainty sources are sensor resolution, measurement repeatability and model simplifications.
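As an illustration of such a combination, the displacement-related sources of Table 5.6 can be propagated by Monte Carlo sampling for one hypothetical predicted displacement. Distribution shapes, the predicted value and the symmetric 95% interval below are assumptions for this sketch; the thesis computes thresholds through Equations 2.4-2.5:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
pred = -20.0                                      # assumed prediction (mm)

# Displacement-column sources of Table 5.6 (percentages applied to pred):
resolution = rng.uniform(-0.2, 0.2, n)            # sensor resolution (mm)
simplif    = pred * rng.uniform(0.00, 0.07, n)    # model simplifications & FEM
mesh       = pred * rng.uniform(-0.01, 0.0, n)    # mesh refinement
additional = pred * rng.uniform(-0.01, 0.01, n)   # additional uncertainties
repeat     = rng.normal(0.0, 0.1, n)              # assumed repeatability (mm)

combined = resolution + simplif + mesh + additional + repeat

# Symmetric 95% interval of the combined error, standing in for the
# threshold bounds at the target reliability.
lo, hi = np.quantile(combined, [0.025, 0.975])
```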

Model falsification

Model instances are falsified if, for any measurement location, the residual between predicted and measured values lies outside the threshold bounds. Model instances that cannot be falsified are part of the candidate model set. Starting from an initial model set of 10 010 instances, 7 578 instances are falsified. This leads to a candidate model set containing

[Figure 5.15: bar chart of relative importance (0-0.5) per uncertainty source (model simplifications, secondary parameters, measurement repeatability, sensor resolution, additional uncertainty, mesh refinement) and per prediction/measurement (UY: vertical displacement, RZ: rotation around Z-axis, EX: longitudinal strain).]

Figure 5.15: Example of uncertainty relative importance for the Langensand Bridge. Except for strain, the dominant uncertainty sources are model simplifications, secondary-parameter uncertainty and measurement repeatability.

2 432 instances, a reduction of more than 75% compared with the initial model set. Figure 5.16 presents a matrix of plots illustrating the pairwise combinations of parameters found in the candidate model set. The range of each plot corresponds to the range of the parameters. Only the concrete Young's modulus parameter range has been reduced by the measurements. This is because this parameter has the largest combination of sensitivity and parameter range.
The 2 432 instances are used here to make predictions for quantities other than those used to obtain the candidate model set. Table 5.7 compares the prediction ranges of the initial model set for the first five dynamic excitation frequencies with the ranges obtained using the candidate model set. The frequency-prediction range is reduced by between 55% and 82% for all modes compared with the predictions made using the initial model set. This indicates that the prediction range of models can be reduced by using in-situ measurements.
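The reduction percentages of Table 5.7 can be recomputed from the printed ranges (the table values are rounded, so the recomputed percentages can differ from the printed ones by about a point):

```python
import numpy as np

# Frequency ranges (Hz) for the five modes, as printed in Table 5.7
ims_range = np.array([0.17, 0.42, 0.51, 0.46, 0.83])   # initial model set
cms_range = np.array([0.03, 0.12, 0.10, 0.19, 0.37])   # candidate model set

# Percentage reduction of each prediction range
reduction = 100.0 * (1.0 - cms_range / ims_range)
```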

5.3.3 Usage of surrogate models


In order to reduce the time required to evaluate each instance of the initial model set, a surrogate model can be employed, as presented in §2.5.2. A surrogate model is created for the previous example using a second-order polynomial function. 26 parameter sets corresponding to a central-composite design (see §1.5.1) are evaluated using the bridge finite-element model. Another 26 randomly selected parameter sets are evaluated in the finite-element model to compare the predictions of the response surface with those of the finite-element model. Figure 5.17 presents the new uncertainty relative contribution including the surrogate approximation error. Labels associated with predictions and measurements refer to their type and location as presented in Figure 5.15. Even if the error contribution of the surrogate model is not negligible,

[Figure 5.16: matrix of pairwise scatter plots of the candidate-model parameters UX-STIFF (stiffness of the horizontal support), EX-CONC (concrete Young's modulus), EX-PAV (pavement Young's modulus) and EX-STEEL (steel girder Young's modulus).]

Figure 5.16: Pairwise comparison of parameters found in the candidate model set for the
Langensand Bridge.

Table 5.7: Comparison of frequency-prediction ranges computed using the initial and the candidate model sets. For these five modes, predictions made using the candidate model set reduce the prediction ranges by 55% to 82% compared with the predictions made using the initial model set.

Mode # (frequency in Hz)                      1       2       3       4       5

Initial model set (IMS)        min           0.93    2.61    3.16    4.15    7.76
predictions                    max           1.11    3.04    3.67    4.62    8.60
                               range         0.17    0.42    0.51    0.46    0.83

Candidate model set (CM)       min           1.03    2.83    3.47    4.37    8.16
predictions                    max           1.07    2.95    3.57    4.57    8.53
                               range         0.03    0.12    0.10    0.19    0.37

Frequency range reduction (%)                -82     -71     -80     -58     -55



it remains marginal compared with other sources of uncertainty. For cases similar to the Langensand Bridge, the surrogate model can be used to evaluate the initial model set with a minor loss in accuracy compared with using the physics-based model.
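A second-order polynomial surrogate of the kind described above can be sketched as follows. The two-parameter "expensive" model is a made-up placeholder for the finite-element model, and random training points stand in for the fixed points of a true central-composite design:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(X):
    # Stand-in for the expensive model: quadratic trend plus a small
    # non-quadratic part that the surrogate cannot capture exactly.
    E, F = X[:, 0], X[:, 1]
    return 1.0 + 0.02 * E - 0.1 * F + 0.001 * E * F + 1e-3 * np.sin(E)

def features(X):
    # Second-order polynomial basis in two parameters
    E, F = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), E, F, E * F, E**2, F**2])

# 26 training runs (mimicking the central-composite design)
X_train = rng.uniform([20.0, 2.0], [100.0, 10.0], size=(26, 2))
beta, *_ = np.linalg.lstsq(features(X_train), g(X_train), rcond=None)

# 26 fresh random runs to quantify the surrogate approximation error,
# as done with the 26 random finite-element evaluations
X_val = rng.uniform([20.0, 2.0], [100.0, 10.0], size=(26, 2))
surrogate_error = features(X_val) @ beta - g(X_val)
```

The residual `surrogate_error` is what gets carried forward as the additional "surrogate model approximation" uncertainty source of Figure 5.17.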

[Figure 5.17: bar chart of relative importance per uncertainty source, now including the surrogate-model approximation, and per prediction/measurement (UY: vertical displacement, RZ: rotation around Z-axis, EX: longitudinal strain).]

Figure 5.17: Example of uncertainty relative importance when using a surrogate model to evaluate the initial model set. The contribution of the surrogate model approximation (uncertainty
source no.7) is small compared with other sources of uncertainties.

5.3.4 Structural identification using time-domain data


Part of the results presented in this section were published by Goulet et al. [89]. A full-scale test using ambient-vibration monitoring (AVM) was performed by RCI Dynamics [41] on the Langensand Bridge. Two reference 3-component sensors were placed on the bridge deck and the walkway at 47 m and 62 m, respectively, from the bridge end. The twelve datasets include a total of 52 recording points using 1 to 3 components. The accelerometer layout is presented in Figure 5.18.


Figure 5.18: Accelerometer layout for the Langensand Bridge. Each triangle represents a
recording point. Labels Ref. represent reference sensors.



Results of the experimental modal analysis

The average power spectral density (PSD) of the Langensand Bridge recordings is displayed in Figure 5.19. The number of singular values showing a peak at a particular frequency indicates the number of modes having energy at this frequency.
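This peak-picking on singular values of the cross-PSD matrix is the core of the frequency-domain decomposition (FDD) method; it can be sketched on synthetic two-channel data (the signal, its 3 Hz mode and all processing parameters below are assumptions, not the bridge recordings):

```python
import numpy as np

# Two channels sharing one 3 Hz mode plus independent noise
fs, T = 50.0, 120.0
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 3.0 * t + rng.uniform(0.0, 2 * np.pi))
x = np.vstack([1.0 * mode, 0.6 * mode]) + 0.3 * rng.normal(size=(2, t.size))

# Averaged cross-spectral density matrix over non-overlapping segments
nseg, nfft = 10, 600
freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
G = np.zeros((freqs.size, 2, 2), dtype=complex)
for k in range(nseg):
    seg = x[:, k * nfft:(k + 1) * nfft] * np.hanning(nfft)
    X = np.fft.rfft(seg, axis=1)
    G += np.einsum('if,jf->fij', X, X.conj()) / nseg   # G[f] = X_f X_f^H

# First singular value at every frequency line; its peak marks the mode
s1 = np.array([np.linalg.svd(G[f], compute_uv=False)[0] for f in range(freqs.size)])
f_peak = freqs[np.argmax(s1)]
```

A second singular value peaking at the same line would indicate a second mode with energy at that frequency, which is how the close modes between 7 and 8 Hz are counted.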

[Figure 5.19: average singular values of the PSD matrices (dB) versus frequency (Hz); labeled peaks: Vertical 1, Vertical 2, Torsion 1, Transverse 1.]
Figure 5.19: Average power spectral density function (PSD) of the recordings made on the
Langensand Bridge.

The modes identified are detailed in Table 5.8 and the corresponding mode shapes are displayed in Figure 5.20. The first four modes of the girder, from 1.27 Hz up to 4.4 Hz, are easily identifiable (Figure 5.19): first vertical bending, first lateral bending, second vertical bending and first torsion modes. Other peaks in the spectra are not all linked to structural behavior, for example at 1.8 Hz, where a transient disturbance is noticed. Moreover, two peaks with the same modal shape, corresponding to the first transverse mode (peak #2), are found in the spectra of all datasets, whereas in the numerical model they are likely to correspond to the same mode. Both are used for structural identification.
Between 7 Hz and 8 Hz, as mentioned above, there should be three close modes, but only two could be found: the third vertical mode of the girder and a second torsion mode of the girder. Another second torsion mode, affecting mostly the walkway, is found around 9.3 Hz. Then, from 10 Hz on, 19 more modes up to 34 Hz, related to the walkway or the bridge bottom flange (not instrumented), were found.

Variations in the fundamental mode


Results for the first recorded mode were unexpected. Even though this mode is theoretically the
easiest to measure, it was the least stable among the datasets and it was first found to be
at a higher frequency than expected from the model. Therefore, an additional single-station
measurement was performed nine months later by the company Ziegler Consultants [246]

Chapter 5. Case studies

Figure 5.20: Mode shapes of the Langensand Bridge computed from ambient vibration monitoring. Large oscillations on the left side of the structure correspond to walkway vibration
modes.


Table 5.8: Observed frequencies (f, in Hz) with their standard deviations (same unit) for the
Langensand Bridge.
Peak #  Interpretation        f (Hz)   Std. dev.
1       Vertical 1            1.27     0.02
2       Transverse 1 peak 1   2.58     N/A
3       Transverse 1 peak 2   2.83     N/A
4       Vertical 2            3.53     0.02
5       Torsion 1             4.40     0.04
6       Vertical 3            7.29     0.11
7       Torsion 2a            7.95     0.15
8       Torsion 2b            9.33     0.15
9       Walkway 1             10.03    0.12
10      Walkway 2             10.88    0.15
11      Walkway 3             11.57    0.11
12      Walkway 4             12.30    0.11
13      Walkway 5             13.06    0.10
14      Walkway 6             13.43    0.05
15      Walkway 7             14.34    N/A
16      Walkway 8             14.33    0.07
17      Walkway 9             15.88    0.08
18      Walkway 10            17.76    0.09
19      Walkway 11            19.60    0.07
20      Walkway 12            21.72    0.10
21      Walkway 13            23.64    0.15
22      Walkway 14            27.72    0.20
23      Walkway 15            29.48    0.15
24      Walkway 16            31.12    0.16
25      Walkway 17            32.77    0.08
26      Walkway 18            34.05    0.09
27      Walkway 19            34.83    0.05


with a 1 s velocity sensor. By this time, traffic was allowed on the bridge and the second
part of the bridge was already built, although not yet linked to the first one. The comparison of the spectra of the
acceleration recordings at the central point of the bridge, between the walkway and the road,
is displayed in Figure 5.21 for the three components. The major differences between these two
recordings are in the vertical direction. Moreover, the fundamental frequency dropped from
1.27 Hz (first measurement, without traffic) to 1.17 Hz (second measurement, under traffic). This
8% change in frequency corresponds to a 17% change in stiffness. This difference is explained
in the next section using structural identification.


Figure 5.21: Comparison of two recordings taken at the centre of the Langensand Bridge along
the three axes, with and without traffic.

Initial model set


The initial model set is a discrete representation of the solution space. The template model
used to generate model instances (parameter combinations) includes four primary parameters
to be identified ([ ] denotes the range of possible values): the concrete Young's modulus [20, 45] GPa, the pavement Young's modulus [2, 20] GPa and the stiffness of the restriction on the horizontal expansion
of the structure caused by the formwork [0, 1000] kN/m. This formwork was found during a
visual inspection of the structure [88]. The fourth parameter is the stiffness of longitudinal
springs added to simulate a movement restriction imposed on the slider bearing devices [0,
2000] kN/m. In this case, restraining the longitudinal expansion of the structure may affect
its stiffness because its longitudinal profile is slightly arched. The initial model set contains
2 400 model instances (combinations of the four parameters to identify). Model instances
are generated on a hyper-grid bounded by the interval of each parameter. All other
model parameters have a marginal effect on the model response. They are therefore classified as
secondary parameters, and the uncertainty of each such parameter is propagated through the
model to obtain the model-prediction uncertainty.
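The hyper-grid generation described above can be sketched as follows. The per-parameter resolutions below (10 × 10 × 6 × 4) are a hypothetical choice that happens to yield 2 400 combinations; the actual subdivision of each axis used in this study is not specified here, and the parameter names are illustrative:

```python
import itertools
import numpy as np

# Hypothetical per-parameter grid resolutions; one choice that yields 2 400
# combinations (the subdivision actually used for each axis is not given here).
grid = {
    "E_concrete_GPa": np.linspace(20.0, 45.0, 10),
    "E_pavement_GPa": np.linspace(2.0, 20.0, 10),
    "k_formwork_kN_per_m": np.linspace(0.0, 1000.0, 6),
    "k_bearing_kN_per_m": np.linspace(0.0, 2000.0, 4),
}

# The initial model set: every combination of the discretized parameter values
initial_model_set = [dict(zip(grid, values))
                     for values in itertools.product(*grid.values())]
print(len(initial_model_set))  # 10 * 10 * 6 * 4 = 2400
```

Each dictionary in the list is one model instance, to be evaluated by the template model.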

Uncertainties
Values chosen for uncertainties associated with secondary parameters are presented in Table
5.9, where the mean and standard deviation are reported for each source. As mentioned above,
secondary parameters are model parameters that have a lesser influence than the primary
parameters (the parameters to be identified).
Table 5.9: Secondary-parameter uncertainties for the identification of the Langensand Bridge
using dynamic data.
Uncertainty source   Unit     Mean   Std. dev.
ν concrete           –        0      0.025
t steel plates       %        0      1
t pavement           %        0      5
t concrete           %        0      2.5
density steel        Ton/m3   7.85   0.025
density concrete     Ton/m3   2.40   0.05
density pavement     Ton/m3   2.30   0.05

Other uncertainty sources are reported in Table 5.10, where the lower and upper bounds for
each uncertainty source are provided. These uncertainty distributions are modeled as an
extended uniform distribution (EUD) (see Appendix A) having a value of 0.3.
Estimation of uncertainties in the experimental parameters is not straightforward. Part of the
uncertainty is related to the variability of the frequencies themselves and another part is related
to errors in their estimation. Measurement variability is assessed by estimating the distribution
of the results across the datasets (see Table 5.8), assuming a normal distribution. The observed standard
deviation does not exceed 2%. Based on references provided in Section 1.3.2, ambient
vibration monitoring epistemic uncertainties are represented by a uniform distribution with
bounds at ±2% of the frequency values.
Model-simplification uncertainty evaluation is based on previous studies (Section 5.3.2) that estimated the prediction uncertainty related to degrees of freedom (DOFs) (displacements and
rotations) to be between 0% and 7% of the averaged predicted value. Since the natural frequency is proportional to the square root of the stiffness, the uncertainty in frequency prediction is
estimated as the square root of the DOF uncertainty, rounded up to the nearest integer.
Note that the sign of the model-simplification uncertainty is inverted compared with the values reported in Section 5.3.2 because simplifications and omissions now decrease the model fundamental
frequencies. Mesh-refinement errors are evaluated by refining the mesh of the model until
it converges to a stable value. Additional uncertainties are provided to include other minor
factors that could have been neglected.
Combining all the previous sources of uncertainty for each vibration mode leads to the combined uncertainty pdfs. The relative importance of uncertainty sources, averaged for all

Table 5.10: Other uncertainty sources for the Langensand Bridge.
Uncertainty source             Frequency min    Frequency max
Measurement variability        see Table 5.8    see Table 5.8
AVM epistemic variability      -2%              2%
Model simplifications & FEM    -3%              0%
Mesh refinement                -1%              0%
Additional uncertainties       -1%              1%

frequencies, is shown in Figure 5.22. The main components of uncertainty are the measurement variability, the uncertainty introduced by secondary-parameter uncertainties (Table 5.9)
and the model simplifications. Threshold bounds are computed for each mode with a target
reliability of 0.95.
Figure 5.22: Relative importance of uncertainty sources for the Langensand Bridge. The
dominant component of the combined uncertainty is the measurement variability.

Uncertainty dependencies
Of all the uncertainty sources presented in the previous section, only the dependencies due
to the secondary-parameter uncertainty can be evaluated during their propagation through
the template model. The result of this evaluation is presented in Figure 5.23. The
horizontal axes represent the modes studied and the height of each bar is the absolute value
of the correlation between pairs of secondary-parameter uncertainties. The frequency correlation
between modes tends to decrease for higher modes (f > 12 Hz).
The results presented in this figure show that a high correlation between uncertainties is
expected. This does not correspond to the idealized case assumed by traditional approaches,
where uncertainties are all independent (see Section 1.2).

Model falsification
For a predicted mode to be associated with an observed one, the MAC value computed from
these two must be larger than or equal to the threshold of 0.8. Fifteen modes have such a correspondence.
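The MAC between two real mode-shape vectors is their squared, normalized inner product: it equals 1 for identical shapes (up to scaling) and tends towards 0 for dissimilar ones. A small sketch with illustrative vectors rather than measured mode shapes:

```python
import numpy as np

def mac(phi_1, phi_2):
    """Modal Assurance Criterion between two real mode-shape vectors (0 to 1)."""
    return (phi_1 @ phi_2) ** 2 / ((phi_1 @ phi_1) * (phi_2 @ phi_2))

phi_a = np.array([0.1, 0.5, 1.0, 0.5, 0.1])    # illustrative shape
phi_b = 2.3 * phi_a                            # scaling leaves the MAC at 1
phi_c = np.array([0.1, -0.5, 1.0, -0.5, 0.1])  # different shape

print(round(mac(phi_a, phi_b), 3))  # 1.0
print(mac(phi_a, phi_c) >= 0.8)     # False: this pair fails the 0.8 check
```

Because the MAC is scale-invariant, it compares shapes rather than amplitudes, which is what makes it suitable for pairing predicted and observed modes.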
Figure 5.23: Correlation between the predicted frequencies of different natural modes. Predictions are obtained by varying secondary-parameter values.

These modes are presented in Figure 5.24. Model instances for which predicted mode shapes
do not pass the MAC correspondence check are discarded. The remaining instances are
classified as either candidate or falsified models, based on the residuals between the predicted
and observed frequency values. Figure 5.25 compares the scatter
in model predictions with the observed frequency for the first and second modes. Model
instances are arranged along the horizontal axis, and the vertical axis corresponds to either the
predicted or the measured value. Falsified models are represented by points and candidate models
by crosses. The threshold bounds used to falsify models and the combined uncertainty distributions
are also presented in this figure. The candidate models found all have a partial limitation of
the free movement of the bearing devices.
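The classification step can be sketched as follows; the observed values are the first two frequencies from Table 5.8, but the threshold bounds below are hypothetical placeholders rather than the values computed for the bridge. The relevant logic is that a model instance is falsified as soon as any one residual falls outside its bounds:

```python
import numpy as np

observed = np.array([1.27, 2.58])  # measured frequencies (Hz), modes 1 and 2
t_low = np.array([-0.10, -0.15])   # hypothetical lower threshold bounds (Hz)
t_high = np.array([0.12, 0.18])    # hypothetical upper threshold bounds (Hz)

def classify(predicted):
    """Candidate only if every residual lies inside its threshold bounds."""
    residual = predicted - observed
    inside = (residual >= t_low) & (residual <= t_high)
    return "candidate" if bool(np.all(inside)) else "falsified"

print(classify(np.array([1.30, 2.60])))  # candidate
print(classify(np.array([1.05, 2.60])))  # falsified (mode-1 residual too low)
```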
The stratified pattern in model predictions for the first frequency is due to the discrete grid
sampling used to generate the initial model set. The first frequency is found to be mainly
influenced by the stiffness of the bearing-device hindrance. For mode 1, lower frequencies
(1.00 to 1.05 Hz) correspond to either low restriction or free bearing-device movement, and high
frequencies (> 1.15 Hz) to heavily restricted longitudinal displacement. The candidate models
are those within the threshold bounds for all modes. Thresholds falsified all model instances
having no restriction (k = 0 kN/m) on the bearing-device movement. Models with high values
of restriction (k = 2000 kN/m) are also discarded. These trends are presented in Figure 5.26,
where the candidate model set is shown as pairwise combinations of parameter values.
The effect of the three other parameters is not significant enough compared with the uncertainties to reject further instances. The bearing-device hindrance only significantly influences
the first frequency of the structure. The effect of bearing-device hindrance is less important
for higher modes, for which the scatter in the data is similar to that of the second frequency
presented in Figure 5.25. For the third frequency and higher modes, all models lie within the
threshold bounds. Therefore, measurements of these modes do not lead to the rejection of any
model instances.

Figure 5.24: Mode shapes computed from the Langensand Bridge model and used for the
identification.
Figure 5.25: Comparison of the model prediction scatter and measured value for the first two
frequencies of the Langensand Bridge.


(Parameter abbreviations in Figure 5.26: UX-STIFF, formwork horizontal restriction on the expansion joint; BC-STIFF, restriction imposed on the slider bearing device; EX-PAV, road surface Young's modulus; EX-CONC, concrete Young's modulus.)

Figure 5.26: Pairwise comparison of parameters found in the candidate model set for the
Langensand Bridge. Candidate models were found using dynamic data.

This shows that results from ambient vibration monitoring may not always be directly interpreted. In this case, the small amplitudes of the input appear to be insufficient to overcome the
cohesion and friction involved in the longitudinal movement of the bearing devices. Due to
the arched profile of the structure, these restraints inadvertently increased the first natural
frequency of the structure by 8% and thereby its apparent stiffness by 17%. Such apparent
stiffness was not observed during static measurements because the horizontal forces (truck
loading) were high enough to overcome friction forces in the bearing devices. The most likely
hypothesis is that during the second measurement [246], the higher noise level due to the
traffic partially freed the bearings and noticeably modified the fundamental mode, causing a
decrease in the resonance frequency and an increase in the damping ratio (Figure 5.21). These
findings can have an influence on the remaining fatigue life of the structure by modifying the
stress-cycle amplitudes under service loads.

Model-class falsification
Interpreting data using an approach that adjusts the structural parameters of a model to
minimize the discrepancy between predicted and measured values can be dangerous. If wrong
assumptions are made at the beginning, for instance if the hypothesis that bearing devices
do not work properly under the measured conditions is not included, wrong conclusions
are obtained. In the case of the approach presented here, when the comparison of model
instances and measurements was first performed without the hypothesis of bearing-device
malfunction, all model instances were falsified. Such a situation indicated that the model

class (template model) used was an inadequate explanation of the observations. This iterative
exploratory reasoning would not have been directly possible with methods such as Bayesian
inference (Section 1.2.2), since this kind of approach quantifies the relative plausibility of models.

5.3.5 Prediction of the usefulness of monitoring


This section tests the predictive capability of the expected identifiability metric presented
in Chapter 3. Part of the results presented in this section were published by Goulet and Smith
[87]. The goal is to predict probabilistically whether or not measurements would be useful to
falsify models and to reduce prediction ranges. These predictions are compared with results
obtained previously in Section 5.3.2.

Generation of simulated measurements including uncertainty correlations


For the purpose of generating simulated measurements, correlations are evaluated using
the qualitative-reasoning approach presented in Section 3.2.1. The qualitative evaluation of the
dependency between each quantity type (displacement, rotation, strain) is summarized in
Table 5.11 for each uncertainty source. Since the matrices are symmetric, only half the values are
necessary. The uncertainty correlation between load-cases is assumed to be high and positive.
Choices of uncertainty correlations are based on findings presented in Section 5.3.2.
Uncertainties are based on values presented in Tables 5.5 and 5.6. Measurement repeatability
uncertainty (COV) is estimated to be 1% for displacement measurements, 0.5% for rotations and
3% for strains. These numbers are conservative upper bounds representing results obtained
from previous measurement experience [88].
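Correlated measurement errors can be simulated by factorizing the covariance matrix assembled from such correlation choices. The sketch below maps the qualitative label High + to an illustrative coefficient of 0.75, uses the repeatability COVs quoted above as standard deviations, and draws correlated Gaussian errors through a Cholesky factor:

```python
import numpy as np

# Illustrative mapping: "High +" -> 0.75 between displacement, rotation, strain.
corr = np.full((3, 3), 0.75) + 0.25 * np.eye(3)
sigma = np.array([0.010, 0.005, 0.030])  # COVs: displacement, rotation, strain

cov = np.outer(sigma, sigma) * corr               # covariance matrix
L = np.linalg.cholesky(cov)                       # lower-triangular factor
rng = np.random.default_rng(0)
errors = rng.standard_normal((100_000, 3)) @ L.T  # correlated error samples

print(np.round(np.corrcoef(errors, rowvar=False), 2))  # ~ the target matrix
```

The sample correlation of the draws reproduces the chosen coefficients, so the simulated measurements carry the intended dependency structure.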

Table 5.11: Qualitative evaluation of uncertainty correlation between comparison points for
each uncertainty source.

Uncertainty source            Measurement/        Displacement  Rotation    Strain
                              prediction type
Model simplification & FEM    Displacement        High +
                              Rotation            High +        High +
                              Strain              High +        High +      High +
Mesh refinement               Displacement        High +
                              Rotation            High +        High +
                              Strain              High +        High +      High +
Additional uncertainties      Displacement        Moderate +
                              Rotation            Moderate +    Moderate +
                              Strain              Moderate +    Moderate +  Moderate +
Sensor resolution             Displacement        Low +
                              Rotation            Low +         Low +
                              Strain              Low +         Low +       Low +


Computation of the expected identifiability
Simulated measurements are generated and used to falsify model instances. At each iteration,
the number of candidate models obtained, along with the prediction ranges, is stored. The
generation of simulated measurements and the falsification process are stopped when the
variability of the expected identifiability remains constant even when additional samples are taken.
In this case, it is found that generating 1000 instances is sufficient to obtain stable results.
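The sampling loop can be sketched with a simplified one-dimensional stand-in; all numerical values below are illustrative and do not come from the bridge model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-in: 200 model instances predicting one frequency, a true
# value, a measurement error model, and a fixed falsification threshold.
predictions = np.linspace(1.0, 1.5, 200)  # initial model set predictions (Hz)
true_value, sigma, threshold = 1.27, 0.03, 0.06

counts = []
for _ in range(1000):                     # 1000 simulated measurements
    simulated = true_value + rng.normal(0.0, sigma)
    counts.append(int(np.sum(np.abs(predictions - simulated) <= threshold)))

sizes = np.array(counts) / predictions.size       # candidate-set fractions
print(f"P = 0.95 bound on candidate-set size: {np.quantile(sizes, 0.95):.0%}")
```

The empirical distribution of the stored candidate-set sizes plays the role of the expected-identifiability cdf described next.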
The cumulative distribution function of the expected number of candidate models is reported
in Figure 5.27. It shows the probability of obtaining a maximum candidate model set size. In
this figure, the horizontal axis presents the expected candidate model set size as a percentage
of the initial model set. The vertical axis shows the cumulative probability (F_CM) that any percentage of reduction in the candidate model set is obtained. In this case, there is a probability
of 0.95 of reducing the initial model set by 65% (F_CM^-1(0.95) ≈ 35%) and a probability of 0.50 of
reducing the initial model set by 80% (F_CM^-1(0.50) ≈ 20%).

Figure 5.27: Cumulative distribution function (F_CM) representing the probability of obtaining
a maximum candidate model set size. The polygonal sign corresponds to the number of
candidate models obtained after falsifying inadequate models using on-site measurements
(see Section 5.3.2).

The probability density function (pdf) is the derivative of the cumulative distribution function
(cdf). Therefore, the high-probability content of the domain is found where the cdf
slope is steep. The polygonal sign represents the actual candidate model set size
obtained in Section 5.3.2 using real measurements. The expected number of candidate models is in
agreement with the number obtained using real observations. In this situation a significant
reduction in the number of candidate models (at least 65%) is expected with a high probability
(95%). Therefore, if the objective is to reduce the number of possible model instances that

explain the measurements, proceeding with the monitoring phase can be justified by the
expected identifiability, which indicates that there is a probability of 0.95 that more than 65% of the
models should be falsified.

Expected reduction in the prediction range


In a second case, the expected reduction in the prediction ranges is studied. The results are
summarized for the first five predicted natural frequencies of the structure in the cdf (F_PR)
presented in Figure 5.28. In this figure the horizontal axes represent the predicted frequency
range for each mode. The horizontal scale of each plot corresponds to the frequency range of
the initial model set. For every mode, a significant reduction of the prediction range (50% to 70%) is
expected with a high probability (95%). This would justify performing the load tests on the
structure to reduce the variability of predictions and the number of candidate models. The
polygonal signs in Figure 5.28 represent the prediction ranges computed from the candidate
model set obtained using real data (Section 5.3.2). For every mode, the results obtained from observations do not lie in the distribution tails. This confirms that the expected identifiability metrics
can predict the usefulness of monitoring, thereby supporting infrastructure-management
decisions. For example, if several structures have to be monitored under a constrained budget,
prioritization of actions can be supported by this methodology.

Robustness of expected identifiability with respect to imprecise correlation definitions


The correlation choice proposed in the qualitative reasoning scheme of Section 3.2.1 and used in the
previous section is varied by ±0.2 to test the robustness of the expected identifiability
predictions with respect to changes in correlation values. Figure 5.29 presents the variation
imposed on the qualitative-reasoning scheme. In addition to this variation, the expected identifiability result is compared with the number of candidate models expected when assuming
that uncertainties are all independent.
Figure 5.30 compares the expected identifiability cumulative distribution function (F_CM)
obtained using the qualitative reasoning scheme of Section 3.2.1, using independent uncertainties
and using the variation presented in Figure 5.29. Modifying the correlation values by ±0.2
changes the expected number of candidate models by only 3%. Also, assuming that uncertainties are all independent underestimates the expected number of candidate models (i.e.
the usefulness of measurements is overestimated). When assuming that uncertainties are
independent, there was a probability lower than 1% of obtaining the number of candidate models
found using on-site observations. This strengthens the point that assuming independent
uncertainties is not a conservative assumption. With respect to the results obtained, the
qualitative reasoning scheme proposed to describe correlations predicts the usefulness
of measurements better than assuming independent uncertainties.



Figure 5.28: Cumulative distribution functions (F_PR) representing the probability of obtaining
a maximum prediction range for the first five natural frequencies of the structure. Note that
the width of each graph corresponds to the prediction range from the initial model set. The
polygonal signs correspond to the prediction ranges obtained after falsifying inadequate
models using on-site measurements.



Figure 5.29: The correlation choice proposed in the qualitative reasoning scheme (Section 3.2.1) is
varied by ±0.2. This variation is used to test the robustness of the expected identifiability
with respect to the choice of uncertainty dependency.


Figure 5.30: Comparison of the cumulative distribution functions of the expected candidate model set
size obtained for several assumptions of correlation. The vertical dashed line corresponds to the number of candidate models obtained using on-site measurements (see Section 5.3.2).
Assuming that uncertainties are independent does not lead to conservative predictions.

5.3.6 Optimization of measurement-system configurations


The measurement-system design methodology presented in Chapter 4 is used to optimize a
measurement system and test configuration to be used on the Langensand Bridge for future
monitoring.

Structure description
The finite-element template model used to generate model instances is presented in Figure
5.31. The primary parameters to identify are the concrete Young's modulus for the slab poured
during construction phases one and two, the asphalt Young's modulus for phases one and
two, and the stiffness of the horizontal restriction that could take place at the longitudinally
free bearing devices. Details of the construction phases are presented in Figure 5.32. The
possible range for the concrete Young's modulus varies from 15 GPa to 40 GPa, the asphalt Young's modulus from 2 GPa to
15 GPa, and the bearing-device restriction from 0 kN/mm to 1000 kN/mm.
Each parameter range is subdivided into five intervals to generate 3 125 initial model instances.

Figure 5.31: Langensand Bridge cross-section details for construction phase 2.

The initial measurement system to be optimized is composed of ten displacement, four
rotation and five strain sensors. The displacement and rotation measurements are referenced
by the prefix UY for vertical displacement and RZ for the rotation around the transverse axis;
EX denotes strain along the longitudinal axis. The location of each sensor is referenced according to
the axes presented in Figure 5.32. The structure can be loaded using the four load-cases presented
in Figure 5.33. Each test truck weighs 35 tons and each test load-case takes two hours.
Each displacement sensor costs $200. Rotation sensors cost $600 each and strain sensors cost
$1500 per unit (optical-fiber devices), including installation costs. Test-truck rental is $400 per
truck plus an additional $200 per hour of use.
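Under one reading of these figures (with the $200 hourly rate applied per truck), the cost of a test configuration can be computed as below. With this assumption, the full initial system used with four trucks and all four load-cases reproduces the maximal cost reported in Table 5.12:

```python
# Unit costs quoted above, installation included
SENSOR_COST = {"displacement": 200, "rotation": 600, "strain": 1500}  # $ each
TRUCK_RENTAL = 400          # $ per truck
TRUCK_HOURLY = 200          # $ per truck-hour (assumed to apply per truck)
HOURS_PER_LOAD_CASE = 2

def load_test_cost(n_disp, n_rot, n_strain, n_trucks, n_load_cases):
    sensors = (n_disp * SENSOR_COST["displacement"]
               + n_rot * SENSOR_COST["rotation"]
               + n_strain * SENSOR_COST["strain"])
    trucks = n_trucks * (TRUCK_RENTAL
                         + TRUCK_HOURLY * HOURS_PER_LOAD_CASE * n_load_cases)
    return sensors + trucks

# Full system: 10 displacement, 4 rotation, 5 strain sensors, 4 trucks, 4 cases
print(load_test_cost(10, 4, 5, 4, 4))  # 19900, the maximal cost in Table 5.12
```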

Figure 5.32: Langensand Bridge cross-section and potential sensor layout to be used for future
monitoring.

Figure 5.33: Load-case layout for the Langensand Bridge.


Uncertainties
Uncertainties associated with secondary model parameters that are not intended to be identified are summarized in Table 5.5. Additionally, the steel Young's modulus is
considered to vary according to a Gaussian distribution having a mean of 202 GPa and a standard deviation of 6 GPa. These uncertainties are propagated through the finite-element template
model to obtain the uncertainties in the model predictions.
Other modeling and measurement uncertainty sources are described in Table 5.6. For this
case study the measurement repeatability is described by a Gaussian distribution having a
mean of 0 and a standard deviation of 1% for displacement measurements, 0.5% for rotations and 3% for strains. These numbers are conservative upper bounds representing results
obtained from previous measurement experience [88]. Except for sensor resolution, the probability distribution used to describe these uncertainties is the extended uniform distribution
(see Appendix A). This distribution is made of several orders of uniform distributions, each
accounting for the uncertainty of the uncertainty. The intervals defined in Table 5.6 are the
minimal and maximal uncertainty bounds expressed as a percentage of the mean model
prediction. For displacement and rotation predictions, the bound positions are recognized
to be potentially either under- or overestimated by 30% of the initial interval width.
In the case of strain, this uncertainty is 50% because of local inaccuracies in the
bridge model that have a more important effect on predicted values. Sensor resolutions are
described by uniform distributions as defined by manufacturer specifications.

Uncertainty dependencies
Dependencies between uncertainty sources and locations are included in the process of
simulating measurements. These dependencies are described by correlation coefficients.
Since little information is available for evaluating these quantities, the qualitative reasoning
scheme proposed in Section 3.2.1 is used to describe them. Correlation definitions are the same as
those presented in Table 5.11. For each measurement location, a combined uncertainty pdf is
computed. Threshold bounds are determined for a target probability fixed at 0.95. The
definition of the threshold-bound width depends on the number of measurements used in
the falsification process. Therefore, specific threshold bounds are computed for each sensor
configuration.

Measurement-system design results


Measurement-system optimization is performed according to two criteria: load-test costs and
the expected number of candidate models. Both objectives need to be minimized. Results are
presented in Figure 5.34. Load-test costs are presented on the horizontal axis and the expected
number of candidate models on the vertical axis. The expected number of models is expressed
as a percentage of the initial model set. Each dot represents the optimal measurement system
found for each cost. When using cheap measurement systems with few sensors, poor results
are expected. By choosing optimized sensors and test configurations, the performance can be
improved for a marginal cost increase. Beyond a certain point, adding sensors and load cases
not only stops improving the monitoring efficiency, it decreases it. This quantitatively demonstrates
a principle intuitively known by engineers: too much measurement data may hinder
interpretation.
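The dots in Figure 5.34 form the Pareto front of this bi-objective problem: a configuration is retained only if no other configuration is both cheaper and expected to leave no more candidate models. A sketch of such a filter, using a few (cost, expected candidate models) pairs taken from Table 5.12:

```python
def pareto_front(configs):
    """Keep (cost, n_cm) pairs not dominated when minimizing both objectives."""
    front = []
    for cost, n_cm in configs:
        dominated = any(c <= cost and n <= n_cm and (c, n) != (cost, n_cm)
                        for c, n in configs)
        if not dominated:
            front.append((cost, n_cm))
    return sorted(front)

# (load-test cost $, expected candidate models), taken from Table 5.12
configs = [(3400, 1252), (5300, 909), (9900, 722), (11500, 676),
           (12500, 711), (16000, 725), (19900, 781)]
print(pareto_front(configs))
# [(3400, 1252), (5300, 909), (9900, 722), (11500, 676)]
```

Note that the three most expensive configurations are dominated: they cost more yet are expected to leave more candidate models, which is the over-instrumentation effect described above.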

Figure 5.34: Measurement-system design multi-objective optimization results for the Langensand Bridge.

The sensor and load-case configurations associated with each dot in Figure 5.34 are reported
in Table 5.12. In this table, the columns containing diamond signs indicate which sensors and
load-cases are selected for each configuration reported in Figure 5.34. The expected number
of candidate models is obtained for a probability of 95%. This is the upper bound on the
number of candidate models that should be obtained when using real measurements. This
means that individual results are likely to be better.
In this case, the best measurement system found uses 4 sensors with 3 load-cases and would
result in almost 80% of model instances being falsified. This measurement-system configuration lies halfway between the cheapest and the most expensive measurement systems. It leads
to a reduction of monitoring costs by up to 50% compared with the maximal cost. Results
presented here indicate that over-instrumenting a structure is possible. The approach proposed is not intended to replace engineering judgment; it is presented as a tool for exploring
the benefits of a wide selection of possible sensor types and locations. In cases where the
optimal configuration uses too few sensors, additional provisions might have to be made to
account for possible sensor breakage and malfunction, and to ensure robustness through redundancy.


Table 5.12: Optimized measurement configurations are shown by a vertical set of diamond symbols for a given sensor type and location. The cost of
the load-test along with the expected number of candidate models computed for a probability of 95% are reported for each configuration.

Candidate sensors and load-cases: UY-S03-114, UY-S07-114, UY-S12-114, UY-S17-114, UY-S21-114, UY-S03-124, UY-S07-124, UY-S12-124, UY-S17-124, UY-S21-124, RZ-A1-113, RZ-S4-113, RZ-S7-113, RZ-S10-113, EX-L1, EX-L2, EX-L3, EX-L4, EX-L5; load-cases LC1 to LC4. (The per-configuration sensor selections are indicated by diamond marks in the original table.)

Cost ($)    Expected CM
3400        3000
3400        1252
3800        1092
5300        909
6900        861
8400        783
9900        722
10200       712
11500       676
11700       694
12500       711
12900       720
15600       720
16000       725
16600       725
18000       729
18200       755
19500       773
19900       781

5.3. Langensand Bridge


Chapter 5. Case studies

5.3.7 Case-study conclusions


This case study highlights the potential of the error-domain model falsification approach along
with methodologies that have been developed for analyzing the performance of measurement
systems. Specific conclusions are listed below:

1. Error-domain model falsification can improve the understanding of complex systems
such as full-scale civil structures. In the case of Langensand Bridge, the identification
was performed successfully using both static and dynamic data.

2. Complete model-class falsification is possible if all model instances are rejected. This
supports exploratory iterative reasoning where initial hypotheses are replaced with new
knowledge. For Langensand Bridge, it led to the falsification of a model class where
bearing devices were free to move.

3. The computational effort required during the model evaluation process can be reduced
using surrogate models. In the case of the Langensand Bridge, a significant reduction
in computation time was accompanied by only small reductions of identification
capability.

4. Computing the expected identifiability leads to representative predictions of the results
obtained using measurements. Therefore, this methodology can be used in similar cases
to predict probabilistically to what extent measurements will be useful for improving
system understanding. Assuming that all uncertainties are independent leads to an
over-estimation of measurement-system performance.

5. The measurement-system design methodology provides a distinction between useful
sensor configurations and over-instrumentation. For Langensand Bridge, optimized
sensor and load-case configurations were found for a fraction of the cost of the initial
measurement system.

6. A structure can behave differently under ambient vibrations compared with static loading.
The small amplitude of ambient vibrations might be insufficient to overcome the
cohesion and friction restricting the movement of bearing devices. For the Langensand
Bridge, this particularly affected the first fundamental frequency.


5.4 Grand-Mere Bridge

This case study explores two aspects of data interpretation. Firstly, modeling uncertainties are
quantified by comparing the predictions obtained from different model classes and, secondly,
the physical properties of the structure are identified using dynamic data. The uncertainty-quantification
study is divided into two parts. First, the effects of two types of model simplifications
are studied in relation to prediction errors for strain, displacement, rotation and natural
frequency of the structure. These prediction errors are related to the choice of types and
configurations of elements. The second part is related to the inclusion or exclusion of secondary
structural elements such as barriers, pavement, concrete reinforcement, diaphragms, and
simplified support conditions.

In Section 5.4.5, ambient vibration recordings are used to identify the behavior of the structure.
Ambient vibration monitoring data was provided by Professor Luc Chouinard from McGill
University (Canada). Results presented in this section were obtained in collaboration with the
master's student Marie Texier.

5.4.1 Structure description

The bridge is a three-span prestressed concrete box-girder structure with variable inertia. It
was built in 1976-1977 and crosses the St-Maurice river to connect the towns of Grand-Mere
(west side) and Saint-Georges (east side). At the time of construction it was the longest bridge of
its type in North America.

The bridge experienced problems after completion [141, 142]. Vertical displacements at mid-span
started to increase and in 1987, 10 years after construction, reached 300 mm. Localized
cracks appeared on the east side of the bridge (see Figure 5.35), on the upper deck and on the
southern web. In 1992, additional prestressing cables were added, and the cracks, attributed
to the differential shrinkage between the webs and the upper flange of the section, were filled
with mortar. The techniques used to strengthen the bridge are described by Massicotte et al.
[142]. The causes of these problems were attributed to inaccurate estimates of the loss in
prestressing, the use of poor-quality materials, and poor quality control during execution of
the work. These possible causes were not explicitly validated. Figures 5.35 and 5.36 present the
geometry of the bridge and its main dimensions.

[Figure: elevation view from the west bound to the east bound, with dimensions of 12.2 m, 39.6 m, 181.4 m (main span), 39.6 m and 12.2 m; the concrete-filled volumes, the observed cracks and the sand filling are indicated.]

Figure 5.35: Grand-Mere Bridge elevation view

[Figure: cross-section with a deck width of 12.8 m and a variable girder depth of 2.96 to 9.6 m.]

Figure 5.36: Grand-Mere Bridge cross-section detail

5.4.2 Model-class descriptions

Using structural drawings, three finite-element model classes have been developed: a simplified
shell-based model, a composite shell-solid model, and a full solid-based model. In all
models, the parabolic shape of the lower flange has been approximated to have a constant
radius. Also, the sand filling used as a counterweight is represented using discrete mass elements
spread on the bottom flange of the girder. The structure has been assumed to be fully prestressed,
so pretension forces are not included in the models. Full uncracked sections have
been modeled and the prestressing cables were assumed to act elastically. Figure 5.37 presents
a general overview of the finite-element models.

Figure 5.37: Grand-Mere Bridge finite-element model general overview

Simplified shell-based model

The shell-based model is built with shell elements as presented in Figure 5.38a. This model
class is representative of models that are commonly used by researchers and practitioners for
the purpose of model calibration and data interpretation. The wedge-shaped ends (see Figure
5.35) of the structure are made of solid concrete, and are therefore modeled as solid elements.
The transversal and longitudinal slopes of the deck are neglected and the openings in the
diaphragms are not modeled. The different thicknesses of the cross-sectional elements (web,
upper flange and cantilevered deck) are averaged over the length of the bridge. The variation
of the lower flange thickness is approximated by stepwise thickness changes along the main
span. The fillets of the cross-section are not modeled. Support conditions are simplified
to linear discrete supports in the transversal direction. The cross-section of the model is
presented in Figure 5.38b.

[Figure: (a) schematic view of the simplified shell-based model, built of shell elements with the pavement layer on top; (b) cross-section view of the simplified shell-based model.]

Figure 5.38: Grand-Mere Bridge cross-section and isometric view of the simplified shell-based
model

Shell-solid model

The shell-solid model is built from a combination of shell and solid elements as presented in
Figure 5.39a. The level of refinement used in this model class is above what is usually employed
in practice because of the increase in modeling and computational effort required. Structural
elements that have a small thickness compared to their other dimensions, such as the web,
the upper flanges and the cantilevered deck, are modeled as shell elements. Elements having
variable thickness are approximated by step-wise constant thicknesses. The wedge-shaped
ends are modeled as solid elements, as are the fillets of the cross-section and the roadway
barriers. To ensure compatibility between shell elements with 6 degrees of freedom (DOF) per node and
solid elements with 3 DOFs per node, constraint equations are applied to the nodes located at the shell-solid
interfaces. For support conditions, bearing plates are represented using constraint equations
where supports are defined for relevant master nodes only. The cross-section of the shell-solid
model is presented in Figure 5.39b.

[Figure: (a) schematic view of the shell-solid model, with solid elements, shell elements (including the pavement) and constraint equations at the shell-solid interfaces; (b) cross-section view of the shell-solid model.]

Figure 5.39: Grand-Mere Bridge cross-section and isometric view of the shell-solid model

Solid-based model

The solid-based model is built from solid elements as presented in Figure 5.40a. It is the model
with the highest geometrical accuracy. For support conditions, bearing plates are modeled
using constraint equations and supports are defined at master nodes only. For the purpose of
structural identification, obtaining several thousand solutions from such a complete model
class can be computationally prohibitive. The solid-based model cross-section is presented in
Figure 5.40b. In this model, shell elements are used to represent the pavement layer.

[Figure: (a) schematic view of the solid-based model, built of solid elements with shell elements for the pavement; (b) cross-section view of the solid-based model.]

Figure 5.40: Grand-Mere Bridge cross-section and isometric view of the solid-based model

5.4.3 Quantification of the effect of model simplifications on prediction errors

The effect of model simplifications on both static and dynamic predictions is studied by comparing
predicted values from the solid-based model with less accurate model classes relying
on simplifying assumptions. Quantifying errors using the direct comparison of model predictions
and measurements may be misleading for two reasons. The first reason is that model
simplifications and other uncertainty sources may compensate each other, resulting in better
correspondence with observations than what would be expected. The second reason is that all
models have inherent uncertainties related to their physical constitutive parameters such as
material properties and boundary conditions. Without an accurate knowledge of such uncertainties,
it is not possible to quantify the specific contribution of each modeling simplification
to the total error. The comparison of several model classes is used to compute approximate
lower bounds associated with model-simplification uncertainties and to obtain knowledge
of the error structure. For comparisons of static predictions, four trucks of 35 tons each are
positioned at mid-span as presented in Figure 5.41.
Several displacement, rotation and strain predictions are compared to obtain a lower-bound
estimate of model-simplification uncertainties. The results obtained from the simplified
shell-based model and the refined shell-solid model are compared with those given by
the solid-based model according to Equations 5.7 and 5.8. In these equations, $\epsilon_{SS,i}$ and $\epsilon_{SO,i}$ are
respectively the modeling-simplification relative errors at location $i$ for the shell-solid and shell-only
model classes, $r_{S,i}$ is the value predicted by the solid model, $r_{SS,i}$ the value predicted by the
shell-solid model and $r_{SO,i}$ the value predicted by the shell-based model.

[Figure: load-case description; the trucks are positioned at the middle of the main span (L/2 = 90.7 m).]

Figure 5.41: Load-case description for Grand-Mere Bridge

$$\epsilon_{SS,i} = \frac{r_{SS,i} - r_{S,i}}{r_{S,i}} \quad (5.7)$$

$$\epsilon_{SO,i} = \frac{r_{SO,i} - r_{S,i}}{r_{S,i}} \quad (5.8)$$
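Equations 5.7 and 5.8 translate directly into code; a minimal sketch with hypothetical mid-span displacement predictions:

```python
def relative_error(r_simplified, r_solid):
    """Modeling-simplification relative error (Eqs. 5.7 & 5.8):
    (simplified-model prediction - solid-model prediction) divided by
    the solid-model prediction."""
    return (r_simplified - r_solid) / r_solid

# Hypothetical predictions (mm) for one location i
r_solid = 10.0        # solid-based reference model
r_shell_solid = 10.2  # shell-solid model
r_shell_only = 11.2   # shell-based model

eps_ss = relative_error(r_shell_solid, r_solid)  # +2%
eps_so = relative_error(r_shell_only, r_solid)   # +12%
```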

The most detailed model (i.e. solid-based) is expected to be more rigid than the shell-solid
model, and the latter more rigid than the shell-based model. This is due to load-carrying
elements that are neglected in both the shell-solid and shell-based models. More rigid
models lead to smaller static predictions and higher natural frequencies. Therefore, modeling-simplification
errors are expected to be positive for the static predictions and negative for
frequencies. Furthermore, errors are expected to be larger for the shell-based model than for
the shell-solid model.

Vertical displacement

Relative errors computed for vertical displacements on the central part of the main span
are presented in Figure 5.42. The average relative error on the central part of the main span
is +12% for the shell-based model, and +2% for the shell-solid model. The error from the
shell-based model increases at locations near the intermediate supports. Zones around the
supports have complex geometries (simultaneous variation in web, lower flange and fillet
dimensions) that are not well captured by simplified shell models.

The errors obtained for the shell-solid model are smaller because fewer geometrical simplifications
are made. However, negative errors are observed at the supports on the side spans due
to the higher level of simplification involved in the shell-based model, which affects the
redistribution of internal forces.

[Figure: relative errors at ten locations along the bridge; shell-only (SO) errors range from +11% to +25%, shell-solid (SS) errors from -12% to +2%.]

Figure 5.42: Relative error for vertical displacement predictions due to model simplifications.

Rotation around Z-axis

Relative errors in rotation predictions around the Z-axis are presented in Figure 5.43. Errors
computed from the shell-based model are stable, with an average value of +10%, except in the
zones of intermediate supports where they double. Small variations are obtained in the error
given by the shell-solid model. The two values for the central span are +2%.
[Figure: relative errors at six locations along the bridge; shell-only (SO) errors range from +7% to +24%, shell-solid (SS) errors from -1% to +2%.]

Figure 5.43: Relative error for rotation predictions around Z-axis due to model simplifications.

Longitudinal strain

Strain relative errors are computed at different locations on the sections, as shown in Figure
5.44. Predictions for the cantilever deck are taken on the upper fibre, whereas other predictions
are made on the inside of the box girder. The results obtained show that strain
predictions are very sensitive to local model simplifications.
[Figure: relative errors at locations around the cross-section; shell-only (SO) errors range from -10% to +151%, shell-solid (SS) errors from -11% to +127%, with the largest errors exceeding 100%.]

Figure 5.44: Relative error in strain prediction along X-axis due to model simplifications.



Even if the predicted values are taken at the same locations in the section (i.e. on the upper
fibre of the lower deck in the box girder), these locations may not be at the same distance
to the neutral axis in each model. This is a direct consequence of modeling simplifications.
Furthermore, this also affects the load path in the structure. Due to the large variability in the
computed error values, these models may not accurately predict strain patterns in the structure.

Natural frequency

The relative errors that have been computed for natural frequencies are presented in Table
5.13. The shell-based model systematically predicts lower frequencies than the solid-based
model. In the case of the shell-solid model, prediction errors are smaller than for the shell-based
model for vertical bending modes. The accuracy of the shell-solid model is lower for modes
corresponding to lateral bending and torsion.

Table 5.13: Relative error in predicted natural frequencies (%) and losses in the MAC criteria
due to model simplifications.
Mode number | Description | Errors in natural frequencies (%): Shell-based / Shell-solid | MAC criteria: Shell-based / Shell-solid
1 | Vertical bending* | -4.4 / -0.5 | 1.00 / 1.00
2 | Lateral bending* | -4.1 / -9.6 | 1.00 / 0.99
3 | Vertical bending* | -4.4 / -1.1 | 1.00 / 1.00
4 | Lateral bending | -3.1 / -16.3 | 1.00 / 0.89
5 | Vertical bending* | -2.6 / -0.2 | 0.98 / 0.99
6 | Torsion* | -7.5 / -18.9 | 0.99 / 0.51
6 | Torsion-2 | - / +3.6 | - / 0.59
7 | Vertical bending* | -5.3 / -1.4 | 1.00 / 1.00
8 | Torsion | -8.8 / +1.9 | 0.97 / 0.61
9 | Vertical bending | -6.1 / -1.2 | 0.99 / 1.00
*: measured mode

Mode-shape vectors are extracted for each model and for each natural frequency reported
in Table 5.13. A MAC test is performed on the mode shapes. The results of the comparison are
shown in Table 5.13. For the shell-solid model, there are two close modes that present a mix of
torsion and lateral bending (Mode 6), and both fail the MAC test with values below 0.6. Mode
number 8 (torsion) also has a poor MAC value. This indicates that torsional behavior is not
adequately captured by the refined shell-solid model. Therefore, this model class should not
be used to explain observations involving torsional behavior.
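For real-valued mode shapes, the MAC value is the squared, normalized dot product of the two shape vectors; it equals 1 for collinear shapes and tends toward 0 for dissimilar ones. A minimal sketch with hypothetical shape vectors:

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors:
    MAC = |phi_a . phi_b|^2 / ((phi_a . phi_a) * (phi_b . phi_b))."""
    dot = sum(a * b for a, b in zip(phi_a, phi_b))
    return dot ** 2 / (sum(a * a for a in phi_a) * sum(b * b for b in phi_b))

# Hypothetical shapes: a scaled copy matches perfectly; a different
# shape gives a low MAC value (well below the 0.8 acceptance level)
phi_1 = [0.0, 0.5, 1.0, 0.5, 0.0]
phi_2 = [0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, different scaling
phi_3 = [0.0, 0.9, 0.2, -0.9, 0.0]  # different shape
mac_same = mac(phi_1, phi_2)  # 1.0
mac_diff = mac(phi_1, phi_3)  # close to 0
```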

5.4.4 Quantification of the effect of omitting secondary structural elements

The effect of secondary model features on predictions is quantified by removing them one
by one from the solid-based model. Examples of these features are: barriers, pavement,
concrete reinforcement, and diaphragms. The effect of simplified support conditions is also
evaluated by removing constraint equations used to represent rigid bearing supports. The
results obtained from solid-based models that do not include one of the secondary elements
are compared to predictions given by the solid-based model containing all the secondary
elements.

Vertical displacement

Relative errors for the vertical displacement predictions are presented in Figure 5.45. The
exclusion of the first three features (barriers, pavement, and concrete reinforcement) gives
constant prediction errors of respectively +7%, +0% and +2% over the length of the bridge.
The exclusion of diaphragms and the simplification of the support conditions are discrete
simplifications. Therefore, the error in the prediction is not the same at all locations. The
errors increase as the predicted values are extracted closer to the location of the omitted
element. The cumulative effect of all secondary elements significantly affects the predicted
displacement values. Note that the sum of the effects of individual components is not equal to the
effect of all components taken together.

[Figure: relative errors at ten locations along the bridge for each omission: G (no barrier) +6% to +8%, P (no pavement) +0%, R (no reinforcement) +2%, B (no diaphragm) -8% to +2%, S (simplified support conditions) +0% to +9%, A (all simplifications) -5% to +16%.]

Figure 5.45: Relative error for vertical displacement prediction due to secondary-element
omission.

Rotation around Z-axis

Relative errors for the rotation predictions around the Z-axis are presented in Figure 5.46.
As for the vertical displacement, the exclusion of the first three features (barriers, pavement,
and concrete reinforcement) gives constant prediction errors over the length of the bridge of
respectively +7%, +0% and +2%. Figure 5.46 also shows that if the diaphragms are neglected,
there is no significant effect on the rotation predictions around the Z-axis. Simplified support
conditions have a direct impact on the rotation predictions at the supports themselves. The
cumulative effect of all secondary elements significantly affects the predicted rotation values.
[Figure: relative errors at six locations along the bridge: G (no barrier) +6% to +8%, P (no pavement) +0%, R (no reinforcement) +2%, B (no diaphragm) -1% to +0%, S (simplified support conditions) +1% to +12%, A (all simplifications) +13% to +21%.]

Figure 5.46: Relative error for rotation prediction around Z-axis due to secondary-element
omission.

Longitudinal strain

Figure 5.47 shows that the relative errors in longitudinal strain predictions are dependent
upon their location. Quantitatively, the feature having the most influence is the barriers.
The errors in the predictions obtained with the model that excludes the diaphragms, and in the
predictions obtained with simplified support conditions, present anomalies such as localized
high values and negative values. The predictions made on the bottom chord, far away from
any secondary elements, are less affected by their omission.

[Figure: relative errors at locations around the cross-section: G (no barrier) +1% to +37%, P (no pavement) +0% to +1%, R (no reinforcement) +1% to +3%, B (no diaphragm) -1% to +8%, S (simplified support conditions) -11% to +3%, A (all simplifications) +2% to +42%.]

Figure 5.47: Relative error for strain prediction along X-axis due to secondary-element omission.



Natural frequency

The estimated relative errors in natural frequencies are presented in Table 5.14. Modes 2
(lateral bending) and 5 (torsion) are more sensitive to the exclusion of secondary elements than
the others. The models that exclude either the pavement or the concrete reinforcement give a positive
error, meaning the natural frequency is higher than when accounting for these features. This
shows that the mass of these elements may contribute more to the natural frequency than
their stiffness.

Table 5.14: Relative error in natural frequencies (%) due to the exclusion of secondary
structural elements.
Secondary element excluded | Mode 1 Vertical bending | Mode 2 Lateral bending | Mode 3 Vertical bending | Mode 4 Vertical bending | Mode 5 Torsion | Mode 6 Vertical bending
Barrier | -0.07 | -5.4 | -0.2 | -0.4 | +12.9 | +0.3
Pavement | +3.5 | +3.6 | +3.2 | +2.9 | +3.0 | +2.9
Reinforcement | +0.08 | +0.06 | +0.03 | -0.05 | +0.08 | +0.01
Diaphragms | -0.3 | -8.9 | -0.4 | -0.1 | -16.6 | -0.2
Support conditions | -1.1 | -3.1 | -1.4 | -0.9 | -1.8 | -1.0
All parameters | +2.5 | -10.8 | +1.5 | +1.5 | -12.9 | -7.4
Result summary

The level of complexity of the model in terms of geometry simplification and element-type combinations,
as well as the inclusion of secondary structural elements, has a significant influence on predictions.
Also, zones having localized simplifications, such as intermediate supports that present complex
geometries, are more sensitive to simplifications and result in higher prediction errors.
The estimation of the prediction errors in natural frequencies shows that modes involving
lateral bending and torsion are more sensitive than the others to the exclusion of secondary
structural elements. Furthermore, simplified models may not adequately represent local
behavior since prediction errors in longitudinal strain may be important (>100%). Global
behavior, such as natural frequencies, displacements and rotations around the transversal axis,
should be favored as quantities to be compared with measurements during structural identification.
Although the solid-based model accurately represents the geometry of the bridge, it
remains an approximation of the real structure. Additional errors should be expected between
the predictions given by this model and the real behavior. The results of this study provide
lower bounds on model-simplification errors that can be used for structural identification.

The most important aspect of this study is that it shows that model simplifications and
omissions systematically affect the error structure. This means that when an aspect of a
structure is neglected in the model, it may systematically affect the prediction errors at
several prediction locations and for several prediction types to varying degrees. Modeling
each of these errors by zero-mean independent Gaussian noise would not represent the error
structure observed in this case study.

5.4.5 Structural identification using time-domain data

This section presents the condition evaluation of the Grand-Mere Bridge using ambient
vibration monitoring data. Modeling uncertainties, such as those arising from the model
simplifications estimated in the previous section, are included in the analysis along with
measurement uncertainties.

Ambient vibration recordings

Dynamic measurements were recorded on the bridge in 2003. Accelerometers were positioned
inside the box girder on the web faces, about 500 mm above the lower flange of the box girder.
The locations and measured directions of the accelerometers are given in Figure 5.48.
[Figure: accelerometer layout; markers indicate locations measuring acceleration along the Y-axis and locations measuring acceleration along both the Y-axis and Z-axis.]

Figure 5.48: Accelerometer layout for Grand-Mere Bridge monitoring system.

In order to be interpreted, measured accelerations have to be synthesized into natural frequencies
and mode shapes (experimental modal analysis). The technique used here is the
Frequency Domain Decomposition (FDD) method [38]. The first six singular values of the
averaged power spectral density matrices (FDD spectrum) are shown in Figure 5.49. The
number of singular values showing a peak in a given frequency range corresponds to the
number of modes in this range. For instance, around the frequency 1.04 Hz, two singular
values show a peak, indicating the presence of two modes. A total of six modes were identified
with confidence. The natural frequencies of these modes and plots of the mode shapes are
given in Figure 5.50.
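The core FDD computation can be sketched as follows: estimate the cross power spectral density (PSD) matrix of the acceleration channels at each frequency line and inspect its singular values. This is a simplified illustration on synthetic two-channel data (no windowing or segment overlap), not the processing chain applied to the bridge recordings:

```python
import numpy as np

def fdd_singular_values(accel, fs, nperseg=256):
    """Frequency Domain Decomposition sketch: average the cross-spectra
    of the channels over segments, then take the singular values of the
    PSD matrix at each frequency line. Peaks in the first singular value
    indicate modes; the corresponding singular vector approximates the
    mode shape. accel: (n_samples, n_channels); fs: sampling rate (Hz)."""
    n_ch = accel.shape[1]
    # Crude Welch-style averaging: non-overlapping, unwindowed segments
    segs = accel[: accel.shape[0] // nperseg * nperseg].reshape(-1, nperseg, n_ch)
    spectra = np.fft.rfft(segs, axis=1)                       # (n_seg, n_freq, n_ch)
    psd = np.einsum("sfi,sfj->fij", spectra, spectra.conj()) / segs.shape[0]
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    sv = np.linalg.svd(psd, compute_uv=False)                 # (n_freq, n_ch)
    return freqs, sv

# Synthetic example: two noisy channels dominated by a 1.0 Hz component
rng = np.random.default_rng(0)
t = np.arange(0, 512, 1 / 32)  # 32 Hz sampling, 512 s of data
mode = np.sin(2 * np.pi * 1.0 * t)
accel = np.column_stack([mode, 0.5 * mode]) + 0.1 * rng.standard_normal((t.size, 2))
freqs, sv = fdd_singular_values(accel, fs=32)
peak = freqs[np.argmax(sv[:, 0])]  # frequency of the dominant mode
```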

Model parameters and uncertainties

From the three model classes available, the shell-based class is chosen to perform structural
identification because the falsification method requires solving the model many times (>1000).
Moreover, as shown in the previous section, the shell-based model adequately captures the
dynamic characteristics of the solid-based model.

Parameters to be identified and model-instance generation

Predicted natural frequencies are most sensitive to variations in the concrete Young's modulus,
in the bearing-device condition, and in the level of cracking (see Figure 5.51 & Table 5.15).
Therefore, these parameters are chosen as primary parameters to be identified. For the Young's
[Figure: averaged singular values of the PSD matrices (dB) plotted against frequency (Hz), with peaks labelled: 1-Vertical bending (1.04 Hz), 2-Lateral bending (1.04 Hz), 3-Vertical bending (2.12 Hz), 4-Vertical bending (3.71 Hz), 5-Torsion, 6-Vertical bending (5.66 Hz).]

Figure 5.49: Averaged singular values of the power spectral density matrices for Grand-Mere
Bridge.

[Figure: six measured mode-shape plots: (a) mode 1, vertical bending (1.04 Hz); (b) mode 2, lateral bending (1.04 Hz); (c) mode 3, vertical bending (2.12 Hz); (d) mode 4, vertical bending (3.71 Hz); (e) mode 5, torsion (4.68 Hz); (f) mode 6, vertical bending (5.66 Hz).]

Figure 5.50: Measured mode shapes and frequencies for Grand-Mere Bridge



modulus, the value sought is an average over the structure. The bearing-device parameter
accounts for the possibility that longitudinal support displacement may be restricted under
ambient vibrations due to friction forces, as found in Section 5.3.4 for Langensand Bridge. This
phenomenon is simulated using longitudinal springs having unknown stiffness constants.
Finally, the possibility that cracked zones affect the structure's behavior is tested by reducing
the stiffness of concrete in regions located on the upper flange at the intermediate supports
and on the lower flange at mid-span. This simplified representation of the cracking mechanism
is intended to identify the eventual presence of undetected cracks.

Five primary parameters are selected to describe these zones: an equivalent Young's modulus
of the cracked concrete for each zone, expressed as a fraction of the average Young's modulus;
the length of the cracked zones at the supports; and the length of the cracked zone at mid-span.
Overall, there are seven primary parameters that need to be identified, as shown in Figure
5.51. The values used for each parameter are presented in Table 5.15. An initial model set is
made of 2 187 (3^7) combinations of these parameters. These combinations were selected over
a seven-dimensional grid where each parameter can take the lower, higher and midrange values
of its interval. This choice of discretization is governed by the computing time available.
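Enumerating the 3^7 grid is a direct Cartesian product of the parameter value sets; a minimal sketch using the values of Table 5.15 (the short parameter names E0, K, E1-E3, L1, L2 follow Figure 5.51):

```python
from itertools import product

# Three values per primary parameter (Table 5.15): Young's modulus in GPa,
# spring stiffness in kN/mm, crack-zone lengths in m, E1..E3 as fractions of E0
parameter_values = {
    "E0": [15, 27.5, 40],
    "K":  [0, 2e2, 2e4],
    "E1": [0.1, 0.5, 1.0],
    "E2": [0.1, 0.5, 1.0],
    "E3": [0.1, 0.5, 1.0],
    "L1": [4, 10, 16],
    "L2": [4, 10, 16],
}

names = list(parameter_values)
initial_model_set = [
    dict(zip(names, combo))
    for combo in product(*(parameter_values[n] for n in names))
]
```

Each element of `initial_model_set` is one model instance, ready to be passed to the template finite-element model for evaluation.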
[Figure: elevation view (west bound to east bound) marking the cracked zones at the piles and at mid-span, with the following primary parameters:
E0: concrete Young's modulus
K: bearing-device restriction
E1: effective Young's modulus of cracked concrete at the west pile
E2: effective Young's modulus of cracked concrete at mid-span
E3: effective Young's modulus of cracked concrete at the east pile
L1: length of crack zone at the supports
L2: length of crack zone at mid-span]

Figure 5.51: Primary parameters to be identified for the Grand-Mere Bridge.

Table 5.15: Values for parameters (3 for each parameter) used to create the initial model set for
the Grand-Mere Bridge.

Free parameter | Units | Values
Concrete Young's modulus | GPa | {15, 27.5, 40}
Bearing device restriction | kN/mm | {0, 2E2, 2E4}
Effective Young's modulus of cracked concrete at west pile | - | {0.1, 0.5, 1}
Effective Young's modulus of cracked concrete at mid-span | - | {0.1, 0.5, 1}
Effective Young's modulus of cracked concrete at east pile | - | {0.1, 0.5, 1}
Length of crack zone over supports | m | {4, 10, 16}
Length of crack zone at mid-span | m | {4, 10, 16}

Uncertainties

Secondary parameters are those that have a marginal effect on the structural response and are
considered as uncertainties. Natural frequencies are influenced by the mass and the rigidity
of the structure. Therefore, the secondary parameters are the concrete, pavement, and sand-filling
densities, the pavement Young's modulus and the steel-reinforcement Young's modulus.
Except for the uncertainty attributed to the steel Young's modulus, which follows a Gaussian
distribution, secondary-parameter uncertainties are all represented by uniform distributions.
The details of each uncertainty distribution are presented in Table 5.16. These uncertainties
are propagated through the template model to obtain the uncertainties associated with predicted
frequencies.
Table 5.16: Secondary-parameter uncertainties for Grand-Mere Bridge.

Uncertainty source | Unit | Uniform distribution: min / max
Concrete density | T/m3 | 2.2 / 2.6
Pavement Young's modulus | MPa | 1000 / 10000
Pavement density | T/m3 | 2.0 / 2.4
Sand density | T/m3 | 1.1 / 2.0

Uncertainty source | Unit | Gaussian distribution: Mean / Standard deviation
Steel Young's modulus | MPa | 202000 / 6000
Details of other uncertainty sources are summarized in Table 5.17. These sources are described by the extended uniform distribution (see Appendix A). This distribution includes uncertainty regarding the bounds defining the uniform distributions; a parameter represents the uncertainty of the bound positions as a fraction of the initial interval width. The element-discretization error estimation is based on a mesh-refinement analysis conducted to determine the maximal plausible prediction error for each natural frequency. The evaluation of uncertainties related to model simplifications is based on the study presented in §5.4.3. An uncertainty of 4% is added to the errors estimated in the previous study to represent the fact that the solid-based model used as reference is also an approximation of the real structure. Additional provisions are taken for mode #5 since predictions of the torsional behavior were found in §5.4.3 to be less accurate than other predictions.
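Appendix A gives the exact definition of the extended uniform distribution; the sketch below illustrates one plausible two-stage construction, in which each bound is first perturbed by a fraction of the interval width before a uniform sample is drawn. The function name and the treatment of inverted bounds are illustrative assumptions, not the thesis's implementation.

```python
import random

def sample_extended_uniform(lo, hi, delta, rng):
    """Two-stage sample: perturb each bound by +/- delta * (hi - lo),
    then draw uniformly between the perturbed bounds."""
    width = hi - lo
    a = lo + rng.uniform(-delta, delta) * width  # uncertain lower bound
    b = hi + rng.uniform(-delta, delta) * width  # uncertain upper bound
    if a > b:                                    # guard against inversion
        a, b = b, a
    return rng.uniform(a, b)

# AVM epistemic variation of Table 5.17: bounds -2% / +2%, fraction 0.30
rng = random.Random(0)
samples = [sample_extended_uniform(-0.02, 0.02, 0.30, rng) for _ in range(10000)]
```

With a bound fraction of 0.30, samples can fall at most 0.30 × (interval width) outside the nominal bounds, which is what makes the distribution "extended".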
Table 5.17: Other uncertainty sources for Grand-Mère Bridge.

    Uncertainty source          Extended uniform distribution
                                Lower bound   Upper bound   Bound fraction
    AVM epistemic variation     -2%           2%            0.30
    Model simplifications:
      Mode 1                    -8%           -1%           0.30
      Mode 2                    -8%           -1%           0.30
      Mode 3                    -8%           -1%           0.30
      Mode 4                    -7%           -0%           0.30
      Mode 5                    -15%          -1%           0.30
      Mode 6                    -9%           -2%           0.30
    Mesh refinement             0%            1%            0.30
    Additional uncertainties    -1%           1%            0.30

    Uncertainty source          Gaussian distribution
                                Mean          Std. dev.
    Measurement variability     0%            2%

5.4. Grand-Mère Bridge


Based on values reported in §1.3.2, the ambient-vibration-monitoring epistemic variability is represented by an extended uniform distribution with bounds at ±2% of the measured frequency values. Observed variability across the datasets is represented by a Gaussian distribution having a standard deviation of 2% of the measured frequencies. This number is an upper bound based on previous experiments (see §5.3.4).

The relative importance of uncertainties is shown in Figure 5.52. The most important sources of uncertainty are frequency measurements, model simplifications and concrete density. All uncertainty sources are combined together to obtain threshold bounds for each mode. Thresholds are computed for a target reliability of 0.95.

[Figure: bar chart of the relative importance of each uncertainty source.]

Figure 5.52: Uncertainty sources relative importance for Grand-Mère Bridge.
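To illustrate how combined uncertainties lead to threshold bounds, the sketch below draws Monte Carlo samples of a total prediction error built from sources loosely resembling the mode-1 entries of Table 5.17, and takes symmetric 95% quantiles. The thesis computes thresholds from the combined distribution at the target reliability, so both the numbers and the symmetric-quantile rule here are simplifying assumptions.

```python
import random

rng = random.Random(1)

def total_error():
    """One Monte Carlo draw of the combined prediction error for a mode,
    as a fraction of the measured frequency (values loosely follow
    Table 5.17, mode 1; bound uncertainty is ignored for brevity)."""
    model_simpl = rng.uniform(-0.08, -0.01)  # model simplifications
    avm         = rng.uniform(-0.02, 0.02)   # AVM epistemic variation
    mesh        = rng.uniform(0.00, 0.01)    # mesh refinement
    meas        = rng.gauss(0.0, 0.02)       # measurement variability
    return model_simpl + avm + mesh + meas

samples = sorted(total_error() for _ in range(20000))
lo = samples[int(0.025 * len(samples))]      # lower threshold offset
hi = samples[int(0.975 * len(samples))]      # upper threshold offset
```

Because the model-simplification source is biased negative, the resulting interval is not centered on zero, which is exactly why zero-mean noise assumptions are inappropriate here.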

Mode-shape accordance

Predicted and measured mode shapes are compared using the MAC test to verify that the comparison of natural frequencies is performed over the right modes. Only modes 1, 3, 4 and 6 passed the test (MAC value > 0.8). Modes 2 and 5 give a MAC value below 0.6 and are therefore not used to falsify models. As expected, the template model could not predict the torsional behavior (mode number 5). Poor correspondence between mode shapes may also be attributed to predictions that give unexpectedly rigid behavior (see Figure 5.50b). In any case, not including these two modes in the model-falsification process remains conservative.
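The MAC test mentioned above compares two mode-shape vectors through the standard Modal Assurance Criterion; a minimal implementation is sketched below (the example shape is made up).

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion for two real mode-shape vectors:
    MAC = (phi_a . phi_b)^2 / ((phi_a . phi_a) * (phi_b . phi_b)).
    1.0 means identical shapes up to scaling; near 0 means unrelated."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

# A shape compared with a scaled copy of itself gives MAC = 1.0,
# so the criterion is insensitive to the arbitrary modal scaling.
shape = [0.0, 0.38, 0.71, 0.92, 1.0, 0.92, 0.71, 0.38, 0.0]
assert abs(mac(shape, [2.5 * v for v in shape]) - 1.0) < 1e-12
```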

Model falsification

Candidate and falsified models for modes 1, 3, 4 and 6 are presented in Figure 5.53. This figure shows the comparison between the measured natural frequencies and the predictions from all model instances. Model instances are represented on the horizontal axis, and the predicted and measured values are shown on the vertical axis. The position of each dot corresponds to a prediction given by a model instance. The continuous line is the measured natural frequency and the two dashed lines are the threshold bounds that separate candidate model instances from falsified ones. The circles are instances that remain in the candidate model set. They correspond to the models whose predictions are simultaneously included between the threshold bounds for all modes {1, 3, 4, 6}.

The predicted frequencies for modes 3, 4 and 6 are influenced by the value of the concrete Young's modulus. The plots for modes 3, 4 and 6 in Figure 5.53 show distinct groups of predictions related to the possible values of this physical parameter. Model instances are falsified in a similar way for these three modes: models having the high value of 40 GPa for the concrete Young's modulus are selected as candidates. The first mode falsifies model instances in a different way. The scatter in model predictions for the first mode is influenced by a combination of the Young's modulus and the bearing-device movement parameters. Models with a low or moderate value of the concrete Young's modulus (15-27.5 GPa) are discarded regardless of the level of cracking and bearing-device restriction.
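The falsification rule described above — keep only instances whose predictions lie inside the threshold bounds for every retained mode — can be sketched as follows; all numerical values are hypothetical.

```python
# Hypothetical measured frequencies (Hz) and +/-10% threshold bounds,
# for illustration only; real thresholds come from the combined
# uncertainty distributions.
measured = {1: 1.95, 3: 4.10, 4: 5.60, 6: 8.20}
bounds = {m: (0.90 * f, 1.10 * f) for m, f in measured.items()}

predictions = {                      # model_id -> frequency per mode (made up)
    "A": {1: 1.97, 3: 4.05, 4: 5.55, 6: 8.30},
    "B": {1: 1.60, 3: 4.08, 4: 5.58, 6: 8.25},  # mode 1 out of bounds
    "C": {1: 2.00, 3: 4.90, 4: 5.70, 6: 8.10},  # mode 3 out of bounds
}

def is_candidate(pred):
    """An instance survives only if its prediction lies inside the
    threshold bounds for every mode; one violation falsifies it."""
    return all(bounds[m][0] <= pred[m] <= bounds[m][1] for m in bounds)

candidates = [name for name, pred in predictions.items() if is_candidate(pred)]
# candidates == ["A"]
```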
A pairwise representation of the candidate-model-set parameter values is presented in Figure 5.54. Each graph represents the possible range of each parameter, and dots correspond to candidate models. All 258 candidate models have a high value for the concrete Young's modulus. Some of these candidate models showed that the midrange value for the bearing-device stiffness is possible when combined with a high level of cracking. All values are possible for the other parameters (E1, E2, E3, L1, L2).

The ranges of values identified for the concrete Young's modulus and the parameters related to the crack pattern can be used to obtain long-term creep and shrinkage behavior predictions with the template model. In accordance with results found in §5.3.2, the identification shows that the bearing-device behavior could have been hindered during ambient vibration monitoring. As in the other case, under ambient vibrations, restricting the longitudinal movement of the bearing devices mainly affects the first vertical bending mode. Although the bearing-device parameter can influence the structure under ambient vibrations, it is likely not to have an effect on its displacement under high loads and over a long time span. Therefore, this parameter could be removed when performing long-term behavior simulations. The system-identification study can support future long-term behavior simulations such as those performed by Bažant et al. [15-17] on similar structures.

5.4.6 Case study conclusions

This study serves as a basis for future studies to encourage better evaluation and quantification of uncertainties related to modeling simplifications. It showed that for civil structures such as the Grand-Mère Bridge, the errors due to model simplifications have a dominantly systematic structure and are interdependent. Specific conclusions are:

1. Model simplifications such as the degree of refinement of a model and the exclusion of secondary features have a significant influence on the prediction errors. These errors are systematic and inter-dependent. Therefore, uncertainties should not be represented by zero-mean independent Gaussian noise.


[Figure: four panels (modes 1, 3, 4 and 6) plotting predicted/measured frequency values against model instances, with the measured frequency, threshold bounds, the distribution of the expected error, and candidate versus rejected models indicated.]

Figure 5.53: Comparison between the measured and predicted frequencies for modes 1, 3, 4 and 6 for the Grand-Mère Bridge.


[Figure: pairwise scatter plots of candidate-model parameter values. E0: concrete Young's modulus; K: bearing-device restriction; E1, E2, E3: effective Young's modulus of cracked concrete at the west pile, at mid-span and at the east pile; L1: length of the crack zones over the supports; L2: length of the crack zone at mid-span.]

Figure 5.54: Pairwise representation of the candidate model set parameter values for Grand-Mère Bridge.


2. Simplified models of the Grand-Mère Bridge inadequately represent local behaviors. Errors in longitudinal-strain predictions are in many cases important (>100%). Therefore, global behaviors such as natural frequencies, displacements and rotations around the transversal axis should be favored as quantities to be compared with measurements. The results of this study give lower bounds for the model-simplification error estimates that can be used as input for system identification.

3. In accordance with results found in §5.3.2, bearing-device behavior could have been biased during ambient vibration monitoring. The lack of longitudinal movement in the bearing devices mainly affected the behavior related to the first vertical bending mode. Also, all candidate models found have a high value of the concrete Young's modulus.


5.5 Tamar Bridge


This case study employs the error-domain model-falsification methodology to identify the behavior of the Tamar Bridge. Measurements were performed on this cable-stayed bridge in 2006 by a team from the University of Sheffield [121]. In addition to identifying the properties of the structure using dynamic data, the measurement-system design methodology proposed in §4.4 is used to optimize the accelerometer configuration for future monitoring activities. The data and model for this case study have been obtained from Prof. J. M. H. Brownjohn and R. Westgate from the University of Sheffield (UK).

5.5.1 Structure and tests description


The bridge has an overall length of 642 m and the height of the towers is 73 m. Figure 5.55a presents the finite-element model used to predict the behavior of the structure. Details regarding the structure and its model can be found in [40, 233].

Accelerations were measured at the locations shown in Figure 5.55b. Lateral and vertical accelerations were recorded at deck level, and lateral and longitudinal accelerations were recorded at the tower upper portals.

[Figure: (a) the finite-element model; (b) the accelerometer layout, with vertical and lateral acceleration recordings along the deck and longitudinal and lateral acceleration recordings at the tower portals.]

Figure 5.55: Tamar Bridge model and accelerometer layout.
In order to be interpreted, the recorded accelerations need to be synthesized into natural frequencies and mode shapes. The technique used here is the Frequency Domain Decomposition (FDD) method [38]. The conversion of the bridge time-domain signal gave the averaged singular values of the power spectral density shown in Figure 5.56.

Eighteen modes were identified in total. The natural frequencies of these modes are presented in Table 5.18 and plots corresponding to each mode shape are given in Figure 5.57. Note that the mode shapes of the torsional modes (3, 6, 7, 10, 12, 14, 15, 17) only include two measurement points in the transverse direction. This explains why the torsional mode shapes are only partially represented.
Measured mode shapes are compared with predicted ones using MAC values. A summary of this comparison is presented in Figure 5.58.

[Figure: averaged singular values of the PSD matrices (dB) versus frequency (Hz), with the structure's natural frequencies indicated.]

Figure 5.56: Power spectral density showing the modes extracted.

In this figure, the relative frequency of the MAC values for each measured mode is presented. Relative frequency is used because measured mode shapes are compared with 3 125 model instances (see §5.5.2). Here, 13 modes have an acceptable correspondence between predicted and measured values (MAC ≥ 0.8). Modes having a MAC value below 0.8 are not used for further comparison.

5.5.2 Structural identification using time-domain data

Structural identification is performed to identify the possible physical-parameter values of the structure's finite-element model. The following subsections present the steps performed to interpret the data.

Parameters to be identified and model-instance generation

The template model used to generate model instances (parameter combinations) has five primary parameters to be identified: main-cable initial strain [5E-4, 3E-3] mm/mm, sidespan-cable initial strains [5E-4, 3E-3] mm/mm, Plymouth-side support longitudinal stiffness 1E [4, 11] kN/mm, Saltash-side support longitudinal stiffness 1E [4, 11] kN/mm and Saltash-side deck expansion-joint longitudinal stiffness 1E [4, 11] kN/mm. These parameters are illustrated in Figure 5.59.

The initial model set is the discrete representation of the solution space. The interval of each parameter is discretized in five parts to generate a hyper-grid containing 3 125 (5^5) combinations of parameters. The result of this process is an initial model set containing the predicted frequencies and mode shapes for all 3 125 model instances.
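The hyper-grid of 3 125 model instances can be generated as a Cartesian product of five discretized parameter ranges. In the sketch below, the stiffness parameters are handled through their base-10 exponents; the dictionary keys and the reading of "1E [4, 11]" as an exponent range are assumptions made for illustration.

```python
import itertools

def linspace(lo, hi, n=5):
    """n equally spaced values from lo to hi, endpoints included."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

ranges = {
    "main_cable_strain":     linspace(5e-4, 3e-3),  # mm/mm
    "sidespan_cable_strain": linspace(5e-4, 3e-3),  # mm/mm
    "support_ply_exp":       linspace(4, 11),       # stiffness = 10**x kN/mm
    "support_sal_exp":       linspace(4, 11),
    "deck_exp_joint_exp":    linspace(4, 11),
}

names = list(ranges)
initial_model_set = [dict(zip(names, combo))
                     for combo in itertools.product(*ranges.values())]
# len(initial_model_set) == 5**5 == 3125
```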

[Figure: measured mode shapes for the 18 identified modes; vertical bending modes at 0.391, 0.594, 0.969, 1.063, 1.516, 1.703, 2.172, 3.469, 4.516 and 5.265 Hz, and torsional modes at 0.719, 1.219, 1.313, 1.859, 3.063, 3.625, 4.156 and 5.094 Hz.]

Figure 5.57: Measured mode shapes and frequencies for Tamar Bridge.

[Figure: relative frequency of MAC values for each measured mode number.]

Figure 5.58: MAC value relative frequency quantifying the correspondence between predicted and measured mode shapes for Tamar Bridge.



Table 5.18: Summary of observed frequencies for Tamar Bridge.

    Mode number   Label         Frequency (Hz)   MAC ≥ 0.8
    1             Vertical 1    0.39             yes
    2             Vertical 2    0.59             yes
    3             Torsion 1     0.72             yes
    4             Vertical 3    0.97             yes
    5             Vertical 4    1.06             yes
    6             Torsion 2     1.22             no
    7             Torsion 3     1.31             no
    8             Vertical 5    1.52             yes
    9             Vertical 6    1.70             no
    10            Torsion 4     1.86             yes
    11            Vertical 7    2.17             yes
    12            Torsion 5     3.06             yes
    13            Vertical 8    3.47             yes
    14            Torsion 6     3.63             yes
    15            Torsion 7     4.16             no
    16            Vertical 9    4.52             yes
    17            Torsion 8     5.09             no
    18            Vertical 10   5.27             yes

[Figure: schematic elevation of the bridge indicating the main-cable initial strains, the sidespan-cable initial strains, the Plymouth-side and Saltash-side supports, and the Saltash-tower and Plymouth-side deck expansion joints.]

Figure 5.59: Schematic representation of parameters to be identified for Tamar Bridge.



In order to compare predicted modes with measured values up to 5 Hz, 120 modes were obtained from the finite-element template model for each instance of the initial model set. Since most of these modes involve suspension cables that have not been monitored, only 13 modes found correspondence between measured and predicted values. This points out why an automated mode-classification approach, such as the methodology presented in §2.7, is essential to extract only the modes of interest from simulated data.

Relative importance

The relative importance of each primary parameter is shown in Figure 5.60 for each mode having passed the MAC correspondence test. This figure shows that the contribution of each parameter varies according to the mode studied. For the first modes, the main-cable initial strain dominates the behavior of the structure. For higher modes, the relative importance tends to be more evenly distributed.

[Figure: relative importance of the five primary parameters (main-cable initial strain, sidespan-cable initial strain, and the Plymouth-side, Saltash-side and Saltash-tower deck-expansion-joint stiffnesses) for each mode number.]

Figure 5.60: Primary-parameter relative importance for each mode of the Tamar Bridge.

Uncertainties

The first set of uncertainties described here is associated with secondary model parameters that are not intended to be identified. Table 5.19 presents each uncertainty source and the properties of the Gaussian distribution used to describe it.

The second set of uncertainties is described in Table 5.20. These other sources describe uncertainties associated with measurements and with the template model itself. Since these uncertainty evaluations are based on experience and heuristics, the extended uniform distribution is used to represent the inaccurate knowledge of the uncertainty bounds (see Appendix A).
Based on values reported in §1.3.2, an extended uniform distribution with bounds at ±2% of the frequency values is used to describe the measurement uncertainty due to the epistemic variability associated with ambient vibration monitoring. Measured-frequency variability is represented by a Gaussian distribution having a mean of zero and a standard deviation of 2% of the measured frequency. This uncertainty is an upper bound based on other experiments reported in §5.3.4.



Table 5.19: Secondary-parameter uncertainties for Tamar Bridge.

    Uncertainty source                 Unit    Gaussian distribution
                                               Mean     Std. dev.
    Steel density                      kg/m3   7850     2%
    Concrete density                   kg/m3   2400     5%
    Deck density                       kg/m3   7850     5%
    Steel Young's modulus              GPa     202      6
    Concrete Young's modulus           GPa     30       4.5
    Cable Young's modulus              GPa     155      6
    Steel Poisson's ratio              %       0.29     3%
    Concrete Poisson's ratio           %       0.23     3%
    Concrete tower thickness           %       0        2
    Orthotropic steel deck thickness   %       0        1
    Concrete deck thickness            %       0        2
    Main-cable area                    m2      0.088    1%
    Hanger-cable area                  m2      0.0024   1%
Uncertainty in model simplifications and the finite-element method (FEM) includes bias caused by omissions in the model, along with simplifying hypotheses and numerical errors made during the model resolution. An upper bound of the model uncertainty is evaluated based on experience gained during the identification of previous civil structures. Mesh-refinement uncertainty represents an upper bound for the effect of the approximation made by using a finite number of elements to model the structure. Additional uncertainties are conservative provisions for other negligible uncertainty sources that may add to the model and measurement uncertainties.
Table 5.20: Other uncertainty sources for Tamar Bridge.

    Uncertainty source             Extended uniform distribution (frequency)
                                   min    max    Bound fraction
    AVM epistemic variation        -2%    2%     0.3
    Model simplifications & FEM    -4%    1%     0.5
    Mesh refinement                0%     2%     0.3
    Additional uncertainties       -1%    1%     0.3

    Uncertainty source             Gaussian distribution
                                   Mean   Std. dev.
    Measurement variability        0%     2%

All uncertainty sources are combined together to describe the total uncertainty associated with each mode. The relative importance of the uncertainty sources, averaged over all modes, is presented in Figure 5.61. Measurement variability, ambient vibration monitoring (AVM) epistemic variation and model simplifications are the dominant uncertainty sources. When taken individually, secondary parameters have a marginal influence on the total uncertainty.
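One simple way to reproduce such a ranking — not necessarily the importance measure used in the thesis — is to compare each source's share of the total variance, assuming independent sources and using the bounds of Table 5.20:

```python
def uniform_var(lo, hi):
    """Variance of a uniform distribution on [lo, hi]."""
    return (hi - lo) ** 2 / 12.0

variances = {                                   # bounds from Table 5.20
    "Measurement variability":     0.02 ** 2,   # Gaussian, std 2%
    "AVM epistemic variation":     uniform_var(-0.02, 0.02),
    "Model simplifications & FEM": uniform_var(-0.04, 0.01),
    "Mesh refinement":             uniform_var(0.00, 0.02),
    "Additional uncertainties":    uniform_var(-0.01, 0.01),
}
total = sum(variances.values())
importance = {k: v / total for k, v in variances.items()}
top3 = sorted(importance, key=importance.get, reverse=True)[:3]
```

With these inputs, measurement variability, model simplifications and AVM epistemic variation come out on top, consistent with the dominant sources named above.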
[Figure: bar chart of the relative importance of each uncertainty source, averaged over all modes.]

Figure 5.61: Tamar Bridge uncertainty relative importance.

Comparison of measurements with model-instance predictions

Figure 5.62 presents plots where measurements and model predictions are compared for modes 1, 2, 3, 10, 12 and 14. In each graph, the model instances are presented on the horizontal axis. The value predicted by each model instance corresponds to its position on the vertical axis. The measured frequency is represented by a solid line and the threshold bounds by dashed lines. The combined uncertainty distribution is presented vertically, as outlined in Figure 5.62a. Models lying outside the threshold bounds for any mode are falsified. These modes are divided into two categories: global modes (1 and 2) and torsional modes (3, 10, 12 and 14). Here, only the global modes have a sufficient prediction variability to falsify a significant number of models.

Figure 5.63 presents similar plots for modes 4, 5, 8, 11, 13, 16 and 18. Contrary to the modes presented in Figure 5.62, these modes (except for #5) show a systematic bias in predicted frequencies. This bias underestimates the measured frequencies by approximately 15%. All these modes correspond to higher vertical deck-bending modes. Several model instances for mode number 5 remain unfalsified. Nevertheless, this mode is not used for falsifying models because large discrepancies are observed for all other vertical deck-bending modes.

These results indicate that the template model adequately captures the global and torsional behavior of the structure (modes 1, 2, 3, 10, 12 and 14). However, as described above, the model may not adequately represent the vertical flexural behavior of the deck (modes 4, 5, 8, 11, 13, 16 and 18). Possible causes are that either the deck stiffness is underestimated or its mass is overestimated. Considering the systematic bias present in modes 4, 8, 11, 13, 16 and 18, the identification of the structure's physical parameters can only be made using modes 1, 2, 3, 10, 12 and 14. These modes are less influenced by the deck behavior. Figure 5.62 indicates that only modes 1, 2 and 3 falsify model instances. For the other modes (10, 12 and 14), all models lie within the threshold bounds. Therefore, model falsification is performed using only the three first modes.

[Figure: six panels (modes 1, 2, 3, 10, 12 and 14) of predicted frequencies for all model instances, with the measured frequency (solid line), the threshold bounds (dashed lines), the combined uncertainty distribution, and candidate versus falsified models indicated.]

Figure 5.62: Comparison of model prediction scatters with measured values for global and torsional modes (modes number 1, 2, 3, 10, 12 and 14).

[Figure: seven panels (modes 4, 5, 8, 11, 13, 16 and 18) of predicted frequencies for all model instances with the measured frequency and threshold bounds; predictions systematically underestimate the measured frequencies.]

Figure 5.63: Comparison of model prediction scatters with measured values for vertical bending deck modes (modes number 4, 5, 8, 11, 13, 16 and 18).

Identification results

Using modes 1, 2 and 3 falsifies 2 601 model instances out of 3 125. This leads to a limited number of candidate models (524), significantly reducing the number of possible combinations of physical parameters that are able to explain the observed frequencies.

Figure 5.64 presents the candidate-model-set parameter values using pairwise parameter graphs. The range of each plot corresponds to the range of each parameter. In order to better represent the results, the axes do not have a linear scale. These plots show that the parameter range is reduced for the main-cable initial strain, where all higher values are discarded. Additionally, the number of possible permutations of the other parameters is also reduced.

[Figure: pairwise scatter plots of candidate-model parameter values. Main_s: main-cable initial strain; Sidespan_s: sidespan-cable initial strains; Support_ply: Plymouth-side support longitudinal stiffness; Support_sal: Saltash-side support longitudinal stiffness; Deck_exp: deck expansion-joint longitudinal stiffness.]

Figure 5.64: Pairwise representation of the candidate model set parameter values for Tamar Bridge.

The candidate models found serve as a baseline for following the evolution of the bridge condition. These models can be used to compare the actual behavior with results from future monitoring campaigns to detect changes in the candidate model set. Such changes could indicate that the state of the structure has changed and thus allow for preventive interventions. A study is presented in the next section where the accelerometer configuration used for this study is optimized for future monitoring activities.

5.5.3 Optimization of measurement-system configurations

This section studies the performance of the measurement configuration used to identify the behavior of the structure. Part of the results presented in this section were obtained in collaboration with the master's student Alban Nguyen. First, the expected identifiability is computed for the measurement configuration and modes used in §5.5.2. Table 5.21 presents the qualitative labels used to generate simulated measurements (see §3.2.1).
Table 5.21: Qualitative labels describing uncertainty correlation for the generation of simulated measurements.

    Uncertainty source             Qualitative label
    AVM epistemic variation        Independent
    Model simplifications & FEM    High+
    Mesh refinement                High+
    Additional uncertainties       Low+
    Measurement variability        Independent

The cumulative distribution function describing the expected size of the candidate model set is presented in Figure 5.65. The expected identifiability indicates that there was a probability of approximately 15% of obtaining a candidate model set containing 524 models or fewer. This observation supports the validity of the expected-identifiability metric because the result is well within the expected variability. Note that for high cumulative probabilities (>0.9), the expected identifiability varies significantly for small changes of cumulative probability. This variability is due to the wide and non-uniform dispersion in the predicted values for the first two modes (see Figure 5.62a and b). Therefore, it is not possible to provide robust predictions of the expected size of the candidate model set for such high probabilities.
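The expected-identifiability computation can be illustrated with a toy Monte Carlo: simulated measurements are generated from randomly selected instances plus noise, the falsification rule is applied, and the surviving counts form an empirical CDF analogous to Figure 5.65. All numbers below are made up, and the procedure of §3.2.1 additionally handles correlated uncertainty sources.

```python
import random

rng = random.Random(7)

# Toy setting: each instance predicts one frequency; a simulated measurement
# is a randomly chosen instance's prediction plus noise; counting survivors
# of the falsification rule, repeated many times, yields the CDF of the
# candidate-model-set size.
predictions = [1.0 + 0.02 * i for i in range(100)]   # hypothetical instances
threshold = 0.15                                     # half-width of bounds

def candidate_count():
    true_value = rng.choice(predictions)
    measured = true_value + rng.gauss(0.0, 0.05)
    return sum(abs(p - measured) <= threshold for p in predictions)

counts = sorted(candidate_count() for _ in range(2000))

def cdf(n):
    """Empirical probability that at most n candidate models remain."""
    return sum(c <= n for c in counts) / len(counts)
```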

[Figure: cumulative distribution of the maximal number of candidate models, expressed as a percentage of the initial model set, with the candidate-model-set size obtained from on-site observations marked.]

Figure 5.65: Cumulative distribution function (F_CM) showing the probability of a maximum candidate model set size for Tamar Bridge. The polygonal sign corresponds to the number of candidate models obtained using on-site measurements.

The measurement-system design methodology presented in Chapter 4 is employed to predict which modes to use in order to minimize the expected number of candidate models. Figure 5.66 presents the number of candidate models expected for a probability of 0.50 (F_CM^-1(0.50)), depending on the number of modes used during the identification. This plot shows that the performance of the identification decreases with the number of modes used to falsify model instances (i.e. the number of candidate models increases with the number of modes used). Such results indicate that the information contained in several modes is either redundant or unusable due to uncertainties. Table 5.22 describes each optimal mode configuration presented in Figure 5.66. The label FQ refers to the mode number ordered according to the measured frequency numbers presented in Table 5.18. Here, the measurement-system design methodology predicted that mode number 2 has the largest capacity to falsify model instances. When more than one mode is used, the number of candidate models is expected to increase. The numbers of candidate models obtained using real data are respectively 489, 505 and 524 when using the first three mode configurations reported in Table 5.22. This shows that the expected-identifiability metric adequately predicted that the number of candidate models would increase if redundant measurements are added. The number of candidate models obtained for the other mode configurations was not computed because, as mentioned in §5.5.2, the model was found to be systematically biased for modes 4, 8, 11, 13, 16 and 18. Furthermore, as predicted, the mode falsifying the largest number of model instances is mode #2 (see Figure 5.62b).
[Figure: expected number of candidate models, expressed as a percentage of the initial model set, versus the number of modes used.]

Figure 5.66: Measurement-system design multi-objective optimization results for Tamar Bridge. The expected number of candidate models is reported for a probability of 0.50 (F_CM^-1(0.50)).

Even if when using several modes to falsify model instances the predicted decrease in performance is negligible, it is desirable to obtain data for more than one mode. Selecting several
modes is intended to increase the robustness of the identication. Here, the measurementsystem design methodology presented in 4.4 is used to nd which accelerometers in the
layout presented in Figure 5.55b can be removed while keeping the same interpretation performance. The modes of interest are modes 1, 2 & 3, the rst two vertical deck-bending modes
and the rst torsional mode.
Results are reported in Figure 5.67, showing the mode-match criterion value reached for mode
1, 2 & 3 depending on the number of locations monitored. The mode-match criteria quantify
the capacity to nd correspondence between predicted and measured mode shapes. Using
only sixteen modal points leads to a negligible loss in the mode-match criteria compared with

Chapter 5. Case studies


Table 5.22: Optimized mode selections. Each selection is a subset of the modes listed below; the modes included in each selection were marked by symbols in the original table layout. The expected number of candidate models (CM) is computed for a probability φ = 0.50 (F⁻¹_CM(0.50)).

Modes considered: FQ-1, FQ-2, FQ-3, FQ-4, FQ-5, FQ-8, FQ-10, FQ-11, FQ-12, FQ-13, FQ-14, FQ-16, FQ-18

Number of modes :   1    2     3     4     5     6     7     8     9    10    11    12    13
Expected CM     : 986  991  1032  1034  1045  1068  1082  1103  1121  1131  1153  1172  1209
using all 37 modal points. Using fewer modal points significantly diminishes the capability to link predicted and measured mode shapes. This drop is more pronounced for modes #2 and #3. The target mode-match criterion is set to 0.99. Therefore, a configuration with at least 16 sensors is necessary to satisfy the mode-match criterion for all three modes.
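The mode-match computation can be illustrated with the standard Modal Assurance Criterion (MAC), a common way of pairing predicted and measured mode shapes; the exact criterion used here may differ, so this is only a sketch.

```python
def mac(phi_pred, phi_meas):
    """Modal Assurance Criterion between two mode-shape vectors.

    Returns a value in [0, 1]; values near 1 indicate that the
    predicted and measured shapes describe the same mode.
    """
    num = sum(p * m for p, m in zip(phi_pred, phi_meas)) ** 2
    den = sum(p * p for p in phi_pred) * sum(m * m for m in phi_meas)
    return num / den

# A predicted shape that is a scaled copy of the measured one matches perfectly.
measured = [0.0, 0.31, 0.59, 0.81, 0.95, 1.0]
predicted = [2.0 * x for x in measured]
print(round(mac(predicted, measured), 3))  # 1.0
```

Because the MAC is insensitive to scaling, it is suitable for comparing mode shapes that are identified only up to an arbitrary amplitude.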

Figure 5.67: The effect of the number of acceleration sensors on the mode-match criterion used to associate predicted and measured mode shapes (modes #1, #2 and #3; mode match plotted against the number of locations monitored). A minimum of 16 sensors is necessary to satisfy the target mode-match value of 0.99.

Figure 5.68 presents the optimized accelerometer layout using the sixteen acceleration sensors. The approach detected that it is necessary to keep several measurement locations placed transversally in order to separate bending and torsional mode shapes. Provided with initial parameters, reference mode shapes and the features to be identified, the approach can find optimized accelerometer layouts. In this case, the measurement-system optimization methodology reduced the number of accelerometers required by more than 55% while allowing only a negligible loss in interpretation capacity.

Limitations
During the optimization of the accelerometer layout, a reference mode shape was available
from the measurements made on the bridge. In situations where such reference mode shapes
are not available, the capacity to perform the placement of accelerometers using simulated


Figure 5.68: Optimized accelerometer layout Q_opt for Tamar Bridge obtained using existing mode-shape data (legend: selected vs. possible measurement locations). This configuration with 16 sensors corresponds to the layout identified in Figure 5.67.

reference mode shapes remains to be demonstrated. Nevertheless, in practical situations it is generally possible to perform pre-studies on structures to obtain initial guidance for the design of complete measurement systems.

5.5.4 Case study conclusions


Error-domain model falsification contributed to improving the understanding of the structure's behavior using dynamic data. The candidate models found serve as a baseline for future structural evaluation activities. These models can be used to compare actual behavior with results from future monitoring to detect changes in the structural condition. Specific conclusions are:

1. Structural identification found an upper bound for the main-cable initial strain. It also reduced by 55% the number of possible parameter permutations compared with the initial model set.
2. The model of the structure adequately represents the bridge's global and torsional behavior. Higher flexural modes in the deck appear to be systematically biased; for these modes, predicted frequencies are underestimated by approximately 15%.
3. Given initial parameters, reference mode shapes and the features to be identified, optimized accelerometer layouts can be determined. For Tamar Bridge, the optimized measurement system reduced the number of accelerometers required by more than 55% while causing only a negligible loss in interpretation capacity.



5.6 Leak detection in pressurized pipe networks


This case-study describes how error-domain model falsication, along with methods developed for designing measurement systems, can be extended for the detection of leaks in
pressurized pipe networks. The leak-detection methodology presented in this chapter builds
upon the methodologies presented in Chapters 2, 3 and 4. The illustrative example presented
is based on the freshwater distribution network of the city of Lausanne (Switzerland). Results
presented in this section were obtained in collaboration with Sylvain Coutu.

5.6.1 Leak-detection methodology


Starting with a model of the pipe network, Figure 5.69 schematically presents the steps used to identify possible leak locations, which are identified indirectly through flow measurements. First, the model is used to compute flow predictions across the network for several hundred leak scenarios. Each scenario is generated to represent a leak at one node of the network. For each scenario, predictions (r_i) are obtained for the locations i where flow is monitored by sensors (y_i). When measurements are taken on the system, they are compared with the predictions from each leak scenario, and inadequate scenarios are falsified using the methodology presented in Chapter 2.
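The falsification step can be sketched as an interval check on the residuals between predicted and measured flows. Function, scenario and threshold names below are illustrative, not taken from the thesis.

```python
def falsify_scenarios(scenarios, measurements, lo, hi):
    """Keep the leak scenarios whose predicted flows are compatible
    with all measurements.

    scenarios    : dict mapping scenario id -> list of predicted flows r_i
    measurements : list of measured flows y_i at the same locations
    lo, hi       : threshold bounds on the residual (r_i - y_i) derived
                   from the combined uncertainty at a target probability
    """
    candidates = []
    for leak_id, preds in scenarios.items():
        residuals = (r - y for r, y in zip(preds, measurements))
        if all(lo <= e <= hi for e in residuals):
            candidates.append(leak_id)   # scenario cannot be falsified
    return candidates

# Hypothetical three-sensor example: scenario 2 predicts flows far from
# the measured values and is falsified; scenarios 1 and 3 remain candidates.
scenarios = {1: [1.00, 0.52, 0.75], 2: [1.60, 0.20, 0.75], 3: [0.98, 0.49, 0.71]}
measured = [1.02, 0.50, 0.73]
print(falsify_scenarios(scenarios, measured, -0.05, 0.05))  # [1, 3]
```

A scenario survives only if every residual lies within the threshold bounds; a single incompatible location suffices to falsify it.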

Figure 5.69: General framework for the detection of leaks in pressurized pipe networks. A database of flow-distribution predictions, computed from simulated leak scenarios (leak at location #1, #2, #3, ..., #n) for the monitored locations, is compared with flow measurements; leak locations falsified by the measurements are discarded, leaving the candidate leak locations that cannot be falsified.




Measurement-system design
A challenge is to define, prior to instrumenting a pipe network, the number of sensors required and their locations so that leaks are detected most effectively. Two objective functions that define the performance of measurement-system configurations are the number of expected candidate leak scenarios and their spatial distribution across the network (i.e. are they grouped or dispersed?).

These objective functions are evaluated for several measurement configurations prior to installing sensors on a water distribution network. Simulated measurements are generated from the flow velocities predicted by simulated leak scenarios. Several thousand instances of simulated measurements are used to simulate the process of falsifying inadequate leak scenarios. For each of these simulated measurement instances, the size of the set of candidate leak scenarios and the smallest radius including every scenario are computed and stored. The results for these two quantities, evaluated over several thousand instances, are presented as cumulative distribution functions (cdfs) expressing the probability of obtaining any number of expected candidate leak scenarios (CS) and radius (CR) if measurements are taken on the network.

The number of expected candidate leak scenarios (CS) and the expected radius including these leaks (CR) are computed for several sensor configurations to design optimized measurement systems. Possible sensor configurations are presented in a graph where a relation can be drawn between the number of sensors used and the expected number of candidate leak scenarios obtained for a probability of 0.95 (F⁻¹_CS(0.95)). In cases where the expected identifiability indicator is poor (i.e. too many candidate leak scenarios are expected), users may review the initial inputs. Better expected results can be obtained by reducing uncertainties, for example by using a more accurate flow model, by providing sensors with higher sensitivity, or by measuring other parameters to reduce uncertainties.
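The quantile F⁻¹_CS(0.95) can be estimated from the recorded counts with a simple empirical-cdf computation; the counts below are hypothetical.

```python
import math

def expected_candidates(candidate_counts, p=0.95):
    """Empirical quantile F^-1_CS(p): the number of candidate leak
    scenarios that will not be exceeded with probability p, estimated
    from counts recorded over many simulated measurement instances.
    """
    ordered = sorted(candidate_counts)
    k = math.ceil(p * len(ordered)) - 1   # index of the p-quantile
    return ordered[max(k, 0)]

# Hypothetical counts from 10 simulated measurement instances.
counts = [3, 5, 4, 6, 2, 8, 5, 4, 3, 12]
print(expected_candidates(counts, 0.95))  # 12
print(expected_candidates(counts, 0.50))  # 4
```

In practice the counts come from thousands of simulated instances, so the empirical cdf, and hence this quantile, is stable.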

5.6.2 Network description


The proposed methodology is applied to the water distribution network of the city of Lausanne, Switzerland. This network is made of several independent sub-networks of different sizes. In this study, we focus on one of the sub-networks, shown in Figure 5.70. The model of the network is created using the software EPANET.

This network contains pumps that feed a tank located at the top of the network. This reservoir provides water to a network made of 295 pipes and 263 junctions (nodes). Consumption varies between 50 and 250 m³/h, with an average demand of 150 m³/h. Leaks in the network are reported at nodes. In this study, only one leak is considered at a time; therefore, there are 263 possible leak scenarios.
To identify leaks accurately, flow velocities have to be monitored when water consumption is low. If measurements are taken during high-consumption hours, the effect of the demands is


Figure 5.70: Schematic representation of the water distribution network studied (legend: pipes, network nodes, tank, pump, reservoir; axes in relative coordinates, m).

too large to differentiate between what is attributed to consumption and what is attributed to leaks. Therefore, every time the total consumption in the network goes below the minimal hourly flow for one day, the flow velocities in pipes are recorded.
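The recording trigger described above amounts to a simple threshold check on the demand profile; the demand values and threshold below are hypothetical.

```python
def low_flow_hours(hourly_demand, threshold):
    """Hours of the day at which the network-wide demand drops below
    the minimum-flow threshold; velocities recorded during these hours
    are least contaminated by consumption effects.
    """
    return [hour for hour, demand in enumerate(hourly_demand) if demand < threshold]

# Hypothetical 8-hour demand profile (m3/h) with a 50 m3/h threshold.
demand = [180.0, 120.0, 60.0, 45.0, 40.0, 55.0, 150.0, 220.0]
print(low_flow_hours(demand, 50.0))  # [3, 4]
```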

Monitoring devices
Data can be acquired on water distribution networks using ultrasonic flow meters. These devices are chosen for their non-invasive characteristics and their high accuracy. Ultrasonic flow meters measure the difference in travel time between pulses. The accuracy of commercially available devices is ±2% of the measured flow. Considering that a flow sensor can be installed on each pipe, there are 295 possible sensor locations. Note that in this proof-of-concept study, no real data are available; therefore, simulated measurements are generated.

Uncertainties
There are several sources of uncertainty associated with the ow velocity model, the measurements and the water network itself. All uncertainties are represented by random variables
described as follows.
Parameter uncertainties are those having a direct inuence on the network ow-velocity
model. In water distribution networks, common secondary-parameters uncertainties are:
node elevations, pipe diameters, minor losses, roughness coefcients and water demand.
The uncertainty in the elevation of nodes is described using a Gaussian distribution with an
average of 0 and standard deviation of 50 mm. Such a level of accuracy on the node position
can be obtained using local non-invasive measurements such as ground penetrating radars
to accurately locate pipe depth [32, 102]. Uncertainties in pipe diameters are described by a
146

5.6. Leak detection in pressurized pipe networks


Gaussian distribution with a mean of 0 mm and a standard deviation of 5 mm. Uncertainties
in minor-loss coefcients are described by a Gaussian distribution of mean 1.8 and standard
deviation 0.1. This value is taken from specications found in the EPANET documentation
[195]. Uncertainty in roughness coefcients is represented by a Gaussian distribution having
a mean of 1 mm and a standard deviation of 0.1 mm.
Uncertainty in the water demand at each node is modeled by an exponential distribution
with a mean equal to the minimal water demand on the entire network divided by 263 nodes.
The instantaneous minimal water demand is xed at 25 m3 /h. This ow demand is based
on hourly averaged ow recordings made over a year on the network. The minimum hourly
demand is of approximately 50 m3 /h. The values for hourly averaged demand is presented in
Figure 5.71.

Figure 5.71: Typical hourly averaged water consumption (m³/h) measured over one day; the minimum hourly consumption is indicated.

These uncertainty sources correspond to uncertain parameters of the model. Their influence on predicted flow velocities is found by propagating the uncertainties in parameter values through the model to obtain the uncertainties in predicted velocities for each pipe.

The uncertainty in sensor resolution is taken as a uniform distribution with lower and upper bounds of ±2% of the measured value. The uncertainty associated with model simplifications is represented by an extended uniform distribution (EUD, see Appendix A) with lower and upper bounds of ±20% of the predicted value and a factor δ = 0.3. The EUD represents the uncertainty of the bound positions as a fraction δ ∈ [0, 1] of the initial interval width. Additional uncertainties attributed to other minor sources are also modeled by an extended uniform distribution; the lower and upper bounds have a value of ±1% of the predicted value and a parameter δ = 0.3. For the purpose of generating simulated measurements, uncertainties are assumed to be independent, despite the close coupling of the system.
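A minimal sketch of drawing one combined error realization follows, assuming a simple reading of the EUD in which the interval bounds are themselves perturbed by a uniform fraction δ of the interval width; the exact EUD definition is given in Appendix A and may differ.

```python
import random

def eud_sample(pred, frac, delta, rng):
    """One draw from an extended uniform distribution (sketch): the
    nominal bounds +/- frac*pred are first shifted by a uniform
    fraction delta of the interval width, then a value is drawn
    uniformly between the realized bounds.
    """
    half = frac * abs(pred)
    lo = -half - rng.uniform(-delta, delta) * 2 * half
    hi = half + rng.uniform(-delta, delta) * 2 * half
    return rng.uniform(min(lo, hi), max(lo, hi))

def combined_error(pred, rng):
    """Combined measurement/model error for one predicted flow value,
    using the parameter values of this section."""
    sensor = rng.uniform(-0.02, 0.02) * abs(pred)   # +/-2 % sensor resolution
    model = eud_sample(pred, 0.20, 0.3, rng)        # model simplifications
    other = eud_sample(pred, 0.01, 0.3, rng)        # other minor sources
    return sensor + model + other

rng = random.Random(1)
errors = [combined_error(1.0, rng) for _ in range(10000)]
print(min(errors) > -0.4 and max(errors) < 0.4)  # True: bounded support
```

Unlike Gaussian errors, this combined error has bounded support, which is what allows conservative threshold bounds to be computed from it.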

5.6.3 Optimization of measurement-system configurations

Simulated measurements issued from potential leak scenarios are generated to design an efficient measurement system and to test the applicability of the approach. Iteratively, a random leak scenario is chosen from the 263 possible scenarios. For the purpose of designing the monitoring system, the leak level is taken to be 100 L/min. The flow velocity predictions



associated with each leak are transformed into simulated measurements by subtracting errors from the predicted values. Error realizations are generated randomly from the combined uncertainty pdf U_c, which includes all uncertainty sources mentioned above. Each set of simulated measurements is compared with the flows predicted by the 263 leak scenarios. At each iteration, improbable leak scenarios are falsified, resulting in a number of candidate leak scenarios and the radius including all candidate leaks. These two quantities are stored and the evaluation is repeated for 1000 simulated measurement instances. This number is sufficient to obtain stable cumulative distribution functions. This procedure is repeated for several sensor configurations. Examples of simulated measurements are presented in Figure 5.72; squares represent the locations of flow measurements, circles with a cross are the simulated leaks and filled circles are the candidate leak scenarios.
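The radius including all candidate leaks can be computed, for example, as the radius of the smallest circle centred on one of the candidate nodes that contains all of them; this is a conservative sketch, since the thesis does not specify the exact construction.

```python
import math

def enclosing_radius(points):
    """Radius of the smallest circle, centred on one of the candidate
    leak nodes, that contains every candidate leak location."""
    best = float("inf")
    for cx, cy in points:
        r = max(math.hypot(x - cx, y - cy) for x, y in points)
        best = min(best, r)
    return best

# Hypothetical candidate-leak coordinates in metres.
candidates = [(500.0, 400.0), (520.0, 410.0), (560.0, 380.0)]
print(round(enclosing_radius(candidates), 1))  # 50.0
```

A small radius means the surviving scenarios are spatially grouped, so the leak is well localized even when several scenarios remain candidates.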
Figure 5.72: Examples of simulated measurements for the Lausanne fresh-water distribution network. Four panels (simulated leaks no. 1, 2, 3 and n) show flow-measurement locations, identified leaks and simulated leaks in relative coordinates (m); the number of falsified leak scenarios per panel ranges from 242 (92%) to 262 (99%).

For this network, it is not feasible to perform an exhaustive search to find the optimal measurement system, since more than 10⁸⁸ combinations of sensors are possible. The inverse greedy algorithm is used to find optimized sensor configurations. Figure 5.73 presents the expected number of candidate leak scenarios for the optimized sensor configurations found. In this figure, the horizontal axis corresponds to the number of sensors used and the expected number of candidate leak scenarios is plotted on the vertical axis. The dashed line shows the boundary of the feasible domain.
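The inverse (backward) greedy pass can be sketched as follows, with a toy performance function standing in for the expected-identifiability objective; sensor names and values are illustrative.

```python
def inverse_greedy(all_sensors, performance, n_target):
    """Backward greedy sensor selection: start from the full sensor
    set and repeatedly drop the sensor whose removal degrades the
    objective (to be maximized) the least.
    """
    selected = list(all_sensors)
    while len(selected) > n_target:
        best_drop, best_perf = None, None
        for s in selected:
            remaining = [t for t in selected if t != s]
            perf = performance(remaining)
            if best_perf is None or perf > best_perf:
                best_drop, best_perf = s, perf
        selected.remove(best_drop)
    return selected

# Toy objective: each sensor contributes a fixed information value, so
# the greedy pass keeps the two most informative sensors.
value = {"A": 5.0, "B": 1.0, "C": 3.0, "D": 0.5}
perf = lambda sensors: sum(value[s] for s in sensors)
print(inverse_greedy(["A", "B", "C", "D"], perf, 2))  # ['A', 'C']
```

Each pass evaluates the objective once per remaining sensor, so the total cost grows only quadratically with the number of candidate locations, instead of the exponential cost of an exhaustive search.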
In Figure 5.74, the expected radius including all leak scenarios is presented for the sensor

Figure 5.73: Relation between the expected number of candidate leak scenarios (%) and the number of flow velocity measurement points used.


configurations tested. In this graph, the vertical axis corresponds to the expected radius within which all potential leaks are included. In both cases, the expected performance increases rapidly with the number of sensors used until it reaches an asymptote. For engineering purposes, a good trade-off between expected performance and the number of sensors used is reached with 14 sensors. The measurement configuration is presented in Figure 5.75, where the 14 chosen sensors are represented by squares.

Figure 5.74: Relation between the expected radius (m) including all candidate leak scenarios and the number of flow velocity measurement points used.

Using the optimized sensor configuration found, the expected number of candidate leak scenarios is studied for several levels of leak flow. This expected identifiability of the monitoring system is presented in Figure 5.76 for leak levels of 100, 75, 50 and 25 L/min, which shows the cumulative distribution function for each leak level.

For high probability levels (φ = 0.95), the expected number of candidate leak scenarios remains low for leaks down to 75 L/min. For lower probability levels (φ = 0.50), good results can be expected down to 50 L/min. The sensor-placement optimization procedure is repeated for a leak level of 25 L/min to test whether the performance can be improved by increasing the number of measurements. Results are presented in Figure 5.77. In this case, when monitoring 100 pipes,


Figure 5.75: Optimized sensor configuration using 14 flow velocity measurements (flow sensors shown on the network in relative coordinates, m).

Figure 5.76: Expected number of candidate leak scenarios identified for several leak intensities (100, 75, 50 and 25 L/min), shown as cumulative distribution functions of the maximal number of candidate leak scenarios (%).




the number of expected leak scenarios can be reduced by half compared with the situation where only 14 sensors are used. Therefore, efficiently locating a leak of 25 L/min in the water distribution network is feasible if a sufficient number of pipes can be instrumented.

Figure 5.77: Relation between the expected number of candidate leak scenarios (%) and the number of flow velocity measurement points used, for a leak level of 25 L/min.

If the identification of lower leak-flow levels is required, an option is to reduce the uncertainties associated with the model and the measurements. The most important uncertainty sources are the water consumption at network nodes and the model simplifications. If these uncertainties can be reduced, better performance can be expected for lower leak levels.

5.6.4 Case study conclusions


The error-domain model falsification methodology is extended to detecting leaks in pressurized pipe networks. The water distribution network of Lausanne is used as a proof of concept for the method. Specific conclusions are:

1. Based on simulated measurements, the approach is expected to be able to identify the location of a leak of 50 L/min using a limited number of sensors. For lower leak flows, the performance of the approach can be improved, for instance, by increasing the number of sensors in the network and by improving knowledge of the water consumption demand.
2. The data-interpretation approach presented in Chapters 2, 3 and 4 is not limited to the identification of structures. The proposed methodology can be applied in other fields to detect anomalies.



5.7 General conclusions of the case studies


In this chapter, the validity and applicability of the proposed approaches were demonstrated through several case studies. General conclusions are:

1. Error-domain model falsification correctly identifies the properties of complex systems, such as civil infrastructure, in which there are aleatory and systematic errors. This is done without requiring assumptions about dependencies between uncertainties. Furthermore, it detects flaws in assumptions related to model adequacy by falsifying entire model classes.
2. Omissions and simplifications introduced in models are likely to cause systematic bias in values predicted over several prediction locations. This was demonstrated by comparing the predictions from model classes having different levels of refinement.
3. The predictions of the utility of measurements made using the expected-identifiability metrics are in agreement with results obtained using on-site measurements. Over-instrumentation of structures is possible when uncertainty dependencies are unknown and too many measurements are used.
4. Data-interpretation techniques developed for civil structures can be extended to the diagnosis of other complex systems.


6 Conclusions

In this thesis, a methodology is proposed to identify the physical properties of infrastructure that is applicable even when the error structure is incompletely defined. Complementary methodologies are proposed to quantify, probabilistically, the utility of monitoring interventions and to design measurement systems. General conclusions, valid for the scope of the applications studied in this thesis, are presented below.

6.1 Error-domain model falsification

1. Error-domain model falsification identifies the physical properties of structures without requiring knowledge of the uncertainty dependencies defining the error structure. With this methodology, the probability of falsifying an adequate model is kept below a user-defined target for any number of measurements used.
2. The error-domain model falsification methodology detects flaws in initial assumptions by falsifying entire model classes.
3. Sampling methodologies adapted for model falsification help to tackle challenges related to high-dimensional sampling through improvements to simple grid-based sampling, offering a viable solution for practical applications.
4. Error-domain model falsification is compatible with Bayesian inference when an L∞-norm likelihood function is used.

6.2 Expected identifiability

1. Expected identifiability quantifies the effect of choices, such as model class, measurement location, measurement type and sensor accuracy, and of constraints, such as uncertainty level and dependencies, on data-interpretation performance.
2. Two metrics, the reduction in the expected number of candidate models and the reduction in prediction ranges, quantify probabilistically the utility of measuring.

6.3 Measurement-system design


1. A measurement-system design methodology based on the expected-identifiability metric optimizes the configuration of measurement systems with respect to cost and performance criteria.
2. When the error structure is incompletely defined, using too many measurements may decrease data-interpretation performance.

6.4 Case studies


1. Error-domain model falsification leads to correct identification without requiring the definition of uncertainty dependencies.
2. Predictions made using full-scale infrastructure behavioral models are likely to be affected by systematic bias due to omissions and simplifications. When elements of a system are neglected, the predictions over several locations can be systematically affected.
3. Error-domain model falsification identified characteristics of structures such as abnormal bearing-device behavior and material properties. The methodology is shown to be able to reduce prediction ranges and to establish baseline models to be used in future monitoring and prognosis tasks.

6.5 Discussion and limitations


This section discusses the applicability and limitations of the solutions proposed in this thesis. In previous chapters, several successful applications were described. Nonetheless, some limitations create challenges for general application of the methodology.

6.5.1 Definition of an upper bound for uncertainty

An assumption made by most inference methodologies (see Section 1.2) is that uncertainties are either known or quantifiable. Furthermore, as reported in the literature review, it is common practice to assume that the error structure is known. In the error-domain model falsification methodology, this assumption is relaxed by requiring only the estimation of conservative bounds for uncertainties.

Minimal and maximal bounds are among the smallest amounts of information one can provide for quantifying uncertainties. Nevertheless, for some applications this is still a restrictive requirement. For these applications, system identification should be seen as a tool used



to systematically integrate fragmented subjective knowledge for the interpretation of large
amounts of data.

6.5.2 Redundancy in measurement systems


During the design of measurement systems, only criteria related to efficiency and costs were addressed (see Chapter 4 and Sections 5.3.6 and 5.5.3). Robustness with respect to potential sensor losses, malfunctions and inappropriate initial assumptions was not studied. This criterion should be considered during measurement-system design.

6.5.3 Data-interpretation and high dimensional solution spaces


Sampling the model space is intrinsically an exponentially complex task with respect to the number n of parameters studied: a regular grid with g values per parameter contains gⁿ model instances. For models showing linear and quasi-linear responses with respect to parameter-value variations, the surrogate-model approach proposed in Section 2.5.2 reduces the need for computing resources. However, this approach is limited to the study of ten to twelve parameters; beyond this, using surrogate models for data interpretation may become computationally prohibitive.

For models having non-linear responses, the grid-based random-walk sampling technique proposed in Section 2.5.3 accommodates only a limited number of parameters because, as mentioned in Section 1.5, high-dimensional spaces can only be sparsely sampled. Finding ways to sample solution spaces efficiently remains a scientific challenge common to all fields. Furthermore, visualization and knowledge extraction from high-dimensional data are current topics in mathematics and computer-science research.
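The exponential growth motivating these sampling strategies can be made concrete: a regular grid with g values per parameter and n parameters contains g**n model instances.

```python
# Exponential cost of regular-grid sampling: with g values per
# parameter and n parameters, the grid holds g**n model instances,
# which becomes intractable beyond roughly ten parameters.
grid_values = 10
for n_params in (2, 6, 10, 12):
    n_models = grid_values ** n_params
    print(f"{n_params:2d} parameters -> {n_models:.0e} model instances")
```

With ten values per parameter, twelve parameters already require on the order of 10¹² model evaluations, which is why surrogate models and guided random walks are needed in practice.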

6.5.4 Reserve capacity evaluation of existing structures


This thesis focused on illustrating the potential reduction in parameter and prediction ranges that system identification can provide. Further work needs to be done to fully quantify the benefit of system identification for the reserve-capacity evaluation of existing structures.

7 Future work

In addition to the points reported in Section 6.5 that need to be addressed, this chapter identifies promising paths for tackling current issues related to the management of infrastructure and, more generally, to the diagnosis of complex systems.

7.1 Diagnosis robustness toward inaccurate uncertainty definitions

Model-based diagnosis often depends on engineering heuristics to estimate the uncertainties related to the effect of model incompleteness. Because heuristics are by definition fallible, diagnosis robustness with respect to inaccurate uncertainty definitions could be improved in many ways.

7.1.1 Measurement-system design and diagnosis errors


Methodologies proposed in this thesis were developed so that the probability of Type-I diagnosis errors¹ does not increase with the number of measurements used (see Section 2.2). This is based on the hypothesis that the magnitudes of uncertainties are adequately estimated.

The probability of diagnosis errors is sensitive to uncertainty misevaluations. When uncertainties are underestimated, the worst case involves accepting wrong models while falsely rejecting the right model(s). Minimizing the risk of this scenario requires further investigation of the effect of uncertainty misevaluation, as well as of the effect of the number of measurements on diagnosis errors (Types I and II).

Uncertainty misevaluation and Type-I diagnosis errors


When uncertainties are underestimated for several comparison points, the probability of
having a Type-I diagnosis error (falsely rejecting a correct model) grows exponentially with
1 Type-I diagnosis errors correspond to wrongly discarding a correct model.




the number of measurements used. This makes the interpretation and measurement-system design potentially sensitive to misevaluations of uncertainties. For example, when using two measurements, if uncertainties are in reality twice as large as the values employed, the coverage region defined by threshold bounds is only a quarter of what it should be; for three measurements, it is one eighth. This relationship is reflected in the probability of committing a Type-I error, as presented in Figure 7.1. This observation is not specific to the approach proposed in this thesis; in the context of complex systems such as civil infrastructure, this issue is expected to affect most inference methodologies, including those mentioned in Section 1.2.
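Under this hyper-rectangular picture, the retained coverage fraction is (1/k)ⁿ when each uncertainty interval is underestimated by a factor k; a short check reproduces the 1/4 and 1/8 values quoted above.

```python
def coverage_fraction(underestimation_factor, n_measurements):
    """Fraction of the true coverage region captured by the threshold
    bounds when each uncertainty interval is underestimated by the
    given factor, assuming a hyper-rectangular coverage region."""
    return (1.0 / underestimation_factor) ** n_measurements

for n in (1, 2, 3, 10):
    print(n, coverage_fraction(2.0, n))
```

With ten measurements, the retained coverage is below 0.1%, which illustrates how quickly Type-I errors can become likely when uncertainties are underestimated.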
Figure 7.1: Schematic representation of the relationship between the number of measurements used for data interpretation and the probability of committing a Type-I diagnosis error, in the case of misevaluation of uncertainties.

Uncertainty misevaluation and Type-II diagnosis errors


When a biased conceptualization of the system studied leads to inadequate initial model classes, it is desirable that all model instances be falsified by measurements. Type-II diagnosis errors occur if incorrect models remain in the candidate model set when all model instances should have been falsified. This type of diagnosis error may be prevented if a sufficient number of measurements is used, because predictions made with a wrong model have a small probability of matching a large number of measurements. Figure 7.2 schematically presents the relationship between the number of non-redundant measurements used during data interpretation and the probability of a Type-II diagnosis error.

Maximizing the identification robustness toward diagnosis errors

A deeper study of the relationships between Type-I and Type-II errors, as well as the number of measurements used, could improve diagnosis robustness with respect to misevaluation of uncertainties and inadequate conceptualization of systems (flawed model classes). Figure 7.3 presents the competing effects of Type-I and Type-II diagnosis errors and how these lead to an optimal number of measurements that minimizes the probability of both types of error. Further research is needed to define this optimal point for a range of applications.


Figure 7.2: Schematic representation of the relationship between the number of non-redundant measurements used for data interpretation and the probability of a Type-II diagnosis error, in the case of misevaluation of uncertainties.

Figure 7.3: Schematic representation of the relationship between the number of measurements used during data interpretation and the probability of either a Type-I or a Type-II error, in the case of misevaluation of uncertainties. The two curves cross at a number of measurements minimizing the probability of either diagnosis error.




The sensitivity to diagnosis errors can be tested by generating simulated measurements in which additional uncertainties are included to represent the effect of unknown events. The sensitivity to diagnosis errors can then be quantified as the relative frequency of the simultaneous occurrence of Type-I and Type-II diagnosis errors. During the evaluation, these additional uncertainties should not be included in the threshold computations, so that the effect of measurement-system configurations on diagnosis robustness can be studied. In addition to the performance and cost objectives studied in this thesis, this robustness metric would add a third objective to measurement-system design, quantifying the sensitivity to misspecification of uncertainties.
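The proposed robustness test can be sketched with a small Monte Carlo study. The following is a minimal illustration, not the thesis methodology itself: an unmodelled uncertainty (standard deviation 0.5, an arbitrary choice) is added to simulated residuals, thresholds are computed without it using a Šidák-type correction, and the relative frequencies of Type-I errors (the correct model falsified) and Type-II errors (a biased model accepted) are counted as the number of measurements grows. All numerical values are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def error_frequencies(n_meas, n_trials=2000):
    """Estimate Type-I and Type-II diagnosis-error frequencies when an
    unmodelled uncertainty is omitted from the threshold computation."""
    sigma_model = 1.0    # uncertainty included in the thresholds
    sigma_hidden = 0.5   # unknown-event uncertainty (omitted from thresholds)
    bias_wrong = 1.2     # prediction offset of an incorrect model
    # Sidak-style per-measurement probability for a joint 95% target
    p = 0.95 ** (1.0 / n_meas)
    T = NormalDist().inv_cdf(0.5 + p / 2.0) * sigma_model
    t1 = t2 = 0
    for _ in range(n_trials):
        noise = rng.normal(0.0, np.hypot(sigma_model, sigma_hidden), n_meas)
        t1 += np.any(np.abs(noise) > T)                # correct model falsified
        t2 += np.all(np.abs(noise + bias_wrong) <= T)  # wrong model accepted
    return t1 / n_trials, t2 / n_trials

for n in (1, 4, 16, 64):
    p1, p2 = error_frequencies(n)
    print(f"n_meas={n:3d}  Type-I={p1:.3f}  Type-II={p2:.3f}")
```

In this toy setting the two error types compete, as in Figure 7.3: adding measurements reduces the Type-II frequency but increases the Type-I frequency once the thresholds underestimate the true uncertainty.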

7.1.2 Benchmarks for quantifying modeling uncertainties


This thesis noted that, for civil structures, model simplifications are one of the dominant sources of uncertainty. Little literature is available in which the effect of these simplifications is quantified. Therefore, there is a need for more theoretical and experimental benchmarks characterizing the effects of model simplifications on civil-structure models.
Such studies can hardly be conducted on full-scale systems because test conditions need to be controlled up to the point where the only difference between predicted and measured values is due to model simplifications. This implies accurate knowledge of:
- Environmental effects
- Construction tolerances
- Load history in the case of prestress-sensitive structures (for instance suspension and cable-stayed bridges)
- Measurement device resolution
- Material properties and their possible spatial variability
- Boundary conditions
- Loading
Therefore, there is a need for more laboratory and simulation-based studies.

7.1.3 Uncertainties due to the interactions between primary and secondary parameters
The methodology presented in Chapter 2 separates primary parameters, which are to be identified, from secondary parameters, which contribute to prediction uncertainties. Techniques presented in Section 1.5.1 could be used to quantify the effect of the interactions between primary and secondary parameters in order to include it in the prediction uncertainty. Such an addition could improve the accuracy of threshold estimations.

7.1.4 Imprecise probabilities


Uncertainty sources are currently described by probability density functions based on statistical methods and on heuristics. Other approaches, such as imprecise probabilities (Section 1.3.1), could be used to increase the robustness of the approach with respect to the choice of the probability distributions used to describe uncertainties. This addition could be included in the model-falsification approach proposed in this thesis; only the way uncertainties are combined (see Section 2.4) would need to be redefined.
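As a minimal illustration of the imprecise-probability idea (not the formulation of Section 1.3.1 itself), a probability box can be built by taking lower and upper envelopes of the cumulative distribution functions of a family of plausible distributions; conservative quantiles for threshold computation can then be read from the envelope. The interval bounds below are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

# Family of plausible distributions for one uncertainty source:
# Gaussians whose mean and standard deviation are only known to lie
# within intervals (interval bounds are illustrative assumptions).
means = np.linspace(-0.2, 0.2, 5)
stds = np.linspace(0.8, 1.2, 5)
x = np.linspace(-4.0, 4.0, 201)

cdfs = np.array([[NormalDist(m, s).cdf(v) for v in x]
                 for m in means for s in stds])
lower_cdf = cdfs.min(axis=0)   # p-box lower envelope
upper_cdf = cdfs.max(axis=0)   # p-box upper envelope

# Conservative 95% quantile over the whole family: the point where
# even the lower envelope has reached probability 0.95
q95 = x[np.searchsorted(lower_cdf, 0.95)]
print(f"conservative 95% quantile over the family: {q95:.2f}")
```

Using the envelope rather than a single distribution makes the resulting thresholds insensitive to which member of the family is the "true" one, at the price of wider bounds.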

7.2 Material properties spatial variability and stochastic fields


In this thesis, material properties were assumed to have uniform spatial distributions. However, in some cases there can be spatial variability in material properties. Such variability can be captured using stochastic fields. Figure 7.4 presents an example of how stochastic fields can be used to describe low and high spatial correlation in the Young's modulus of concrete decks. The relative height and color of each deck representation correspond to the variation of the Young's modulus with respect to the nominal value.


Figure 7.4: Two-dimensional stochastic fields representing the Young's modulus spatial variability in concrete bridge decks.

Identifying the Young's modulus patterns precisely using system identification is not foreseeable, for two reasons. First, the influence of such local variability is likely to be insufficient for it to be distinguished from other sources of uncertainty. Second, it would involve a large number of parameters to be identified, undermining the ability to correctly explore the space of possible solutions.
An alternative is to include the spatial variability of materials as a secondary-parameter uncertainty (see Section 2.4.1). This additional source of uncertainty would have the effect of widening threshold bounds. The mean value of the material property would be the parameter to be identified.


7.3 Sampling high-dimensional solution spaces


Sampling high-dimensional solution spaces is a scientific challenge common to all fields. The first proposal presented in this section aims at exploring model spaces more efficiently. The second proposes a test to verify whether the Greedy optimization algorithm is likely to outperform other heuristic-based stochastic-search methods.

7.3.1 Falsification-limit sampling


As presented in Section 2.2, all candidate models are considered to be possible behavioral models of the measured system. Therefore, when exploring the model space for candidate solutions, generating samples only on the boundary separating candidate and falsified models could be more efficient than sampling within the candidate model set.
To find the limit between candidate and falsified models, a likelihood function can be defined to generate samples along this limit using random-walk techniques. Such a likelihood function is presented in Figure 7.5 for a case where two measurements are used (n_m = 2).


Figure 7.5: Two-dimensional likelihood function (Equation 7.1), used to generate parameter samples on the limit separating candidate and falsified models. The likelihood is maximal when the observed residuals ε_o,i are equal to the threshold bounds [T_low,i, T_high,i]. This example was created for the shape parameters β = 10 and α = 0.8.

Figure 7.5 is generated using the likelihood function presented in Equation 7.1, where the thresholds T_i are computed using Equations 2.12 and 2.13. The shape of the likelihood function (Equation 7.1) is controlled by the parameter β, which affects the sharpness of the peaks, and the parameter α, which affects their symmetry. This function is based on the subtraction of two β-order generalized Gaussian distributions (see Section 1.2.2), the narrower distribution being scaled by the factor α. Note that these equations can be extended to any number of comparison points n_m.




 
L_T(ε_c,1, ε_c,2) = [β² 2^(1−1/β) / (2 T₁ T₂ Γ(1/β))] · [exp(−(1/2)(|ε_c,1/T₁|^β + |ε_c,2/T₂|^β)) − exp(−(1/2)(|ε_c,1/(α T₁)|^β + |ε_c,2/(α T₂)|^β))]   (7.1)
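The construction can be sketched numerically. The code below assumes one plausible reading of Equation 7.1, namely the difference of a wide and a narrow (α-scaled) β-order generalized Gaussian, and uses a Metropolis-type random walk so that samples concentrate on the falsification limit; the thresholds and the α and β values are illustrative.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def boundary_likelihood(eps, T=(1.0, 1.0), alpha=0.8, beta=10.0):
    """Likelihood peaked near the falsification limit |eps_i| = T_i:
    difference of a wide and a narrow beta-order generalized Gaussian."""
    norm = beta**2 * 2**(1 - 1/beta) / (2 * T[0] * T[1] * gamma(1/beta))
    s_wide = sum(abs(e / t) ** beta for e, t in zip(eps, T))
    s_narrow = sum(abs(e / (alpha * t)) ** beta for e, t in zip(eps, T))
    return norm * max(np.exp(-0.5 * s_wide) - np.exp(-0.5 * s_narrow), 0.0)

def random_walk(n_steps=5000, step=0.05):
    """Metropolis random walk whose stationary density follows the
    boundary-peaked likelihood, generating samples along the limit."""
    x = np.array([1.0, 0.0])        # start on the falsification limit
    lx = boundary_likelihood(x)
    samples = []
    for _ in range(n_steps):
        y = x + rng.normal(0.0, step, 2)
        ly = boundary_likelihood(y)
        if ly > 0.0 and rng.random() < ly / lx:   # Metropolis acceptance
            x, lx = y, ly
        samples.append(x.copy())
    return np.array(samples)

samples = random_walk()
# Samples should cluster where max(|eps_1|, |eps_2|) is close to 1
print(np.median(np.max(np.abs(samples), axis=1)))
```

Because the likelihood vanishes well inside the candidate set and decays quickly outside it, the walk traces the boundary rather than filling the interior, which is the behavior sought in Figure 7.6.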

Figure 7.6 presents a comparison between the grid-based random-walk sampling proposed in Section 2.5.3 and the methodology presented above. The example is based on the composite beam presented in Section 2.6.1, where the shaded area is the candidate model set.
[Figure 7.6 shows two panels, Grid-based random-walk sampling and Falsification-limit random-walk sampling, each plotting the concrete Young's modulus (MPa) against the steel Young's modulus (MPa).]

Figure 7.6: Comparison of model-instance space exploration using grid-based random-walk and falsification-limit sampling. The lines between samples correspond to the path followed by the random walk. The shaded area is the candidate model set identified in the example presented in Section 2.6.1.

This procedure removes one dimension from the space of possible solutions because the boundary of the n_p-dimensional candidate model set has n_p − 1 dimensions. Even if this approach is not intended to overcome all difficulties associated with high-dimensional sampling, it could reduce exploration time for practical applications.

7.3.2 Greedy algorithm applicability test


Greedy algorithms can significantly outperform other stochastic search approaches when there is no interaction between the effects of the parameters involved in the optimization. In the context of measurement-system design, the Greedy algorithm is efficient when there is no interaction between the effects of individual sensor removals. In practice, this is seldom completely true. Nonetheless, Greedy algorithms should still be efficient if the effect of interactions is significantly smaller than the effect of single sensor removal.
Sensitivity analyses based on methods presented in Section 1.5.1 could be used for quantifying the relative importance of single sensor removal compared with the effect of multiple-sensor removal. Figure 7.7 presents an example where the applicability test is performed. This preliminary study uses design-of-experiments theory to perform the sensitivity analysis, based on the example presented in Section 5.3.6.
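The applicability test can be sketched as follows, with a synthetic stand-in for the number of candidate models (in practice this count would come from the falsification procedure itself; all values below are illustrative): main effects of single sensor removals are contrasted with two-factor interaction effects, in the spirit of design-of-experiments analysis.

```python
import itertools

# Synthetic stand-in for the expensive evaluation: the number of candidate
# models remaining for a given set of installed sensors. In practice this
# count comes from the falsification procedure; values here are illustrative.
FULL = frozenset(range(5))

def n_candidates(sensors):
    base = 100
    gain = {0: 40, 1: 25, 2: 15, 3: 8, 4: 5}     # per-sensor falsification power
    pair = 3 if {0, 1} <= set(sensors) else 0    # weak pairwise interaction
    return base - sum(gain[s] for s in sensors) - pair

full_count = n_candidates(FULL)
# Main effect of removing sensor i alone
main = {i: n_candidates(FULL - {i}) - full_count for i in sorted(FULL)}
# Two-factor interaction: extra effect of joint removal beyond the sum of
# the single removals (a design-of-experiments style contrast)
inter = {(i, j): n_candidates(FULL - {i, j}) - full_count - main[i] - main[j]
         for i, j in itertools.combinations(sorted(FULL), 2)}

ratio = max(abs(v) for v in inter.values()) / max(abs(v) for v in main.values())
print(f"max |interaction| / max |main effect| = {ratio:.2f}")
# A small ratio suggests that a Greedy search over sensor removals is adequate.
```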
[Figure 7.7 plots the relative importance of sensor removal on the number of candidate models for single sensor removal and for multiple-sensor removal, highlighting the sensor contributing the most to falsifying models.]

Figure 7.7: Relative importance of sensor interactions, quantifying the contribution of single sensor removal compared with multiple-sensor removal.

In this example, the effect of single sensor removal on the number of candidate models is significantly higher (in absolute magnitude) than the effect of multiple-sensor removal. Therefore, Greedy algorithms would be likely to outperform other heuristic-based stochastic sampling techniques.

7.4 Model-class validity exploratory tool


This thesis proposed a methodology for falsifying model classes (see Section 2.2) in cases where initial assumptions are flawed. However, when all model classes initially provided are falsified, no tool is available to perform quick iterative trials exploring possible explanations of why the system is not behaving as expected.
A new exploration methodology could use model falsification and data-mining tools to identify behavioral features that could lead to candidate models. Such an approach could use automated model-class generation and multiple interpretation cycles to successfully identify the system behavior.

7.5 Infrastructure management perspectives


This thesis addressed issues related to model-based diagnosis and system identification. Nevertheless, to better support infrastructure management, the steps following data interpretation need to be further investigated. Figure 7.8 presents the scope of future work related to structural evaluation and infrastructure management. The main challenge lies in having a systematic procedure for choosing whether to perform a refined reliability analysis, additional site investigations or immediate interventions. Future research should be able to answer questions such as: "what needs to be measured next?" and "how can new measurements support intervention avoidance?".
[Figure 7.8 is a flowchart for the limit-state verification of an existing structure: code procedures are first applied on simple conservative models; if performance is inadequate, site investigation and in-situ monitoring are used to improve behavior models (the scope of this thesis); if performance is still inadequate, the reliability analysis is refined; if it remains inadequate, interventions are required, otherwise no interventions are needed (these later steps being the scope of future work related to infrastructure management).]

Figure 7.8: Future work in relation to the general framework for the structural evaluation of existing structures presented in Figure 3.

7.5.1 Fatigue remaining life analysis supported by structural performance monitoring


In the case studies presented in Sections 5.3 and 5.4, structural identification showed that bearing devices may behave differently under traffic loading than under static load tests. Further investigations are required to better quantify the consequences of this behavior and to predict its effect on remaining-fatigue-life calculations. Fatigue-life estimation is particularly well suited to structural performance monitoring because the prognosis is based on serviceability criteria evaluated in the linear elastic range of the structural behavior.
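A minimal sketch of how monitored stress-range histograms could feed such a remaining-fatigue-life estimate, using a single-slope S-N curve and Palmgren-Miner damage summation (the detail constant, histogram and service period below are illustrative assumptions, not values from the case studies):

```python
# Remaining-fatigue-life sketch from a monitored stress-range histogram,
# with a single-slope S-N curve N = C / S**m and Palmgren-Miner summation.
# The detail constant, histogram and service period are illustrative.
m, C = 3.0, 2.0e13        # S-N slope and detail constant (MPa units)

# (stress range in MPa, cycles per year) estimated from monitoring
histogram = [(80.0, 2.0e5), (50.0, 1.0e6), (30.0, 5.0e6)]

damage_per_year = sum(n_cycles / (C / s ** m) for s, n_cycles in histogram)
service_years = 40.0      # years already in service
remaining_years = (1.0 - service_years * damage_per_year) / damage_per_year
print(f"annual damage: {damage_per_year:.4f}, "
      f"estimated remaining life: {remaining_years:.1f} years")
```

Because the stress ranges come directly from monitoring rather than from conservative code assumptions, improved behavior models translate immediately into revised damage rates in such a calculation.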

7.5.2 Measurement system overall cost optimization


In the methodology presented in Chapter 4, relationships are drawn between measurement-system costs and expected data-interpretation performance. In order to better support end users (i.e. decision makers), measurement systems should be optimized according to the investments required versus the expected return on investment in terms of savings on maintenance. An example is presented in Figure 7.9, where a front of optimized test setups indicates which monitoring interventions are expected to be profitable. To be profitable, the expected return of structural performance monitoring must be significantly higher than the initial investments.
[Figure 7.9 plots the expected return on investments, in terms of savings on maintenance, against the investments in structural performance monitoring. A front of optimized measurement systems separates a region where investments in monitoring are expected to lead to significant savings, a gray zone where monitoring costs may be justified by reasons other than expected savings, and a region where the investments required are too large compared with the expected return.]

Figure 7.9: Future perspectives for measurement-system and test-setup design, where the objective functions are the money invested and the return on investment in terms of savings on maintenance.

Several challenges need to be overcome to perform such an optimization. Firstly, it requires a deep knowledge of the monitoring intervention costs, which are usually not considered during data interpretation. Secondly, new sampling techniques have to be explored to overcome computational-complexity challenges. As presented in Figure 7.10, these challenges lie in the additional steps required to perform a measurement-system overall cost optimization. The additional steps necessary to compute prediction ranges for critical scenarios, to evaluate serviceability and safety criteria and to determine whether interventions are required involve sets of nested procedures, adding to the complexity of an already exponentially complex problem. A synergy between stochastic search algorithms (see Section 1.5.2) and domain heuristics could provide promising ways to tackle these challenges.
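The investment-versus-return trade-off of Figure 7.9 ultimately amounts to extracting a non-dominated front among candidate measurement systems. A minimal sketch with hypothetical (investment, expected savings) pairs:

```python
# Non-dominated (Pareto) front extraction for candidate measurement
# systems described by (investment, expected savings on maintenance).
# The pairs below are hypothetical placeholders.
systems = [(1.0, 2.0), (2.0, 5.0), (3.0, 5.5), (1.5, 4.5),
           (2.5, 4.0), (4.0, 6.0), (3.5, 5.8)]

def pareto_front(points):
    """Keep systems for which no other system offers at least the same
    savings at a lower or equal cost."""
    front = []
    for cost, save in points:
        dominated = any(c <= cost and s >= save and (c, s) != (cost, save)
                        for c, s in points)
        if not dominated:
            front.append((cost, save))
    return sorted(front)

print(pareto_front(systems))
```

The real difficulty is not this final filtering step but evaluating the expected savings for each candidate system, which is where the nested procedures of Figure 7.10 appear.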




[Figure 7.10 is a flowchart: within the scope of this thesis, simulated measurements lead to candidate models; within the scope of future work, prediction ranges for critical scenarios are computed, serviceability or safety criteria are evaluated, and an intervention choice is made (ranging from no intervention, through minor intervention, to major intervention); results are stored and the process is repeated until enough samples are obtained, yielding the expected return on investments as a function of the investments in structural performance monitoring. The additional steps required contain several nested loops.]

Figure 7.10: Framework representing the steps leading to a measurement-system overall cost optimization.


A Extended uniform distribution

Quantifying uncertainties is a task intrinsic to structural identification. The uniform distribution is often used to describe uncertainties related to model predictions. In these cases, upper and lower bounds are provided according to subjective knowledge and heuristics. This is called the zero-order uncertainty. An additional order of uncertainty can be provided to include in this distribution the uncertainty in the position of the zero-order uncertainty bounds. This process can go on for several orders of uncertainty. Combining all these distributions leads to the extended uniform distribution (EUD), as introduced by Goulet et al. [86, 89]. The extended uniform distribution is presented in Figure A.1.
[Figure A.1 shows the zero-order uniform distribution between the bounds A and B, the distributions for successive orders of uncertainty n = 0, 1, 2, 3 with widths α^n(B − A), and their combination into the Extended Uniform Distribution (EUD).]

Figure A.1: Extended uniform distribution that includes several orders of uncertainty.

Each distribution associated with an order of uncertainty can be defined independently. However, for practical applications, such a level of refinement is often not available. A simplified method for defining multiple orders of uncertainty is to provide a fraction α that can take values between 0 and 1. This fraction defines the width of the n-th order of uncertainty through the relation α^n (B − A), where A and B are the bounds of the zero-order uncertainty.

L = Σ_{n=0}^{∞} α^n (B − A) / 2   (A.1)



For any value of α smaller than one, the distribution converges to the finite limit L expressed in Equation A.1: as n becomes large, the contribution of each order tends to zero. Furthermore, the upper bound for the error must always remain larger than its lower bound. When the EUD is approximated numerically, it is possible to generate orders of uncertainty that do not respect this criterion. Therefore, a check must be performed to verify that at zero order the upper bound (B) is larger than or equal to the lower bound (A). Note that for values of α smaller than or equal to 0.5, this criterion is always satisfied. This formulation is not intended to be used with values of α larger than or equal to one, since such high values imply that higher orders of uncertainty are greater than the zero-order uncertainty.
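The construction above can be approximated numerically. The sketch below is one plausible Monte Carlo reading of the EUD (here α denotes the fraction defining the order widths): each order perturbs the zero-order bounds by a uniform error of width α^n(B − A), values are then drawn uniformly between the realized bounds, and the support stays within the limit of Equation A.1.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_eud(A=0.0, B=1.0, alpha=0.4, n_orders=6, size=100_000):
    """Monte Carlo sketch of the EUD: each order n perturbs both
    zero-order bounds by a uniform error of width alpha**n * (B - A),
    then a value is drawn uniformly between the realized bounds."""
    width = B - A
    lo = np.full(size, A)
    hi = np.full(size, B)
    for n in range(1, n_orders + 1):
        w = alpha ** n * width
        lo += rng.uniform(-w / 2, w / 2, size)
        hi += rng.uniform(-w / 2, w / 2, size)
    # guard: the upper bound must stay above the lower bound
    lo, hi = np.minimum(lo, hi), np.maximum(lo, hi)
    return rng.uniform(lo, hi)

x = sample_eud()
# Equation A.1: the support half-width converges to L = (B-A)/(2(1-alpha))
L = 1.0 / (2 * (1 - 0.4))
print(x.min(), x.max(), 0.5 - L, 0.5 + L)
```

With α = 0.4 the geometric series of Equation A.1 gives L ≈ 0.83 for B − A = 1, so all samples fall inside [0.5 − L, 0.5 + L] while the bulk of the distribution remains close to the zero-order interval [A, B].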


Bibliography
[1] The AASHO road test, report 2, materials and construction. Technical Report Special Report 61b, pp. 151-154, 1962.
[2] A finite element primer. NAFEMS, Glasgow, UK, 2003.
[3] H. Abdi. Encyclopedia of measurement and statistics, chapter The Bonferroni and Šidák corrections for multiple comparisons. Sage, 2007.
[4] H. Ahmadian, J.E. Mottershead, and M.I. Friswell. Physical realization of generic-element parameters in model updating. Journal of Vibration and Acoustics, 124:628, 2002.
[5] H. Akcay, H. Hjalmarsson, and L. Ljung. On the choice of norms in system identification. In IEEE Transactions on Automatic Control, volume 41, pages 1367-1372. IEEE, 1996.
[6] R.J. Allemang and D.L. Brown. A correlation coefficient for modal vector analysis. In Proceedings of the 1st International Modal Analysis Conference, volume 1, pages 110-116, Schenectady, NY, USA, 1982. Union Coll.
[7] K.F. Alvin. Finite element model update via Bayesian estimation and minimization of dynamic residuals. Technical report, Sandia National Labs, Albuquerque, NM, 1996.
[8] I.G. Araujo, E. Maldonado, and G.C. Cho. Ambient vibration testing and updating of the finite element model of a simply supported beam bridge. Frontiers of Architecture and Civil Engineering in China, 5(3):344-354, 2011.
[9] D. Arroyo and M. Ordaz. Multivariate Bayesian regression analysis applied to ground-motion prediction equations, part 1: Theory and synthetic example. Bulletin of the Seismological Society of America, 100(4):1551-1567, 2010.
[10] ASCE. 2009 report card for America's infrastructure. Technical report, American Society of Civil Engineers, Washington, 2009.
[11] ASME. Guide for verification and validation in computational solid mechanics. ASME, 2006.
[12] S.F. Bailey, A. Radojicic, and E. Brühwiler. Case studies in optimal design and maintenance planning of civil infrastructure systems, chapter Structural Safety Assessment of the Dorénaz Bridge, pages 1-12. ASCE, 1999.
[13] O. Balci and R.G. Sargent. Validation of simulation models via simultaneous confidence intervals. American Journal of Mathematical and Management Sciences, 4(3):375-406, 1984.
[14] M.Y.H. Bangash. Manual of numerical methods in concrete: modelling and applications validated by experimental and site-monitoring data. Thomas Telford, London, 2001. ISBN 0727729462.
[15] Z.P. Bazant and S. Baweja. Creep and shrinkage prediction model for analysis and design of concrete structures: Model B3. ACI Special Publications, 194:1-84, 2000.

[16] Z.P. Bazant, G.H. Li, Q. Yu, G. Klein, and V. Kristek. Explanation of excessive long-time deflections of collapsed record-span box girder bridge in Palau. In Proceedings of the 8th Int. Conf. on Creep, Shrinkage and Durability of Concrete and Concrete Structures, T. Tanabe et al., eds., The Maeda Engineering Foundation, Ise-Shima, Japan, pages 1-31, 2008.
[17] Z.P. Bazant, Q. Yu, G.-H. Li, G.J. Klein, and V. Kristek. Excessive deflections of record-span prestressed box girder: Lessons learned from the collapse of the Koror-Babeldaob Bridge in Palau. ACI Concrete International, 32(6):44-52, 2010.
[18] M.A. Beaumont, W. Zhang, and D.J. Balding. Approximate Bayesian computation in population genetics. Genetics, 162(4):2025-2035, 2002.
[19] J.L. Beck and L.S. Katafygiotis. Updating models and their uncertainties. I: Bayesian statistical framework. Journal of Engineering Mechanics, 124(4):455-461, 1998.
[20] J.L. Beck and K.V. Yuen. Model selection using response measurements: Bayesian probabilistic approach. Journal of Engineering Mechanics, 130(2):192-203, 2004.
[21] M. Beer. Engineering quantification of inconsistent information. International Journal of Reliability and Safety, 3(1):174-200, 2009.
[22] R. Bellman and K.J. Åström. On structural identifiability. Mathematical Biosciences, 7(3-4):329-339, 1970. ISSN 0025-5564.
[23] Y. Ben-Haim and F.M. Hemez. Robustness, fidelity and prediction-looseness of models. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 468:227-244, 2011.
[24] J.O. Berger and L.R. Pericchi. The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association, 91(433):109-122, 1996.
[25] K.J. Beven. Towards a coherent philosophy for modelling the environment. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 458:120, 2002.
[26] K.J. Beven. A manifesto for the equifinality thesis. Journal of Hydrology, 320(1-2):18-36, 2006.
[27] K.J. Beven. Environmental modelling: an uncertain future? Routledge, New York, 2009.
[28] K.J. Beven and A. Binley. Future of distributed models: Model calibration and uncertainty prediction. Hydrological Processes, 6(3):279-298, 1992.
[29] K.J. Beven. Uniqueness of place and process representations in hydrological modelling. Hydrology and Earth System Sciences, 4(2):203-213, 2000.
[30] K.J. Beven, P.J. Smith, and J.E. Freer. So just why would a modeller choose to be incoherent? Journal of Hydrology, 354(1-4):15-32, 2008.
[31] C.E. Bonferroni. Teoria statistica delle classi e calcolo delle probabilità. Libreria Internazionale Seeber, 1936.
[32] U. Böniger and J. Tronicke. Improving the interpretability of 3D GPR data using target-specific attributes: application to tomb detection. Journal of Archaeological Science, 37:360-367, 2010.
[33] G.E.P. Box and D.W. Behnken. Some new three level designs for the study of quantitative variables. Technometrics, 2(4):455-475, 1960.
[34] G.E.P. Box and N.R. Draper. A basis for the selection of a response surface design. Journal of the American Statistical Association, 54(287):622-654, 1959.

[35] G.E.P. Box and N.R. Draper. Empirical model-building and response surfaces. John Wiley & Sons, New York, 1987.
[36] G.E.P. Box and G.C. Tiao. Bayesian inference in statistical analysis. Wiley, New York, 1992.
[37] S. Brenner. Sequences and consequences. Philosophical Transactions of the Royal Society B: Biological Sciences, 365:207-212, 2010.
[38] R. Brincker, L. Zhang, and P. Andersen. Modal identification of output-only systems using frequency domain decomposition. Smart Materials and Structures, 10:441-445, 2001.
[39] J.M.W. Brownjohn. Structural health monitoring of civil infrastructure. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1851):589-622, 2007.
[40] J.M.W. Brownjohn, A. Pavic, P. Carden, and C. Middleton. Modal testing of Tamar suspension bridge. In Proceedings of the IMAC XXV Conference, pages 19-22, Orlando, USA, 2007.
[41] R. Cantieni. Langensandbrücke Neubau Brückenhälfte Seite Pilatus - Identifikation der Eigenschwingungen - dynamische Belastungsversuche. Technical Report Bericht Nr. 081231, RCI Dynamics, Dübendorf, Switzerland, 2008.
[42] D.S. Carder. Observed vibrations of bridges. Bulletin of the Seismological Society of America, 27(4):267-303, 1937.
[43] F.N. Catbas, S.K. Ciloglu, O. Hasancebi, K. Grimmelsman, and A.E. Aktan. Limitations in structural identification of large constructed structures. Journal of Structural Engineering, 133(8):1051-1066, 2007.
[44] F.N. Catbas, T. Kijewski-Correa, and A.E. Aktan, editors. Structural Identification of Constructed Facilities. Approaches, Methods and Technologies for Effective Practice of St-Id. American Society of Civil Engineers (ASCE), in press, 2012.
[45] D. Chapelle and K.J. Bathe. Fundamental considerations for the finite element analysis of shell structures. Computers & Structures, 66(1):19-36, 1998.
[46] Y. Chen, M.Q. Feng, and C.A. Tan. Bridge structural condition assessment based on vibration and traffic monitoring. Journal of Engineering Mechanics, 135(8):747-758, 2009.
[47] S.H. Cheung and J.L. Beck. Bayesian model updating using hybrid Monte Carlo simulation with application to structural dynamic models with many uncertain parameters. Journal of Engineering Mechanics, 135(4):243-255, 2009.
[48] S.H. Cheung and J.L. Beck. Calculation of posterior probabilities for Bayesian model class assessment and averaging from posterior samples based on dynamic system data. Computer-Aided Civil and Infrastructure Engineering, 25(5):304-321, 2010.
[49] S.H. Cheung, T.A. Oliver, E.E. Prudencio, S. Prudhomme, and R.D. Moser. Bayesian uncertainty analysis with applications to turbulence modeling. Reliability Engineering & System Safety, 96(9):1137-1149, 2011.
[50] T.H. Cormen. Introduction to algorithms. The MIT Press, Cambridge, MA, 2001.
[51] M.K. Cowles and B.P. Carlin. Markov chain Monte Carlo convergence diagnostics: a comparative review. Journal of the American Statistical Association, 91:883-904, 1996.
[52] M.G. Cox and B.R.L. Siebert. The use of a Monte Carlo method for evaluating uncertainty and expanded uncertainty. Metrologia, 43:178-188, 2006.

[53] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. In IEEE Transactions on Evolutionary Computation, volume 6, pages 182-197, 2002.
[54] R. Dellacroce, P.A. Schieb, and B. Stevens. Pension funds investment in infrastructure - A survey. International Futures Programme. OECD, 2011.
[55] A.P. Dempster. A generalization of Bayesian inference. Journal of the Royal Statistical Society. Series B (Methodological), 30(2):205-247, 1968.
[56] Y. Dodge. An introduction to statistical data analysis L1-norm based. Statistical data analysis based on the L1-norm and related methods, 1:122, 1987.
[57] P.J. Dossantos-Uzarralde and A. Guittet. A polynomial chaos approach for nuclear data uncertainties evaluations. Nuclear Data Sheets, 109(12):2894-2899, 2008.
[58] C.D. Eamon and A.S. Nowak. Effects of edge-stiffening elements and diaphragms on bridge resistance and load distribution. Journal of Bridge Engineering, 7(5):258-266, 2002.
[59] C.D. Eamon and A.S. Nowak. Effect of secondary elements on bridge structural system reliability considering moment capacity. Structural Safety, 26(1):29-47, 2004.
[60] E.N. Economou. The observable universe. In A Short Journey from Quarks to the Universe, volume 1 of SpringerBriefs in Physics, chapter 13, pages 109-121. Springer, 2011.
[61] B. Ellingwood, T.V. Galambos, J.G. MacGregor, and C.A. Cornell. Development of a probability based load criterion for American National Standard A58: Building code requirements for minimum design loads in buildings and other structures. US Dept. of Commerce, National Bureau of Standards, Washington, 1980.
[62] M.P. Enright and D.M. Frangopol. Condition prediction of deteriorating concrete bridges using Bayesian updating. Journal of Structural Engineering, 125(10):1118-1125, 1999.
[63] H.P. Fagagnini. De la valeur et la non-valeur de l'infrastructure suisse de transport. Technical report, LITRA - Service d'information pour les transports publics, Bern, Switzerland, 2011.
[64] H. Fang, M. Rais-Rohani, Z. Liu, and M.F. Horstemeyer. A comparative study of metamodeling methods for multiobjective crashworthiness optimization. Computers & Structures, 83(25):2121-2136, 2005.
[65] C.R. Farrar, G. Park, D.W. Allen, and M.D. Todd. Sensor network paradigms for structural health monitoring. Structural Control and Health Monitoring, 13(1):210-225, 2006.
[66] S. Ferson and L.R. Ginzburg. Different methods are needed to propagate ignorance and variability. Reliability Engineering & System Safety, 54(2-3):133-144, 1996. ISSN 0951-8320.
[67] S. Ferson and W.L. Oberkampf. Validation of imprecise probability models. International Journal of Reliability and Safety, 3(1):3-22, 2009. ISSN 1479-389X.
[68] S. Ferson, J. Hajagos, D. Berleant, J. Zhang, W.T. Tucker, L. Ginzburg, and W. Oberkampf. Dependence in Dempster-Shafer theory and probability bounds analysis. Technical Report SAND2004-3072, Sandia National Laboratories, Albuquerque, NM, 2004.
[69] S. Ferson, R. Nelsen, J. Hajagos, D. Berleant, J. Zhang, W.T. Tucker, L. Ginzburg, and W.L. Oberkampf. Myths about correlations and dependencies and their implications for risk analysis. Submitted to Human and Ecological Risk Assessment, 2008.
[70] S.E. Fienberg. When did Bayesian inference become "Bayesian"? Bayesian Analysis, 1(1):1-40, 2006.
[71] R.A. Fisher. Applications of "Student's" distribution. Metron, 5:90-104, 1925.

[72] R.A. Fisher. The design of experiments. Oliver & Boyd, Oxford, 1935.
[73] D.M. Frangopol, A. Strauss, and S. Kim. Bridge reliability assessment based on monitoring. Journal of Bridge Engineering, 13(3):258-270, 2008.
[74] D.M. Frangopol, A. Strauss, and S. Kim. Use of monitoring extreme data for the performance prediction of structures: General approach. Engineering Structures, 30(12):3644-3653, 2008.
[75] M. Fréchet. Généralisations du théorème des probabilités totales. Fundamenta Mathematicae, 25:379-387, 1935.
[76] J.H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, 19(1):1-67, 1991.
[77] M.I. Friswell. Damage identification using inverse methods. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1851):393-410, 2007.
[78] T.V. Galambos and M.K. Ravindra. Properties of steel for use in LRFD. Journal of the Structural Division, 104(9):1459-1468, 1978.
[79] A. Gelman and C. Shalizi. Oxford handbook of the philosophy of the social sciences, chapter Philosophy and the practice of Bayesian statistics in the social sciences. Oxford University Press, Oxford, UK, 2010.
[80] P.E. Gill, W. Murray, and M.H. Wright. Practical optimization. Academic Press, London, 1981.
[81] M. Gilli, D. Maringer, and E. Schumann. Numerical methods and optimization in finance. Academic Press, 2011.
[82] G.M.L. Gladwell and H. Ahmadian. Generic element matrices suitable for finite element model updating. Mechanical Systems and Signal Processing, 9(6):601-614, 1995.
[83] B. Goller and G.I. Schuëller. Investigation of model uncertainties in Bayesian structural model updating. Journal of Sound and Vibration, 25(5):6122-6136, 2011.
[84] B. Goller, J.L. Beck, and G.I. Schuëller. Evidence-based identification of weighting factors in Bayesian model updating using modal data. Journal of Engineering Mechanics, in press, 2012.
[85] J. Gordon and E.H. Shortliffe. Rule-based expert systems: The MYCIN experiments of the Stanford heuristic programming project, chapter 13 - The Dempster-Shafer theory of evidence, pages 272-292. Reading, Massachusetts: Addison-Wesley, 1984.
[86] J.A. Goulet and I.F.C. Smith. Extended uniform distribution accounting for uncertainty of uncertainty. In International Conference on Vulnerability and Risk Analysis and Management / Fifth International Symposium on Uncertainty Modeling and Analysis, pages 78-85, Maryland, USA, 2011.
[87] J.A. Goulet and I.F.C. Smith. Predicting the usefulness of monitoring for identifying the behaviour of structures. Journal of Structural Engineering, in press, 2012.
[88] J.A. Goulet, P. Kripakaran, and I.F.C. Smith. Multimodel structural performance monitoring. Journal of Structural Engineering, 136(10):1309-1318, 2010.
[89] J.A. Goulet, C. Michel, and I.F.C. Smith. Hybrid probabilities and error-domain structural identification using ambient vibration monitoring. Mechanical Systems and Signal Processing, in press, 2012.
[90] S. Greenland. Induction versus Popper: substance versus semantics. International Journal of Epidemiology, 27(4):543-548, 1998.
[91] R. Hadidi and N. Gucunski. Probabilistic approach to the solution of inverse problems in civil engineering. Journal of Computing in Civil Engineering, 22(6):338-347, 2008.

Bibliography

[92] W.K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika,
57(1):97–109, 1970.
[93] T. Haukaas and P. Gardoni. Model uncertainty in finite-element analysis: Bayesian finite elements. Journal
of Engineering Mechanics, 137(8):519–526, 2011.
[94] T. Hellen. How to use elements effectively. NAFEMS, 2003.
[95] T. Hellen. How to use beam, plate and shell elements. NAFEMS, 2007.
[96] J.C. Helton. Quantification of margins and uncertainties: Conceptual and computational basis. Reliability
Engineering & System Safety, 96(9):976–1013, 2011.
[97] J.C. Helton and J.D. Johnson. Quantification of margins and uncertainties: alternative representations of
epistemic uncertainty. Reliability Engineering & System Safety, 96(9):1034–1052, 2011.
[98] J.C. Helton and W.L. Oberkampf. Alternative representations of epistemic uncertainty. Reliability Engineering
& System Safety, 85(1-3):1–10, 2004.
[99] M. Hiatt, A. Mathiasson, J. Okwori, S.S. Jin, S. Shang, G.J. Yun, J. Caicedo, R. Christenson, C.B. Yun, and
H. Sohn. Finite element model updating of a PSC box girder bridge using ambient vibration test. Advanced
Materials Research, 168:2263–2270, 2011.
[100] D. Hitchings. A finite element dynamics primer. NAFEMS, Glasgow, 1992.
[101] J.W. Hollenbach. Verification, validation, and accreditation (VV&A) recommended practices guide. Technical
report, Department of Defense, USA, 1996.
[102] J. Hugenschmidt and A. Kalogeropoulos. The inspection of retaining walls using GPR. Journal of Applied
Geophysics, 67(4):335–344, 2009.
[103] R. Jafarkhani and S.F. Masri. Finite element model updating using evolutionary strategy for damage detection.
Computer-Aided Civil and Infrastructure Engineering, 2011.
[104] E.T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620–630, 1957.
[105] JCGM. Evaluation of measurement data – Guide to the expression of uncertainty in measurement. Number
ISO/IEC Guide 98-3:2008. JCGM Working Group of the Expression of Uncertainty in Measurement, 2008.
[106] JCGM. Guide to the expression of uncertainty in measurement – Supplement 1: Numerical methods for the
propagation of distributions. Number ISO/IEC Guide 98-3:2008/Suppl 1:2008. JCGM Working Group of the
Expression of Uncertainty in Measurement, 2008.
[107] JCGM. Evaluation of measurement data – Supplement 2 to the Guide to the expression of uncertainty in
measurement – Extension to any number of output quantities, volume JCGM 102:2011. JCGM Working
Group of the Expression of Uncertainty in Measurement, 2011.
[108] JCSS. Probabilistic model code, part 3, 2007. URL http://www.jcss.ethz.ch.
[109] W.H. Jefferys and J.O. Berger. Ockham's razor and Bayesian analysis. American Scientist, 80:64–72, 1992.
[110] H. Jeffreys. Theory of probability. Oxford University Press, Oxford, third edition, 1998.
[111] X. Jiang and S. Mahadevan. Bayesian validation assessment of multivariate computational models. Journal
of Applied Statistics, 35(1):49–65, 2008.

[112] L.O. Jimenez and D.A. Landgrebe. Supervised classification in high-dimensional space: Geometrical,
statistical, and asymptotical properties of multivariate data. IEEE Transactions on Systems, Man, and
Cybernetics, 28:39–54, 1998.
[113] A.I. Johnson. Strength, safety and economic dimensions of structures. Statens Kommittee for
Byggnadsforskning Meddelanden, Stockholm, Sweden, 1953.
[114] R.N. Kacker and J.F. Lawrence. Rectangular distribution whose end points are not exactly known: curvilinear
trapezoidal distribution. Metrologia, 47(3):120–126, 2010.
[115] F. Kang, J. Li, and Q. Xu. Virus coevolution partheno-genetic algorithms for optimal sensor placement.
Advanced Engineering Informatics, 22(3):362–370, 2008.
[116] L.S. Katafygiotis and J.L. Beck. Updating models and their uncertainties. II: Model identifiability. Journal of
Engineering Mechanics, 124(4):463–467, 1998.
[117] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of IEEE International Conference
on Neural Networks, volume 4, pages 1942–1948. IEEE, 1995.
[118] M.C. Kennedy and A. O'Hagan. Bayesian calibration of computer models. Journal of the Royal Statistical
Society: Series B (Statistical Methodology), 63(3):425–464, 2001.
[119] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. Optimization by simulated annealing. Science, 220(4598):
671–679, 1983.
[120] J.M. Ko and Y.Q. Ni. Technology developments in structural health monitoring of large-scale bridges.
Engineering Structures, 27(12):1715–1725, 2005.
[121] K.Y. Koo, J.M.W. Brownjohn, D.I. List, and R. Cole. Structural health monitoring of the Tamar suspension
bridge. Structural Control and Health Monitoring, in press, 2012.
[122] R.O. Kuehl. Design of experiments: statistical principles of research design and analysis. Duxbury/Thomson
Learning, Pacific Grove, CA, 2000.
[123] C.P. Lamarche, P. Paultre, J. Proulx, and S. Mousseau. Assessment of the frequency domain decomposition
technique by forced-vibration tests of a full-scale structure. Earthquake Engineering and Structural
Dynamics, 37:487–494, 2008.
[124] I. Laory, T.N. Trinh, and I.F.C. Smith. Evaluating two model-free data interpretation methods for
measurements that are influenced by temperature. Advanced Engineering Informatics, 2011.
[125] P.S. Laplace. Essai philosophique sur les probabilités. M. Courcier, Paris, France, 1814.
[126] E.L. Lehmann and J.P. Romano. Testing statistical hypotheses. Springer, third edition, 2005.
[127] I. Lira. The generalized maximum entropy trapezoidal probability density function. Metrologia, 45(4):
L17–L20, 2008.
[128] M. Liu, D.M. Frangopol, and S. Kim. Bridge safety evaluation based on monitored live load effects. Journal
of Bridge Engineering, 14(4):257–259, 2009.
[129] Y. Liu, J. Freer, K.J. Beven, and P. Matgen. Towards a limits of acceptability approach to the calibration of
hydrological models: Extending observation error. Journal of Hydrology, 367(1-2):93–103, 2009.
[130] L. Ljung. System identification: theory for the user. Prentice-Hall, Englewood Cliffs, NJ, 1987.
[131] L. Ljung and T. Glad. On global identifiability for arbitrary model parametrizations. Automatica, 30(2):
265–276, 1994.

[132] Y.H. Loo and G.D. Base. Variation of creep Poisson's ratio with stress in concrete under short-term uniaxial
compression. Magazine of Concrete Research, 42(151):67–73, 1990.
[133] T.J. Loredo. Maximum entropy and Bayesian methods, chapter From Laplace to supernova SN 1987A:
Bayesian inference in astrophysics, pages 81–142. Kluwer Academic Publishers, Dordrecht, Netherlands,
1990.
[134] H. Ludescher and E. Brühwiler. Dynamic amplification of traffic loads on road bridges. Structural
Engineering International, 19(2):190–197, 2009.
[135] J.P. Lynch and K.J. Loh. A summary review of wireless sensors and sensor networks for structural health
monitoring. Shock and Vibration Digest, 38(2):91–130, 2006.
[136] R.H. MacNeal. Finite elements: their design and performance. CRC, New York, 1994.
[137] R.H. MacNeal and R.L. Harder. A proposed standard set of problems to test finite element accuracy. Finite
Elements in Analysis and Design, 1(1):3–20, 1985.
[138] P. Mantovan and E. Todini. Hydrological forecasting uncertainty assessment: Incoherence of the GLUE
methodology. Journal of Hydrology, 330(1-2):368–381, 2006.
[139] J.M. Marin, P. Pudlo, C. Robert, and R. Ryder. Approximate Bayesian computational methods. Statistics
and Computing, in press.
[140] P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings
of the National Academy of Sciences of the United States of America, 100(26):15324–15328, 2003.
[141] B. Massicotte and A. Picard. Monitoring of a prestressed segmental box girder bridge during strengthening.
PCI Journal, 39(3):66–80, 1994.
[142] B. Massicotte, A. Picard, Y. Gaumond, and C. Ouellet. Strengthening of a long span prestressed segmental
box girder bridge. PCI Journal, 39(3):52–65, 1994.
[143] E. Matta and A. De Stefano. Generating alternatives from multiple models: How to increase robustness
in parametric system identification. In 5th International Conference on Structural Health Monitoring on
Intelligent Infrastructure (SHMII-5), page 83, Cancun, Mexico, 2011.
[144] J. McFarland. Uncertainty analysis for computer simulations through validation and calibration. PhD thesis,
Vanderbilt University, Nashville, TN, 2008.
[145] J. McFarland and S. Mahadevan. Error and variability characterization in structural dynamics modeling.
Computer Methods in Applied Mechanics and Engineering, 197(29-32):2621–2631, 2008.
[146] J. McFarland and S. Mahadevan. Multivariate significance testing and model calibration under uncertainty.
Computer Methods in Applied Mechanics and Engineering, 197(29-32):2467–2479, 2008.
[147] M.D. McKay, R.J. Beckman, and W.J. Conover. A comparison of three methods for selecting values of input
variables in the analysis of output from a computer code. Technometrics, 21(2):239–245, 1979.
[148] M. Meo and G. Zumpano. On the optimal sensor placement techniques for a bridge structure. Engineering
Structures, 27(10):1488–1497, 2005.
[149] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller. Equation of state calculations
by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
[150] C. Michel, P. Guéguen, and P.-Y. Bard. Dynamic parameters of structures extracted from ambient vibration
measurements: An aid for the seismic vulnerability assessment of existing buildings in moderate seismic
hazard regions. Soil Dynamics and Earthquake Engineering, 28(8):593–604, 2008.

[151] S.A. Mirza and J.G. MacGregor. Variations in dimensions of reinforced concrete members. Journal of the
Structural Division, 105(4):751–766, 1979.
[152] B. Möller and M. Beer. Engineering computation under uncertainty – capabilities of non-traditional models.
Computers & Structures, 86(10):1024–1041, 2008.
[153] T. Most. Assessment of structural simulation models by estimating uncertainties due to model selection
and model simplification. Computers & Structures, 89(17-18):1664–1672, 2011.
[154] J.E. Mottershead and M.I. Friswell. Model updating in structural dynamics: a survey. Journal of Sound and
Vibration, 167(2):347–375, 1993.
[155] J.E. Mottershead, M. Link, and M.I. Friswell. The sensitivity method in finite element model updating: A
tutorial. Mechanical Systems and Signal Processing, 25(7):2275–2296, 2011.
[156] A.S. Nowak and R.I. Carr. Sensitivity analysis for structural errors. Journal of Structural Engineering, 111(8):
1734–1746, 1985.
[157] A.S. Nowak and M.M. Szerszen. Bridge load and resistance models. Engineering Structures, 20(11):985–990,
1998.
[158] W.L. Oberkampf and M.F. Barone. Measures of agreement between computation and experiment: validation
metrics. Journal of Computational Physics, 217(1):5–36, 2006.
[159] W.L. Oberkampf and T.G. Trucano. Verification and validation benchmarks. Nuclear Engineering and Design,
238(3):716–743, 2008.
[160] W.L. Oberkampf, S.M. DeLand, B.M. Rutherford, K.V. Diegert, and K.F. Alvin. Error and uncertainty in
modeling and simulation. Reliability Engineering & System Safety, 75(3):333–357, 2002.
[161] W.L. Oberkampf, J.C. Helton, C.A. Joslyn, S.F. Wojtkiewicz, and S. Ferson. Challenge problems: uncertainty
in system response given uncertain parameters. Reliability Engineering & System Safety, 85(1-3):11–19, 2004.
[162] W.L. Oberkampf, T.G. Trucano, and C. Hirsch. Verification, validation, and predictive capability in
computational engineering and physics. Applied Mechanics Reviews, 57(5):345–385, 2004.
[163] OECD. Infrastructure to 2030 – Mapping policy for electricity, water and transport, volume 2. Paris, France,
2007.
[164] OECD. Policy brief – Infrastructure to 2030. OECD Observer, 2008.
[165] N.M. Okasha, D.M. Frangopol, and A.D. Orcesi. Automated finite element updating using strain data for the
lifetime reliability assessment of bridges. Reliability Engineering & System Safety, 99:139–150, 2012.
[166] Q. Pan, K. Grimmelsman, F. Moon, and E. Aktan. Mitigating epistemic uncertainty in structural
identification – a case study for a long-span steel arch bridge. Journal of Structural Engineering, 137(1):1–13,
2010.
[167] C. Papadimitriou. Optimal sensor placement methodology for parametric identification of structural
systems. Journal of Sound and Vibration, 278(4-5):923–947, 2004.
[168] C. Papadimitriou. Pareto optimal sensor locations for structural identification. Computer Methods in
Applied Mechanics and Engineering, 194(12-16):1655–1673, 2005.
[169] C. Papadimitriou and G. Lombaert. The effect of prediction error correlation on optimal sensor placement
in structural dynamics. Mechanical Systems and Signal Processing, 28:105–127, 2012.

[170] B. Peeters and C. Ventura. Comparative study of modal analysis techniques for bridge dynamic
characteristics. Mechanical Systems and Signal Processing, 17(5):965–988, 2003.
[171] J. Perret. Déformations des couches bitumineuses au passage d'une charge de trafic. PhD thesis, Swiss Federal
Institute of Technology (EPFL), Lausanne, Switzerland, 2003.
[172] V. Plagnol and S. Tavaré. Monte Carlo and quasi-Monte Carlo methods, chapter Approximate Bayesian
computation and MCMC, pages 99–114. Springer, Berlin, 2004.
[173] K.R. Popper. The logic of scientific discovery. Routledge, New York, third edition, 2002.
[174] D. Posenato, F. Lanata, D. Inaudi, and I.F.C. Smith. Model-free data interpretation for continuous monitoring
of complex structures. Advanced Engineering Informatics, 22(1):135–144, 2008.
[175] D. Posenato, P. Kripakaran, D. Inaudi, and I.F.C. Smith. Methodologies for model-free data interpretation of
civil engineering structures. Computers & Structures, 88(7-8):467–482, 2010.
[176] M. Pozzi and A. Der Kiureghian. Assessing the value of information for long-term structural health
monitoring. In Proceedings of SPIE, volume 7984, 2011.
[177] F. Press. Earth models obtained by Monte Carlo inversion. Journal of Geophysical Research, 73(16):
5223–5234, 1968.
[178] J.K. Pritchard, M.T. Seielstad, A. Perez-Lezaun, and M.W. Feldman. Population growth of human Y
chromosomes: a study of Y chromosome microsatellites. Molecular Biology and Evolution, 16(12):1791–1798,
1999.
[179] B. Raphael and I.F.C. Smith. Finding the right model for bridge diagnosis. In Artificial Intelligence in
Structural Engineering, Computer Science, LNAI 1454, pages 308–319. Springer, 1998.
[180] B. Raphael and I.F.C. Smith. A direct stochastic algorithm for global search. Applied Mathematics and
Computation, 146(2-3):729–758, 2003.
[181] S. Ravindran, P. Kripakaran, and I.F.C. Smith. Evaluating reliability of multiple-model system identification.
In 14th EG-ICE Workshop, Maribor, Slovenia, 2007.
[182] R. Rebba and S. Mahadevan. Model predictive capability assessment under uncertainty. AIAA Journal,
44(10):2376–2384, 2006.
[183] R. Rebba and S. Mahadevan. Validation of models with multivariate output. Reliability Engineering & System
Safety, 91(8):861–871, 2006.
[184] R. Rebba, S. Mahadevan, and S. Huang. Validation and error estimation of computational models. Reliability
Engineering & System Safety, 91(10-11):1390–1397, 2006.
[185] J.A. Rice. Mathematical statistics and data analysis. Thomson Learning, Belmont, CA, 2006.
[186] P.J. Roache. Perspective: Validation – what does it mean? Journal of Fluids Engineering – Transactions of the
ASME, 131(3), 2009.
[187] C.P. Robert, J.M. Cornuet, J.M. Marin, and N.S. Pillai. Lack of confidence in approximate Bayesian
computation model choice. Proceedings of the National Academy of Sciences, 108(37):15112–15117, 2011.
[188] Y. Robert-Nicoud, B. Raphael, O. Burdet, and I.F.C. Smith. Model identification of bridges using measurement
data. Computer-Aided Civil and Infrastructure Engineering, 20(2):118–131, 2005.
[189] Y. Robert-Nicoud, B. Raphael, and I.F.C. Smith. Configuration of measurement systems using Shannon's
entropy function. Computers & Structures, 83(8-9):599–612, 2005.

[190] Y. Robert-Nicoud, B. Raphael, and I.F.C. Smith. System identification through model composition and
stochastic search. Journal of Computing in Civil Engineering, 19(3):239–247, 2005.
[191] M. Roš. Pont Adolphe – résultat des essais de surcharge. Technical report, Laboratoire fédéral d'essai des
matériaux, Zurich, Switzerland, 1933.
[192] M. Roš. La mesure directe des contraintes dans les ouvrages construits. In Centre d'études supérieures:
Séance du 18 janvier 1939. Institut technique du bâtiment et des travaux publics, 1939.
[193] M. Roš. Robert Maillart 1872-1940: Ingenieur. Schweizerischer Verband für die Materialprüfungen der
Technik, 1940.
[194] M. Roš. Essais et expériences sur des constructions métalliques en Suisse, 1925-1950. Quaderni della
costruzione metallica. Associazione fra i costruttori in acciaio italiani, 1954.
[195] L.A. Rossman. EPANET 2: users manual. US Environmental Protection Agency, Cincinnati, OH, 2000.
[196] B.M. Rutherford, L.P. Swiler, T.L. Paez, and A. Urbina. Response surface (meta-model) methods and
applications. In Proc. 24th Int. Modal Analysis Conf., pages 184–197, St. Louis, MO, 2006.
[197] T. Saito and J.L. Beck. Bayesian model selection for ARX models and its application to structural health
monitoring. Earthquake Engineering & Structural Dynamics, 2010.
[198] S. Saitta, B. Raphael, and I.F.C. Smith. Data mining techniques for improving the reliability of system
identification. Advanced Engineering Informatics, 19(4):289–298, 2005.
[199] S. Saitta, B. Raphael, and I.F.C. Smith. Combining two data mining methods for system identification.
Intelligent Computing in Engineering and Architecture, 4200:606–614, 2006.
[200] S. Saitta, B. Raphael, and I.F.C. Smith. A comprehensive validity index for clustering. Intelligent Data Analysis,
12(6):529–548, 2008.
[201] S. Saitta, P. Kripakaran, B. Raphael, and I.F.C. Smith. Feature selection using stochastic search: An application
to system identification. Journal of Computing in Civil Engineering, 24(1):3–10, 2010.
[202] M. Sanayei, E.S. Bell, C.N. Javdekar, J.L. Edelmann, and E. Slavsky. Damage localization and finite-element
model updating using multiresponse NDT data. Journal of Bridge Engineering, 11(6):688–698, 2006.
[203] H.R. Schalcher, H.J. Boesch, K. Bertschy, H. Sommer, D. Matter, J. Gerum, and M. Jakob. Quels seront les
coûts futurs des bâtiments et des infrastructures suisses et qui les paiera ? Technical report, VDF Zürich,
Zurich, Switzerland, 2011.
[204] H. Schlune and M. Plos. Bridge assessment and maintenance based on finite element structural models and
field measurements. Technical Report 2008:5, Chalmers University of Technology, Sweden, 2008.
[205] M.B. Seasholtz and B. Kowalski. The parsimony principle applied to multivariate calibration. Analytica
Chimica Acta, 277(2):165–177, 1993.
[206] R.G. Selby, F.J. Vecchio, and M.P. Collins. The failure of an offshore platform. Concrete International, 19(8):
28–35, 1997.
[207] K. Sentz and S. Ferson. Probabilistic bounding analysis in the quantification of margins and uncertainties.
Reliability Engineering & System Safety, 96(9):1126–1136, 2011.
[208] G. Shafer. A theory of statistical evidence. Foundations of Probability Theory, Statistical Inference, and
Statistical Theories of Science, 2:365–436, 1976.

[209] J.P. Shaffer. Multiple hypothesis testing. Annual Review of Psychology, 46(1):561–584, 1995.
[210] C.E. Shannon and W. Weaver. A mathematical theory of communication. The Bell System Technical Journal,
27:379–423, 1948.
[211] Z. Šidák. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the
American Statistical Association, 62:626–633, 1967.
[212] L. Simões da Silva, C. Rebelo, D. Nethercot, L. Marques, R. Simões, and P.M.M. Vila Real. Statistical evaluation
of the lateral-torsional buckling resistance of steel I-beams, part 2: Variability of steel properties. Journal of
Constructional Steel Research, 65(4):832–849, 2009.
[213] T.W. Simpson, T.M. Mauery, J.J. Korte, and F. Mistree. Kriging models for global approximation in
simulation-based multidisciplinary design optimization. AIAA Journal, 39(12):2233–2241, 2001.
[214] J.A. Snyman. Practical mathematical optimization: an introduction to basic optimization theory and classical
and new gradient-based algorithms, volume 97. Springer, 2005.
[215] H.W. Sorenson. Least-squares estimation: from Gauss to Kalman. IEEE Spectrum, 7(7):63–68, 1970.
[216] B.F. Spencer, M.E. Ruiz-Sandoval, and N. Kurata. Smart sensing technology: opportunities and challenges.
Structural Control and Health Monitoring, 11(4):349–368, 2004.
[217] M.S. Srivastava. Methods of multivariate statistics. Wiley, New York, 2002.
[218] P.B. Stark and L. Tenorio. Large-scale inverse problems and quantification of uncertainty, chapter A Primer of
Frequentist and Bayesian Inference in Inverse Problems, pages 9–32. Wiley, 2010.
[219] C. Stephan. Sensor placement for modal identification. Mechanical Systems and Signal Processing, 27:
461–470, 2012.
[220] G.W. Stewart. Gauss, statistics, and Gaussian elimination. Journal of Computational and Graphical Statistics,
94:1–11, 1995.
[221] A. Strauss, D.M. Frangopol, and S. Kim. Use of monitoring extreme data for the performance prediction of
structures: Bayesian updating. Engineering Structures, 30(12):3654–3666, 2008.
[222] A. Strauss, D.M. Frangopol, and K. Bergmeister. Assessment of existing structures based on identification.
Journal of Structural Engineering, 136(1):86–97, 2010.
[223] B. Sudret and A. Der Kiureghian. Stochastic finite element methods and reliability: a state-of-the-art report.
Technical report, University of California Berkeley, Dept. of Civil and Environmental Engineering, Berkeley,
CA, 2000.
[224] B. Sudret and A. Der Kiureghian. Comparison of finite element reliability methods. Probabilistic Engineering
Mechanics, 17(4):337–348, 2002.
[225] A. Tarantola. Inverse problem theory: Methods for data fitting and model parameter estimation. SIAM,
Philadelphia, PA, USA, 2005.
[226] A. Tarantola. Popper, Bayes and the inverse problem. Nature Physics, 2(8):492–494, 2006.
[227] B.H. Thacker, S.W. Doebling, F.M. Hemez, M.C. Anderson, J.E. Pepin, and E.A. Rodriguez. Concepts of model
verification and validation. Technical report, Los Alamos National Lab., Los Alamos, NM, 2004.
[228] S. Thöns, M.H. Faber, and W. Rücker. Ultimate limit state model basis for assessment of offshore wind
energy converters. Journal of Offshore Mechanics and Arctic Engineering, 134(3):031904, 2012.

[229] T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M.P.H. Stumpf. Approximate Bayesian computation scheme
for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface,
6(31):187–202, 2009.
[230] C. Topkaya, A.S. Kalayci, and E.B. Williamson. Solver and shell element performances for curved bridge
analysis. Journal of Bridge Engineering, 13(4):418–424, 2008.
[231] M. Verleysen, D. Francois, G. Simon, and V. Wertz. On the effects of dimensionality on data analysis with
neural networks. Artificial Neural Nets Problem Solving Methods, 2003.
[232] N. Wang, B.R. Ellingwood, and A.H. Zureick. Bridge rating using system reliability assessment. II:
Improvements to bridge rating practices. Journal of Bridge Engineering, 16(6):863–871, 2011.
[233] R.J. Westgate and J.M.W. Brownjohn. Development of a Tamar Bridge finite element model. In Conference
Proceedings of the Society for Experimental Mechanics Series 3, volume 5, pages 13–20. Springer, 2011.
[234] B.C. Williams and J. de Kleer. Qualitative reasoning about physical systems: a return to roots. Artificial
Intelligence, 51(1-3):1–9, 1991.
[235] K.Y. Wong, K.W.Y. Chan, Y.Q. Ni, and C.L. Ng. Advanced finite element model of Tsing Ma Bridge for
structural health monitoring. International Journal of Structural Stability and Dynamics, 11(2):313–344,
2011.
[236] K. Worden and A.P. Burrows. Optimal sensor placement for fault detection. Engineering Structures, 23(8):
885–901, 2001.
[237] K. Worden, C.R. Farrar, G.E. Manson, and G. Park. The fundamental axioms of structural health monitoring.
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 463(2082):1639–1664,
2007.
[238] F. Xie and D. Levinson. Evaluating the effects of the I-35W bridge collapse on road-users in the Twin Cities
metropolitan region. Transportation Planning and Technology, 34(7):691–703, 2011.
[239] B.F. Yan, A. Miyamoto, and E. Brühwiler. Wavelet transform-based modal parameter identification
considering uncertainty. Journal of Sound and Vibration, 291(1-2):285–301, 2006.
[240] I. Yeo, S. Shin, H.S. Lee, and S.P. Chang. Statistical damage assessment of framed structures from static
responses. Journal of Engineering Mechanics, 126(4):414–421, 2000.
[241] K.-V. Yuen, J.L. Beck, and L.S. Katafygiotis. Efficient model updating and health monitoring methodology
using incomplete modal data without mode matching. Structural Control & Health Monitoring, 13(1):
91–107, 2006.
[242] K.V. Yuen. Bayesian methods for structural dynamics and civil engineering. Wiley, 2010.
[243] K.V. Yuen, S.K. Au, and J.L. Beck. Two-stage structural health monitoring approach for phase I benchmark
studies. Journal of Engineering Mechanics, 130(1):16–33, 2004.
[244] K.V. Yuen, J.L. Beck, and L.S. Katafygiotis. Unified probabilistic approach for model updating and damage
detection. Journal of Applied Mechanics – Transactions of the ASME, 73(4):555–564, 2006.
[245] E.L. Zhang, P. Feissel, and J. Antoni. A comprehensive Bayesian approach for model updating and
quantification of modeling errors. Probabilistic Engineering Mechanics, 26(4):550–560, 2011.
[246] A. Ziegler. Schwingungsmessungen auf der Langensandbrücke in Luzern. Technical Report no. 1621, Ziegler
Consultants, Zurich, Switzerland, 2009.

Academic Curriculum
Born: 1984
Nationality: Canadian
Email: james.a.goulet@gmail.com

Doctorate (Ph.D.) - Civil Engineering
École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland [2008-2012]

Master (M.Sc.) - Civil Engineering
Laval University, Québec, Canada [2006-2008]

Bachelor (B.Ing.) - Civil Engineering
Laval University, Québec, Canada [2004-2008]

Extracurricular activities
Responsible for the EDCE PhD student organization [2010-2012]
Instigator of academic contests for EPFL students [2011-2012]
Canadian Society for Civil Engineering (CSCE) student representative [2007-2008]
Founder of the Laval University bridge design team (ESUL) [2005]

Honors and Awards
FQRNT - Quebec National Funds for Research in Natural Sciences and Technologies
Postdoctoral research scholarship [2012]

Swiss National Science Foundation
Fellowship for prospective researchers (postdoctoral research) [2011]

McGill University Engineering Doctoral Award
Scholarship for graduate studies [declined] [2008]

CSCE - Canadian Society for Civil Engineering Annual Conference
Best technical presentation award [2008]

FQRNT - Quebec National Funds for Research in Natural Sciences and Technologies
Graduate research scholarship [2008]

CRSNG - Natural Sciences and Engineering Research Council of Canada
Undergraduate research scholarship awarded for independent research [2007]

Canada Millennium Scholarship Foundation
Scholarship awarded for leadership and academic excellence [2005]

Personal interests
Skiing, sailing, kayaking, surfing, data visualization, international politics & economy
List of Publications
Publications produced during the doctoral studies are listed below.

Journal papers
J.-A. Goulet, C. Michel, and I. F. C. Smith. Hybrid probabilities and error-domain structural
identification using ambient vibration monitoring. Mechanical Systems and Signal Processing,
In press.
J.-A. Goulet and I. F. C. Smith. Predicting the usefulness of monitoring for identifying the
behaviour of structures. Journal of Structural Engineering, In press.
J.-A. Goulet, P. Kripakaran, and I. F. C. Smith. Multimodel structural performance monitoring.
Journal of Structural Engineering, 136(10):1309-1318, Oct. 2010.

Book chapter
J.-A. Goulet and I. F. C. Smith. Structural Identification of Constructed Facilities. Approaches,
Methods and Technologies for Effective Practice of St-Id, Chapter 8.8. American Society of
Civil Engineers (ASCE), 2012.

Conference proceedings
R. Pasquier*1, J.-A. Goulet, and I. F. C. Smith. Reducing uncertainties regarding remaining lives
of structures using computer-aided data interpretation. Proceedings of the 19th International
Workshop: Intelligent Computing in Engineering, Munich, Germany, July 2012.
J.-A. Goulet, I. F. C. Smith*, M. Texier, and L. Chouinard. The effects of simplications on
model predictions and consequences for model-based data interpretation. Proceedings of
ASCE Structures Congress, Chicago, USA, March 2012.

1 The symbol (*) denotes the presenting author.

187

List of Publications
J.-A. Goulet* and I. F. C. Smith. Prevention of over-instrumentation during the design of a monitoring system for static load tests. Proceedings of 5th International Conference on Structural
Health Monitoring on Intelligent Infrastructure (SHMII-5), Cancun, Mexico, December 2011.
J.-A. Goulet* and I. F. C. Smith. Uncertainty correlation in structural performance assessment.
Proceedings of the 11th International Conference on Applications of Statistics and Probability
in Civil Engineering, Zurich, Switzerland, August 2011.
J.-A. Goulet* and I. F. C. Smith. Extended uniform distribution accounting for uncertainty of
uncertainty. Proceedings of the International Conference on Vulnerability and Risk Analysis
and Management/Fifth International Symposium on Uncertainty Modeling and Analysis,
pages 78-85, Maryland, USA, April 2011.
J.-A. Goulet* and I. F. C. Smith. Overcoming the limitations of traditional model-updating
approaches. Proceedings of the International Conference on Vulnerability and Risk Analysis
and Management/Fifth International Symposium on Uncertainty Modeling and Analysis,
pages 905-913, Maryland, USA, April 2011.
J.-A. Goulet* and I. F. C. Smith. CMS4SI structural identification approach for interpreting
measurements. Proceedings of the 34th IABSE symposium, Venice, Italy, September 2010.
J.-A. Goulet* and I. F. C. Smith. Evaluating structural identication capability. Proceedings of
the Structural Faults & Repair, Edinburgh, UK, June 2010.
J.-A. Goulet*, P. Kripakaran, and I. F. C. Smith. Structural identication to improve bridge
management. Proceedings of the 33rd IABSE symposium, Bangkok, Thailand, September
2009.
J.-A. Goulet*, P. Kripakaran, and I. F. C. Smith. Estimation of modelling errors in structural
system identication. Proceedings of the 4th International Conference on Structural Health
Monitoring on Intelligent Infrastructure (SHMII-4), Zurich, Switzerland, July 2009.
J.-A. Goulet, P. Kripakaran, and I. F. C. Smith*. Considering sensor characteristics during
measurement-system design for structural system identication. Proceedings of the 2009
ASCE International Workshop on Computing in Civil Engineering, p.74, Austin, Texas, June
2009.
