
DEVELOPMENT OF A TWO-DIMENSIONAL, STOCHASTIC
METHODOLOGY TO ASSESS SEISMIC
SITE RESPONSE

A Dissertation Submitted in Partial Fulfilment of the Requirements


for the Master's Degree in

Earthquake Engineering &/or Engineering Seismology

By
ANDRES FELIPE ALONSO RODRIGUEZ

Supervisor(s): Dr. CARLO GIOVANNI LAI

December, 2008

Istituto Universitario di Studi Superiori di Pavia


Università degli Studi di Pavia
The dissertation entitled "Development of a two-dimensional, stochastic methodology to
assess seismic site response", by Andres Felipe Alonso Rodriguez, has been approved in
partial fulfilment of the requirements for the Master's Degree in Earthquake Engineering.

Dr Carlo G. Lai …… … ………

Dr Mirko Corigliano ……… … ……



ABSTRACT

Site effects are crucial for the seismic risk assessment of structures, but in order to perform a reliable
analysis two major factors should be considered: the variability of soil mechanical properties along all
directions, and the effects of topographical and morphological features that lie beyond the reach of
traditional one-dimensional analysis procedures. To address these, a two-dimensional, stochastic
methodology to estimate site effects has been developed. The procedure performs a geostatistical
interpolation of the mechanical properties of the soil underlying a site, based on field measurements of
shear wave velocity at specific locations. Afterwards, Monte Carlo simulations and First Order, Second
Moment (FOSM) estimates are performed on a two-dimensional finite element mesh to assess the
variability of the surface response, represented by its mean and standard deviation, at several places on
top of a profile which includes the site. The methodology has been successfully tested on two
artificially generated custom sites, in agreement with FEMA 450 specifications for geotechnical
investigation. The effects of different statistical descriptions of the soil properties, sampling strategies,
and finite element modelling strategies were reviewed for these cases, yielding some general remarks.

Keywords: Site Effects; Monte Carlo Simulation; Geostatistics


ACKNOWLEDGEMENTS

Firstly, I would like to express my gratitude and love to my parents, Bertha Marina and Gustavo.
Everything I have been, I am and will be is due to them. Their unconditional support, in every sense, has
allowed me to follow this path, of which this work is a crucial step. My brother, Luis Fernando, has
experienced the unconditional love of my parents with me; thanks for his company throughout my life.
My family's presence beyond the ocean has been fundamental to finishing this research. From the
bottom of my heart, thank you.

Along the road, I have met wonderful people. Among them, Professor Carlo Lai has been a
continuous support; his caring guidance has encouraged me to keep going, and now my work has
yielded fruit. I also have to mention my classmates in the Master's program; being able to interact
with people from all corners of the world has enriched me in such a wonderful way. My life and soul
have become brighter as I have met such amazing and diverse friends.

This work has been possible thanks to the ROSE SCHOOL; granting me a partial scholarship made this
wonderful path possible. There are not enough words to express how I will cherish, through the rest of
my life, all the experiences I had in my Master's program. I would especially like to extend my gratitude
to Saverio Bisoni, ROSE SCHOOL secretariat director. His patience, diligence and passion for his
work, and especially his consideration towards me, encouraged me to go through the hard times, and
now I have to acknowledge him. Many thanks.

Finally, I have to express my gratitude to you, dear reader, for investing your valuable time in reading
my work.

Best Regards

Andres Felipe Alonso Rodriguez


Rion, Greece
December 2008.


TABLE OF CONTENTS

Page
1  INTRODUCTION: ........................................................................................................................... 1

2  FUNDAMENTALS........................................................................................................................... 9 
2.1  Finite Element Analysis for 2D Site Response Assessment ...................................................... 9 
2.1.1  Boundary definition for 2D site response assessment Using Finite Element Analysis.. 13 
2.1.2  Integration of the Equations of Motion .......................................................................... 16 
2.2  Random Field Fundamentals ................................................................................................... 18 
2.2.1  Stationarity – homogeneity of a random field................................................................ 19 
2.3  Elements of Geostatistics ......................................................................................................... 21 
2.3.1  The Nature of variability in Geotechnical engineering .................................................. 21 
2.3.2  Field Variograms ........................................................................................................... 22 
2.4  Estimation ................................................................................................................................ 31 
2.4.1  Least Squares Linear Fit ................................................................................................ 31 
2.4.2  Ordinary Kriging ............................................................................................................ 33 
2.4.3  Analytical Modeling of the Field Variogram ................................................................. 38 
2.5  Monte Carlo Simulation ........................................................................................... 41 
2.5.1  Random Number Generation ......................................................................... 42 
2.5.2  Statistical Characterization of System Response ........................................... 45 
2.5.3  Efficiency and Precision of Monte Carlo Simulations ................................... 46 
2.5.4  Latin Hypercube Sampling ............................................................................ 48 
2.6  First Order Second Moment Reliability Method (FOSM) ........................................ 49


3  DESCRIPTION OF PROPOSED METHODOLOGY .................................................................... 52 


3.1  1D Seismic Stochastic Site Assessment ................................................................... 53 
3.1.1  Haskell-Thomson Matrices ............................................................................ 54 
3.1.2  Stochastic 1D Seismic Site Assessment ......................................................... 57 
3.1.3  Input Parameters ............................................................................................ 59 
3.1.4  Elastic Analyses ............................................................................................. 59 
3.1.5  Equivalent Linear Elastic Analyses ............................................................... 62 
3.2  Site Random Field Modeling ................................................................................................... 63 
3.3  Kriging ..................................................................................................................................... 64 
3.4  Site Response Assessment Using QUAD4M........................................................................... 65 
3.4.1  Finite Element Mesh Characteristics ............................................................................. 65 
3.5  Monte Carlo Simulation ........................................................................................... 66 
3.5.1  Distribution Discretization ............................................................................ 66 
3.5.2  Simulation Algorithm .................................................................................................... 68 
3.5.3  Site Response Characterization ...................................................................................... 68 
3.6  First Order Second Moment Estimation .................................................................................. 69 
3.7  Simulation of Normally distributed Random Fields ................................................................ 69

4  NUMERICAL SIMULATIONS ..................................................................................................... 71 


4.1  Base Model Input Parameters .................................................................................................. 72 
4.2  Simulation Parameters ............................................................................................................. 74 
4.3  Assessment of Predictive Scheme............................................................................................ 75 
4.3.1  Cases 1, 2 and 3 ............................................................................................................. 75 
4.3.2  Case 4:............................................................................................................................ 85 
4.3.3  Case 5:............................................................................................................................ 89

5  RESULTS ........................................................................................................................................ 92 


5.1  Convergence of Simulations .................................................................................................... 92 
5.2  Assessment of FOSM and Monte Carlo Simulation .................................................. 96 
5.3  Spatial variation of the response ............................................................................................ 104 
5.4  Sampling Scheme Assessment ............................................................................................... 107 
5.5  Effect of Field Variability ...................................................................................................... 109 
5.6  Effect of Finite element mesh ................................................................................................ 111


6  FINAL REMARKS ....................................................................................................................... 114

7  REFERENCES .............................................................................................................................. 116


APPENDIX A: APPLICATION MANUAL ............................................................................ A1 (117)


LIST OF FIGURES

Page
Figure 1.1: Performance Based Design Framework .............................................................................. 2 
Figure 1.2: Seismic wave propagation to site ........................................................................................ 3 
Figure 1.3: Damage Belt in Kobe due to the 1995 Earthquake ............................................................. 4 
Figure 1.4: Permeability profile of glacial deposit at Chicopee, Massachusetts....................................5  
Figure 2.1 Isogeometrical Representation of a 4 node quadrilateral element ....................................... 10 
Figure 2.2: Rayleigh Damping. 5% of critical damping set at 0.5 and 1.5 Period Values. .................. 13 
Figure 2.3: Reflection of a pulse on a Rigid Boundary ........................................................................ 14 
Figure 2.4: Attenuation of a Pulse with distance. ................................................................................. 14 
Figure 2.5: Damper at boundary ........................................................................................................... 15 
Figure 2.6: Boundary configurations: (Kramer, 1996) ......................................................................... 16 
Figure 2.7: Variogram Example, ........................................................................................................ 22 
Figure 2.8: Variogram for Example (Figure 7) .................................................................................... 24 
Figure 2.9: Clustering of data for non uniform sample spacing ........................................................... 25 
Figure 2.10: Effect of data clustering (Isaaks and Srivastava, 1989) .................................................... 26 
Figure 2.11 Sampling when defining an omnidirectional variogram. .................................................. 27 
Figure 2.12: Rose Diagram for the Walker Lake data set (Isaaks and Srivastava, 1989) .................... 27 
Figure 2.13: Rose Graph Variograms (Isaaks and Srivastava, 1989) ................................................... 28 
Figure 2.14: Sill and Range definition ................................................................................................. 29 
Figure 2.15: Residual variations of SPT blow counts (Baecher and Christian, 2006) ......................... 32 
Figure 2.16: Nugget Effect. ................................................................................................................. 38 
Figure 2.17: Analytical variogram Functionals. .................................................................................. 39 
Figure 2.18: Range Anisotropy. ............................................................................................................ 40 
Figure 2.19: Number of Simulations required for achieving a target COV.............................47

vi
Index

Figure 3.1: Flow Diagram of the Proposed Methodology……………………………………………53


Figure 3.2: Haskell-Thomson model: layer definition .......................................................................... 54 
Figure 3.3: Small-Strain Damping Ratio for RC and RCTS testing (Darendeli and Stokoe, 2006) ..... 60 
Figure 3.4: Effect of Poisson ratio on wave velocities: P, S and Rayleigh (Ishihara, 1996) ................ 61 
Figure 3.5: Amplitude of Rayleigh wave with depth for a half space (Ishihara, 1996) ....................... 62 
Figure 3.6: Proposed domain subdivision for a normally distributed random variable ..... .................. 67
Figure 4.1: Simulation Base Model: ..................................................................................................... 71 
Figure 4.2: Acceleration response spectra for the Outcropping Ground Motion .................................. 73 
Figure 4.3: Acceleration time history for the Selected Outcropping Ground Motion .......................... 73 
Figure 4.4: Variogram for cases 1,2,3,5................................................................................................ 74 
Figure 4.5: Horizontal Variogram for cases 1, 2, 3, 4 and 5 ................................................................. 75 
Figure 4.6: Isogeometrical View of Randomly Generated Shear Wave Velocity ................................ 75
Figure 4.7: Contour plot on figure 4.6. ................................................................................................. 76 
Figure 4.8: Random Field without a correlation structure .................................................................... 76
Figure 4.9: Samples to set Case 1. Borehole close to the mid field ...................................................... 77 
Figure 4.10: Kriging estimates case 1 ................................................................................................... 77 
Figure 4.11: Kriging Variance of Estimates, Case 1 ............................................................................ 78
Figure 4.12: Relative Error (in %), defined as the ratio of absolute error to simulated value, case 1 ... 78 
Figure 4.13: Indicator Distribution case 1 ............................................................................................ 79 
Figure 4.14: Sampling at extreme boreholes case 2 ............................................................................. 80 
Figure 4.15: Kriging interpolated values, Case 2 ................................................................................. 80 
Figure 4.16: Kriging Variances Case2 .................................................................................................. 81 
Figure 4.17: Relative errors (in %) Case2 ............................................................................................ 81 
Figure 4.18: Standard Error Case 2 ...................................................................................................... 82 
Figure 4.19: Sample data (shear velocity) for three boreholes, ............................................................ 82 
Figure 4.20: Kriging estimates case 3......................................................................................83
Figure 4.21: Kriging variance case 3 .................................................................................................... 83 
Figure 4.22: Relative error Case 3 ........................................................................................................ 84 
Figure 4.23: Standard error Case 3 ....................................................................................................... 84 
Figure 4.24: Random field simulation case 4 (shear wave velocities in m/s)...........................85
Figure 4.25: Borehole Samples ............................................................................................................. 87
Figure 4.26: Kriging values, case 4 (shear wave velocity in m/s) ........................................................ 86 
Figure 4.27: Kriging variances case 4......................................................................................88
Figure 4. 28: Relative Error, Case 4 ..................................................................................................... 88 
Figure 4.29: Standard error, case 4 ....................................................................................................... 88 
Figure 4.30: Random Field Case 5 ....................................................................................................... 89 


Figure 4.31: Kriged values, case 5 ........................................................................................................ 89 


Figure 4.32: Kriging Standard deviation, Case 5 .................................................................................. 90 
Figure 4.33: Relative error case 5 ......................................................................................................... 90 
Figure 4.34: Standard error, case 5 ....................................................................................................... 91
Figure 5.1: Convergence of Simulations, Case 1 .................................................................................. 92 
Figure 5.2: Convergence of simulations, Cases 2, 3 and 4. .................................................................. 93 
Figure 5.3: Convergence for Mean Responses for case 5 ..................................................................... 94 
Figure 5.4: Convergence of Standard deviation for cases 1,2,3,4, and 5. ............................................. 95 
Figure 5.5: Mean Response Spectra for case 1.. ................................................................................... 96 
Figure 5.6: Coefficient of variation COV of Spectral Acceleration for case 1. .................................... 97 
Figure 5.7: Spectral Acceleration, with Variability computed for case one. ........................................ 98 
Figure 5.8: Mean response spectra for cases 2, 3, 4,5 ....................................................................... 100 
Figure 5.9: Standard deviation for cases 2 and 3. . ............................................................................. 101 
Figure 5.10: COV for maxima and mean surficial response. Case 4 ................................................. 102 
Figure 5.11: Standard deviation of maxima and mean spectral response. ......................................... 102 
Figure 5.12: COV case 5. Maxima and mean surficial response. ....................................................... 103 
Figure 5.13: Mean and standard deviation of response for cases 1,2,3,4 and 5. ................................. 105 
Figure 5.14: Mean spectral acceleration response of maximum values at surface, for three sampling
strategies  ............................................................................................................................................ 107 
Figure 5.15: Standard deviation of spectral acceleration response of maximum values at surface for
three sampling strategies. .................................................................................................................... 107 
Figure 5.16: Plus and minus one standard deviation acceleration spectral response of maxima values
at surface. ........................................................................................................................................... 108 
Figure 5.17: Mean response of maxima surficial spectral acceleration. Cases 2 and 4. .................... 109 
Figure 5.18: Standard deviation of maxima surficial spectral acceleration. Cases 2 and 4.. .............. 109 
Figure 5.19: One plus and minus standard deviation envelopes for maximum spectral acceleration at
surface........, ........................................................................................................................................ 110 
Figure 5.20: Mean Response of maxima spectral acceleration on surface. For cases 2 and 5............ 111 
Figure 5.21: Standard deviation of maximum spectral acceleration at surface. Cases 2 and 5. ......... 111 
Figure 5.22: Plus and minus one standard deviation envelopes of maximum surficial spectral
acceleration.  ....................................................................................................................................... 112 


LIST OF TABLES

Page
Table 2.1: Variogram Example ............................................................................................................. 23
Table 2.2: Factors for simulation of non-normal random variables ..................................................... 45 

Chapter 1: The need of Two Dimensional Site Stochastic Seismic Response Assessment

1 INTRODUCTION: THE NEED OF TWO-DIMENSIONAL
SITE STOCHASTIC SEISMIC RESPONSE ASSESSMENT

Earthquakes are among the most destructive and disruptive natural phenomena, as they occur
without warning, releasing huge amounts of energy in a few seconds. In 2000 Münchener Rück,
one of the biggest insurance companies worldwide, assessed the economic and human losses
due to natural disasters over the past millennium, focusing especially on the last half of
the 20th century. Estimates showed that earthquakes were the most destructive and life-threatening
of all natural hazards in that period, accounting for 47% of deaths
(seven hundred million) and 35% of all economic losses (300 billion dollars at 2000
monetary value) (TOPICS, 2000).

Even so, the biggest economic loss reported for a natural catastrophe occurred after the
Kobe earthquake, reaching more than 100 billion dollars at 2000 estimates; this loss is
comparable to the reported damage after Hurricane Katrina. The Northridge earthquake caused
the biggest insured economic loss due to an earthquake, reaching 15 billion (2000 estimates);
still, it was a moderate event.

In the third world, on the contrary, human losses are still significant. For example, the death
toll of the Kashmir earthquake surpassed 88000 and economic losses reached 5 billion dollars (2005
numbers), obliterating a whole generation at certain places. Unfortunately, insurance coverage
was low, so compensation was minimal, leaving thousands with nothing, dependent on local
and international aid. A similar overall situation can be observed in the 2001 India
earthquake (20000 deaths) and the 2003 Bam earthquake (26000 deaths).


In general it is observed that fatalities have decreased in highly developed countries, where
sound earthquake-resistant structural design codes and specifications, aimed at protecting life,
have been enforced through the use and demand of strict building practices; but despite this
achieved objective, economic losses have skyrocketed as society gets richer. In low- and
middle-income countries casualties, as explained, are still widespread.
e

Several reasons explain why economic losses due to earthquakes have risen so significantly:
general growth of population; concentration of inhabitants and assets in highly exposed cities;
and the use of modern, sometimes extremely vulnerable, high technology, leading to a new
loss potential a major order of magnitude higher. For example, for an earthquake scenario based
on the 1923 Kanto earthquake, losses at the present time would go beyond 1000 billion dollars
(2006 numbers) (Smolka, 2007).

Therefore a multiple-objective hazard mitigation scheme should be adopted, as just
meeting life protection might not be enough. This has given rise to Performance-Based
Design, in which design targets are set for different threat levels, requiring assessment for
different scenarios:

Figure 1.1: Performance-Based Design Framework (Christopoulos and Filiatrault, 2006)

Therefore, in such schemes the hazard has to be defined not just for a single scenario; several
targets with different probabilities of occurrence have to be considered. Hence a complete
methodology to define the seismic hazard at a site, in a systematic and consistent way, is
needed. This framework has also to be flexible, so different scenarios can be addressed.


Then, while for the life-protection level a close event might represent the most critical threat,
since damage tolerance is high at that performance level, for low hazard levels a different
scenario might lead to a higher damage expectation: distant, moderate earthquakes with a
distinct low-frequency content might induce sizable damage if local geotechnical conditions
are prone to amplification.

Earthquake ground motion is complex; the mechanical energy produced at the fault rupture is
modified during its travel through the Earth's crust before reaching the site. Energy is spread
over a growing volume (geometric attenuation) and lost through the damping behavior of
geologic materials (material damping). As is known, energy decay (for viscous damping) is a
function of the number of cycles; therefore, the higher-frequency components of the shaking
are attenuated more than the low-frequency, long-period motion.
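As an illustrative sketch (not part of the original derivation), the standard low-damping viscoelastic attenuation model, with spatial coefficient α = 2πfξ/Vs applied as exp(−αx), makes this frequency dependence explicit; the frequencies, damping ratio, velocity and travel distance below are purely hypothetical values:

```python
import math

def damping_attenuation(f_hz, damping_ratio, vs_mps, distance_m):
    """Amplitude reduction factor exp(-alpha*x), with alpha = 2*pi*f*xi/Vs
    (low-damping approximation for material damping in a viscoelastic medium)."""
    alpha = 2.0 * math.pi * f_hz * damping_ratio / vs_mps
    return math.exp(-alpha * distance_m)

# Over the same 1 km path (Vs = 1000 m/s, 2% damping), a 10 Hz wave
# loses far more amplitude than a 1 Hz wave:
print(round(damping_attenuation(1.0, 0.02, 1000.0, 1000.0), 3))   # -> 0.882
print(round(damping_attenuation(10.0, 0.02, 1000.0, 1000.0), 3))  # -> 0.285
```

Because α grows linearly with frequency, the same path length filters out the high-frequency content while leaving the long-period motion almost intact.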

Not just the amplitude of the motion changes, but also its frequency content, which might
lead to a dramatic modification of the seismic input at the surficial soil layers, where precisely
a strong change in the mechanical properties of the propagating medium is observed.


Figure 1.2: Seismic wave propagation to the site (Kramer, 1996)

A common approach to deal with these complex phenomena is through attenuation
relationships: nonlinear regressions fitted to strong-motion instrumental parameters computed
from recorded motions, normally spectral ordinates. Coefficients of variation (COV) for
"rock" sites, in which specific site properties are neglected, can reach 50% of the mean value
of PGA (the spectral acceleration at period zero) for a given magnitude and rupture distance
(e.g. Abrahamson and Silva, 1997). If the variability due to magnitude and distance is also
considered, COV values can reach 150% (Arroyo and Sanchez-Silva, 2005).
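To put such numbers in perspective: attenuation relationships usually model spectral ordinates as lognormal for a given magnitude and distance, so the COV follows directly from the logarithmic standard deviation via COV = sqrt(exp(σ²) − 1). A minimal sketch, with illustrative σ values not taken from any specific relationship:

```python
import math

def lognormal_cov(sigma_ln):
    """COV of a lognormally distributed quantity with
    log-standard deviation sigma_ln (natural-log units)."""
    return math.sqrt(math.exp(sigma_ln ** 2) - 1.0)

# A typical aleatory sigma of ~0.5 (ln units) already implies a COV near 53%:
print(round(lognormal_cov(0.5), 2))  # -> 0.53
# and sigma ~0.7 pushes it to roughly 80%:
print(round(lognormal_cov(0.7), 2))
```

This is why even a modest scatter in the logarithmic residuals of a regression translates into COV values of the order quoted above.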

Normally, attenuation relationships include spectral ordinates for soil sites, but based on a
coarse site classification; and because they are computed from recorded motions, they cannot
take into account specific site features, such as the properties observed at the site.

As seismic waves propagate into softer materials, they are progressively refracted toward the
vertical, until close to the analysis site they can be approximated as vertically propagating
plane shear waves. P-waves can be discarded, as the energy is allocated mostly to the
distortional S-waves. In this way a one-dimensional analysis can be performed, as long as the
geometry and the material properties of the site are homogeneous.


Soil mechanical properties, which define how seismic energy manifests itself at the surface,
are subject to sizable variability. Coefficients of variation of 40% to 50% are not uncommon,
one order of magnitude above the values observed in engineered building materials; still, they
are low compared with the variability of the seismic input.

Soil properties affect site response in two ways. First, they amplify the ground motion as the
seismic waves travel from stiffer to softer material (even at a rock site an amplification factor
of two is obtained due to the free-surface effect).

Second, they amplify waves whose energy content is compatible with the natural period of
vibration of the layer system on which the site is founded, in a similar fashion as a
single-degree-of-freedom oscillator does in a response-spectrum calculation.
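The resonance just described can be sketched with the classical closed-form result for a uniform elastic layer over rigid bedrock, T_n = 4H / (Vs (2n - 1)); the layer thickness and shear wave velocity below are purely illustrative, not taken from the cases analysed later:

```python
def natural_periods(thickness_m, vs_mps, n_modes=3):
    """Natural periods T_n = 4*H / (Vs*(2n-1)) of a uniform elastic soil
    layer over rigid bedrock (classical 1D shear-beam solution)."""
    return [4.0 * thickness_m / (vs_mps * (2 * n - 1))
            for n in range(1, n_modes + 1)]

# A 30 m soft deposit with Vs = 200 m/s resonates at 0.6 s, 0.2 s, 0.12 s:
print(natural_periods(30.0, 200.0))  # -> [0.6, 0.2, 0.12]
```

Ground motion with significant energy near these periods is selectively amplified, which is why the frequency content of the input matters as much as its amplitude.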

Therefore, site amplification can lead to complex shaking patterns, and if the variability of the
soil properties is included, the response at the surface becomes more uncertain, as it is
influenced not just by the gross values of the properties but also by how they are distributed
below. In a 1D analysis, laterally infinite layers are considered, and their thickness and
arrangement define how the motion is transferred to the surface.

If inelasticity is considered, the magnitude of the shaking and its frequency content can also
affect the response: a change in the mechanical properties of the soil layers due to the induced
stresses might of course lead to a different response from the elastic case taken as reference.

The spatial distribution of damage in the Kobe and Northridge earthquakes shows the
dependence of ground shaking on the geometry of the underlying rock. Such amplifications,
due to the distribution of the stiffer underlying base material, are called basin effects, and can
be sizable enough to offset the one-dimensionally computed response, as shown in figure 1.3:

Figure 1.3: Damage Belt in Kobe due to the 1995 Earthquake (Kawase, 1996)


During the 1995 Kobe earthquake, damage-assessment field teams noticed a strange
distribution of damage occurrences within a region a few tens of meters wide, which
followed the outcropping of an underlying granite formation: 30% of the damage to the
residential building stock occurred in this relatively thin strip. Further research led by Kawase
(1996) showed that this definite pattern was caused by the direct interference of upward-travelling
shear waves within the soft deposit on which the city is built with Rayleigh waves
generated at the interface between the granite formation and the surface.

The same effect of a distinctly spatially distributed damage trend was also observed after the
Northridge earthquake, around the Hollywood foothills; fortunately, the consequences were
milder. In any case, observing two distinct basin-effect phenomena at two distinct locations of
the globe shows that basin effects are not marginal. Furthermore, research shows that the 1D
modeling hypotheses are quite artificial and have limited applicability. Therefore at least a 2D
analysis should be performed to incorporate such basin effects into a site-specific assessment.

Spatial variability is not built just on the rock base; for instance, Baecher and Christian
show the following profile of a site at a location in Massachusetts:

Figure 1.4: Permeability profile of a relatively homogeneous glacial deposit near Chicopee,
Massachusetts (Baecher and Christian, 2003)

5
Chapter 1: The need of Two Dimensional Site Stochastic Seismic Response Assessment

The site depicted is considered homogeneous. As shown, the idea of setting strata following
the surface is, at best, a rough approximation. Moreover, such maps are made by taking point
estimations and then trying to delineate regions over which measurements are expected to have
comparable values. So the spatial distribution of soils (and therefore of their mechanical
properties) is not geometrically homogeneous; it is also unknown, as it is only measured at
specific points and then "interpolated".

Therefore, in order to follow real trends and situations, it is reasonable to consider a
stochastic model for both the spatial distribution and the value of the soil properties within
a site response hazard assessment. Such a characterization requires at least a two dimensional
analysis, as the hypothesis of layers parallel to the surface is not compatible with the
overall spatial randomness.

In order to perform such analysis in an efficient and systematic way, a methodology to
estimate two dimensional site response taking into account the spatial variability of soil
mechanical parameters is mandatory; this is the main objective of this work.

Summarizing, the following topics have to be addressed:

• On site earthquake hazard assessment in seismic shaking prone regions is a major
concern, as such events are overall the most life threatening and costly of natural
hazards;

• Economic losses have increased noticeably in recent years, and are observed even
for mild shaking, as societies around the globe become more advanced and
technological, requiring hazard to be addressed at several levels. The consequences of
an earthquake may now become global, as markets and trading are more interconnected.
Therefore, a multiple target hazard assessment has to be performed;

• Hazard is influenced by local geotechnical conditions. Uncertainty in the mechanical
parameters of the founding soil is comparably high when contrasted with other
engineered materials; therefore deterministic analyses are limited, and ways to include
this uncertainty must be formulated;

• Observed basin effects, varied geometrical conditions and the failure to tackle spatial
variability of the underlying soil (as layers have to be defined in one dimensional site
assessment) call for 2D stochastic site hazard assessment.

Taking into account the needs outlined above, the main objective of this work is to set up a
methodology to perform 2D stochastic hazard assessment of a site, considering the spatial
variability of its underlying soil. In order to accomplish this task, the following steps are
required:


1. Analysis of field data to infer the spatial structure of the soil deposit where the site is
founded.

2. Definition of at least a two dimensional model of the site.

3. Interpolation of measured soil parameters at unsampled locations, required to completely
set the model selected in step 2.

4. Stochastic analysis of the model tailored in steps 2 and 3.

5. Characterization of surface response (at least in terms of the first statistical moments,
mean and standard deviation).

These steps can be performed using a wide range of methods, as diverse as finite elements and
boundary elements. Some steps might be performed together, as happens, for example, in
stochastic finite elements. Specifically, in this work the general steps are accomplished by
employing the following tools:

1. Analysis of field data, using descriptive statistics, specifically geostatistical tools.

2. Definition of a two dimensional finite element mesh.

3. Ordinary Kriging interpolation at elemental centroid points, based on field shear wave
velocity measurements.

4. Stochastic analysis performed using Montecarlo simulation and the First Order Second
Moment (FOSM) method to compute the mean and standard deviation of surface response, taken as
the mean and envelope of maxima of spectral acceleration values.
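As an illustration of step 4, a minimal FOSM sketch is given below. FOSM propagates the first two moments of the inputs through a first order Taylor expansion of the response; the response function g, the two-parameter input and all numerical values are hypothetical stand-ins, not the actual surface response model used in this work, and the inputs are assumed uncorrelated.

```python
import numpy as np

def fosm(g, mean, std, h=1e-3):
    """First Order Second Moment estimate of the mean and standard
    deviation of a response g(x), given the means and standard deviations
    of the input parameters (assumed uncorrelated in this sketch)."""
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    g0 = g(mean)                      # first order mean: g evaluated at the means
    var = 0.0
    for i in range(mean.size):
        x = mean.copy()
        dx = h * max(abs(mean[i]), 1.0)
        x[i] += dx
        grad_i = (g(x) - g0) / dx     # forward finite difference dg/dx_i
        var += (grad_i * std[i]) ** 2
    return g0, np.sqrt(var)

# hypothetical response: a quantity inversely proportional to the
# shear wave velocities of two layers (purely illustrative)
g = lambda v: 1000.0 / v[0] + 500.0 / v[1]
m, s = fosm(g, mean=[200.0, 400.0], std=[40.0, 60.0])
```

The Montecarlo estimate of the same two moments is obtained instead by sampling the inputs many times and averaging; FOSM requires only one gradient evaluation, which is why both are compared later in this report.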

This process is done for a single strong motion record; therefore, conditional expectations
are calculated, and such values must be integrated to obtain a complete stochastic site
analysis, which incorporates the variability of the input outcropping rock motion.

Development of the framework on which the proposed methodology rests has given rise to a set
of computer applications in Visual Basic© and Matlab© that perform most of the tasks
automatically; where the user requires support, specific routines have also been implemented,
especially for the analysis of field data.

Analysis of field data can be performed using a set of tools included in the application. It
has not been automated, as judgment in this step is critical to achieve representative results
for the surface response descriptors. This issue will be discussed extensively in the next
sections of this report, alongside the details of the interpolating scheme (kriging) and the
stochastic tools implemented (FOSM and Montecarlo).


A single overall numerical simulation, following a simple geometry set to match general code
requirements, has been performed mostly for illustrative purposes. Several properties of the
same scheme were changed to illustrate the flexibility of the proposed methodology: different
sampling strategies, different modeling approaches and, finally, two randomly generated
artificial fields were considered, all cases limited by the same general geometrical
constraints. In total, 5 specific analyses were performed.

This report follows the framework of this methodology. In the second chapter, Fundamentals,
the overall theoretical background required to understand the geostatistical tools, Ordinary
Kriging, Montecarlo simulation and FOSM procedures, is presented. In the third chapter, the
methodology is described in depth, alongside the computer applications developed to implement
it; also, the particular hypotheses taken along its development are summarized.

In chapter four, the numerical simulation is described. The input parameters taken into
account are explained, and support for their values is given. This is done with the aim of
introducing future users to the application, and also of exposing the fundamental criteria
related to the selection of such values.

In the last sections of chapter four, and throughout chapter five, the results of the
simulation are shown. First, an assessment of the interpolation scheme is performed, as
ordinary kriged and simulated values are compared for different samplings alongside mesh
densities and site variability. Afterwards, results for the maxima envelope (the collection of
the maximum values at selected locations for periods between 0 and 4 seconds) and the mean
surface spectral acceleration response are shown for each case considered; among the results,
an assessment of the Montecarlo simulation was also performed, and its results compared with
the FOSM estimates.


2 FUNDAMENTALS
2.1 Finite Element Analysis for 2D Site Response Assessment
The fundamental idea of finite element analysis is subdividing the continuum into constitutive
elements defined through specific nodes, and then solving the equations of motion to find a
set of compatible displacements at those nodal locations. Afterwards, displacements along the
continuum are estimated using an interpolating function which sets the interpolated
displacements u based on the nodal values un, according to equation 2.1:

u = N un   (2.1)

where N is the matrix of interpolating (shape) functions.
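As a sketch of equation 2.1, the bilinear shape functions of the 4 node quadrilateral used later in this chapter can interpolate any nodal quantity; the counterclockwise node ordering from the lower left corner is an assumed convention, not prescribed by the text.

```python
import numpy as np

def shape_functions(s, t):
    """Bilinear shape functions N_i(s, t) of a 4 node quadrilateral,
    natural coordinates s, t in [-1, 1], nodes ordered counterclockwise
    from the lower left corner (an assumed convention)."""
    return 0.25 * np.array([(1 - s) * (1 - t),
                            (1 + s) * (1 - t),
                            (1 + s) * (1 + t),
                            (1 - s) * (1 + t)])

# u = N un: the displacement anywhere in the element from the nodal values
un = np.array([0.0, 1.0, 1.0, 0.0])            # illustrative nodal displacements
u_center = shape_functions(0.0, 0.0) @ un      # interpolation at the centroid
```

Each Ni equals one at its own node and zero at the other three, and the four functions sum to one everywhere, so the interpolation reproduces constant fields exactly.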

Virtual work analysis can be employed to achieve this goal. In such case, a set of "virtual
displacements", which do not change the equilibrium state of the system, is applied. Since
equilibrium is kept, the internal and external work of those actions have to be the same.
For the internal elastic stresses, the work done by such a set of virtual displacements is
given by:

Wi = ∫V δε^T σ dV   (2.2)

Wi denotes the internal work done on the element, σ is a vector depicting the stress state and
δε is the virtual strain state; the integral is performed over the volume V of the element.
Notice that the stresses are due to the loads applied to the system before the imposed
"virtual" displacements, while the strains are generated by the virtual displacements
themselves.

Through geometric considerations, a relationship between the strains ε and the nodal
displacements can be formulated. For example, the axial strain, as a first order approximation
(small displacement theory), is defined as:

εxx = ∂u/∂x = (∂N/∂x) un   (2.3)

Notice that the nodal displacements are independent of the x coordinate. Following the same
framework, expressions for the other strains (shear and axial) can be stated. In general, a
matrix relationship holds:

ε = B un   (2.4)

If a constitutive law is implemented, stresses and strains are uniquely related:


σ = D ε   (2.5)

where D is a matrix which defines the constitutive law for each stress and strain component.
Now, if the constitutive and geometrical expressions are considered together, equation (2.2)
can be rewritten in matrix notation:

Wi = δun^T ( ∫V B^T D B dV ) un   (2.6)

Again, notice how the stresses relate to the actual displacement field while the strains are
due to the imposed virtual displacements. Now, if a regular structural system with a lumped
stiffness matrix is subjected to the same "virtual displacement actions", the following
expression for the internal energy is found:

Wi = δun^T K un   (2.7)

where K is the stiffness matrix of the lumped regular structural system and, again, un denotes
the nodal displacements. Now, by comparing equations 2.7 and 2.6, an expression for the
stiffness matrix associated with the continuum can be stated:

K = ∫V B^T D B dV   (2.8)

Notice that this integral is performed on the Cartesian domain, which might lead to a
difficult evaluation. To overcome this inconvenience, an isogeometrical representation can be
defined, as shown in the figure below (Kramer, 1996) for a two dimensional space.

Figure 2.1 Isogeometrical Representation of a 4 node quadrilateral element (Kramer 1996)

This way, integrals are evaluated between unitary values, independently of the relative
position of the nodal point coordinates, on a regular geometry, as shown in figure 2.1. Of
course, to achieve this advantageous scheme it is necessary to transform the integration
variables using the Jacobian (J). The stiffness matrix is then given by (Kramer, 1996):

K = ∫∫ B^T D B |J| ds dt, with s and t running from −1 to 1   (2.9)

The determinant of the Jacobian (for the two sets of coordinates) is given by (Kramer, 1996):

|J| = Σi=1..4 Σj=1..4 xi (∂Ni/∂s ∂Nj/∂t − ∂Ni/∂t ∂Nj/∂s) yj   (2.10)

where Ni are the interpolating (shape) functions evaluated in the natural coordinates s and t.
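Equation 2.10 can be checked numerically. The sketch below evaluates the double sum via outer products, using the bilinear shape function derivatives of the 4 node quadrilateral (node ordering assumed counterclockwise from the lower left corner); for a rectangular element, |J| is constant and equal to one quarter of the element area.

```python
import numpy as np

def dN_ds(s, t):
    # derivatives of the four bilinear shape functions with respect to s
    return 0.25 * np.array([-(1 - t), (1 - t), (1 + t), -(1 + t)])

def dN_dt(s, t):
    # derivatives with respect to t
    return 0.25 * np.array([-(1 - s), -(1 + s), (1 + s), (1 - s)])

def det_jacobian(x, y, s, t):
    """|J| = sum_i sum_j x_i (dNi/ds dNj/dt - dNi/dt dNj/ds) y_j,
    written with outer products for compactness (equation 2.10)."""
    ds, dt = dN_ds(s, t), dN_dt(s, t)
    return x @ (np.outer(ds, dt) - np.outer(dt, ds)) @ y

# 2 m x 1 m rectangular element: |J| = area / 4 = 0.5 at any (s, t)
x = np.array([0.0, 2.0, 2.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
detJ = det_jacobian(x, y, 0.3, -0.7)
```

For distorted quadrilaterals |J| varies with (s, t), which is why equation 2.9 keeps it inside the integral.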


where x and y denote the Cartesian coordinates of each node. Equations 2.8 and 2.9 apply to
the most general three dimensional case if a Jacobian transformation in space is defined. It
is also possible to scale the problem back from a 3D formulation to a 2D analysis; two
approaches can be followed: plane strain and plane stress. In geotechnical problems, like the
one dealt with in this work, the first one is followed.

In such a situation a "slice" section is considered. A plane of the 3D continuum is taken as
the object of analysis, and the displacements in the out of plane direction are constrained by
the adjacent sections, allowing for a simpler formulation of the constitutive matrix D and of
the geometrical relationships that give rise to the geometrical matrix B. For small
displacement theory (first order derivative approximation) the following geometrical equations
hold (Kramer, 1996):


εxx = ∂ux/∂x

εyy = ∂uy/∂y   (2.11)

εxy = ∂uy/∂x + ∂ux/∂y
where εxx is the axial strain in the x direction, εyy is the axial strain in the y direction,
and εxy is the shear strain in the x-y plane; the subindices in the displacement u denote the
x and y directions. As stated before, to build the B matrix it is necessary to define an
interpolating function compatible with the boundary conditions; how this is done is beyond the
scope of this work, and the reader is referred to any finite element book. The constitutive
(stress-strain) relationships for the bidimensional case of an isotropic, homogeneous element
subjected to 2D plane strain are given below (Kramer, 1996):

    ⎛ λ+2μ   λ      0  ⎞
D = ⎜ λ      λ+2μ   0  ⎟   (2.12)
    ⎝ 0      0      2μ ⎠
where λ is the Lamé constant and μ is the shear modulus. The Poisson ratio is given by:

ν = λ / [2 (λ + μ)]   (2.13)
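Equation 2.13 and the standard elastic relations behind it are easy to verify numerically; a small sketch follows, with purely illustrative numerical values.

```python
def lame_constants(E, nu):
    """Lame constant and shear modulus from Young's modulus and Poisson ratio
    (standard isotropic elasticity relations)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def poisson_ratio(lam, mu):
    # equation 2.13: nu = lambda / (2 (lambda + mu))
    return lam / (2 * (lam + mu))

lam, mu = lame_constants(E=30e6, nu=0.3)   # illustrative stiffness values
nu = poisson_ratio(lam, mu)                # recovers the input Poisson ratio
```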

An important remark must be made about 2D plane strain analysis. Out of plane strains are
constrained, but that does not mean the out of plane stresses vanish. These actions do not
perform work and therefore can be ignored in the energy formulation, but they are indeed
required to make the out of plane strains null. They can be found as linear combinations of
the stresses along the x and y directions. The reader is referred to any finite element book
for further details on this issue.


A procedure similar to the one described above can be followed to establish a consistent mass
matrix, but most applications (including the finite element code employed in this work) follow
a lumped approach, in which masses are assigned to each node according to their tributary
area. In such a scheme, the interaction between neighboring nodes is neglected, resulting in a
diagonal matrix; this way, the analysis becomes simpler and less resource intensive. Even so,
Lysmer (Kramer, 1996) reports that consistent mass matrix analyses overestimate natural
frequencies compared to closed-form continuum solutions, while the lumped mass approach
underestimates them by roughly the same factor. Therefore, he proposed taking a "mixed
matrix", which is simply the average of the consistent and lumped mass matrices (Kramer, 1996).

Damping proves to be more problematic than the stiffness and mass definitions in 2D site
response assessment. The major contribution to energy dissipation is due to hysteresis, which
is taken into account by changes in the material constitutive law as strain develops and
reverses. But experimental evidence shows that damping, although small (generally less than
1.5% of the critical value), is nonzero even for low strain values (Stokoe et al, 1999). Also,
adding a small damping value is desirable to achieve a numerically stable solution of the
equation of motion (Kramer, 1996), so it is common practice to set a low value even for
elastic analyses.

In order to allow the modal decoupling of the equations of motion for each node, it is
required that the damping matrix is a linear (or higher order, for Caughey damping)
combination of the mass and stiffness matrices. This way, the normal modes of vibration allow
for the resolution of the dynamic problem in generalized modal coordinates. This is the
fundamental idea of Rayleigh damping:

C = a0 M + a1 K   (2.14)

where C is the damping matrix, M is the mass matrix, K is the stiffness matrix and a0 and a1
are weighting factors. If modal decomposition (at mode n) is performed on equation 2.14, the
following expression is found:

ξn = a0 / (2 ωn) + a1 ωn / 2   (2.15)

This results from dividing by the nth modal mass Mn and using Kn = ωn² Mn, where ξn is the
fraction of critical damping for mode n and ωn is the circular frequency of mode n. Two
arbitrary constants are available, so equation (2.15) can be applied to two modes, defining
the following system of equations (Chopra, 1995):

⎛ ξ1 ⎞     1   ⎛ 1/ω1   ω1 ⎞ ⎛ a0 ⎞
⎜    ⎟  =  ─ ⋅ ⎜            ⎟ ⎜    ⎟   (2.16)
⎝ ξ2 ⎠     2   ⎝ 1/ω2   ω2 ⎠ ⎝ a1 ⎠

12
Chapter 2 Fundamentals

If the fraction of critical damping is set equal for both modes, the following values for the
constants are found by solving equation 2.16:

a0 = 2 ξ ω1 ω2 / (ω1 + ω2)   (2.17)

a1 = 2 ξ / (ω1 + ω2)

The critical issue now is the selection of the frequencies at which damping is set. If
equation (2.16) is reviewed, it is possible to observe damping lower than the set value ξ for
frequencies between the two controlling ones; outside this range, overdamping occurs.
Therefore, it is common practice to set the weighting factors at the first natural period (the
longest one) and at a period several times smaller, beyond which the response is expected to
be unaffected.

Figure 2.2: Rayleigh damping (mass, stiffness and total contributions versus period). 5% of
critical damping set at 0.5 s and 1.5 s period values.
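Equations 2.15 to 2.17 can be condensed into a short routine that reproduces the behavior shown in figure 2.2: the target damping is recovered exactly at the two anchor frequencies, underestimated between them and overestimated outside.

```python
import numpy as np

def rayleigh_coefficients(xi, w1, w2):
    """Weighting factors of equation 2.17 for equal damping xi
    at circular frequencies w1 and w2."""
    a0 = 2.0 * xi * w1 * w2 / (w1 + w2)
    a1 = 2.0 * xi / (w1 + w2)
    return a0, a1

def damping_at(w, a0, a1):
    # effective damping ratio at any circular frequency (equation 2.15)
    return a0 / (2.0 * w) + a1 * w / 2.0

# 5% of critical damping anchored at periods of 1.5 s and 0.5 s,
# as in figure 2.2
w1, w2 = 2 * np.pi / 1.5, 2 * np.pi / 0.5
a0, a1 = rayleigh_coefficients(0.05, w1, w2)
```

Evaluating `damping_at` over a range of periods reproduces the total damping curve of the figure.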

2.1.1 Boundary definition for 2D site response assessment Using Finite Element Analysis
Another fundamental issue related to the use of finite element methods in the assessment of 2D
site response is the definition of the boundary between the half space and the analysis
domain. Normally, in structural analyses a fixed or pinned constraint is set, and in most
cases reasonable results are obtained by neglecting soil-structure interaction (Pender, 2007).

Unfortunately, for wave propagation phenomena this solution is not admissible, as it leads to
a situation called the box effect, which induces serious errors in a ground response or
soil-structure interaction analysis (Kramer, 1996).

Consider, for instance, a wave (shear or axial) propagating down a rod; if the wave meets a
fixed condition, it will be reflected back to the source, trapping the energy in the system:


Figure 2.3: Reflection of a pulse at a rigid boundary (incident pulse I, radiated pulse Rf,
reflected pulse Rr)

The reduced domain in our finite element analysis is limited by the rigid boundary, but the
rod extends infinitely. As this limit is artificial, the pulse should radiate away from the
domain into infinity (Rf); but due to the imposed rigid boundary, a reflected wave is
generated and the pulse travels back into the analysis domain (Rr), generating a spurious
response which is not present in reality.

One solution might be extending the analysis domain: if damping is included in the model, the
pulse amplitude diminishes with distance, and therefore the effects of the reflected wave
become smaller as the travel length increases.

Figure 2.4: Attenuation of a pulse with distance

This solution has a critical disadvantage: the domain has to be extended until the amplitude
of the travelling wave decays by a reasonable amount; due to the low damping values of soil,
together with the relatively high wave velocities (between 100 and 1000 m/s), the required
distance might be extremely long, imposing a sizable demand on resources and computational
capabilities for the finite element analysis. Another approach is to introduce an absorbing
element which can take the energy of the pulse, avoiding any reflection.

The stress in a 1D propagation situation (in this case, for example, an axial wave) is given
by:

σ = E ∂u/∂x   (2.18)

E is the Young modulus (transversal deformations are not considered), and the last term
denotes the first partial derivative of the displacement u along the x direction. Using the
chain rule for differentiation, the stress can be expressed as:

σ = E (∂u/∂t) / (∂x/∂t)   (2.19)


The partial derivative of the displacement u with time is the particle velocity V, while the
ratio ∂x/∂t is the wave propagation velocity c. The driving force Fi is equivalent to the
product of the area A and the stress σ:

Fi = A E V / c   (2.20)

Now consider a damper located at the boundary, as shown in figure 2.5. This element will
generate a reaction proportional to the velocity of the particle where it is attached, as
given by:

Fe = C V   (2.21)

Figure 2.5: Damper at the boundary

C (capital) is the damper constant. Now, if we equate the damper reaction Fe and the driving
force of the pulse Fi, it is possible to find the damping constant for which the reflected
pulse is avoided:

C V = A E V / c   (2.22)

Expressing the modulus in terms of the wave propagation velocity (E = ρ c²) and solving for C,
we find (Kramer, 1996):

C = ρ c A   (2.23)

where ρ is the soil density, c is the wave propagation velocity and A is the boundary area
tributary to the damper.
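In a mesh, equation 2.23 is applied node by node along the absorbing boundary; a minimal sketch, with illustrative soil values:

```python
def dashpot_constant(rho, c, area):
    """Absorbing dashpot of equation 2.23: C = rho * c * A."""
    return rho * c * area

# illustrative soil: density 1800 kg/m3, wave velocity 300 m/s,
# 1 m2 of boundary area tributary to the node
C = dashpot_constant(rho=1800.0, c=300.0, area=1.0)   # in N*s/m
```

For shear waves striking the boundary, c is taken as the shear wave velocity of the material just outside the domain.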

This scheme can be generalized to 2D propagation phenomena, but the value of the dashpot
constant required to achieve perfect absorption will depend on the angle of incidence of the
incoming wave (Kramer, 1996). As waves are likely to strike the boundary surface at different
angles, spurious reflections are unavoidable. Such a boundary condition is called a local
boundary.

Consistent boundaries, capable of absorbing energy from all kinds of surface and body waves,
have been developed through boundary element method formulations. These boundary elements
involve a complex assemblage of springs, dashpots and masses, as reported for example by Wolf
(Kramer, 1996).


Figure 2.6: Boundary configurations. A: zero displacement, reflective boundary; B: local
boundary; C: consistent boundary (Kramer, 1996)

2.1.2 Integration of the Equations of Motion


Another critical issue related to the assessment of 2D site response is the solution of the
equations of motion after the system has been modeled (by any method) and the boundaries have
been set properly.

Although a sizable number of integration and solution methodologies have been proposed to
tackle the equations of motion, just two schemes will be presented in this work, and only one
in depth. The first is the Nigam-Jennings method, which renders an exact solution and is
therefore worth attention; the second is the Newmark beta method, which is the most widespread
methodology employed to estimate the response of a system subjected to short, time varying
loading.

Nigam-Jennings assumes piecewise linear behavior between the time history steps of a recorded
event. Therefore, an exact solution of the equation of motion for a linearly changing load
with set initial values is developed; afterwards, the solution is applied for a given
interval, taking the conditions at the end of the previous step as initial values. This way,
the solution at the end of the interval is found analytically. The solution propagates in time
as the analysis is repeated for the next step, taking this time the displacement and velocity
vectors found as initial conditions (Clough and Penzien, 1993).

The Nigam-Jennings method is exact and analytical. No approximation is imposed beyond the
piecewise linear behavior between sampling points, which is a reasonable hypothesis (the exact
trend between them is not really known), and it is therefore the best method to analyze an
elastic system. The matrices and calculations are complex, but if the properties of the system
remain invariant with time, such computations have to be done just once, at the first step,
and are then reused as the solution is extended in time.

Unfortunately, if the system is nonlinear, meaning that its properties are not constant in
time, the parameters governing the solution of the equation of motion might change from one
interval to the next, making the method extremely inefficient.


The Newmark beta method is the most widespread methodology to solve the equations of motion.
It is an implicit formulation in which the acceleration inside a given interval is assumed
constant and equivalent to a weighted average of the values at its extremes. Therefore, the
displacements (u) and velocities (u̇) at the end of the interval i can be found using the
following relationships:

u(i+1) = u(i) + Δt u̇(i) + (½ − β) Δt² ü(i) + β Δt² ü(i+1)   (2.24)

u̇(i+1) = u̇(i) + (1 − γ) Δt ü(i) + γ Δt ü(i+1)

β and γ are the constants of the method; γ is usually taken equal to ½ and β less than or
equal to ¼. The method is unconditionally stable for the values ½ and ¼, so these are the most
common choice in nonlinear analysis. Now that the displacement and velocity vectors are known
at the end of the interval, the equation of motion can be solved; but there is a working
hypothesis on the acceleration, which is validated iteratively.

The procedure is the following (Otani, 2006):

1. Assume an acceleration vector ü(i+1) at the end of the analysis interval.

2. Evaluate the displacement and velocity at the end of the interval using equations 2.24.

3. Evaluate the damping (D) and resistance (R) force vectors for the displacements and
velocities estimated at the end of the interval, according to the hysteresis and damping
models set for the material (or element).

4. Solve the equation of motion at time i+1 to find the acceleration vector at the end of
the interval:

M ü(i+1) + D(i+1) + R(i+1) = −M e üg(i+1)   (2.25)

where e is a unit vector (which represents the excitation at the boundary) and the g
subindex denotes the acceleration at the basement.

5. If the computed acceleration vector differs from the assumed one by more than a given
tolerance, the value found in step 4 is set as input to step 1. The procedure is iterated
until convergence of the acceleration in step 4 is achieved.
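For a linear system, the iteration in steps 1 to 5 collapses to a single solve per step, since equations 2.24 can be substituted directly into the equation of motion. The single-degree-of-freedom sketch below follows that standard rearrangement (as in Chopra, 1995); the oscillator values in the example are illustrative only.

```python
import numpy as np

def newmark_sdof(m, c, k, ag, dt, gamma=0.5, beta=0.25):
    """Newmark beta integration of a linear SDOF oscillator
    m*u'' + c*u' + k*u = -m*ag(t). For a linear system the iteration
    of steps 1-5 reduces to one direct solve per step."""
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m
    c1 = m / (beta * dt**2) + gamma * c / (beta * dt)
    c2 = m / (beta * dt) + (gamma / beta - 1.0) * c
    c3 = (0.5 / beta - 1.0) * m + dt * (0.5 * gamma / beta - 1.0) * c
    keff = k + c1                      # effective stiffness of the implicit update
    for i in range(n - 1):
        p = -m * ag[i + 1] + c1 * u[i] + c2 * v[i] + c3 * a[i]
        u[i + 1] = p / keff
        v[i + 1] = (gamma / (beta * dt)) * (u[i + 1] - u[i]) \
            + (1.0 - gamma / beta) * v[i] + dt * (1.0 - 0.5 * gamma / beta) * a[i]
        a[i + 1] = (u[i + 1] - u[i]) / (beta * dt**2) \
            - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i]
    return u, v, a

# illustrative check: a constant base acceleration drives a heavily
# damped oscillator toward the static offset u = -m*ag/k
m_, k_ = 1.0, (2 * np.pi) ** 2        # period of 1 s
c_ = 2.0 * 0.7 * 2 * np.pi            # 70% of critical damping
u, v, a = newmark_sdof(m_, c_, k_, np.ones(2001), 0.01)
```

In the nonlinear case the resistance R depends on the current displacement, and the iteration of steps 1 to 5 (or an equivalent Newton scheme) cannot be avoided.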

2.2 Random Field Fundamentals


A random field is a parametered family of random variables, the parameter(s) being
S-dimensional and belonging to an indexing set. If the dimension of the parameter is one, the
random field is a random process (Conte, 2006).

Therefore, a random field is formally defined through the joint probability distribution of
its constitutive random variables Z (Baecher and Christian, 2003):

F(z1, …, zn; x1, …, xn) = P[Z(x1) ≤ z1, …, Z(xn) ≤ zn]   (2.26)


This joint probability function describes the simultaneous variation of the random variables Z
within the S-dimensional space x. In the framework set for this work, the random variables Z
represent shear wave velocities at set locations in a specific site. The shear wave velocity
can be sampled at any specific point inside the field, which can be represented by a set of
Cartesian coordinates; therefore, the shear wave velocity at any location can be parameterized
by the pair of Cartesian coordinates x = (x,y) of the point where it is considered.

Shear wave velocities can be observed anywhere in the site, as a continuum, or can be taken at
representative locations, as done in the present study. The collection of points where the
shear wave velocity is considered, along with their parameter coordinates, defines the random
field of shear wave velocity associated with the site.
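This definition can be made concrete with a small numerical sketch: the points of a grid are taken as the parameter locations x, and one joint Gaussian realization of the field is drawn via a Cholesky factorization of the covariance matrix. The mean, standard deviation and exponential correlation model with its length a are illustrative assumptions, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# grid of points where the shear wave velocity is considered
# (the parameter x of the field); a 5 x 5 grid with 10 m spacing
xs = np.array([[i * 10.0, j * 10.0] for i in range(5) for j in range(5)])

mean, sigma, a = 300.0, 50.0, 25.0          # illustrative Vs statistics (m/s)
d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=2)
cov = sigma**2 * np.exp(-d / a)             # assumed exponential autocovariance

# one joint realization of the Gaussian random field via Cholesky
L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(xs)))
vs = mean + L @ rng.standard_normal(len(xs))
```

Each run of the last line produces a different but equally plausible spatial pattern of Vs, which is exactly the input a Montecarlo analysis of surface response consumes.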

Several moment functions of various orders can be defined for the random field; for example:

μZ(x) = E[Z(x)]   (2.27)

is the first order moment (mean function); in the framework of this work, it is the mean shear
wave velocity at a specific location x (including both x and y coordinates).

RZZ(x1, x2) = E[Z(x1) Z(x2)]   (2.28)

is the second order moment, or autocorrelation function; in this specific study, it is the
expectation of the product of the shear wave velocities at two sites. Further moments can be
calculated but, in general, the mean and autocorrelation functions are the most widely used.
Central moments can also be defined:

CZZ(x1, x2) = E[(Z(x1) − μZ(x1)) (Z(x2) − μZ(x2))]   (2.29)

Equation 2.29 shows the autocovariance function of the random field (the covariance between
two random variables located at points x1 and x2); again recalling the framework of this work,
it is the covariance between the shear wave velocities at two locations inside the site (x1
and x2). Solving the expected value yields (Conte, 2006):

CZZ(x1, x2) = RZZ(x1, x2) − μZ(x1) μZ(x2)   (2.30)

If the autocovariance function is evaluated at the same location (x1 = x2 = x), the variance
of the shear wave velocity at x is found. This is named the variance function σZ²(x).

Also, it is possible to define an autocorrelation coefficient function:

ρZZ(x1, x2) = CZZ(x1, x2) / (σZ(x1) σZ(x2))   (2.31)

which represents the linear correlation of the shear wave velocities at two distinct sites (at
the same place it is equal to one). Another descriptor, widely used in random field
applications in geological and geotechnical situations, is the variogram function. In its most
general form it is given by:


γ(x1, x2) = E[(Z(x1) − Z(x2))²] / 2   (2.32)

Solving the quadratic term yields:

γ(x1, x2) = ½ [RZZ(x1, x1) + RZZ(x2, x2)] − RZZ(x1, x2)   (2.33)

The variogram function has an advantage over other descriptors such as the autocorrelation
and, mostly, the autocovariance: its evaluation does not require any prior knowledge of the
distribution of the random variables of the field, and it can be performed directly from
measured data through a simple equation, the average of the squared difference of the sampled
values at two locations. This explains its wide use.
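The "simple equation" above, half the average squared difference of sampled values grouped by separation distance, is the experimental variogram; a minimal sketch follows, where the sample coordinates and values are made up for illustration.

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Experimental variogram: half the average squared difference of the
    sampled values, grouped into separation distance bins."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    dist, halfsq = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dist.append(np.linalg.norm(coords[i] - coords[j]))
            halfsq.append(0.5 * (values[i] - values[j]) ** 2)
    dist, halfsq = np.array(dist), np.array(halfsq)
    which = np.digitize(dist, bins)
    return np.array([halfsq[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

# made-up samples along a line, 1 m apart
gamma_hat = empirical_variogram(coords=[[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
                                values=[0.0, 1.0, 2.0],
                                bins=[0.5, 1.5, 2.5])
```

In practice a parametric variogram model is then fitted to these binned estimates, and that model, not the raw points, is what kriging uses.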

Although the general formulation presented in this section is arid, as it is expressed in
abstract terms, it is not restrictive. This research is intimately bound to shear wave
velocity, but random fields can also be employed to represent any other mechanical property of
the soil, for example shear resistance and permeability, among others; even non-mechanical
descriptors can be considered, as done in several environmental studies where the
concentration of chemical waste and life threatening contaminants is addressed using random
field theory.

2.2.1 Stationarity – homogeneity of a random field


A random field (and therefore a random process) is stationary in a strict sense if its
complete probabilistic structure (or description) is independent of the absolute parameter
origin (Conte, 2006). Therefore, if a shift in space is applied to the random variables, for
example sampling a distance δ further than the locations at which samples were initially
taken, their partial distributions have to remain equal, as stated in equation 2.34, where
just the first two distribution orders are shown:

F(z; x) = F(z; x + δ)   (2.34)

F(z1, z2; x1, x2) = F(z1, z2; x1 + δ, x2 + δ)

For the first order distribution, all variables share the same distribution, while for the
second order, the joint distribution of two (or more) variables depends only on their
separation distance.

For 2D surface response assessment, this implies that the shear wave velocity has the same
distribution at all points within the site. Also, the joint distribution of the shear wave
velocities at two points depends only on how far apart the points are, and not on their
specific locations. If stationarity can be verified just for the first two distribution
descriptors (i.e. the distribution of a single random variable, and the joint distribution of
all possible pairs of random variables of the field), the field is said to be weakly
stationary.


Another precision has to be made regarding stationarity. Originally this concept was developed
for random processes, as a topic of random vibration theory. If it is applied to a spatial
descriptor instead of a temporal one, the field is said to be homogeneous instead, but
conclusions derived for random processes are equally valid (with limitations) for spatial
random fields.

One of the immediate consequences of stationarity is that the mean, mean square, variance and
standard deviation are altogether independent of time (Newland, 1993) for random processes, or
of location for homogeneous fields, and therefore become properties of the random field.
Applying this concept to the autocovariance and variogram functions (equations 2.30 and 2.33),
the following is observed:

CZZ(d) = RZZ(d) − μZ²   (2.35)

γ(d) = RZZ(0) − RZZ(d)   (2.36)

where d is the distance between the locations x1 and x2. If the autocorrelation function is
solved for in equation 2.35 and the result is replaced in equation 2.36, a relationship
between the autocovariance and variogram functions is obtained:

γ(d) = CZZ(0) − CZZ(d)   (2.37)

But the first term is the variance of the random field (common to all random variables within
it):

γ(d) = σZ² − CZZ(d)   (2.38)
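Equation 2.38 links any admissible autocovariance model to a variogram model; a sketch using the common exponential model, where the sill and correlation length values are illustrative assumptions:

```python
import numpy as np

def exponential_covariance(d, sigma2, a):
    """Exponential autocovariance model, C(d) = sigma^2 * exp(-d / a)."""
    return sigma2 * np.exp(-d / a)

def exponential_variogram(d, sigma2, a):
    # equation 2.38: gamma(d) = sigma^2 - C(d)
    return sigma2 - exponential_covariance(d, sigma2, a)

d = np.linspace(0.0, 100.0, 11)
gam = exponential_variogram(d, sigma2=2500.0, a=20.0)   # e.g. Vs variance in (m/s)^2
```

The variogram rises from zero at d = 0 toward the sill σZ² as the correlation decays, mirroring the autocovariance.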

The variogram function has several advantages with respect to the autocovariance: its
calculation does not involve any knowledge of the mean value of the random field, and can be
performed without any hypothesis on the field data. Its calculation is more straightforward,
as it is just the squared difference of the measured results. That is why it is preferred in
the geosciences, although it is common to see autocovariance and autocorrelation coefficient
calculations from experimental data in geological and geotechnical analyses.

2.3 Elements of Geostatistics


Geostatistics is the application of random field theory to geoscience-related situations, with
the aim of describing their spatial continuity, which is an essential feature of many natural
phenomena, and afterwards adapting classical regression techniques to take advantage of this
continuity (Isaaks and Srivastava, 1989). In a broader sense, then, geostatistics is the
statistics of regionalized, correlated random variables.

The origins of geostatistics are found in the mining industry. D.G. Krige, a South
African mining engineer, and H.S. Sichel, a statistician, developed a new interpolation
method in the early 1950s, when "classical" statistics was found unsuitable for estimating
disseminated ore reserves. Later, Georges Matheron, a French engineer, developed Krige's
innovative concepts and formalized them within a single framework with his "Theory of
Regionalized Variables".


Matheron, at the Centre de Géostatistique, pioneered the use of mining geostatistics in the
early 1960s. The word "kriging" was coined in recognition of D.G. Krige (Chambers, Yarus
and Hird, 2000).

Geostatistics has now extended beyond its original field. Besides the mining industry, oil
companies use it to characterize prospective new findings. Environmental agencies and
researchers use it to estimate traces of pollutants and hazardous materials from field
measurements; see, for example, the work of Oliver and Kharyat (2001). It has even
been used in deep ocean mapping, as shown by Bleines (2004).

Applications in civil engineering are also flourishing, specifically in geotechnical
engineering analyses. For example, Auvinet (2006) used geostatistics to estimate differential
settlement of the foundation of the Rion-Antirion bridge. Even in earthquake geotechnical
engineering and seismology, geostatistics is becoming a popular tool: for example, Thompson,
Baise and Kayen (2004) have studied the spatial correlation of the shear wave velocity of the
upper 30 m of San Francisco Bay Area sediments using geostatistical tools, and Dawson
and Baise (2005) employed geostatistical interpolation to assess liquefaction potential due to
the Northridge earthquake at two selected sites.

2.3.1 The Nature of Variability in Geotechnical Engineering


Before exploring in depth how random field models are employed to define the framework on
which geostatistics is founded, it is advisable to analyze why and how randomness is
involved in the geosciences, and specifically in geotechnical analysis.

The processes which gave rise to soil structures are complex and not fully understood, but they are
not random in themselves; they merely seem non-deterministic due to our poor knowledge of them,
as stated by Isaaks and Srivastava: "Processes that actually do create an ore deposit, a
petroleum reservoir or a hazardous waste site are extremely complicated and our
understanding of them might be so poor that their complexity appears as random behavior to
us, but this does not mean that they are random; it simply means that we are ignorant" (Isaaks
and Srivastava, 1989).

Although random field theory fundamentals are almost exactly the same for both random
vibrations and geostatistics, there is a fundamental difference in the approach to solving
physical problems. A single occurrence of a random process is irrelevant, because the
factors that gave rise to it are unique and the chances that they will repeat exactly, giving rise
to the same time history, are almost zero; in the geosciences, by contrast, we deal with a unique
site, and we wish to infer properties about it. The focus is thus on a single occurrence of the
field, instead of characterizing the complete ensemble.

The aim of random vibrations is to characterize in probabilistic terms the response of a system
due to an ensemble of actions. Geostatistics, on the contrary, uses a probabilistic model of a single
situation to make estimates of (or interpolate) measured properties at unsampled locations.
The objectives are completely different.


2.3.2 Field Variograms


Although some research on non-stationary geostatistical methods is under way, common
practice assumes, at least, weak homogeneity. In such a case, as explained before, first order
distributions describe general properties of the field (mean, standard deviation, expected
squares, etc.), while second order distributions show how probabilistic properties vary along
the field. The main descriptor related to second order distributions is the autocorrelation
function.

Therefore, in order to characterize the probabilistic structure of a spatial random field, a
careful analysis of its autocorrelation function is required. It was also shown how the
autocorrelation function might be found using more common and appealing descriptors such
as the autocovariance and variogram (semivariogram, as noted by other authors) functions.

In geostatistics, use of the variogram function is the most common approach. Variograms can
be easily estimated from data and have a graphical behavior more appealing than autocovariance
functions, but the use of one or the other is conventional: at large, one can be found from the
other, as explained before.

If samples are uniformly spaced, equation 2.32 can be applied, as long as it is expressed in terms
of the field data collection (Isaaks and Srivastava, 1989):

γ(h) = (1 / 2N) Σ₍ᵢ,ⱼ₎ (vᵢ − vⱼ)²   (2.39)

In which the squared difference is taken between all the pairs of field values (shear wave
measurements) i and j separated by a distance h, and N is the total number of such pairs.

As an example, consider the following data, generated as random outcomes of 5 independent,
normally distributed (250 m/s mean and 50 m/s standard deviation) random variables
(meaning that there is no correlation among them). Assume, for example, that the values
represent shear wave velocities measured along a horizontal line at the same depth, equally
spaced at 10 m:

Figure 2.7: Variogram example. Hypothetical (artificially set) measurements of shear wave velocity at a
site at the same depth (locations A to E within the soil mass).


Table 2.1: Variogram example. Hypothetical (artificial) shear wave measurements at the same depth

Id   Vs (m/s)   Position (m)
A    310        0
B    224        10
C    254        20
D    322        30
E    243        40

Looking at the spacing and data structure, the following element pairs can be generated for
each separation distance:

• 10 m separation: AB, BC, CD, DE

• 20 m separation: AC, BD, CE

• 30 m separation: AD, BE

• 40 m separation: AE

Then, following equation 2.39, variogram values are computed for each distance:

• 10 m, number of pairs N = 4:

γ(10) = (1/8) [(310 − 224)² + (224 − 254)² + (254 − 322)² + (322 − 243)²]

γ(10) ≈ 2395 (m/s)²

• 20 m, number of pairs N = 3:

γ(20) = (1/6) [(310 − 254)² + (224 − 322)² + (254 − 243)²]

γ(20) ≈ 2143 (m/s)²

• 30 m, number of pairs N = 2:

γ(30) = (1/4) [(310 − 322)² + (224 − 243)²]

γ(30) ≈ 126 (m/s)²

• 40 m, number of pairs N = 1:

γ(40) = (1/2) (310 − 243)²

γ(40) ≈ 2244 (m/s)²
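The hand calculation above can be reproduced with a short script implementing equation 2.39 for uniformly spaced data (a sketch; the function and variable names are arbitrary):

```python
import numpy as np

# Shear wave velocities of Table 2.1 (m/s), uniformly spaced 10 m apart
vs = np.array([310.0, 224.0, 254.0, 322.0, 243.0])
SPACING = 10.0

def sample_variogram(values, spacing):
    """Experimental variogram of equation 2.39 for uniformly spaced data:
    gamma(h) = (1 / 2N) * sum of (v_i - v_j)^2 over the N pairs separated by h."""
    gammas = {}
    for lag in range(1, len(values)):           # lag in units of the spacing
        diffs = values[lag:] - values[:-lag]    # all pairs separated by lag
        gammas[lag * spacing] = np.sum(diffs ** 2) / (2 * len(diffs))
    return gammas

gammas = sample_variogram(vs, SPACING)
for h, g in sorted(gammas.items()):
    print(f"gamma({h:.0f} m) = {g:.1f} (m/s)^2")
# Hand calculation in the text: 2395, 2143, 126 and 2244 (m/s)^2 (truncated)
```

Note how the number of available pairs drops from four at 10 m to a single pair at 40 m, which is the saturation problem discussed next.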

Despite the simplicity of this exercise, several points about variogram estimation can be
shown. Random variables A, B, C, D and E were generated normally distributed, with variance
equal to 50² = 2500 and covariance (K) equal to zero for all distances (because they were
independent, showing no relationship among them in the field). According to these data and
equation 2.38, the variogram should be a constant line at 2500 m²/s². We have rough
agreement, but at the 30 m distance there is a noticeable drop. A plot of the calculated
quantities against distance is called a variogram (or semivariogram).

Figure 2.8: Variogram for the proposed example (Table 2.1)

The data generated were random, and therefore this trend is artificial. If more data were available,
the unlikely event of having two values exactly separated by 30 m would be counteracted by
other measurements. Saturation is a sensitive problem in geostatistics: as less data is involved in
the calculations for long distances, there is a greater chance that unlikely results will lead to
trends that are not real. It is even worse if mistakes are included in the field data (for example, a
misplaced dot or an extra zero would turn a reading of one hundred into a thousand).

Therefore, the first step in any geostatistical analysis is performing a traditional, well-posed
descriptive statistical analysis of the field data. Outliers and mistakes should be identified prior to
performing any spatial characterization. The representativeness of the data should also be
addressed, especially at long distances, where it is sparse (Isaaks and Srivastava, 1989).


In broad terms, as long as periodic trends are avoided, covariance should diminish with distance.
Values at a given location are expected to be influenced by readings close to it, rather than by
farther ones. Traditional geostatistical theory imposes the fundamental hypothesis of stationary
fields, so, if equation 2.38 is taken into account again, it is expected that, beyond a certain
distance, the variogram stabilizes at a value close to the variance of the random field which will
be employed to model the soil structure. So, if a variogram is drawn based on data from a
supposedly homogeneous soil variable, a simple estimate of the variance of the field can be
obtained.

If data is not uniformly spaced, consistency issues become even more critical. The distances
between data points no longer show a constant increment; variable increments are observed
instead, and therefore the data becomes spread thinly among distance categories, not just at the
extreme lengths.

Therefore, some clustering of the data must be done to keep representativeness. Close data are
grouped together by setting a distance and angle tolerance, or through rectangular areas,
which is better suited for mapping if the Cartesian coordinate system is kept, as shown in the
figure (Isaaks and Srivastava, 1989):

Figure 2.9: Clustering of data for non-uniform sample spacing, from Isaaks and Srivastava (1989)

If grouping of several data is performed, the computation of the variogram now has to be
performed over the clustered values. Also, an approximation is made on the distance: instead of
the value measured between samples, the separation between representative points of each area
(the centroid, for example) is taken into account:

γ(h) ≈ (1 / 2N) Σ₍ᵢ,ⱼ₎ (vᵢ − vⱼ)²   (2.40)

Which is almost equal to equation 2.39, but now the distance h is taken approximately. Grouping
of data has a clear smearing effect, as shown in the following variograms, computed for a real
case, the Walker Lake area in Nevada (Isaaks and Srivastava, 1989).


Figure 2.10: Effect of data clustering due to non-uniform sample spacing. On the left, data clustered every
5 m; on the right, data clustered every 10 m (Isaaks and Srivastava, 1989)

If both clustering approaches are compared, the lag distance of 10 m (on the right) shows a clear
improvement. By categorizing the data, the sparse randomness which masks the spatial
continuity trend of the data is removed, and a better understanding of the variability is
achieved. How long a lag class should be does not obey a mathematical rule: if too short,
noise will blur the desired result; if too long, smearing and clustering might obscure genuine
trends present in the data, like periodic recurrence, for example. A good balance can
only be found by performing several tryouts.
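A minimal sketch of the lag-tolerance grouping of equation 2.40, for irregularly spaced readings along a line (the positions and values below are hypothetical, chosen for illustration):

```python
import numpy as np

def binned_variogram(x, v, lag, tol):
    """Approximate variogram of equation 2.40: pairs whose separation falls
    within lag +/- tol are grouped and treated as if separated exactly by lag."""
    x = np.asarray(x, float)
    v = np.asarray(v, float)
    sq_diffs = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if abs(abs(x[j] - x[i]) - lag) <= tol:
                sq_diffs.append((v[i] - v[j]) ** 2)
    if not sq_diffs:
        return float("nan"), 0
    return sum(sq_diffs) / (2 * len(sq_diffs)), len(sq_diffs)

# Irregularly spaced hypothetical readings (position in m, Vs in m/s)
x = [0.0, 8.0, 21.0, 29.5, 42.0]
v = [310.0, 224.0, 254.0, 322.0, 243.0]
gamma_10, n_pairs = binned_variogram(x, v, lag=10.0, tol=3.0)
print(n_pairs, gamma_10)   # four pairs fall in the 10 m +/- 3 m class
```

Changing `tol` is exactly the trade-off discussed above: a tighter tolerance admits fewer pairs (more noise), a looser one smears distinct separations into one class.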

Another issue that has to be considered is the effect of direction. Separation, defined as the
difference of two positions, is a vector and therefore has a direction besides a magnitude.
Spatial continuity might follow specific trends in space, so the variability observed along one
trace might be completely different from that along another.

It also has to be remembered that the geosciences deal with complex phenomena that obey
physical laws sensitive to topographic gradients. As a simple example, consider an alluvial
soil deposit. As it was shaped by the settlement of materials over long time spans (during which
their properties can change completely), spatial continuity in the vertical direction is expected to
be less than along a plane perpendicular to it. For example, DeGroot compiled values of tip
resistance (from cone penetration tests), SPT blow count N, undrained shear strength and
hydraulic conductivity, and found that statistical correlation can extend over 0.5 to 3 m
in the vertical direction, while in the horizontal it can range between 15 and 30 m (Thompson,
Baise and Kayen, 2004).

A first approach to understanding the phenomena is to derive an omnidirectional variogram, in
which variability is studied in all possible directions. In such an approach, data are related to
each other following circles (spheres for 3D analyses) with variable radius, set equal to the
analysis distance:


Figure 2.11: Sampling when defining an omnidirectional variogram (L: lag length; grouped and ungrouped
samples at distance h within the analysis domain).

Now that a general framework has been set, after computing the omnidirectional variogram it
is reasonable to explore some particular directions, following the scheme depicted in Figure 2.9
(left) and setting an appropriate angle tolerance, in the same way as is done for the lag
length.

Each analysis is specific, and directions are chosen by judgment. The geological history of the
deposit, or any qualitative analysis which can be performed before sampling, is helpful, and
will raise the significance and usefulness of the geostatistical study.
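A directional variogram with distance and angle tolerances can be sketched as below; with `ang_tol = 90` every pair is accepted and the omnidirectional variogram of the previous section is recovered. The 2D coordinates and values are hypothetical:

```python
import numpy as np

def directional_variogram(coords, v, lag, lag_tol, azimuth, ang_tol):
    """Directional variogram sketch: a pair of points is kept when its
    separation distance is within lag +/- lag_tol AND the direction of the
    separation vector is within ang_tol degrees of the chosen azimuth.
    With ang_tol = 90 every direction is accepted (omnidirectional case)."""
    coords = np.asarray(coords, float)
    v = np.asarray(v, float)
    az = np.radians(azimuth)
    unit = np.array([np.cos(az), np.sin(az)])
    sq_diffs = []
    for i in range(len(v)):
        for j in range(i + 1, len(v)):
            sep = coords[j] - coords[i]
            dist = np.hypot(sep[0], sep[1])
            if dist == 0 or abs(dist - lag) > lag_tol:
                continue
            # angle between separation vector and azimuth, sign ignored
            cos_a = abs(np.dot(sep, unit)) / dist
            if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= ang_tol:
                sq_diffs.append((v[i] - v[j]) ** 2)
    if not sq_diffs:
        return float("nan"), 0
    return sum(sq_diffs) / (2 * len(sq_diffs)), len(sq_diffs)

# Four hypothetical readings on a 10 m square grid
coords = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
v = [310.0, 224.0, 254.0, 322.0]
g_ew, n_ew = directional_variogram(coords, v, 10.0, 1.0, azimuth=0.0, ang_tol=20.0)
g_ns, n_ns = directional_variogram(coords, v, 10.0, 1.0, azimuth=90.0, ang_tol=20.0)
print(n_ew, g_ew)   # pairs and variogram along East-West
print(n_ns, g_ns)   # pairs and variogram along North-South
```

Evaluating the same function over several azimuths is the computation behind the rose diagrams described next.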

Several directions are taken, like the spokes of a bicycle wheel, extending from the analysis
point. This way a rose diagram is depicted. The graph shown below is a rose diagram drawn for
the Walker Lake data set (Isaaks and Srivastava, 1989).

Figure 2.12: Rose diagram along which directional variograms were calculated, Walker Lake data set
(Isaaks and Srivastava, 1989)


Variograms along each of the nine directions are also depicted.

Figure 2.13: Rose graph variograms. The direction is indicated on each graph, and the distance required to
achieve a value of 80,000 is shown for each one (Isaaks and Srivastava, 1989).

Differences among directions can be noticeable, showing the importance of this factor in the
analysis. If enough directions are taken into account, it is possible to identify a major and a
minor axis, along which variogram values are maximum and minimum for a set distance.
Alternatively, as done in the Walker Lake set, given a certain value of the variogram, the
distance required to reach it is recorded; the major axis is then the one for which this distance
is minimum, and the minor axis the opposite. In geotechnical problems, the vertical direction is
expected to yield the maximum variogram value (as its correlation distance is shorter) while
the horizontal yields the minimum. This is based on basic knowledge of the phenomena that
lead to the layering of soil strata.


Again, due to the high amount of judgment required to characterize the spatial continuity of a
random field, a prior analysis, before sampling, is required. Taking qualitative data into
consideration will increase the benefits of the subsequent geostatistical study. Basic statistical
descriptors are also handy for assessing the completeness and coherence of the overall set of
field measurements. Outliers and unreliable data shall be removed before continuing with
the geostatistical work.

There are two basic parameters that can be identified on field variograms: range and sill.
Based on their values, it is possible to estimate general properties of a random field, especially
if it is homogeneous.

• Range: the distance at which the variogram function reaches a constant value,
showing up in the variogram as a flat region (approximately flat, as field data are
considered). If the variogram function reaches a constant value, correlation beyond
the range is zero, an immediate consequence of equation 2.38 (otherwise a non-zero
constant correlation would be found from the range to infinity, which is clearly
absurd). The range can therefore also be defined as the limit distance at which random
variables in the field are correlated.

• Sill: the limit value that is reached by a variogram beyond the range. If equation 2.38
is recalled, the sill value is substituted, and the covariance function is considered to be
zero, it is an obvious consequence that the sill has to be equal to the variance of the
random field.
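Sill and range can be read off an experimental variogram. The sketch below uses a crude rule of thumb (tail average for the sill; first lag reaching 95% of the sill for the range), applied to a synthetic variogram following an exponential model; the thresholds and parameters are arbitrary assumptions:

```python
import numpy as np

def sill_and_range(h, gamma, tail_fraction=0.3, level=0.95):
    """Crude estimate of sill and range from an experimental variogram:
    the sill is taken as the mean of the flat tail of the curve, and the
    range as the first separation where gamma reaches `level` * sill."""
    h = np.asarray(h, float)
    gamma = np.asarray(gamma, float)
    n_tail = max(1, int(len(gamma) * tail_fraction))
    sill = gamma[-n_tail:].mean()
    reached = np.nonzero(gamma >= level * sill)[0]
    rng = h[reached[0]] if reached.size else float("nan")
    return sill, rng

# Synthetic experimental variogram following an exponential model:
# gamma(h) = 2500 * (1 - exp(-h / 15)); the sill is 2500 (m/s)^2
h = np.arange(5.0, 155.0, 5.0)
gamma = 2500.0 * (1.0 - np.exp(-h / 15.0))
sill, rng = sill_and_range(h, gamma)
print(sill, rng)
```

The recovered sill is (up to discretization) the field variance, consistent with the definition above; the 95% threshold gives a "practical" range, since the exponential model approaches its sill only asymptotically.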

Figure 2.14: Sill and range definition (variogram shape already depicted in Figure 2.10, left)

There are cases in which a sill is not reached, because the variogram does not stabilize around
a value, or in which there is a decrease in the variogram value after the sill is reached. These
situations, and especially the latter, are named "hole" effects. One quite extreme case is the
exercise depicted in Figure 2.8: after reaching a value close to 2200, there is a sudden drop to a
value close to 100.


Hole effects can be related to inconsistency or saturation of the data, as graphically shown in
the exercise (if the data for a certain lag are not enough, randomness might induce artificial
trends), but they can also be bound to the specific nature of the data: they might indicate a
periodic trend, in which correlation goes beyond the short field and extends further. This way,
hidden features that might be masked by a large amount of data can be discovered, proving,
again, the usefulness of the variogram.

Another important feature of the autocorrelation, autocovariance and variogram functions is
their symmetry. Consider that in equation 2.39 we invert the sense of the analysis, and instead
of pairing the random variables i and j we pair them j and i. In such a case the expression will
become (Isaaks and Srivastava, 1989):

γ(h) = (1 / 2N) Σ₍ⱼ,ᵢ₎ (vⱼ − vᵢ)²   (2.40)

But now the distance has to be taken backwards (h was defined as the distance between i and j):

γ(−h) = (1 / 2N) Σ₍ⱼ,ᵢ₎ (vⱼ − vᵢ)² = γ(h)   (2.41)

Although this analysis has been performed for the sample variogram function, the result is also
valid for its analytical counterpart. Also, it has been shown how the variogram,
autocovariance and autocorrelation functions are related through constant field values, which
are independent of the sense of analysis. Therefore, the symmetry property is also valid for them.
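The symmetry γ(h) = γ(−h) of equations 2.40–2.41 can be verified numerically: reversing the sense of the pairing leaves the squared differences, and hence the variogram value, unchanged (a sketch using the hypothetical Table 2.1 values):

```python
import numpy as np

def gamma_pairs(values, lag, reverse=False):
    """Equation 2.39 with pairs taken as (i, j) or, if reverse, as (j, i)."""
    v = np.asarray(values, float)
    heads, tails = v[lag:], v[:-lag]
    if reverse:
        heads, tails = tails, heads
    return np.sum((heads - tails) ** 2) / (2 * len(heads))

vs = [310.0, 224.0, 254.0, 322.0, 243.0]
forward = gamma_pairs(vs, 1)                 # gamma(+h)
backward = gamma_pairs(vs, 1, reverse=True)  # gamma(-h)
print(forward, backward)   # the squared differences, hence the values, coincide
```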

This work is constrained to single-variable fields, but a concise introductory review of
multiple random variable field theory is not out of order. If two random variables are
considered, it is possible to define statistics relating them. This way, data from measurements of
other parameters besides the one under consideration can be included, enriching the analysis. For
example, along with shear wave velocity, SPT blow count or cone tip resistance might also be
taken into account to predict the main descriptor (shear wave velocity) or assess the surface
response; for an example of such an analysis, the reader is referred to Rota (2007). A cross
variogram function is defined as:

γ_wz(h) = E[(w(x) − w(x + h)) (z(x) − z(x + h))] / 2   (2.42)

Where w and z are distinct random variables (but members of the same field, as they are
parameterized by the same set of coordinates x). The sample cross variogram function can be
calculated following:

γ_wz(h) = (1 / 2N) Σ₍ᵢ,ⱼ₎ (wᵢ − wⱼ)(zᵢ − zⱼ)   (2.43)

An expression equivalent to 2.39, if one factor in each squared difference is replaced by the
corresponding difference of the second random variable. It has to be noted that the properties of
the autocovariance and autocorrelation functions cannot be extended to their multiple-variable
equivalents, the cross correlation and cross covariance. For example, symmetry is lost unless the
order of the random variables is exchanged (Conte, 2006).


Curiously, this is not the case for the cross variogram, which remains symmetric (Isaaks and
Srivastava, 1989). Further developments of the model proposed in this work for multiple
random variables are left to future research.
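The symmetry of the sample cross variogram (equation 2.43) can be checked numerically; the collocated SPT blow counts paired with the Table 2.1 velocities below are invented for illustration:

```python
import numpy as np

def cross_variogram(w, z, lag):
    """Sample cross variogram of equation 2.43 for uniformly spaced data:
    gamma_wz(h) = (1 / 2N) * sum of (w_i - w_j) * (z_i - z_j) over pairs."""
    w = np.asarray(w, float)
    z = np.asarray(z, float)
    dw = w[lag:] - w[:-lag]
    dz = z[lag:] - z[:-lag]
    return np.sum(dw * dz) / (2 * len(dw))

# Hypothetical collocated readings: Vs (m/s) and invented SPT blow counts
vs  = [310.0, 224.0, 254.0, 322.0, 243.0]
spt = [32.0, 18.0, 22.0, 35.0, 20.0]
g_wz = cross_variogram(vs, spt, 1)
g_zw = cross_variogram(spt, vs, 1)
print(g_wz, g_zw)   # swapping the variables leaves the value unchanged
```

Unlike the cross covariance, each term (wᵢ − wⱼ)(zᵢ − zⱼ) is invariant under exchanging w and z, which is why the cross variogram stays symmetric.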

2.4 Estimation

Now that a way to describe the spatial continuity of the random field has been developed, it is
possible to move towards the second aim of geostatistics: using this spatial property in
conjunction with classical regression analysis to estimate the expected value of the desired
parameter at unsampled locations.

Non-statistical estimation techniques are varied, and can be used as point estimators. There is no
way to discard them as wrong or biased, as there is no way to contrast them with an "exact"
solution, which would require knowledge of the desired parameter along the entire field. But
they ignore the statistical properties of the system, and therefore a more robust methodology
which incorporates randomness is desired. For more information on non-statistical estimators
the reader is referred to Isaaks and Srivastava (1989).

2.4.1 Least Squares Linear Fit


Least squares regression is the most common and classical method employed to find a trend
in a data series and fit it to a desired functional form. In general terms, given a set of
explained variables Y in terms of explanatory ones X, a functional form which minimizes the
variance of the error term is sought. For the linear case with a single explanatory variable, the
following is stated:

Y′ = A·X + B   (2.44)

Which is the linear estimator of the random variable Y. In order to find the coefficients A and B it
is necessary to set a pair of conditions: it is desired that the estimator Y′ is unbiased (its
expected value is equal to the mean of the predicted variable) and has minimal variance. The
error is defined as the difference between the predicted and the measured value:

Er = Y′ − Y   (2.45)

The unbiasedness condition implies that the error has zero mean:

E[Er] = 0   (2.46)

Now, through minimization of the variance, it is possible to find the values of the coefficients A
and B:

Var(Er) = E[Er²] − (E[Er])²   (2.47)

The last term is zero due to the unbiasedness condition. Expanding the first term with
Er = A·X + B − Y:

Var(Er) = A²E[X²] + B² + E[Y²] + 2AB·E[X] − 2A·E[XY] − 2B·E[Y]   (2.48)

Now, in order to minimize the variance, partial derivatives with respect to A and B are taken
and set equal to zero:

∂Var(Er)/∂A = 2A·E[X²] + 2B·E[X] − 2E[XY] = 0

∂Var(Er)/∂B = 2B + 2A·E[X] − 2E[Y] = 0

A system of two equations in the two unknowns A and B, which can be solved analytically:

⎛A⎞   ⎛E[X²]  E[X]⎞⁻¹ ⎛E[XY]⎞
⎝B⎠ = ⎝E[X]     1 ⎠   ⎝E[Y] ⎠   (2.49)

The standard deviation of the error term can be found by replacing the values of the fitted
coefficients into equation 2.44. The least squares estimator is BLUE (Best Linear Unbiased
Estimator), due to the imposed unbiasedness condition and the variance minimization procedure.
However, it lacks a fundamental property, which makes it unsuitable for direct use in
geotechnical and geosciences analyses: the error term is assumed independent for all predictions,
neglecting any correlation between two locations.
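Equation 2.49 can be solved directly with sample moments. The sketch below fits a line to the hypothetical Table 2.1 velocities against position and cross-checks the result against NumPy's own least squares fit:

```python
import numpy as np

# Hypothetical data: Table 2.1 velocities (y) against position (x)
x = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
y = np.array([310.0, 224.0, 254.0, 322.0, 243.0])

# Normal equations of the linear fit y' = A*x + B (equation 2.49),
# with sample moments in place of expectations
moments = np.array([[np.mean(x ** 2), np.mean(x)],
                    [np.mean(x),      1.0]])
rhs = np.array([np.mean(x * y), np.mean(y)])
A, B = np.linalg.solve(moments, rhs)

# Cross-check against NumPy's own least squares fit
A_ref, B_ref = np.polyfit(x, y, 1)
print(A, B)
```

Both routes solve the same normal equations, so the coefficients agree to machine precision.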

Figure 2.15: Residual variations of SPT blow counts measured at the same elevation every 20 m beneath a
horizontal transect at a site. The data have been standardized (zero mean, unit standard
deviation). The dark line shows field data; the light one depicts artificially generated data
with the same mean and standard deviation, but probabilistically independent. After Baecher
and Christian (2006).

Figure 2.15 shows a graph of the residuals (error terms after performing a fit following a
procedure similar to the one depicted before) from SPT data. As shown, the residuals follow a
somewhat periodic trend, unlikely for uncorrelated variables. As stated, the physical processes
which gave rise to soil structures are well set in space and time, and therefore nearby locations
share a common structure (for example, along a layer which has been deposited, materials have
a common geological history, as they might have been generated at the same time).

For that reason, a procedure which can explicitly include the expected correlation among
nearby structures is desired in geotechnical and geosciences analyses. As will be shown
afterwards, this specific need gave rise to kriging.

A final remark should be made about fitting procedures. The least squares methodology (even
beyond the linear case, as the functional shape might be changed to suit specific needs) is just one
approach. Another solution is Bayesian fitting. Although in most cases results
from the least squares method are almost equal to the ones yielded by Bayesian fitting, the
methodological framework is completely different. Again, the least squares approach is the one
on which kriging, the predictive methodology set specifically for geostatistics, is based.
Nevertheless, a brief look at Bayesian fitting is advisable.

In Bayesian fitting, a functional form with guessed parameters is imposed on the system; then,
using Bayes' theorem and likelihood functions, it is possible to find the characteristics of an
updated function which considers the real field data. The most interesting feature of the Bayesian
approach is the fact that the parameters of the functional form are not set constant; instead, they
are treated as random variables, and through the inference process their probability distribution
can be determined (for the linear case, Student-t distributed parameters have been found
analytically).

Also, in least squares fitting, the only constraint imposed on the residuals (error terms) is
their independence; no hypotheses are made regarding their probability density
function, because just the first two moments of the data are considered. Generally they are
assumed to be normally distributed, by analogy to the central limit theorem. With Bayesian
inference, a probability density function for the errors can be specified. For more information
about Bayesian regression analysis, the reader is referred to the work of Baecher and Christian
(2003).

2.4.2 Ordinary Kriging


The cornerstone of geostatistics is kriging, a prediction (or interpolation, according to Baecher
and Christian (2003)) scheme used to estimate the expected value of a property inside a random
field, based on its probabilistic structure, represented by an autocovariance function. This is used
in conjunction with certain occurrences (samples) within the field to make an estimate of the
mean value of the parameter at unsampled places.

According to their complexity and the characteristics of the random field involved, several
categories of kriging have been defined: simple, ordinary and universal, among others. Both
simple and ordinary kriging consider a homogeneous random field, but in simple kriging a
known mean value is also enforced. In practice, for geotechnical problems, such a restriction is
not reasonable.


Ordinary kriging, by contrast, does not require prior knowledge of the mean value of the
stationary (homogeneous) random field, and is therefore more widely used than simple kriging.
Also, as will be shown later, kriging is a BLUE (Best Linear Unbiased Estimator), which makes
it attractive for assessment purposes. In this work, ordinary kriging was the estimation
methodology employed to characterize the analysis region, and it will therefore be discussed
extensively and carefully.

Estimates at unsampled locations in ordinary and simple kriging are taken as linear
combinations of sampled values:

Z′ = Σᵢ wᵢ Sᵢ   (2.50)

Where the summation is taken over the N samples, wᵢ is a constant weight coefficient
given to each sample, and Sᵢ is the measured value of sample i. The objective is to find the
sample weights wᵢ.

Now, the unbiasedness condition is imposed on the weight factors w:

E[Er] = E[Z] − Σᵢ wᵢ E[Sᵢ] = 0   (2.51)

Recall that the weights are constants, while the samples S are specific occurrences of a set of
random variables within the field which is used to model the physical situation. Also, the field
is homogeneous, so the expected value of each sample is equal to that of the other samples and
to the expectation of the predicted variable, E[Z] = E[Sᵢ] = μ. Therefore equation 2.51 yields:

μ − μ Σᵢ wᵢ = 0

Solving for the sum of the weights w, we find the condition imposed on the weights to satisfy
the unbiasedness requirement:

μ (1 − Σᵢ wᵢ) = 0

Σᵢ wᵢ = 1   (2.52)

This is the unique and necessary condition required to guarantee unbiasedness of the predictive
scheme. The variance of a linear combination of n random variables is given by (Conte, 2006):

Var(Σᵢ aᵢ Xᵢ) = Σᵢ Σⱼ aᵢ aⱼ Cov(Xᵢ, Xⱼ)   (2.53)

Where the coefficients aᵢ are constants. For two random variables this reduces to the more
widely known form:

Var(a₁X₁ + a₂X₂) = a₁² Var(X₁) + a₂² Var(X₂) + 2 a₁ a₂ Cov(X₁, X₂)   (2.54)

If equation 2.53 is applied to the error term Er = Z − Z′, the following is observed:

Var(Er) = Var(Z) + Var(Z′) − 2 Cov(Z, Z′)   (2.55)


The first term is the field variance σ², as the field is homogeneous. The second term, the
variance of the estimator, is a linear combination of the samples and their weights wᵢ; then,
according to equation 2.53:

Var(Z′) = Var(Σᵢ wᵢ Sᵢ) = Σᵢ Σⱼ wᵢ wⱼ Cov(Sᵢ, Sⱼ)   (2.56)

The third term can be solved directly:

Cov(Z, Z′) = Cov(Z, Σᵢ wᵢ Sᵢ) = E[Z · Σᵢ wᵢ Sᵢ] − E[Z] · E[Σᵢ wᵢ Sᵢ]   (2.57)

Introducing the random variable Z and its expectation E[Z] inside the sum terms yields:

Cov(Z, Z′) = Σᵢ wᵢ E[Z Sᵢ] − Σᵢ wᵢ E[Z] E[Sᵢ]

Collecting terms, it is finally found that:

Cov(Z, Z′) = Σᵢ wᵢ Cov(Z, Sᵢ) = Σᵢ wᵢ C₀ᵢ   (2.58)

The index 0 denotes the place where the random variable Z is being estimated. Notice that all
the required covariance values can be calculated directly from the covariogram (or indirectly
from the variogram), setting as input the distance between points i and j, or between "0" (the
estimation point) and i. Finally, the error variance is:

Var(Er) = σ² + Σᵢ Σⱼ wᵢ wⱼ Cᵢⱼ − 2 Σᵢ wᵢ C₀ᵢ   (2.59)
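The double sum of equation 2.53 is just the quadratic form aᵀCa, which is how the middle term of equation 2.59 is evaluated in practice. A small numerical check, with an arbitrary (hypothetical) covariance matrix:

```python
import numpy as np

# Equation 2.53 as a quadratic form: Var(sum a_i X_i) = a^T C a.
# The covariance matrix below is hypothetical, chosen only for illustration.
a = np.array([0.5, 0.3, 0.2])
C = np.array([[2500.0, 1200.0,  400.0],
              [1200.0, 2500.0, 1200.0],
              [ 400.0, 1200.0, 2500.0]])

# Explicit double sum of equation 2.53 ...
double_sum = sum(a[i] * a[j] * C[i, j] for i in range(3) for j in range(3))
# ... equals the matrix quadratic form
quadratic_form = a @ C @ a
print(double_sum, quadratic_form)

# Two-variable special case, equation 2.54
var2 = a[0] ** 2 * C[0, 0] + a[1] ** 2 * C[1, 1] + 2 * a[0] * a[1] * C[0, 1]
print(var2)
```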

Now we can proceed to minimize the variance, subject to the unbiasedness condition. If
Lagrange multipliers are employed, the target function is given by:

L = σ² + Σᵢ Σⱼ wᵢ wⱼ Cᵢⱼ − 2 Σᵢ wᵢ C₀ᵢ + 2λ (Σᵢ wᵢ − 1)   (2.60)

Taking partial derivatives with respect to each weight and the Lagrange parameter λ (the
coefficient 2 has been added to simplify calculations; as the target function is arbitrary, it won't
affect the results), and equating them to zero, a system of equations is found:

Σⱼ wⱼ Cᵢⱼ + λ = C₀ᵢ ,  i = 1, …, N

Σⱼ wⱼ = 1

In matrix notation:

C₁ · W = C₀   (2.61)

Where C₀ is a vector in which the element at row i holds the covariance value between
sample i and the estimated (interpolated) point 0; the final element is 1, to allow for the
conditioning equation 2.52. C₁ is a matrix whose i,j-th element is the covariance between
samples i and j, except for the last row and column, which are filled with unit values (and a
zero in the corner). W is the vector of coefficients, which is unknown. Solving for it:

W = C₁⁻¹ · C₀   (2.62)

For example, for an interpolation based on three samples, the following matrices are defined:

⎛w₁⎞   ⎛c₁₁ c₁₂ c₁₃ 1⎞⁻¹ ⎛c₀₁⎞
⎜w₂⎟ = ⎜c₂₁ c₂₂ c₂₃ 1⎟ · ⎜c₀₂⎟
⎜w₃⎟   ⎜c₃₁ c₃₂ c₃₃ 1⎟   ⎜c₀₃⎟
⎝λ ⎠   ⎝ 1   1   1  0⎠   ⎝ 1 ⎠

Now that the kriging coefficients are known, it is possible to find the variance of the error (the
kriging variance) through equation 2.59, which expressed in matrix form is equivalent to:

σ²ₖ = σ² − Wᵀ · C₀   (2.63)
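Equations 2.61–2.63 can be assembled in a few lines. The sketch below assumes a 1D setting and an exponential covariance model (an illustrative choice; σ² and a are arbitrary), and checks two of the properties listed next: the weights sum to one, and the estimate honors the samples.

```python
import numpy as np

def ordinary_kriging(x_samples, v_samples, x0, sigma2=2500.0, a=15.0):
    """Ordinary kriging sketch in 1D with an assumed exponential covariance
    C(d) = sigma2 * exp(-|d| / a). Builds the augmented system C1 * W = C0
    of equations 2.61-2.62 and evaluates the kriging variance of 2.63."""
    x = np.asarray(x_samples, float)
    v = np.asarray(v_samples, float)
    n = len(x)

    def cov(d):
        return sigma2 * np.exp(-np.abs(d) / a)

    # Covariance matrix bordered by the unbiasedness row and column
    C1 = np.ones((n + 1, n + 1))
    C1[:n, :n] = cov(x[:, None] - x[None, :])
    C1[n, n] = 0.0
    C0 = np.append(cov(x - x0), 1.0)

    W = np.linalg.solve(C1, C0)          # weights w_1..w_n and multiplier lambda
    estimate = W[:n] @ v
    kriging_variance = sigma2 - W @ C0   # equation 2.63
    return estimate, kriging_variance, W[:n]

x = [0.0, 10.0, 20.0, 30.0, 40.0]
v = [310.0, 224.0, 254.0, 322.0, 243.0]
est, kvar, w = ordinary_kriging(x, v, x0=15.0)
print(est, kvar, w.sum())   # the weights sum to 1 (unbiasedness, equation 2.52)
```

When x0 coincides with a sample location, C₀ is a column of C₁, the weight vector collapses onto that sample, and the kriging variance vanishes, which is the exactness property.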

Although the formal derivation of the kriging procedure is lengthy and not appealing, it is just a
generalization of the least squares fit. Both methods follow the same global idea of minimizing
the error, using given conditions and properties of the estimators. As will be shown later, both
can be combined in a powerful predictive scheme, universal kriging. It is also clear why and how
kriging outputs the best linear estimate, provided the fundamental hypotheses on which the
procedure relies are met. In general, the properties of ordinary kriging are summarized below:

• Kriging honors the observed samples. If the estimation is performed at a place
where a sample has been taken (and, of course, keeping the sample in the analysis), the
estimate will yield the sample value.

• Kriging is a BLUE estimate (best linear unbiased estimator).

• Kriging provides a measurement of precision, represented by the minimized kriging
variance. It is feasible to set confidence intervals on the kriged results (the error can be
characterized as a normal variable with zero mean, as the kriging estimate is unbiased,
and standard deviation equal to the square root of the kriging variance).

• Spatial continuity of the data is considered explicitly.

• Estimated values are independent of each other. A prediction won't affect the result of a later one; predictions depend only on field properties, represented by the spatial continuity of the samples.
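As an illustration of the system in equation 2.62 and of the properties above, the following Python sketch solves the ordinary kriging system for a single prediction location. The exponential covariance function and the sample coordinates and values are hypothetical, and the code is not part of the tools developed for this work:

```python
import numpy as np

def ordinary_kriging(coords, values, target, cov):
    """Solve the bordered ordinary kriging system for one target location.
    coords: (n, d) sample coordinates; values: (n,) sample values;
    target: (d,) prediction location; cov: covariance as a function of distance.
    Returns the kriged estimate and the kriging variance."""
    n = len(values)
    # Left-hand side: sample-to-sample covariances, bordered by the
    # unbiasedness row and column of ones (last diagonal entry is zero).
    A = np.ones((n + 1, n + 1))
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A[:n, :n] = cov(dists)
    A[n, n] = 0.0
    # Right-hand side: sample-to-target covariances plus the constraint 1.
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(A, b)          # weights w1..wn and multiplier lambda
    estimate = sol[:n] @ values
    # Kriging variance in matrix form: field variance minus W'.C0
    variance = cov(0.0) - sol @ b
    return estimate, variance

# Hypothetical 1D example: exponential covariance with sill 1 and range 10
cov = lambda h: np.exp(-3.0 * np.abs(h) / 10.0)
coords = np.array([[0.0], [5.0], [12.0]])
values = np.array([2.0, 3.0, 2.5])
est, var = ordinary_kriging(coords, values, np.array([4.0]), cov)
```

Predicting at a sampled location returns the sample value with zero kriging variance, which reproduces the honoring property listed above.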

36
Chapter 2 Fundamentals

Of course, the fundamental hypotheses on which kriging is based have to be reasonably met:

• The analysis region is modeled through a homogeneous random field. Trends and other position-dependent properties should be avoided.

• Spatial variability across the field should be characterized correctly. The key parameter in the interpolation is the covariance; therefore the variograms, which define these values, have to be in agreement with the observed data, otherwise the results might not be representative.

• The kriging matrix C1 has to be positive definite. This ensures non-singularity and the computation of positive kriging variances. Variograms from field data might not be usable directly to characterize the spatial continuity of the field: if the matrix C1 is not invertible, it is not possible to find a solution, and if it is not positive definite, the results might be absurd.

The latter issue is fundamental, as raw variograms (variograms found through analysis of field data) cannot be used directly. They have to be fitted through specific functional forms which guarantee the positive definiteness of the kriging matrix. This issue will be discussed extensively in the next section, where the modeling of field variograms is analyzed in depth.

Another issue has to be noted. Least-squares fitting and kriging follow the same fundamental framework: minimization of errors through an explicit unbiasedness condition, using Lagrange multipliers or a direct gradient approach. Hence, both methods are comparable. Moreover, it is possible to perform a trend fit to a set functional form (without the independent-error constraint) and Ordinary Kriging on the residuals at the same time; the residuals are by nature homogeneous, as they share the same mean (zero, due to the unbiased trend fitting). This is called Universal Kriging.

The use of Universal Kriging in geotechnical engineering is promising. For example, overburden pressure, which is directly related to depth, is an important factor in the definition of some mechanical properties of the ground, such as the small-strain shear modulus and the shear modulus degradation curves (Ishihara, 1996), which directly affect shear wave velocity. Including Universal Kriging would relax the uniformity constraint which limits Ordinary Kriging.

Unfortunately, Universal Kriging was not considered in this work due to resource and time limitations, but the tools already developed for Ordinary Kriging estimation can be updated to include Universal Kriging, and in this way take advantage of the relaxed conditions associated with this improved interpolation scheme.

It also has to be noted that the weighting coefficients depend only on the spatial continuity of the field, represented by the variogram. Weights are unaffected by the sample values, so it is possible to assess the effect of measurement uncertainty by directly propagating the sampling deviation through equation 2.53, as the definition of the covariance of a linear transform:


σ² = Wᵀ · C · W    (2.64)

Where W is the vector of the kriging coefficients (equation 2.62) and C is the covariance matrix of the field measurements. This topic is controversial, as the setting up of the random properties of the homogeneous modeling field is based on the statistical analysis of the same field measurements, although the procedure is performed indirectly. Anyhow, this proposal is worth further consideration, maybe as part of future research projects.

Another interesting feature of Ordinary Kriging arises from its relationship with least-squares estimates. As in the latter, variables can be modeled by a constant, deterministic trend associated with the predicted mean value (Z') in conjunction with a normally distributed random variable with zero mean (due to the unbiasedness condition) and standard deviation equal to the square root of the kriging variance. This is the framework on which the reliability assessment performed in this work is based.

2.4.3 Analytical Modeling of the Field Variogram


As mentioned before, positive definiteness of the kriging matrix requires that the variogram functions employed to find the covariance values between samples and the predicted location follow well-defined functional shapes. A specific fit to these pattern functions has to be performed, in such a way that the spatial trends observed through field data are represented by their analytical counterparts. The first issue related to the modeling of field variograms is the nugget effect. According to the definition of the variogram function, its value at zero distance has to be exactly zero, as the expected difference of a variable with itself is null:

γ(0) = 0    (2.65)

However, analytical positive definite functions are not required to be exactly zero at the origin. Moreover, due to the lack of data at short distances, it is not possible to characterize the behavior of the field variogram in the close field (next to the origin). In such a case, it is admissible to set a constant value immediately off the origin; this way, an artificial jump at zero is generated by the modeling. This is the nugget effect.

[Figure omitted: two sketches versus distance x — the variogram h(x), showing the field data, the analytical fitting, the nugget and the unsampled close range near the origin; and the autocovariance K(x), showing the nugget, the analytical fitting and the sill]

Figure 2.17: Nugget Effect

As an illustration, the variogram and the autocovariance functions are depicted in Figure 2.17. The nugget appears as an artificial increase of the field standard deviation, due to the continuity of the modeling function and the lack of data in the close field.

In general terms, the following basic functional shapes are defined in common geostatistical practice (Isaaks and Srivastava, 1989):

• Exponential

γ(x) = N + (S − N) · (1 − exp(−3·|x|/R))

• Gaussian

γ(x) = N + (S − N) · (1 − exp(−3·(|x|/R)²))

• Spherical

γ(x) = N + (S − N) · [(3/2)·(|x|/R) − (1/2)·(|x|/R)³]  for 0 ≤ |x| ≤ R;  γ(x) = S for |x| > R

• Linear

γ(x) = N + S·|x|

Where N is the nugget, S is the sill, R is the range, and x is the separation distance. The spherical model is not asymptotic: it reaches the sill exactly at the range, whereas for the asymptotic exponential and Gaussian models the range is conventionally the distance at which 95% of the sill is achieved (hence the factor of 3). The linear model is not complete in itself, as it lacks a proper sill, but it is employed in combination with the exponential, Gaussian and spherical functions to fit field data more properly. The absolute value of the distance is taken, as negative values are not allowed.
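The four shapes can be sketched in Python as below. The parametrization used here (nugget N as the jump immediately off the origin, sill S as the total plateau, range R) is one common convention and is an assumption of this sketch:

```python
import numpy as np

def variogram(h, model, nugget, sill, rng):
    """Analytical semivariogram shapes (after Isaaks and Srivastava, 1989).
    h: separation distance; nugget, sill and rng as defined in the text."""
    h = np.abs(np.asarray(h, dtype=float))
    if model == "exponential":
        g = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))
    elif model == "gaussian":
        g = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (h / rng) ** 2))
    elif model == "spherical":
        s = np.minimum(h / rng, 1.0)    # flat at the sill beyond the range
        g = nugget + (sill - nugget) * (1.5 * s - 0.5 * s ** 3)
    elif model == "linear":
        g = nugget + sill * h           # no sill; used only in combinations
    else:
        raise ValueError(model)
    return np.where(h == 0.0, 0.0, g)   # exactly zero at the origin (eq. 2.65)
```

With the values of Figure 2.18 (nugget 6, sill 24, range 18), the spherical model reaches 24 exactly at h = 18, while the exponential and Gaussian models reach about 95% of it at that distance.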


[Figure omitted: exponential, Gaussian and spherical variograms h versus distance]

Figure 2.18: Analytical variogram Functionals. Sill value of 24, Range equal to 18. Nugget is 6

Then, field variograms have to be fitted by any of the shapes depicted. Fortunately, a linear combination of any of the proposed theoretical variogram functionals will also yield positive definite matrices, and will therefore also be acceptable for setting the analytical variogram, allowing a sizable set of admissible functions.

Modeling of anisotropy should be done following the same functional shapes. Different sill, nugget and range values along diverse directions can be considered; but in geotechnical analyses the most common case is different ranges along certain directions (normally Cartesian coordinates), with the same sill and nugget values for each principal direction (vertical and horizontal).

[Figure omitted: vertical and horizontal variograms h versus distance]

Figure 2.19: Range Anisotropy. Sill value of 24, Nugget of 6. Range values of 8 and 16


In order to model the situation depicted in Figure 2.19, in which two ranges are observed along two different directions, it is advisable to set a base analytical variogram with a range of one, and afterwards scale the distance itself by the observed range. This way, ordinates, instead of being defined in absolute distance, are set relative to the range. For example, consider a unit-range exponential model:

γ(x) = 1 − exp(−3·|x|/1)

Now the domain is scaled by the desired range R

And the scaled domain is replaced in the unitary range variogram (which is the base form) as
shown below:

γ(x/R) = 1 − exp(−3·|x/R|/1) = 1 − exp(−3·|x|/R)


γ(x) = 1 − exp(−3·|x|/R)    (2.66)

Scaling can be performed in three-dimensional Cartesian coordinates:

d′ = √((dx/Rx)² + (dy/Ry)² + (dz/Rz)²)    (2.67)

And applying equation 2.66

γ(d′) = 1 − exp(−3·d′)    (2.68)

Where d′ is the scaled distance between the analysis points, dx is the distance along the x coordinate, dy is the distance along the y coordinate, dz is the distance along the z coordinate, and Rx, Ry and Rz are the corresponding ranges.

It is possible to perform any linear transform of coordinates, allowing for axis rotation and other geometrical variations. However, since geotechnical problems commonly show two main axes of anisotropy (vertical and horizontal) due to layering and consolidation of soils, and material homogeneity implies the same statistical properties beyond the range, the methodology developed in this research deals only with the anisotropy pattern exposed. The reader is referred to Isaaks and Srivastava (1989) for a more detailed explanation of the modeling of anisotropy through analytical variograms.
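Equations 2.66 to 2.68 can be sketched directly. The code below is illustrative only; a unit-sill base model is used, as in the text:

```python
import numpy as np

def reduced_distance(dx, dy, dz, rx, ry, rz):
    """Equation 2.67: each Cartesian separation component is scaled by the
    range observed along that axis, giving a dimensionless distance."""
    return np.sqrt((dx / rx) ** 2 + (dy / ry) ** 2 + (dz / rz) ** 2)

def aniso_exponential(dx, dy, dz, rx, ry, rz):
    """Unit-range, unit-sill exponential base model evaluated at the
    reduced distance (equations 2.66 and 2.68)."""
    return 1.0 - np.exp(-3.0 * reduced_distance(dx, dy, dz, rx, ry, rz))
```

A separation of one full horizontal range and a separation of one full vertical range both map to a reduced distance of one, and therefore to the same variogram value, which is exactly the behavior the scaling is meant to produce.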

2.5 Montecarlo Simulation


Montecarlo simulation techniques were envisioned to a great extent by von Neumann during the Second World War, with the aim of developing nuclear weapons at Los Alamos National Laboratory in New Mexico, USA. The name comes from the city of Monte Carlo, in reference to its casino, emphasizing the relationship between the aleatory character of simulation and games of chance (Sanchez Silva, 2007). The Montecarlo method employs random sampling to simulate artificially (in non-real cases) the behavior of a system. In overall terms, the procedure comprises the following steps:


• Define the system response Y in terms of all its random descriptors (variables): Y = f(z1, z2, …, zn)

• Set the probability distribution functions with their defining parameters, for all random
variables z

• Generate random samples (realizations) of each variable, according to its own probability density function

• Evaluate Y deterministically for the samples set in the previous step; repeat this for a sizable number of times N

• Get statistical properties and descriptors of Y

• Assess the efficiency and precision of the simulation scheme

In general, Montecarlo simulation is the simplest approach to reliability. Its fundamentals are basic, and assessment of the properties of the system descriptor is performed directly. The inconvenience is the large number of realizations that might be required for an analysis.

2.5.1 Random Number Generation


Random number generation algorithms are a common resource available in most commercial mathematical software (Matlab©, MathCAD©, Excel©, …) and handheld calculators, even for a wide spectrum of probability distributions. Almost all generators use as basic input a seed value to define a specific set of values; if the seed is changed, another set is calculated. Therefore, in precise terms, random variables generated this way are not entirely aleatory and might repeat themselves, but only after a large number of realizations, beyond 10⁹, enough for practical purposes (Sanchez Silva, 2007).

The most widespread method employed to find random variables matching a certain distribution is the inverse transform method. Uniform random variables between 0 and 1 are set using a standard generator. These values are afterwards deemed as a randomly drawn "probability" value, which becomes the input parameter for the inverse probability distribution function, yielding the desired samples:

x = Fx⁻¹(u)    (2.69)

This way, uncorrelated samples x following the Fx distribution function can be determined from u variables distributed uniformly over [0, 1]. The inverse transform method is efficient if a closed-form solution of the inverse distribution is available; unfortunately, for a wide range of distributions, only numerical approximations of it are defined. A fundamental example of such a situation is the normal distribution.
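As a minimal sketch of the inverse transform method, the exponential distribution is used below because its inverse CDF has the closed form F⁻¹(u) = −ln(1 − u)/λ; the distribution choice is illustrative only and not taken from the text:

```python
import math
import random

def inverse_transform_exponential(lam, n, seed=0):
    """Equation 2.69 for an exponential distribution with rate lam:
    each uniform draw u is pushed through the closed-form inverse CDF."""
    rng = random.Random(seed)   # the seed fixes the set of realizations
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = inverse_transform_exponential(lam=2.0, n=10000)
mean = sum(samples) / len(samples)   # approaches 1/lam = 0.5
```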

In 1958, Box and Muller developed a simple formula to simulate standard uncorrelated normal random variables without the need to calculate the inverse probability distribution (Phoon, 2006):


Z1 = √(−2·ln U1) · cos(2π·U2)    (2.70)

Z2 = √(−2·ln U1) · sin(2π·U2)

Where the Z values are standard normally distributed, and the U values are standard uniformly (between 0 and 1) distributed. A further improvement of the algorithm depicted in equation set 2.70 was developed by Marsaglia and Bray in 1964 (Phoon, 2008):

1. Generate two standard uniform random variables U1 and U2

2. Define variables V1 and V2 as:

V1 = 2·U1 − 1

V2 = 2·U2 − 1

3. Calculate R² = V1² + V2²; if R² = 0.0 or R² ≥ 1.0, repeat steps 1 and 2

4. Simulate two independent standard normal random variables using:

Z1 = V1 · √(−2·ln R² / R²)    (2.71)

Z2 = V2 · √(−2·ln R² / R²)    (2.71)

Equation set 2.71 is computationally more efficient than 2.70 because trigonometric functions are not required. Finally, normally distributed random variables with a given mean and standard deviation can be derived from:

N = μ + σ·Z    (2.72)

Where N is a normally distributed random variable with mean μ and standard deviation σ.
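Steps 1 to 4, combined with the scaling of equation 2.72, can be sketched as follows (an illustrative implementation, not the code developed for this work):

```python
import math
import random

def marsaglia_pair(rng):
    """One application of the Marsaglia-Bray polar scheme (steps 1-4):
    returns two independent standard normal deviates without trigonometry."""
    while True:
        v1 = 2.0 * rng.random() - 1.0
        v2 = 2.0 * rng.random() - 1.0
        r2 = v1 * v1 + v2 * v2
        if 0.0 < r2 < 1.0:              # reject points outside the unit circle
            f = math.sqrt(-2.0 * math.log(r2) / r2)
            return v1 * f, v2 * f

def normal_samples(mu, sigma, n, seed=0):
    """Scale standard deviates to mean mu and deviation sigma (eq. 2.72)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        z1, z2 = marsaglia_pair(rng)
        out += [mu + sigma * z1, mu + sigma * z2]
    return out[:n]
```

The sample mean and standard deviation of a large set of draws approach the requested μ and σ.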

Normally, the descriptor variables of the system parameter Y are correlated, implying that a scheme to generate such samples has to be derived. Firstly, it must be stated that the covariance of a linear transform T·k of a set of random variables k is given by:

Cov(T·k) = T · Ck · Tᵀ    (2.73)

Now, if a set of uncorrelated random variables is considered, the covariance matrix of such a group is the identity matrix. If it is desired to find a linear transform of this set which can yield correlated random variables with a known covariance matrix C (as the statistical descriptors of the parameters of the system behavior function Y are already set), the transform defined in equation 2.73 might be applied to the uncorrelated standard samples, in order to obtain the covariance matrix of the correlated parameters:

C = T · I · Tᵀ = T · Tᵀ    (2.74)


This is the Cholesky decomposition of the covariance matrix of the system's descriptors. Therefore, the linear transform needed to set the correlated samples is just the lower triangular matrix arising from the Cholesky decomposition. The procedure to generate correlated samples from uncorrelated standard normal random variables is then given by the following steps (Sanchez Silva, 2007):

1. Perform the Cholesky decomposition of the Covariance matrix associated with the
system descriptors X

2. Generate a sample of uncorrelated standard normally distributed random variables N (with size equivalent to the number of system descriptors)

3. Normally correlated random variables are given by:

X = B·N + μx    (2.75)

Where B is the lower triangular matrix arising from the Cholesky decomposition performed in step one, and μx is a vector containing the mean values of each system descriptor. The latter has to be added, as the sample N is made of standard normally distributed random variables, which by definition have zero mean.
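The three steps can be sketched as below; the mean vector and covariance matrix are hypothetical:

```python
import numpy as np

def correlated_normals(mu, C, n, seed=0):
    """Steps 1-3 above: X = mu + B.N, with B the lower Cholesky factor of C."""
    B = np.linalg.cholesky(C)                 # step 1
    rng = np.random.default_rng(seed)
    N = rng.standard_normal((len(mu), n))     # step 2: uncorrelated standards
    return np.asarray(mu)[:, None] + B @ N    # step 3, equation 2.75

mu = [100.0, 200.0]                           # hypothetical means
C = np.array([[25.0, 15.0],
              [15.0, 36.0]])                  # hypothetical covariance matrix
X = correlated_normals(mu, C, 50000)
sample_C = np.cov(X)                          # approaches the target C
```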

The scheme depicted has been developed for normally distributed random variables only. If other distributions are required, it is necessary to go beyond the presented methodology. One option to generate random variables according to any desired distribution is presented below (Sanchez Silva, 2004).

If a random variable xi is described by a given (non-normal) distribution Fx, it is possible to map it into standard normal space by the following transform:

ui = Φ⁻¹(Fx(xi))    (2.76)

However, due to the change of space, the covariance matrix of the u variables is different from the covariance matrix of the x parameters. Fortunately, a relationship between the correlation coefficients of both spaces has been established:

ρ(ui, uj) = F · ρ(xi, xj)    (2.77)

Where ρ(ui, uj) is the correlation coefficient of variables xi and xj in standard normal space, and ρ(xi, xj) is the correlation coefficient of the same descriptors, but considering their original probabilistic distributions. F is a parameter given by:

F = F(Vi, Vj, ρ; a, b, c, d, e, f, g, h, k, l)    (2.78)

Where Vi is the coefficient of variation (COV = σi/μi) of variable i, Vj is the COV of variable j, and ρ is the coefficient of correlation (in the original space) between variables i and j. Values of the coefficients a, b, c, d, e, f, g, h, k and l are given below. This way, it is possible to find the correlation matrix in standard normal space (noting that the values along the diagonal are one, as the variables have been mapped into standard normal space). The method is exact for the


lognormal distribution, with an error below 2% for the other distributions specified in Table 2.2. The required steps are summarized below (Sanchez Silva, 2004):

1. Find the covariance matrix (CU) in standard normal space, using equations 2.77 and 2.78 for each correlation coefficient.

2. Find the Cholesky decomposition of CU

3. Generate a set Y of independent standard normally distributed random variables

4. Find a set of correlated Normal Standard random variables based on matrix CU, using:

U = B·Y    (2.79)

Where B is the lower triangular matrix arising from step 2

5. Map each variable u out of standard normal space using the inverse transform of equation 2.76:

xi = Fx⁻¹(Φ(ui))    (2.80)

Table 2.2: Factors for the simulation of non-normal random variables, according to equation 2.78, for several distinct distributions. From Sanchez-Silva (2004)


Distribution list: NM normal, UN uniform, EX shifted exponential, RC shifted Rayleigh, EVT1 Gumbel, LN lognormal, GM gamma, VE-TII-max extreme value type II maxima, VE-TIII extreme value type III minima.

2.5.2 Statistical Characterization of System Response


After the simulations have been performed, the issue of system response characterization becomes fundamental. In general, N samples of the response are set, and based on those, statistical inference descriptors can be calculated; for general purposes, the most important ones, and the only ones considered here, are the sample mean and the sample standard deviation:

x̄ = (1/N) · Σ xi    (2.81)

s = √( Σ (xi − x̄)² / (N − 1) )    (2.82)

N is the total number of simulations, and xi denotes the individual realizations; note that both the mean and the standard deviation are sampling parameters. If a limit state function g(x), defining the survival of the system for a given event, is available, it is also possible to estimate failure probabilities. This estimate is performed as:

Pf ≈ n(g(x) ≤ 0) / N    (2.83)

Although this definition is precise, it is not formal. Therefore it is possible to define a Boolean indicator function as:

I(x) = 0  if  g(x) > 0
I(x) = 1  if  g(x) ≤ 0

Where g(x) is the multivariate limit state function. In such a case, the failure probability is:

Pf = P(g(x) ≤ 0) = ∫ … ∫ I(x) · fx(x) · dx    (2.84)

Where the integral is n-fold over all the n parameters defining the system state, and fx represents the joint distribution of the parameters. Notice that the failure probability Pf is the expected value of the indicator function I.

Through a statistical analysis it can be shown that an unbiased estimator of the failure probability is given by:

Pf = (1/N) · Σ I(xi)    (2.85)

This result is remarkable. Firstly, the failure probability is an estimator, and therefore a random variable itself. Statistical descriptors of Pf can be defined, including, of course, its variance,


which can be estimated using equation 2.82 applied to the indicator function I. Also, its distributional form is highly related to the joint PDF of the descriptors.

2.5.3 Efficiency and Precision of Montecarlo Simulations


The critical issue with the use of Montecarlo simulation is the assessment of its precision and efficiency, both related to the number of realizations required to estimate a parameter of the system response (mean, standard deviation, …) or a given probability of failure.

As stated before, the probability of failure estimated using the Montecarlo method is a random variable itself. If each realization of the experiment (assessment of the system response to a single sample) is considered as a Bernoulli trial, where failure is indicated by an occurrence of 1 of the indicator function I (equation 2.84), the probability of obtaining the estimate is given by:

P(Pf = k/N) = [N! / (k! · (N − k)!)] · Pv^k · (1 − Pv)^(N−k)    (2.86)

Where Pf is the estimated probability of failure, N is the number of simulations, Pv is the true probability of failure, and k is the number of failed trials needed to achieve the estimate Pf. The functional form of 2.86 is a binomial distribution; therefore the expected value and variance of Pf are given by:

E[Pf] = Pv        Var[Pf] = Pv · (1 − Pv) / N

Recalling that N is a constant value dividing the random variable k (the number of failure events), the coefficient of variation of Pf can be calculated as:

COV(Pf) = √(Pv · (1 − Pv) / N) / Pv = √((1 − Pv) / (N · Pv))    (2.87)

Then the number of simulations N required to achieve a desired COV can be found by solving equation 2.87:

N = (1 − Pv) / (COV² · Pv)    (2.88)


[Figure omitted: number of simulations versus failure probability, for target COV values of 2%, 5%, 10% and 50%]

Figure 2.20: Number of Simulations required for achieving a target COV. From equation 2.88

For example, for a yearly probability of failure of 1/475 (10% probability of exceedance in 50 years, the target in most seismic codes), almost 50,000 simulations are required to achieve a COV of 10%:

N = (1 − 1/475) / (0.1² · 1/475) ≈ 47,400
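Equation 2.88 can be evaluated with a short helper (illustrative only):

```python
import math

def required_simulations(pf, cov_target):
    """Equation 2.88: N = (1 - Pv) / (COV^2 * Pv), rounded up to an
    integer number of simulations."""
    return math.ceil((1.0 - pf) / (cov_target ** 2 * pf))

# yearly failure probability of 1/475, target COV of 10%
n = required_simulations(1.0 / 475.0, 0.10)
```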

A more straightforward approach, which can be employed to assess the estimator of a parametric descriptor like the mean or the standard deviation, is to simply plot how these values converge as the number of simulations increases.

When a stable behavior is observed, it is possible to state that a reasonable number of cases has been performed. At the end, all simulations done during the precision/efficiency analysis can be joined together to get a wider and more representative sample (as each realization is independent).

2.5.4 Latin Hypercube Sampling


Although Montecarlo simulation is simple, straightforward and has a clear physical significance, it might require a sizable number of runs to achieve a desired precision, becoming inefficient, as shown in the last section; this is why it is normally called raw Montecarlo simulation. It is possible to reduce the number of required realizations (and therefore analysis runs) if sampling among the system descriptors is done in a cleverer way than just selecting them randomly.

Many variance reduction techniques aimed at this purpose have been envisioned. Most of them are intended for the assessment of reliability, and are bound to the definition of a failure criterion function. Others allow for more flexibility, being suited to central moment estimation of the system behavioral function Y. Among them, Latin Hypercube Sampling is one of the simplest, and is the one adopted in this research.


Latin Hypercube Sampling follows the overall idea of "divide and conquer" (divide et impera). Instead of dealing with individual random samples, groups are set, and representative values of those are used as the input parameters for system evaluation. The range of allowable values of each descriptor of the system is divided into bins, and afterwards a representative value of each one is selected.

Afterwards, the characteristic values of each bin of each random input variable are combined in such a way that each bin is considered only once in the whole simulation process. Finally, the results of each simulation are analyzed statistically, and the system behavioral function Y is characterized stochastically.

In a more straightforward way, the procedure goes through the following steps (Sanchez
Silva, 2004):

• Define the system behavioral function Y and the input random variables X required to define it: Y = f(X1, X2, X3, …, Xk) = f(X)

• Divide the range of each random variable Xi into n bins. If possible, make the probability of occurrence of each bin j (P(a ≤ Xi < b), where a and b are the extremes of bin j) equal to 1/n

• For each bin of each random variable Xi, define a characteristic value. This value can be selected randomly among the values inside the bin, or according to any desired criterion.

• Based on the previous step, it is possible to set n^k combinations of the characteristic values of all the bins. From these, n combinations have to be selected, such that each bin is represented just once.

• Evaluate the system behavioral function Y at the combinations found in the last step. Afterwards evaluate any desired stochastic descriptor, based on the simulations of Y.
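The steps above can be sketched for the unit hypercube as follows (an illustrative version; the resulting uniform samples can then be mapped to each variable's distribution through the inverse transform of equation 2.69):

```python
import random

def latin_hypercube(n, k, seed=0):
    """Equal-probability LHS on the unit hypercube: the range of each of the
    k variables is split into n bins of probability 1/n, one point is drawn
    per bin, and the bin orderings are shuffled independently per variable
    so that every bin is used exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        # one uniform point inside each of the n bins [j/n, (j+1)/n)
        col = [(j + rng.random()) / n for j in range(n)]
        rng.shuffle(col)
        cols.append(col)
    # combine the k shuffled columns into n sample points
    return [tuple(c[i] for c in cols) for i in range(n)]
```

For n = 10 bins, every decile of each variable appears in exactly one of the ten sample points, which is the defining property of the scheme.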

The main disadvantage of Latin Hypercube Sampling is the artificial bounds put on the analysis domain: as the possible outcomes of each random variable are limited to the characteristic values, the consequences of a possible outlier are masked by the clustering of the allowed values. Therefore, Latin Hypercube Sampling is better suited to overall descriptors of systemic properties, such as the mean and standard deviation. Due to its simplicity and its moment-oriented calculation, it was selected as the sampling scheme for the methodology depicted in this work.

2.6 First Order, Second Moment Reliability Method (FOSM)


The First Order, Second Moment reliability method is based on a linear approximation of the system behavior function y around the mean values of its descriptors. If the n-dimensional Taylor series of Y around the mean vector μx is truncated before any second order term, the following expression is found:

Y ≈ y(μx1, …, μxn) + Σ (∂y/∂xi)|μ · (xi − μxi)    (2.89)


If the expectation is taken on equation 2.89, the following is found:

E[Y] ≈ y(μx1, …, μxn)    (2.90)

as the partial derivatives are not random, and the expected value of any Xi is equal to its mean. Using gradient notation, equation 2.89 can be recast as:

Y ≈ y(μx) + ∇yᵀ|μ · (X − μx)    (2.91)

Therefore it is clear that the first order Taylor series approximation is a linear transform of the X descriptor random variables, with a constant term equal to the system behavioral function evaluated at the means, y(μx), and constant coefficients equal to the gradient components, both evaluated at the mean values of X. Therefore, according to equation 2.73, the variance of Y (a single random variable) is given by:

σY² = ∇y|μᵀ · Cx · ∇y|μ    (2.92)

Where Cx is the covariance matrix of the random variables X, and ∇y|μ is the gradient evaluated at the mean values of each random variable xi.

If an analytical formulation of the system response behavioral function is available, equation 2.92 can be solved directly. Otherwise, numerical derivatives around the mean values of the parameters X have to be calculated. Either a forward or a backward approach might be reasonable:

Δyi = y(μx1, …, μxi + h, …, μxn) − y(μx1, …, μxi, …, μxn)

Where h might be positive (forward derivative) or negative (backward derivative); finally, the partial derivative is estimated as:

∂y/∂xi ≈ Δyi / h    (2.93)

Forward and backward derivatives have the advantage that the system behavioral function has to be evaluated just once per partial derivative calculation (as the estimate at the mean values is required only once, and is valid for all partial derivative calculations); but for the estimate to be valid it relies on the smoothness of y, such that variations of the estimates for positive or negative increments are similar, and choosing just one is enough.

A better choice might be to perform a centered difference. In such a case the increment is given by:

Δyi = y(μx1, …, μxi + h, …, μxn) − y(μx1, …, μxi − h, …, μxn)

∂y/∂xi ≈ Δyi / (2h)    (2.94)
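Equations 2.92 and 2.94 combine into the following sketch (illustrative; for a linear behavioral function the FOSM variance is exact, which is used here as a check):

```python
import numpy as np

def fosm(y, mu, C, h=1e-5):
    """FOSM mean and variance of y(x): central-difference gradient at the
    mean (equation 2.94) propagated through equation 2.92."""
    mu = np.asarray(mu, dtype=float)
    grad = np.zeros(len(mu))
    for i in range(len(mu)):
        step = np.zeros(len(mu))
        step[i] = h
        grad[i] = (y(mu + step) - y(mu - step)) / (2.0 * h)
    mean = y(mu)             # first-order mean estimate, equation 2.90
    var = grad @ C @ grad    # grad' . Cx . grad, equation 2.92
    return mean, var

# Hypothetical linear function y = 2*x1 + 3*x2, for which FOSM is exact
C = np.array([[4.0, 1.0],
              [1.0, 9.0]])
mean, var = fosm(lambda x: 2 * x[0] + 3 * x[1], [1.0, 2.0], C)
```

For this linear case the exact variance is gᵀ·C·g = 109 with g = (2, 3), which the numerical gradient reproduces.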


In this case y is evaluated twice, at both the backward and forward increments around the mean value of xi, requiring almost twice the time of the estimate defined by equation 2.93. This process has to be performed for each partial derivative, so if the number of descriptors is large, it would be a time and resource consuming procedure.

FOSM is easy to implement, and allows the uncertainty in the input parameters to be propagated to the system response directly. Information is limited to the first two central moments, mean and variance. This can be an advantage, in the sense that information about the nature of the distribution of each random variable is not required, making the estimates distribution-free; but, strictly speaking, this feature is a lack of completeness, in the sense that no further information can be obtained beyond the mean and variance.

It is possible to calculate the coefficient of variation of the system response as:

COVY = σY / y(μx)    (2.95)

If y is instead a limit state function, the inverse of its COV is the First Order, Second Moment reliability index (Conte, 2006):

β = 1 / COVY = y(μx) / σY    (2.96)

And the probability of failure might be roughly estimated as (Conte, 2006):

Pf ≈ Φ(−β)    (2.97)

A drawback of FOSM as a reliability index is its lack of invariance: different values of β are found for different forms of the limit state function. This means, for example, that the β value for a formulation like R − S = 0 (R resistance, S demand) will be different if the same relationship is expressed as R/S = 1, despite the fact that both depict the same physical situation.

This phenomenon is due to the linearization of the limit state equation at the mean values of the X descriptors. On the contrary, other approaches like FORM approximate the function at a location along the limit state surface, gaining the invariance property which FOSM lacks. This makes FOSM a secondary reliability index, especially if compared to FORM (First Order Reliability Method).


3 DESCRIPTION OF THE PROPOSED METHODOLOGY
The aim of this research is to develop a procedure to assess 2D site response based on field test data, and specifically point shear wave velocity measurements. In order to achieve this objective, simulation techniques and a predictive scheme based on Geostatistics were employed to characterize both the site and its acceleration at the surface due to a given outcropping strong motion. In broad terms, the methodology has been structured in the following way:

[Figure omitted: flow diagram]

Figure 3.1: Flow Diagram of the Proposed Methodology

The following limitations are currently imposed on the methodology:

• Only shear wave velocity is considered as a random field; cokriging is not implemented.

• The site is characterized as a single, spatially stationary field, meaning that shear wave velocities at all locations share the same mean value. Due to this hypothesis, trends along samples and distinct, sizable, well-defined layers should be avoided, or the results might not be representative.

• Site response analysis is two-dimensional, although kriging is performed on 3D data. This means that mean shear wave velocities are estimated along a section cut from data spread through space.

• Inelastic analysis cannot be performed; linear or equivalent linear analyses can be done.

For this specific research work, a layer lying over a half-space, modeled as a rectangular region subject to variability, is considered, as detailed in depth in chapter 4. Nevertheless, the application developed is able to perform equivalent linear analyses, and is flexible enough to consider any finite element mesh defined to QUAD4M standards (Hudson, Idriss and Beikae, 1994), as will be explained later. Such additional features will be discussed alongside the description of each element of the methodology, in this chapter.

Before explaining the proposed methodology in depth, it is advisable to review the fundamentals of one-dimensional stochastic site assessment, as the fundamental concepts on which this work is founded can be explained easily through this simplified method. It will be shown how two-dimensional analyses are an extension of 1D procedures.

3.1 1D Seismic Stochastic Site Assessment


As expressed in the introduction, a first approach to seismic site amplification can be drafted if a one-dimensional shear wave analysis model is employed to characterize the geotechnical structure below a specific location. Layers parallel to the surface, and motion perpendicular to it, are considered. Variability of soil properties is constrained to the vertical direction, as a layered structure is imposed.

A fundamental requirement for performing a stochastic analysis is to define a reliable formulation to analyze the phenomenon under assessment. Fortunately, there is a systematic way, among others, to estimate 1D shear wave behavior for a system of layers: Haskell-Thompson matrices.

Chapter 3 Description of the Proposed Methodology

3.1.1 Haskell Thompson Matrices.


Closed-form solutions for wave propagation along a shear beam with constant unit weight
and stiffness are available. As can be shown extensively, the partial differential equation
governing the motion of a shear beam is given by (Verruijt, 2008):

c² ∂²u/∂x² = ∂²u/∂t²    (3.1)

c = √(μ/ρ)

where u is the displacement, μ is the shear modulus, ρ is the material density, and c is the
shear wave velocity. The solution of partial differential equation 3.1 can be obtained using
separation of variables, for non-time-varying boundary conditions, as is usually assumed for
linear and equivalent-linear analyses (Graff, 1991). This way the displacement can be expressed
as the product of two single-parameter functions of time (t) and position (x):

u(x, t) = F(x)⋅h(t)    (3.2)

If a harmonic input motion is defined, h(t) has to follow an exponential form. This way a
general solution is given by (Kramer, 1996):

u(x, t) = (A1 e^(ikx) + A2 e^(−ikx)) e^(iωt)    (3.3)

where ω is the circular frequency of the input motion, k = ω/c is the wave number, and A1 and
A2 are two coefficients to be found according to the boundary conditions. Now consider a
layer located within the soil mass:

[Figure: a soil layer of thickness h between the surface and the bedrock; the x coordinate
runs downward from the layer top]

Figure 3.2: Haskell Thompson model


54
Chapter 3 Description of the Proposed Methodology

As shown, two points are taken along the layer: its uppermost limit (A) and its lower bound
(B). Between them the soil shear wave velocity is taken as constant. The x coordinate is
positive downward, having a value of zero at A and h at B.

According to equation 3.3, stresses can be found to be equal to:

τ(x, t) = μ ∂u/∂x = ikμ (A1 e^(ikx) − A2 e^(−ikx)) e^(iωt)    (3.4)

As displacement and stress compatibility has to be enforced at all times, and must therefore
be granted by the spatially dependent terms, the time expressions can be dropped. This way,
the spatial function F(x) at point A (zero x coordinate) gives, for displacement and stress:

ua = A1 + A2 ,   τa = ikμ (A1 − A2)    (3.5)

In matrix notation:

⎛ ua ⎞   ⎛  1      1   ⎞ ⎛ A1 ⎞
⎜    ⎟ = ⎜             ⎟⋅⎜    ⎟      (3.6)
⎝ τa ⎠   ⎝ ikμ   −ikμ ⎠ ⎝ A2 ⎠

Following the same procedure, for point B (x = h) the following relationship is found:

⎛ ub ⎞   ⎛  e^(ikh)         e^(−ikh)   ⎞ ⎛ A1 ⎞
⎜    ⎟ = ⎜                             ⎟⋅⎜    ⎟      (3.7)
⎝ τb ⎠   ⎝ ikμ e^(ikh)  −ikμ e^(−ikh) ⎠ ⎝ A2 ⎠

As the coefficients A1 and A2 are the same in both expressions, it is possible to find a
relationship between the displacement and shear stress at the top of the stratum (point A)
and at its bottom (point B):

(ub, τb)ᵀ = H⋅(ua, τa)ᵀ ,   H = MB⋅MA⁻¹    (3.8)

where MA and MB denote the coefficient matrices of equations 3.6 and 3.7, respectively.

Finally, if all layers are considered (noticing that the displacement of layer n can be
expressed in terms of layer n−1, and so on), a transfer function between the rock base (UR)
and the surface (US) can be established:

UR = (∏ᵢ Hᵢ)⋅US = Ht⋅US    (3.9)


where Hi is the Haskell matrix of stratum i, US is the vector of displacement and stress at the
surface, UR is the vector of displacement and stress at the bedrock, n is the number of
layers, and Ht is the Haskell matrix of the whole layer structure. For normal free-surface
conditions, the stress at the surface is zero and the input motion at the bedrock is known.
This way the system can be solved for the displacement at the site.

⎛ uR ⎞   ⎛ Ht11  Ht12 ⎞ ⎛ uS ⎞   ⎛ Ht11  Ht12 ⎞ ⎛ uS ⎞
⎜    ⎟ = ⎜            ⎟⋅⎜    ⎟ = ⎜            ⎟⋅⎜    ⎟
⎝ τR ⎠   ⎝ Ht21  Ht22 ⎠ ⎝ τS ⎠   ⎝ Ht21  Ht22 ⎠ ⎝ 0  ⎠

This way, two equations for the rock boundary are set:

uR = Ht11⋅uS ,   τR = Ht21⋅uS    (3.10)

Motion in the rock is composed of the incident and reflected waves. Setting the origin at the
rock interface, and keeping the convention set for the length coordinate, we have:

uR = B1 e^(ikR x) + B2 e^(−ikR x) ;  at x = 0:  uR = B1 + B2 ,  τR = ikR μR (B1 − B2)    (3.11)

The R subscript denotes that the shear modulus and wave number are obtained from the rock
properties. As the positive direction of the x coordinate is downward, the known seismic
input is B2 (related to the negative x direction) while the motion reflected into the bedrock
half space is defined by B1. Solving for the input quantity yields the following result:

B2 = ½ (uR − τR/(ikR μR))    (3.12)

Recalling expressions 3.10, a relationship between the motion at the surface (uS) and the
seismic input (B2) can be established:

B2 = ½ (Ht11 − Ht21/(ikR μR))⋅uS    (3.13)

The input motion is half the outcropping motion (a shear wave propagating in a half space);
therefore, a transfer function h, defined as the complex modulus of the ratio between surface
and outcropping motion, can be found through equation 3.13:

h(ω) = | uS/(2 B2) | = | 1 / (Ht11 − Ht21/(ikR μR)) |    (3.14)

This formulation is valid for harmonic motion; however, it is possible to represent the
seismic input I(ω) in the frequency domain using the Fourier transform, and afterwards
multiply each component by the function h.


The function h depends on frequency, as the wave number is the ratio of ω to the shear wave
velocity. This way, the Fourier representation of the surface response uS(ω) can be obtained,
and afterwards transformed back to the time domain:

uS(t) = F⁻¹{ h(ω)⋅I(ω) }    (3.16)
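The chain of equations 3.6 to 3.14 can be sketched numerically. The following Python fragment is an illustrative translation only (the function names and the NumPy implementation are assumptions, not part of the original Matlab tools): it builds the Haskell matrix of each layer and evaluates the transfer function of equation 3.14.

```python
import numpy as np

def haskell_matrix(h, vs, rho, w):
    """2x2 layer matrix relating (u, tau) at the bottom of a layer of
    thickness h, shear wave velocity vs and density rho to (u, tau)
    at its top, at circular frequency w (equations 3.6-3.8)."""
    mu = rho * vs ** 2                     # shear modulus
    k = w / vs                             # wave number
    MA = np.array([[1, 1],
                   [1j * k * mu, -1j * k * mu]])
    MB = np.array([[np.exp(1j * k * h), np.exp(-1j * k * h)],
                   [1j * k * mu * np.exp(1j * k * h),
                    -1j * k * mu * np.exp(-1j * k * h)]])
    return MB @ np.linalg.inv(MA)

def transfer_function(layers, rock, w):
    """|surface / outcrop| amplification for a stack of layers over a
    half space; layers is a list of (h, vs, rho) ordered from the
    surface down, rock is (vs, rho)."""
    Ht = np.eye(2, dtype=complex)
    for h, vs, rho in layers:              # product of equation 3.9
        Ht = haskell_matrix(h, vs, rho, w) @ Ht
    vs_r, rho_r = rock
    ik_mu_r = 1j * (w / vs_r) * rho_r * vs_r ** 2
    # equations 3.10-3.14: B2 = (Ht11 - Ht21/(i kR muR)) uS / 2
    return abs(1.0 / (Ht[0, 0] - Ht[1, 0] / ik_mu_r))
```

For a single layer this reduces to the classic amplification 1/√(cos²(kh) + α² sin²(kh)), with α the soil-to-rock impedance ratio, so at the quarter-wavelength resonance the amplification is 1/α.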

Damping can be considered using a complex shear modulus representation, leading to a complex
shear wave velocity, as shown by Kramer (1996). Also, equivalent-elastic analysis can be
performed by defining a transfer function between the motion at the top of a layer and its
midpoint. For further details on these topics, the reader is referred to Lai and Sanchez
(2008), where stochastic analysis of the 1D shear beam model is explained extensively.

The Haskell Thompson matrix formulation allows for a systematic and efficient assessment of
site effects for a given input motion represented in the frequency domain. This method has
been implemented in a wide range of computer applications developed to estimate site seismic
amplification, like EERA (Bardet, Ichii & Lin) and PROSHAKE (EDUPRO, 1998). Also,
formulations in the time domain are available, the best known being SHAKE (Idriss & Sun,
1992).

3.1.2 Stochastic 1D Seismic Site Assessment.


Once a proper rheological model has been set, it is possible to perform a complete stochastic
analysis either by direct computation of the distribution of the surface response, by
simulation, or by approximate methods.

From the Haskell Thompson formulation it is clear that the following parameters are
required to deterministically estimate the surface response:

• Definition of a proper set of layers with constant mechanical properties within them

• Frequency content and magnitude of input motion at bedrock.

• Shear wave velocity and damping coefficient of each layer

• Shear Modulus of each layer.

• Width of each layer

• Bedrock Properties: Shear wave velocity, Shear Modulus, and damping.

All of the above parameters might be subject to variability and can therefore be set as random
variables in the stochastic analysis; however, variability within the soil is normally several
times higher than the uncertainty in bedrock properties, so it is customary to take the latter
as constant values.


It is possible to find the exact probability distribution of the surface response, using the
definition of compound distributions (Conte, 2006):

fs(us) = ∫ … ∫ [ fX(x1, x2, …, xn) / |∂us/∂x1| ] dx2 … dxn    (3.17)

where fs is the distribution of the surface response, ∂us/∂x1 is the partial derivative of the
surface response with respect to just one of the desired random parameters, taken as the main
descriptor (for example, the shear wave velocity of the uppermost stratum), fX is the joint
distribution of all random variables considered (all shear wave velocities of each stratum,
each stratum height, etc., taken together), and the integral is performed over the random
variables not taken as the main descriptor (in this case, all random variables except the
shear wave velocity of the uppermost stratum).

It has to be recalled that the main descriptor can be any of the random variables involved in
the analysis (one which is not taken as constant, of course); therefore, it is advisable to
select a parameter for which the partial derivative can be easily computed. Once selected, the
integral has to be performed over each one of the other random variables.

Equation 3.17 has a serious drawback: the joint distribution of all random variables has to be
known; also, the calculation of the partial derivatives, even after carefully selecting the
main descriptor, might be challenging. Therefore, for engineering purposes, where instead of a
rigorous definition of the distribution of the surface response it is enough to get a few
moments, approximate methods might be handy.

The Haskell Thompson matrix Ht can be differentiated elementwise, to define a gradient at the
mean values. Then, recalling section 2.6 of this work, a FOSM (First Order Second Moment)
approximation can be set up. Of course, the partial derivatives can also be computed
numerically, as already explained in chapter 2. Moreover, if the gradient can be found
analytically, it might even be possible to apply FORM (First Order Reliability Method). It
must be recalled that both FOSM and FORM are approximate methods.
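As a sketch of the FOSM idea (an illustrative helper under assumed names, not part of the application): the response function is linearized at the mean values through a numerical gradient, and the first two moments are propagated.

```python
import numpy as np

def fosm(g, mean, cov, h=1e-4):
    """First Order Second Moment estimate of the mean and variance of
    a response g(x), given the mean vector and covariance matrix of
    the random input x; the gradient at the mean is obtained by
    central finite differences."""
    mean = np.asarray(mean, dtype=float)
    grad = np.empty(mean.size)
    for i in range(mean.size):
        dx = np.zeros(mean.size)
        dx[i] = h * max(abs(mean[i]), 1.0)   # relative step size
        grad[i] = (g(mean + dx) - g(mean - dx)) / (2.0 * dx[i])
    # mean of g(x) ~ g(mean);  var ~ grad' * Cov * grad
    return g(mean), float(grad @ np.asarray(cov) @ grad)
```

Here g could be the surface response computed through the Haskell Thompson matrices, with x collecting the layer shear wave velocities and thicknesses.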

The most straightforward approach to stochastic analysis is through simulation. The Haskell
Thompson formulation provides an efficient and simple approach which allows for systematic
evaluation of the surface response for specific realizations of a set of random variables. Of
course, all variance reduction techniques can be employed, among them Latin Hypercube
Sampling.

Anyhow, averages of hundreds or thousands of realizations have to be performed, each
realization involving a specific layer pattern (unique layer thickness and layout) with
sampled shear wave velocities. Different modulus and damping degradation curves might be
accounted for, and it is customary to consider several strong motions.

For more details on 1D stochastic seismic site assessment, and especially site-specific
applications, the reader is referred to Lai et al (2008), Sanchez (2008) and Prieto (2006), who

have performed this kind of analysis for several places around the globe (for the researchers
mentioned: Italy, India and Colombia), showing how widespread and diverse the concern for
randomness in seismic site assessment is.

Two-dimensional stochastic seismic site analysis is more challenging than one-dimensional.
The number of random variables increases as several locations in space are considered, and
variability extends beyond the vertical direction, requiring the fundamentals of random field
theory to be considered. The response is not uniquely defined at a single location on the
surface, so averaging and assessment at several locations on it is required.

Despite the increased complexity, the higher flexibility and coherence of the analysis might
be worth the effort. By not constraining the geometry to a layered pattern, basin effects and
amplification patterns due to the topography or morphology of the subsoil can be addressed.
The analysis can also be extended to slopes, proving useful for stability analyses, for
example. Also, as noticed earlier, variability in space is better represented, as the
horizontal direction can be taken into account.

As one-dimensional analysis requires the definition of layers parallel to the surface, site
data cannot be used in full. Samples at several boreholes have to be merged into a single base
soil profile, meaning that results cannot be used directly. Instead, in two-dimensional
analyses, interpolation can be employed to account for all available data, as shown in the
present methodology, if shear wave velocity is measured. For all those reasons, two-
dimensional models were considered in the present study.

3.1.3 Input Parameters


The field data required to go through the proposed methodology are point measurements of
shear wave velocity at specific locations. Although other mechanical properties have a role in
two-dimensional soil response, their variability was neglected for several distinct reasons,
which will be explained in depth:

3.1.4 Elastic Analyses:


• Density: field measurements of unit weight show coefficients of variation (COV) of
less than 10%, even reaching values as low as 2 to 3%, while standard penetration test
(N) and vane test (Su) COV values can go as high as 50% and 40% respectively. Among
the several field measurements reported by Phoon (2008), unit weight shows the least
variation. Therefore it is reasonable to consider it a constant value over the soil
profile.

• Small strain damping (material damping): small strain damping was considered to be
null or almost zero, but it is customary to add a minimum artificial value to avoid
numerical instability, as stated in section 2.1. Work by Darendeli and Stokoe
reviewed the issue and set regression equations to estimate this parameter in terms of
material type and state (represented by confining pressure). Data were obtained from
damping measurements in two tests, Resonant Column (RC) and Torsional Shear (TS),
performed at high and low frequencies, respectively. Their results are plotted in the
following figure (Stokoe et al, 1999):

Figure 3.3: Small strain damping ratio for several material samples, at different
confining pressures, subjected to RC and TS testing (Stokoe et al, 1999)

In the figure, test results are shown for different sample confining pressures, material
types (circle: silty sand (SM); triangle: fat clay (CH); square: sandy lean clay (CL)) and
testing schemes (filled shapes: Resonant Column testing, high frequency, f > 10 Hz; empty
shapes: Torsional Shear test, performed at 1 Hz). In general, the measured small strain
damping is independent of confining pressure, which is the main state parameter that would
explain variability with position.

Material type shows some spread, but the main source of uncertainty (showing the largest
variability) is the testing scheme. Damping values computed using Resonant Column testing
can be twice as high as, or even higher than, Torsional Shear test results.


Stokoe and Darendeli explained such discrepancy as a result of the frequency dependence of
small strain damping, as RC testing is performed at frequency ranges above 10 Hz, and TS at
one Hz. Even so, they state that the hypothesis of frequency independence of low strain
damping is not valid for period values less than 0.1 s (frequencies beyond 10 Hz).

Therefore, as has been seen, variability due to state and material properties is low to mild
compared to that due to testing scheme and input frequency, so it has been judged reasonable,
for this methodology, to assume a constant value of material damping through space, as done
with density.

• Poisson ratio: for one-dimensional propagation, the Poisson ratio is irrelevant, as just
shearing deformation is considered. But for 2D analysis, axial strains play a role;
therefore a proper definition of the Poisson ratio is required in the constitutive model,
as shown in section 2.1. However, analysis of strong motion records shows that most of
the earthquake-generated energy is propagated through S waves. If Stokes' solution for
wave propagation due to a disturbance in a medium is recalled, this remark is verified,
showing that earthquake-related damage is mostly due to the propagation of shear waves.

Specifically, the Poisson ratio is a major concern only if compressive P wave propagation is
fundamental, as shown below. Instead, the propagation velocities of shear waves and Rayleigh
surface waves are just slightly dependent on the Poisson ratio.

Figure 3.4: Effect of Poisson ratio on wave propagation velocities for P, S and
Rayleigh waves (Ishihara, 1996)

There is one concern in the surface response: Rayleigh waves, which are generated due to the
boundary condition imposed by the free surface, as they arise from compatibility between
axial and shearing strains on the free boundary.


Fortunately, their behavior is only mildly sensitive to the Poisson ratio, as shown in the
plot below, therefore allowing for a global definition of this parameter and, consequently,
the setting of an overall value:

Figure 3.5: Amplitude of Rayleigh wave with depth for a half space after Ishihara (1996)

The effect of the Poisson ratio on 2D site assessment was addressed specifically by Nour et
al (2004), where seismic wave propagation analyses were performed on heterogeneous soil
profiles (in both horizontal and vertical directions), showing that the most important
variables modifying results and contributing to seismic wave amplification are the shear wave
velocity and the fraction of critical damping, while the Poisson ratio plays a secondary
role. Then, in light of the theoretical fundamentals exposed in this section, and the
research done by Nour et al (2004) involving finite element assessments, the Poisson ratio is
taken as a constant value over the soil profile.

3.1.5 Equivalent Linear Elastic Analyses:


Despite the fact that the analyses performed in this research are elastic, the application
developed allows equivalent-elastic analyses to be carried out; but only a single set of
damping and stiffness reduction curves is allowed. Thus, the variability of both the
degradation of stiffness and the increase of damping with strain cannot be considered. This
limiting constraint was adopted because performing such a refined stochastic analysis based
on a highly approximate method like the equivalent-elastic procedure is not consistent. For
such a level of detail, it is recommended to elaborate an inelastic analysis, setting some
(or all) parameters of a given constitutive law as random.

Also, in order to characterize the variability of modulus/damping degradation curves, a
large set of complex laboratory tests (like Torsional Shear or Resonant Column) should be
performed at several places within the same site to collect a sizable sample.


Otherwise, defining a variogram for those parameters would be extremely approximate. The
cost related to such a scheme would leave it outside ordinary practice; and for such specific
research it would be advisable to perform a fully inelastic analysis.

3.2 Site Random Field Modeling


This section deals with the way spatial continuity is estimated from field data and
afterwards adjusted into an analytical model through field and analytic variograms. Three
Matlab© functions have been developed to accomplish this task: omnivar (omnidirectional
variogram), depthvar (variogram along depth), and surfvar (variogram along the surface).

The user input values are the lag distance and the sample data file, structured according to
the input file format defined in the annex. The Y distance is along depth while Z and X are
horizontal. The functions omnivar, depthvar and surfvar are not run alongside the main
program; at the moment they have to be loaded directly by the user.

Through a nested loop, the distances among all samples are calculated. Immediately afterwards,
these are categorized by taking the integer part of the ratio between the computed distance
and the user-specified lag length; the result is taken as the index k of a variable-length
vector, which contains the distance to the midpoint of the k-th bin, calculated as 0.5 times
the lag plus the product of the bin number k and the lag length.

The variogram function is computed according to formula 2.40 and stored in a vector of the
same length as (and at the same index of) the one containing the k values. As the nested loop
is performed over all samples (i.e., the (i, j) pair is calculated as well as the (j, i)
pair), the function is divided by an additional factor of 2 (rendering a coefficient of 4
instead of the 2 set in equation 2.39). This way, repeated quantities are effectively dealt
with in the computation of the variogram. At the end of the loop, zero values are removed,
and a scatter plot is made of both vectors.
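A minimal Python sketch of this binning loop (illustrative only; omnivar and its companions are Matlab functions, and the names below are hypothetical):

```python
import numpy as np

def empirical_variogram(coords, vals, lag, weights=(1.0, 1.0, 1.0)):
    """Binned semivariogram of vals sampled at 3D coords. `weights`
    scale the separation along each Cartesian axis, so (1, 1, 1) is
    omnidirectional, while zeroing the horizontal components leaves
    only the vertical (depth) separation, and vice versa."""
    coords = np.asarray(coords, float) * np.asarray(weights, float)
    sums, counts = {}, {}
    for i in range(len(vals)):           # ordered pairs: (i,j) and (j,i)
        for j in range(len(vals)):
            if i == j:
                continue
            k = int(np.linalg.norm(coords[i] - coords[j]) // lag)
            sums[k] = sums.get(k, 0.0) + (vals[i] - vals[j]) ** 2
            counts[k] = counts.get(k, 0) + 1
    # every unordered pair enters the sums twice, so the usual 1/2
    # coefficient becomes 1/(2 * ordered-pair count), i.e. the extra
    # factor of 2 mentioned in the text
    return {(k + 0.5) * lag: sums[k] / (2 * counts[k]) for k in sums}
```

The returned dictionary maps each bin midpoint to its semivariance, ready to be plotted against candidate analytic models.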

Anisotropy is considered by applying weights to the distances along the different Cartesian
axes. For an omnidirectional variogram all weights are equal; for a vertical variogram, the
weights of the X and Z axes are zero; for a surficial omnidirectional variogram, the weight
factor for Y distances is null.

Analytic variograms have to be fitted by the user and supplied in the next step, kriging.
Automated generation of variograms was discarded, because positive definite functionals have
to be fitted and, most importantly, because of the key role of judgement in the definition of
analytical variograms, which is fundamental in the following procedures.

The adoption of a variogram model will define the validity of the overall assessment.
Therefore, it must be performed carefully, under the guidance of an expert with sufficient
background on the topic. Generally, data are sparse and subjective assessments have to be
performed, so good judgement is advisable. For all the reasons exposed, automated fitting to
a given trend was not considered.


Even well-established geostatistical software suites such as GSLIB rely on fitting "by eye"
to given variogram shapes (Deutsch & Journel, 1993). A final remark has to be made: as stated
before in the fundamentals sections, the functional shapes of the vertical and horizontal
variograms have to be the same for this specific application.

Fitting must be performed taking this constraint into account. The most straightforward
procedure to find the functional parameters (sill and nugget, shared by both variograms, and
the ranges along the vertical and horizontal directions) is to plot the computed field
variograms along with families of analytic functionals. Afterwards, the parameters are
adjusted until a desired match is achieved.
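For instance, one common family of analytic functionals is the exponential model. The sketch below (a hedged illustration, not the application's code) also shows how an anisotropic separation can be reduced to a unit-range lag, so that a single functional shape serves both the vertical and horizontal variograms:

```python
import numpy as np

def exponential_variogram(h, sill, rng, nugget=0.0):
    """Exponential model: nugget + (sill - nugget)(1 - exp(-3h/rng)).
    The factor 3 makes rng the practical range at which about 95% of
    the sill is reached, a common geostatistical convention."""
    h = np.asarray(h, dtype=float)
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def anisotropic_distance(dx, dy, ax, ay):
    """Equivalent isotropic lag for a horizontal/vertical separation
    (dx, dy) when the two directions have ranges ax and ay; with it,
    one unit-range functional covers both directions."""
    return float(np.hypot(dx / ax, dy / ay))
```

Plotting such curves for trial (sill, nugget, range) triplets over the scatter of the field variogram is the "eye fitting" referred to above.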

3.3 Kriging
Kriging is performed automatically inside the body of the application developed to implement
this methodology, through the KRGdef function. Its explicit input parameters are the file
containing the field shear wave measurements and the file containing the coordinates (in a
Cartesian system) of the midpoints of the elements comprising the finite element mesh of the
model. Also, a functional shape with unitary range has to be specified in a companion .m file
(Matlab text file). More details about these procedures are found in the annex.

KRGdef solves the kriging system (equation 2.62) using the Gauss-Jordan elimination
procedure, through a function already included in Matlab©. The output of the KRGdef function
is the kriging standard deviation (equation 2.63) and the predicted (mean) values at the
unsampled locations, the midpoints of each finite element in the mesh representing the site
under study. For further details, please refer to the appendix.
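The ordinary kriging system that KRGdef solves can be sketched as follows (an illustration in Python, with a dense linear solver standing in for the Gauss-Jordan elimination; the function name is hypothetical):

```python
import numpy as np

def ordinary_kriging(coords, vals, target, gamma):
    """Ordinary kriging prediction and variance at one target point,
    given sample coordinates, sample values and a fitted analytic
    variogram gamma(h) (cf. equations 2.62 and 2.63)."""
    coords = np.asarray(coords, float)
    target = np.asarray(target, float)
    n = len(vals)
    A = np.ones((n + 1, n + 1))          # bordered system with the
    A[n, n] = 0.0                        # unbiasedness constraint
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(np.linalg.norm(coords[i] - coords[j]))
    b = np.ones(n + 1)
    b[:n] = [gamma(np.linalg.norm(c - target)) for c in coords]
    w = np.linalg.solve(A, b)            # weights + Lagrange multiplier
    estimate = w[:n] @ np.asarray(vals, float)
    variance = w @ b                     # sum(w_i * gamma_i) + mu
    return estimate, variance
```

Calling this once per element midpoint yields exactly the two outputs described above: the predicted mean and the kriging variance at each unsampled location.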

3.4 Site Response Assessment Using QUAD4M


QUAD4M is a dynamic, time domain, equivalent-linear, two-dimensional computer program,
written as a modification of QUAD4, a previous FORTRAN code written by I. M. Idriss, John
Lysmer, Richard Hwang and H. Bolton Seed in the early 70's to perform site response
assessment. The major features of QUAD4M are the possibility of including a transmitting base
and of performing seismic coefficient calculations (for slope stability assessment) (Hudson,
Idriss, Beikae, 1994).

QUAD4M has been formulated following the general principles of finite element modeling stated
in section 2.1, using the Newmark beta method with parameters γ = 0.5 and β = 0.25. Damping is
included using the Rayleigh approach, as described in section 2.1. Damping is set at two
particular frequencies: the lowest one is equal to the first natural frequency of the system
(ω1); the other is set at a value equal to n times ω1, where n is the closest odd integer
greater than the ratio of the "predominant frequency of the input earthquake motion" (which
has to be input by the user) to the first natural frequency of the system (Hudson, Idriss,
Beikae, 1994).
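Under the double-frequency rule just described, the Rayleigh coefficients could be computed as in this sketch (an interpretation of the rule as read from the manual, not QUAD4M code):

```python
import numpy as np

def rayleigh_coefficients(xi, f1, f_pred):
    """Mass- and stiffness-proportional coefficients giving the target
    damping ratio xi at the fundamental frequency f1 and at n*f1,
    where n is the smallest odd integer >= f_pred/f1."""
    n = int(np.ceil(f_pred / f1))
    if n % 2 == 0:
        n += 1
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * n * f1
    a0 = 2 * xi * w1 * w2 / (w1 + w2)    # mass-proportional term
    a1 = 2 * xi / (w1 + w2)              # stiffness-proportional term
    return a0, a1

def damping_ratio(a0, a1, f):
    """Effective Rayleigh damping ratio at frequency f (Hz)."""
    w = 2 * np.pi * f
    return 0.5 * (a0 / w + a1 * w)
```

Between the two anchor frequencies the effective damping dips below the target value, which is the underestimation discussed in the next paragraph.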


This approach is based on the fact that the natural vibration frequencies of shear beams are
odd multiples of the first modal value. This way, damping is underestimated in the period
range between the natural period of vibration of the soil deposit and the "predominant period
of the input earthquake motion", which is not explicitly defined. From a plot in an example
it might be inferred that the "predominant frequency (or period) of the input earthquake" is
the ordinate at which the acceleration response spectrum of the input strong motion is
maximum.

If this definition of the predominant frequency is correct, it would make each analysis
dependent on the properties of the input strong motion. A better and more practical criterion
to set damping might be proposed following the research of Darendeli and Stokoe (2006), in
which they set a limit on the applicability of strain-independent material damping at 10 Hz
(a period value of 0.1 s). Even so, an increasing trend in damping might be inferred from the
differences between Resonant Column and Torsional Shear testing. Therefore, overestimating
this parameter for frequencies above this threshold seems reasonable.

According to the theory exposed in section 2.1, if a frequency value is above the upper
cut-off frequency at which damping is set, damping will be overestimated. Taking this
behavior into account, and how it resembles the experimental data, in this methodology the
"predominant frequency of the input earthquake motion" parameter in QUAD4M is set equal to
10 Hz (0.1 s period), independently of the input strong motion. Apart from this change, the
other input parameters are strictly defined according to the QUAD4M user's manual, as stated
in this application's user manual, which can be found in the references.

3.4.1 Finite Element Mesh Characteristics


Common practice indicates that the finite element length should be at most a value ranging
between one fifth and one tenth of the wavelength (λ) of the shortest expected wave
propagating through the medium (Kuhlemeyer & Lysmer, 1973). Therefore, using basic wave
propagation theory, and a mean value of eight, this length can be found using:

L ≤ λ/8 = Vs⋅T/8 = 0.125⋅Vs⋅T    (3.18)

As the wave propagates through the continuum, involving several elements, Vs is taken as the
smallest expected shear wave velocity. As in this methodology a homogeneous field is
considered, this value is set to the mean Vs of the random field. The period value T can be
set equal to that of the shortest wave from which reliable information can be extracted,
which theoretically can be found using the Nyquist theorem, being equal to twice the sampling
period. Due to practical considerations related to the mechanical properties of strong motion
recording devices, a practical limit of 25 Hz (0.04 s) for the upper frequency at which
reliable data can be obtained is reasonable.

As will be stated later, a numerical simulation based on a hypothetical site with a field
mean shear wave velocity of 250 m/s was considered. For such a case, based on equation 3.18,
the maximum element length should be:


L = 0.125 × 0.04 s × 250 m/s ≈ 1.25 m

A less stringent condition can be set if a period T = 0.1 s is taken into account, following
Darendeli and Stokoe's research (2002) on the upper limit of frequency-independent damping:

L = 0.125 × 0.1 s × 250 m/s ≈ 3 m
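The two worked values above follow from a one-line rule (an illustrative helper, not part of the application):

```python
def max_element_length(vs_min, f_max, fraction=8):
    """Largest admissible element size: (vs_min / f_max) / fraction,
    i.e. one eighth of the shortest wavelength of interest
    (Kuhlemeyer & Lysmer, 1973)."""
    return vs_min / (f_max * fraction)
```

With vs_min = 250 m/s, f_max = 25 Hz gives 1.25 m, and f_max = 10 Hz gives about 3 m, matching the two cases computed above.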

Also, due to the properties of the interpolating polynomial ϕ for quadrilateral elements, a
shape as close to square as possible should be kept. In this study, therefore, the length and
height of the quadrilateral elements were set equal.

Another limit is imposed by the properties of the random field itself. In order to set a
reasonable statistical interpolation, the element length should be less than (and preferably
less than half of) the correlation distance along the axis on which each side is oriented.
Correlation distances in the horizontal direction span several meters, even tens of meters,
while in the vertical direction ranges can go from 0.5 to 3 m, as stated in chapter 2. If
kriging is performed for points beyond the range, a simple average results, as the kriging
weights become equal (Isaaks & Srivastava, 1989). For elements spanning well beyond the
range, averaging several values inside the element to set its mechanical properties is
recommended. As will be shown later, in this specific study the range in the horizontal
direction spans 60 m and in the vertical 4 m. Then, even for the 2.5 m horizontal element
dimension limit, the dimensioning of the elements is somewhat consistent.

Although the application developed allows for any user-defined finite element mesh, as long
as it follows QUAD4M formatting requirements (as stated in the QUAD4M user manual (Hudson,
Idriss, Beikae, 1994)), it directly includes a function to set up a rectangular mesh
comprised of quadrilateral elements: RectangularMesh.mat. This function is built into the
main body of the application, as explained in the appendix.

The user has to define the length and height of the site and set the number of elements in
the horizontal and vertical directions (the element dimension is not set directly; it is
taken as the ratio of the total site length or height divided by the supplied number of
elements). Of course, the element size should take into account the general outlines stated
before.
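A sketch of what such a rectangular mesh generator does (illustrative Python; the actual RectangularMesh function is a Matlab routine, and this name and interface are assumptions):

```python
import numpy as np

def rectangular_mesh(length, height, nx, ny):
    """Quadrilateral mesh of a length x height rectangle with nx by ny
    elements; returns node coordinates, element connectivity, and the
    element midpoints used as kriging targets."""
    dx, dy = length / nx, height / ny
    nodes = np.array([(i * dx, j * dy)
                      for j in range(ny + 1) for i in range(nx + 1)])
    elems, mids = [], []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i        # lower-left node of the quad
            elems.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
            mids.append(((i + 0.5) * dx, (j + 0.5) * dy))
    return nodes, np.array(elems), np.array(mids)
```

The midpoints array is exactly the list of unsampled locations passed to the kriging step described in section 3.3.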

3.5 Montecarlo Simulation

3.5.1 Distribution Discretization


Montecarlo simulation is performed to assess the stochastic response due to the variability
of the predicted shear wave velocity values (Vs) at the midpoint of each element. Vs is
modeled as a value comprised of a deterministic trend, equal to the predicted mean, and an
independent, normally distributed random variable.


This random variable represents the error of the interpolation, with zero mean (as kriging is
unbiased) and standard deviation equal to the square root of the kriging variance, as is
customary with any least squares fit:

ε ~ N(0, σK)    (3.2)

Note that its PDF will be equal to that of a normal variable, with mean set to the predicted
shear wave velocity and standard deviation matching the square root of the kriging variance.
Therefore, Vs can be treated as:

Vs ~ N(V̂s, σK)    (3.3)

The errors are independent of each other, for a given variogram shape and set of samples.
This way, the Vs values are uncorrelated. Sampling of Vs was performed using Latin Hypercube.
Distribution values were taken along ten categories, such that the bin extremes were set
according to:

F(xᵢ) = i/N ,   i = 1, 2, …, 10

The characteristic value of interval i is given by:

x̂ᵢ = F⁻¹((i − ½)/N)    (3.4)

Then each probability density function (PDF) is divided into a user-specified number of bins,
such that the area enclosed by the PDF in each bin is equal, meaning that the probability of
the random variable falling inside any given bin is the same for all subdivisions and equal
to 1/N. The representative value of each bin is its median value. For the numerical
simulation performed in this research, an N value of 10 was selected.
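The equal-probability subdivision can be sketched with the normal quantile function (illustrative Python using the standard library NormalDist; not the application's Matlab code):

```python
from statistics import NormalDist

def equal_probability_bins(mu, sigma, n=10):
    """Characteristic (median) value of each of n equal-probability
    bins of a Normal(mu, sigma) variable: x_i = F^-1((i - 0.5)/n),
    matching equation 3.4."""
    dist = NormalDist(mu, sigma)
    return [dist.inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
```

For the Vs of each element, mu is the kriged prediction and sigma the kriging standard deviation, so each element contributes one row of n characteristic values to the Pool matrix described next.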

[Figure: bell-shaped PDF divided into 10 equal-area bins over the range 50-450]

Figure 3.6: Proposed domain subdivision for a normally distributed random variable,
mean 250 and standard deviation 50. 10 bins were defined.

3.5.2 Simulation Algorithm


The shear wave velocity at each one of the unsampled elements represents a random variable of
the model. A single estimate is performed at the midpoint of the element if it is square (the
centroid is recommended for a triangular layout), and the predicted value, alongside its
kriging variance, is considered for the whole element. For each of them, the domain
subdivision is performed following the procedure stated in section 3.5.1.

Then all possible bins are grouped into a single matrix, in which each row represents a
single element. Along the columns, bins are arranged from lower to higher characteristic
value. This matrix is called the Pool, and is calculated through the function SetPool.mat,
which is automatically called inside the application.

Afterwards, a loop is performed. At a given row i (representing the i-th random variable), a
random integer j between 1 and N is generated and bin j is selected. Then the value located
at position (i, j) (element i, bin j) in the Pool matrix is moved to position (i, 1), and the
bin characteristic value located in the first column is moved back to the j-th column.

This way the sample is generated along the first column, which is trimmed from the Pool
matrix when the last random variable is reached. Afterwards, the QUAD4M input file is
assembled by setting the shear modulus values according to the sampled shear wave velocities
(as the density of the material is taken as constant). Then QUAD4M is run, and the results at
specific locations (5 points automatically set to be equally spaced on the surface) are
weighted to compute descriptive statistics: the mean and standard deviation of the maximum
and mean response on the surface (over the 5 selected locations).

The basic procedure outlined in the last paragraph is repeated N times. If a number of simulations above N is required, the Pool matrix is generated again and another Latin Hypercube simulation is performed, until the user-defined number of simulations is achieved. Although it is possible that a specific set of variables might repeat itself, this is not likely, as the number of available combinations is given by N^k, where k is the number of finite elements in the mesh and N is the number of bins. It is expected that a mesh has at least 100 elements; therefore for N = 10 a value of 10^100 is found, far beyond the 10^9 count at which pseudo-random numbers begin to repeat themselves. Considering this fact, each Latin Hypercube simulation can be treated as independent.
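The Pool bookkeeping described above can be sketched as follows; removing the chosen bin from each row is equivalent to the swap-to-front and column-trimming steps of the text, and the function name is illustrative:

```python
import random

def latin_hypercube_samples(bin_values, seed=0):
    """Draw N Latin Hypercube samples from a 'Pool' matrix: bin_values has
    one row per random variable, each holding that variable's N bin
    characteristic values.  Each bin of each variable is used exactly once
    across the N samples."""
    rng = random.Random(seed)
    pool = [list(row) for row in bin_values]   # working copy of the Pool
    n = len(pool[0])
    samples = []
    for _ in range(n):
        sample = []
        for row in pool:
            j = rng.randrange(len(row))        # random bin for this variable
            sample.append(row.pop(j))          # use the bin and trim it
        samples.append(sample)
    return samples
```

After N draws every row of the Pool is exhausted, so each simulation batch covers all bins of all variables.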

3.5.3 Site Response Characterization


The strong motion parameter chosen to characterize seismic site response is the acceleration response spectrum, because it is the most critical quantity considered by current structural design specifications and guidelines, such as EUROCODE 8, FEMA 450 and ASCE 7 among others, which set design resistance targets defined according to acceleration response spectra.


Displacement spectra, though, have been getting more attention recently, following research done by Priestley, Calvi and Kowalsky among others (Calvi, Kowalski, Priestley 2006), so in further developments, characterizing soil response also through this descriptor is worth consideration.

QUAD4M outputs acceleration values at all nodes on the free surface. Acceleration response spectra are calculated with a custom Matlab© routine employing state space analysis. Due to resource limitations, five equally spaced points, starting from the edge and along the free surface, are chosen as representative locations at which response ordinates are computed. Although computing a single response spectrum takes less than a second, doing it for each point on the surface, and then repeating the procedure for the required number of simulations, could lead to an impractical time demand for the analysis.

Anyhow, through a simple change in the routine Accdata.mat it is possible to consider any desired number of locations. In the current version of the application it is not possible to select a specific surface node at which to perform the analysis.
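The spectrum computation can be sketched as follows, in Python rather than Matlab and using a Newmark average-acceleration integrator as a stand-in for the state-space recursion of the actual routine; the function name and signature are illustrative:

```python
import math

def response_spectrum(accel, dt, periods, damping=0.05):
    """Pseudo-acceleration response spectrum of a base acceleration record,
    integrating a unit-mass SDOF oscillator per period (T > 0) with the
    Newmark average-acceleration scheme."""
    spectrum = []
    for T in periods:
        wn = 2.0 * math.pi / T                   # natural circular frequency
        k = wn * wn                              # unit-mass stiffness
        c = 2.0 * damping * wn                   # damping coefficient
        kh = k + 2.0 * c / dt + 4.0 / dt ** 2    # effective stiffness
        u = v = 0.0
        a = -accel[0]                            # initial relative acceleration
        umax = 0.0
        for ag in accel[1:]:
            ph = -ag + (4.0 / dt ** 2 + 2.0 * c / dt) * u + (4.0 / dt + c) * v + a
            un = ph / kh
            du = un - u
            vn = 2.0 / dt * du - v
            a = 4.0 / dt ** 2 * du - 4.0 / dt * v - a
            u, v = un, vn
            umax = max(umax, abs(u))
        spectrum.append(wn * wn * umax)          # pseudo-spectral acceleration
    return spectrum
```

Evaluated at 81 periods between 0.05 and 4 s per surface node, this is the per-simulation cost the text refers to.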

Mean and maximum spectral accelerations are computed along a period range between 0 (for which the PGA is found) and 4 seconds (sampled at 0.05 s, rendering 81 spectral ordinates), representative of the common building stock, each comprising a single occurrence. When all simulations are performed, the sample average and standard deviation of the single occurrences are calculated, characterizing the overall response of the site.

3.6 First Order Second Moment Estimation


The output from kriging renders covariance matrices (diagonal) and mean values, but gradient estimation is also required, as stated in chapter two. This is done through a forward difference, where the gradient of each spectral ordinate, at a period range between 0 and 4 seconds sampled at a 0.05 second interval, is calculated. A differential h (see more details in section 2.6) on shear wave velocity of 2% of the predicted value for each element is considered in the estimate (although the user can define another percentage threshold). Finally, mean and standard deviation are found through equations 2.91 and 2.92. The whole procedure is implemented in the function FOSM.mat.
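The FOSM estimate can be sketched as follows; `model` stands in for a full QUAD4M run returning one spectral ordinate, and the function name and the toy linear model are illustrative assumptions:

```python
def fosm(model, means, variances, rel_step=0.02):
    """First Order Second Moment estimate of a scalar response.
    means, variances: kriged means and (diagonal) kriging variances of the
    element shear wave velocities.  The gradient is taken by forward
    differences with h = rel_step * mean (2% by default)."""
    f0 = model(means)
    var = 0.0
    for i, (m, s2) in enumerate(zip(means, variances)):
        h = rel_step * m
        x = list(means)
        x[i] = m + h                      # perturb one element at a time
        var += ((model(x) - f0) / h) ** 2 * s2   # diagonal covariance
    return f0, var

# toy linear response, for which FOSM is exact
mean, var = fosm(lambda v: 2.0 * v[0] + 0.5 * v[1], [250.0, 300.0], [2500.0, 2500.0])
```

Note the cost: one model evaluation per random variable plus one baseline run, which is the "runs equal to the number of elements" remark made later in chapter 5.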

3.7 Simulation of Normally distributed Random Fields


As shown later, a numerical simulation scheme was adopted to illustrate the use of this application. In order to perform this exercise, it was required to generate a single occurrence of a base random field of shear wave velocity at a fictional site. To perform this task, the routine ExField was developed. ExField requires as input data a text file with the coordinates of the centroid of each finite element, and the variogram of the random field of which an occurrence is desired.


The shear wave velocity at each element is set considering the field mean value and the covariance matrix assembled for each member of the mesh (by evaluating the variogram function consecutively for all elements, defining variance and covariance values among all elements). Finally, randomly generated values are established using the procedure outlined in section 2.5.1 for correlated normally distributed random variables.
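That procedure (Cholesky factorization of the covariance matrix applied to independent standard normal deviates) can be sketched for a one-dimensional set of centroids; the exponential covariance model and all function names are assumptions for illustration:

```python
import math, random

def exp_cov(d, sill, corr_range):
    """Covariance from an exponential variogram: sill at d = 0, about 5% of
    the sill left at d = range (an assumed model, not the thesis's own)."""
    return sill * math.exp(-3.0 * d / corr_range)

def cholesky(C):
    """Lower-triangular L with L L^T = C (plain Cholesky factorization)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]
    return L

def simulate_field(xs, mean, sill, corr_range, seed=1):
    """One occurrence of a correlated normal field at coordinates xs."""
    rng = random.Random(seed)
    C = [[exp_cov(abs(a - b), sill, corr_range) for b in xs] for a in xs]
    L = cholesky(C)
    z = [rng.gauss(0.0, 1.0) for _ in xs]
    return [mean + sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(xs))]
```

For the two-dimensional mesh of ExField the same steps apply, with the covariance evaluated from the anisotropic variogram between element centroids.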

Chapter 4 Numerical Simulation

4 NUMERICAL SIMULATIONS
In order to show how the application developed would perform an analysis, a set of numerical simulations was carried out on a custom site, with overall parameters set to values generally observed in practice. In order to consider stochastic behavior, a normal spatial random field was modeled following a given autocovariance function and a given mean value.

The base case was set on a rectangular site, 60 m wide and 30 m deep. These dimensions are not arbitrary, in the sense that the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures (FEMA 450) define soil site classification, and therefore design response spectra, based on geotechnical exploration performed within the first 30 meters below the surface. The horizontal span was taken as twice this value.

The case selected is a layer extended over a half space, represented by a compliant base. Variability of soil shear wave velocity is taken into account in the vicinity of the site of interest, as represented by the limit of 60 m in the horizontal direction; the edges of the model are not part of the case itself, but just a limit on the analysis domain.
Figure 4.1: Simulation base model: surface response sites on top of a 30 m deep soil layer (the random field and soil analysis region, discretized by the finite element mesh), underlain by a compliant base representing the half space rock; the layer extends infinitely in the horizontal direction.


4.1 Base Model Input Parameters


In the next section, common values of the boundary conditions (the half space underlying the site), strong motion, deterministic soil properties, and the statistical characterization of the random field representing shear wave velocity are presented. These properties were set following values observed in practice. The strong motion selected was from a real event; further details are explained below.

• Half Space

Shear wave velocity Vs: 1500 m/s, according to FEMA 450 (FEMA, 2003) the critical value to classify a site as hard rock. It must be recalled that this category (rock) in site response assessment does not follow the same criteria as in geology.

Compressive-dilative wave velocity Vp: 2800 m/s. The ratio Vp/Vs = 1.87 is representative of a material with a Poisson ratio of 0.3, a value which according to Kramer (1996) is commonly observed for most (non saturated) geological materials.

Unit weight UW: 25000 N/m3 = 25 kN/m3, the unit weight of reinforced concrete (for example see AASHTO 2006), which can be similar to the value of a softly cemented rock.

• Soil layer:

Unit weight: 18000 N/m3, which is representative of a clay or clayey material.

Poisson ratio: 0.3, which is the most typical value among geological materials (Kramer 1996).

• Statistical Descriptors of the soil:

A random field was generated taking into account the described pre-set properties. Variogram shapes were modeled following a range and a sill value representative of the coefficient of variation of the shear wave velocity across the field, which will be explained in more detail in the next section. No nugget effect was considered, as it is a feature which arises from fitting a specific variogram functional to field data, and not from an assessment of the samples themselves; therefore, it is not built into the field.

• Range in the horizontal direction: 60 m. All points within the field are horizontally correlated, as expected for a homogeneously layered site.

• Range in the vertical direction: 4 m, corresponding to a well defined layering scheme of a homogeneous material with variable properties.
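The anisotropic variogram implied by these descriptors can be sketched as follows; the Gaussian shape is an assumption for illustration (the actual functional of chapter 2 may differ), while the ranges and the sill follow the values above:

```python
import math

def variogram(dx, dz, sill, range_h=60.0, range_v=4.0):
    """Anisotropic variogram, no nugget: horizontal range 60 m, vertical
    range 4 m.  Separation is scaled by the directional ranges, and a
    Gaussian shape reaching ~95% of the sill at the range is assumed."""
    h = math.hypot(dx / range_h, dz / range_v)   # normalised separation
    return sill * (1.0 - math.exp(-3.0 * h * h))

# sill from a 20% coefficient of variation of the 250 m/s mean field
sill = (0.20 * 250.0) ** 2   # 2500 m2/s2
```

With these numbers two points 4 m apart vertically are nearly uncorrelated, while the same separation taken horizontally is still deep inside the correlated zone.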


• Mean shear wave velocity: 250 m/s. Set to match a D site according to the FEMA 450 site classification (D site shear wave velocities range between 180 and 360 m/s).

• Ground Motion

The ground motion selected to perform this analysis is the record obtained during the Friuli Earthquake, on May 6th of 1976, Richter magnitude ML = 6.4 (Slejko et al 1999), with epicenter located at latitude 46.345 and longitude 13.240. The acceleration record was obtained from the PEER (Pacific Earthquake Engineering Research Center) NGA (Next Generation Attenuation of ground motions) project. Component 000 was taken. The code of the record is NGA0125.

The strong motion was recorded at the Tolmezzo station, located at latitude 46.382 and longitude 12.982. The Geomatrix classification of the station is 1A, meaning that the accelerometer was located on a lightweight one level structure, buried several feet into a stiff soil layer less than 20 m thick. The acceleration time history and acceleration response spectra for this shaking are depicted below:

Figure 4.2: Acceleration response spectra for the outcropping ground motion selected to perform the simulation; period in seconds, spectral acceleration in fraction of g.

Figure 4.3: Acceleration time history for the selected outcropping ground motion; time in seconds, acceleration values in fraction of g.

4.2 Simulation Parameters


Several sampling strategies were considered, in which boreholes spanning the whole depth were located at several places along the profile. First a single borehole at the middle was placed; afterwards, two boreholes at the extremes; and finally three boreholes, two at the ends and one in the middle. In order to make a general, simplified assessment of the effect of the finite element mesh on results, two subdivisions were taken into account: a coarse one, where elements had 3x3 m dimensions, set to a lower period bound of 0.1 s as stated in section 4.1; and a refined mesh, with dimensions cut by half to 1.5x1.5 m and a cutoff frequency of 25 Hz. This way two meshes comprising 200 and 800 elements were defined.
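The link between element size and cutoff frequency follows the usual meshing rule (one-eighth to one-tenth of the shortest propagated wavelength, after Kuhlemeyer and Lysmer); the function name is illustrative:

```python
def max_element_size(vs, f_max, nodes_per_wavelength=8):
    """Largest admissible element size for propagating frequencies up to
    f_max in a material with shear wave velocity vs, using the common
    lambda/8 rule (lambda/10 is the stricter variant)."""
    return vs / (nodes_per_wavelength * f_max)

coarse = max_element_size(250.0, 10.0)   # ~3.1 m, consistent with the 3x3 m mesh
fine = max_element_size(250.0, 25.0)     # ~1.25 m for a 25 Hz cutoff
```

Note that under the lambda/8 rule a 1.5 m element in 250 m/s material supports about 21 Hz rather than 25 Hz, so the stated cutoff of the refined mesh should be read as approximate.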

Finally, the effect of the overall spatial variability of the site was addressed by establishing a set of models, one with the base properties and the other with the same descriptors except the sill value, which was increased so as to double the standard deviation of the field (or, equivalently, its coefficient of variation COV), as shown in chapter 2. Summarizing, the following analysis cases were defined, taking into account different meshing, spatial variability and field research strategies:

1. Single Borehole at the middle, Coarse mesh and shear wave COV = 20%.

2. Two boreholes at the extremes, Coarse Mesh and shear wave COV = 20%.

3. Three Boreholes, extremes, and middle. Coarse Mesh and shear wave COV = 20%.

4. Two Boreholes at extremes, Coarse Mesh and shear wave COV = 40%.

5. Two Boreholes at extremes, Fine Mesh and shear wave COV = 20%

The same spatial random field of shear wave velocity was generated for cases 1, 2 and 3. Each simulation is different, as results were taken on the kriged estimates and not the random field itself. For cases 4 and 5 two different fields were generated, due to changes in the finite element mesh (which includes more random variables, for case 5) or in the statistical properties of the system (sill value increased, for case 4). Variograms for the different simulations are shown:
Figure 4.4: Horizontal and vertical variograms (m2/s2) versus distance (m) for cases 1, 2, 3 and 5.


Figure 4.5: Horizontal variograms (m2/s2) versus distance (m) for cases 1, 2, 3 and 5 (C1) and for case 4 (C4).

4.3 Assessment of Predictive Scheme


A normal random field characterized by a set of autocorrelation functions was generated. Based on specific occurrences at certain locations, set as samples, values at non-sampled regions were predicted using kriging. Afterwards, predicted and simulated values were compared.

4.3.1 Cases 1, 2 and 3


The base case, set as a single occurrence of the random field defined according to the parameters presented in sections 4.1 and 4.2, is presented as follows:

Figure 4.6: Isometric view of the randomly generated shear wave velocity Vs (m/s)


Figure 4.7: Contour plot of figure 4.6, showing the shear wave velocity (Vs) variation along the base case. Vertical ordinate: height from the rock basement; horizontal: X coordinate.

The effect of spatial continuity is clear; due to the long correlation distance in the horizontal direction, a layering pattern is somewhat described. The shear wave velocity shows long, flat patterns horizontally, while in the vertical direction a more jagged, disperse behavior is observed. The spread is quite wide, as the maximum value observed is 350 m/s against a minimum of 130 m/s, spanning more than 2.5 times the standard deviation of the field. For comparison purposes a random field without a correlation structure has been modeled using the same application, as shown in figure 4.8.

Figure 4.8: Random Field representing Vs, without a correlation structure


Two effects of correlation can be observed. In figure 4.7 extreme values tend to be clustered and close to regions of overall low/high shear wave velocity; this might induce some ill behavior of the system as, for example, low strength materials (as shear wave velocity and strength can be correlated (Ishihara, 1996)) might group together, generating a dangerous low resistance zone.

The possibility of considering this phenomenon is an advantage of random field modeling following a geostatistical background, where spatial continuity can be tackled by defining a set of variogram functionals. As shown, correlation has a clear effect on the simulated variables, and therefore the assumption of independent parameter occurrences should at least be considered carefully on a case by case basis.

Estimates for sampling scheme one are shown below (figure 4.9), jointly with their kriging variances and the relative error at each sampling point. In this case, the borehole was located at a distance close to the midpoint of the model (exactly at 28.5 meters from the edge, as samples, as a matter of convenience, were taken at the centroids of a row of mid span elements).
e

Figure 4.9: Vs samples used to set case 1; borehole close to the middle of the field.

Figure 4.10: Shear wave velocity (Vs m/s) kriging estimates, case 1.


Figure 4.11: Squared standard deviation of Vs Estimates, Case 1

Figure 4.12: Relative error of Vs (%), defined as the ratio of absolute error to simulated value, case 1.

Results are affected by the lack of measurements in the x direction (because just a single borehole is available); this way, the vertically varying data is more or less spread through the horizontal distance, showing a clear layering pattern as depicted in figure 4.10.


Also, due to the characteristics of the variogram model employed and the highly variable sample at the borehole, the standard kriging error builds up quickly, reaching standard deviation values comparable to those of the field at 10 m away from the sample. For such locations a correlation coefficient of 0.6 is found; for a greater separation distance from the sample, 15 m, the value falls to 0.47. In the vertical direction the effect is even more noticeable, as the range is just 4 m. Beyond this distance samples are statistically independent and correlation is due to the horizontal direction only, explaining the disk shaped predictions and the kriging variance in vertical strips. Despite the limitations of the sampling strategy and the high variability in the vertical direction, estimates are not bad. The mean relative error (ratio of the difference between predicted and simulated values, divided by the latter) is around 14% in absolute terms (considering the absolute value of the difference between kriged and simulated values) and -6% in relative terms (taking into account whether the sample is under or overestimated). The bias can be explained by analyzing the samples themselves; their mean is 262 m/s, exceeding the field value of 250 m/s. It has to be remembered that kriging is an interpolation scheme and is therefore strictly bound to the samples. There is no apparent bias in the spatial distribution of the relative error, which is expected due to the unbiasedness condition imposed on kriging. Although the relative error is a meaningful and clear way to show discrepancies among results, it is not completely representative for the prediction assessment performed (for this particular case, when the "exact" values are known), as it does not include the fact that, alongside the prediction, kriging also outputs its variance as a measure of its uncertainty. Therefore a better indicator can be developed if the properties of the random error are considered. For instance, as it is normally distributed, the error term in its standard form is given by:

U = (Ys - mu) / sigma

But the mean value mu is zero, and the standard deviation sigma is equal to the kriging standard deviation sigma_K; Ys is the observed error. Therefore:

U' = |Ys| / sigma_K

This standard error term includes both the absolute error and the kriging variance. The values of the standard error for case one are shown in figure 4.13.
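The indicator is straightforward to evaluate once kriged predictions, kriging variances and the "exact" simulated values are at hand; the function name is illustrative:

```python
import math

def standard_error_indicator(simulated, predicted, kriging_variance):
    """U' = |simulated - predicted| / sigma_K per element, plus its mean:
    the absolute kriging error measured in kriging standard deviations."""
    u = [abs(s - p) / math.sqrt(v)
         for s, p, v in zip(simulated, predicted, kriging_variance)]
    return u, sum(u) / len(u)
```

For instance, an element predicted at 250 m/s with kriging variance 100 m2/s2 and a simulated value of 260 m/s scores U' = 1, i.e. the error equals one kriging standard deviation.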

Figure 4.13: Indicator function U': ratio of absolute error and kriging variance, case 1.

No specific trends are observed in the indicator, showing no evidence of spatial correlation (for example, compare with figure 4.8, which depicts a field with no correlation). Of course the graph reflects the general error shape drawn in figure 4.12. As an overall statistic, the indicator has an absolute mean (as both under and over estimation are equally deviations from the simulated value deemed as "exact") of 0.84. As just a single occurrence is done, setting up an error testing scheme is dubious; anyhow, roughly 65% of the data is expected to be within one standard deviation. Now results for case 2 will be shown:

Figure 4.14: Sampling at the extreme boreholes, case 2 (shear wave velocity in m/s).

Figure 4.15: Shear wave velocity (Vs m/s) kriging estimates, case 2.


Figure 4.16: Squared standard deviation of Vs Estimates, Case 2.

Figure 4.17: Relative Error of Vs in (%) defined as the ratio of absolute error to simulated
value, case 2.

In broad terms, the trends observed in case one appear again in case 2. Now the sample mean is close to 220 m/s, almost 12% less than the field mean value. Two boreholes are now available, but the observations are still statistically far apart, as their separation distance is equal to the range. The standard deviation of the kriging errors builds up steadily and quickly, reaching a value close to 75% of the field parameter at a distance close to 12 meters, as observed in the first case.


Results, as expected, are strongly influenced by the samples. Now an overall error of 16% in absolute terms is computed (absolute value of the error divided by the simulated shear wave velocity), while in relative terms (taking the sign of the error) it is 12.5%, in agreement with the sample mean.

Figure 4.18: Indicator function U': ratio of absolute error and kriging variance, case 2.

Again the standard error shows the features already discussed for case 1. There is no, or at least dim, spatial correlation among the data, which tends not to be clustered, as expected; it also resembles the relative error, and strong positive and negative errors appear together, expressing uncorrelation (a soft change from minima to maxima is due to graphical interpolation, which has no relation to the data). The mean standard error for all samples is 0.83, similar to case one. This is due to the still long statistical separation among samples, as noticed earlier. Now results for case 3 are shown.

Figure 4.19: Shear wave velocity samples (Vs m/s) for three boreholes, two at the edges, one in the middle.


Before going on with the kriging results, there is a note to be made about the readings in the samples. Although a trend at 7.5 meters above the rock basement might be inferred, it is just coincidental and is not due to the field itself. It has to be recalled that a small sample size is being taken, and therefore the observed values are prone to the "hole" effect observed in the exercise performed in chapter two.

Figure 4.20: Shear wave velocity (Vs m/s) Kriging estimates, case 3.

Figure 4.21: Squared standard deviation of Vs Estimates, Case 3


Figure 4.22: Relative error of Vs (%), defined as the ratio of absolute error to simulated values, case 3.

Figure 4.23: Indicator function U': ratio of absolute error and kriging variance, case 3.


The general remarks found for the other cases can be extended to case 3, although in this case a far better description of the field is achieved. The maximum kriging standard deviation error does not reach values as high as in cases one and two, as a sample is given within the continuity region of the field, within its range. No clear spatial trends in the error are observed, just clustering of the error at a specific point.

Also, the field appears less smooth, as more variability is captured by taking more samples. Kriging by itself is a smoothing procedure, as prediction is based on the sampled values; therefore, if more samples are available, the roughness of the data is better represented, as observed in the kriged values of figure 4.20. In quantitative terms, the relative error was found to be close to 2.5% (if over and underestimates are averaged together) and 11.5% if the absolute value of the error is taken into account; this is expected, as the average of the samples is within 10% of the field mean. Also, due to the middle borehole, variability is better represented, allowing for a better match of specific outliers.

The standard error also shows a better behavior. The mean value is now 0.77, compared to 0.84 for both cases 1 and 2, showing a reduction in uncertainty. This means that real progress was made in the assessment of the field properties, whereas cases one and two seem to have the same statistical significance, as the standard error scored the same value.

It has to be said that these observations are specific to this exercise and should not be carelessly extended as a universal trend. In reality, there is no background solution to compare against, and fitting data to a given functional variogram might lead to more uncertainty. If variability is already observed in a field in which all conditions to perform kriging are met, in real cases an even blurrier panorama is to be expected.

4.3.2 Case 4:

Figure 4.24: Random field simulation case 4 (shear wave velocities in m/s)


Figure 4.25: Borehole samples (shear wave velocity in m/s on the ordinate), height from the rock bedrock.

Figure 4.26: Kriging values, case 4 (shear wave velocity in m/s).


Figure 4.27: Squared standard deviation of Vs Estimates, Case 4

Figure 4.28: Relative error of Vs (%), defined as the ratio of absolute error to simulated values, case 4.


Figure 4.29: Indicator function U': ratio of absolute error and kriging variance, case 4.

The effect of the higher variability is clear: a wider range of values is expected, increasing uncertainty. The absolute value error rises to 22.5%, although the expected value (in which over and under estimation are balanced) is just 1%. This is because the mean value of the sample is 252 m/s, one percent higher than the field mean value.

Again, variability rises quickly up to a distance of 10 m from the boreholes, as observed for case 2, and therefore predictions tend to mimic the sample behavior. This has led to the more or less uniform interpolation pattern shown in figure 4.26. The standard error shows a mean value (regardless of under or overestimation) of 0.67. This might be due to the sample behavior, as its mean is closer to the field value compared to case 2. Therefore, the increase in the relative error is in agreement with the predictions; also, due to the higher sill value, the spread of predicted samples is larger, and the standard error, taken as a relative ratio of absolute error to kriged standard deviation, might reasonably score lower than in cases one, two and three.

Now, results for case 5 will be reviewed. The meshing has been adjusted to half dimensions, therefore rendering 4 times more elements than the domain subdivisions performed in cases 1 to 4. The sampling strategy is again two boreholes, located at the extremes. The random field variance was set back to its default value of 2500 m2/s2, rendering a coefficient of variation of 20%.


Again, a new random field occurrence, and a new borehole occurrence, were generated, representing a new set of samples. In general, the comparison between cases is more descriptive than analytical, as several different field values were simulated. It must also be considered that kriging is conditioned by the borehole occurrences, so a direct comparison is not completely rigorous.

4.3.3 Case 5:

Figure 4.30: randomly generated shear wave velocity values (m/s) case 5.

Figure 4.31: Shear wave velocity (Vs m/s) Kriging estimates, case 5.


Figure 4.32: Squared standard deviation of Vs Estimates, Case 5

Figure 4.33: Relative error of Vs (%), defined as the ratio of absolute error to simulated values, case 5.


Figure 4.34: Indicator function U': ratio of absolute error and kriging variance, case 5.

In general terms, results comparable to case 2 were found, although the error is somewhat smaller, scoring 12% in absolute terms and a 3% underestimation. The sample mean is less than one percent higher than the field mean, quite a close agreement. The lack of an intermediate borehole hampers the efficiency and convergence of the results, rendering a steep increase in the standard deviation as locations move more than 10 m away from the sampling locations (boreholes), as observed for cases 1, 2 and 3.

Also, due to the lack of data, the kriging estimates become smooth towards the centre of the region, although vertical distance begins to play a certain role, as samples are located just 1 m apart in that direction; anyhow, compared to the horizontal continuity the effect is marginal.

The standard error reaches 0.7, an improvement compared to 0.83 for the coarse mesh. Although variability along the vertical direction is better represented (as the distance among the central points of the elements is less than one half of the range, 1.5/4), this result might be due to the steadier behavior of the sample rather than to the denser mesh itself.

Chapter 5 Analysis Results

5 RESULTS
5.1 Convergence of simulations
As stated, the first analysis that shall be performed after the stochastic estimation has been completed is an assessment of the efficiency and precision of the simulation. The most straightforward procedure is to plot the computed moment characteristic values against the number of simulations performed to obtain them.
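Such convergence plots reduce to tracking the running mean and standard deviation of a spectral ordinate as simulations accumulate; a minimal sketch (function name illustrative):

```python
def running_stats(samples):
    """Running mean and (population) standard deviation after each
    simulation, used to judge convergence of the Montecarlo scheme."""
    means, stds, s, s2 = [], [], 0.0, 0.0
    for n, x in enumerate(samples, start=1):
        s += x                 # running sum
        s2 += x * x            # running sum of squares
        m = s / n
        means.append(m)
        stds.append(max(s2 / n - m * m, 0.0) ** 0.5)
    return means, stds
```

Plotting `means` and `stds` against the simulation count reproduces the curves of figures 5.1 to 5.4: convergence is declared when both traces flatten.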

Figure 5.1: Convergence of simulations, case 1. On the left, mean value of the maximum response on the surface; on the right, mean value of the mean response on the surface; spectral ordinates as fraction of gravity.

As shown, convergence of the mean values is achieved in fewer than 50 simulations; a neatly stable behavior is shown afterwards. Units are expressed in g, as a fraction of the earth gravity (9.8 m/s2). Results for the other cases are shown below. Convergence is shown for the five most representative (largest) acceleration values and the Peak Ground Acceleration (PGA), which is the spectral ordinate for zero natural period.


Figure 5.2: Convergence of simulations, Cases 2 (Two boreholes at extremes) 3 (Three boreholes) and 4
(two boreholes at extremes, but field standard deviation doubled) On right, Mean value of
mean surficial response, on left, mean value of maxima surficial response.


Figure 5.3: Convergence for Mean Responses (mean on left, maxima on right) for case 5. (Dense finite
element mesh)

For all cases, convergence of the mean values is achieved with less than 50 simulations for the most significant spectral ordinates. This is a promising result, as for crude Montecarlo a figure of a thousand simulations is not uncommon. But a complete assessment should also include convergence of the standard deviation, as shown below:

Figure 5.4: Convergence of Standard deviation for cases 1,2,3,4, and 5. (Descending order). Standard
deviation of the maximum surface spectral acceleration given on left. Standard deviation
of the mean surface spectral acceleration shown on right. Units in the vertical ordinate are
given as fraction of gravity. Most significant natural periods are shown.


In general, convergence of the standard deviation of both maximum and mean surface response is achieved after 175 simulations, although a somewhat more consistent behavior is observed for the mean response, as expected, due to the averaging effect of taking all the samples together.

As for the period ranges, spectral ordinates at periods below 0.2 s show noticeable variation and a jagged behavior as the number of samples increases, while higher natural periods stabilize quickly. It has to be noted that the range of validity of the finite element mesh is close to 0.1 s, as explained before in chapters 3 and 4. Also, differences between mean and maximum surficial response are more noticeable in this period range, a topic which will be discussed afterwards.

In general it is a remarkable result that convergence is found below 200 simulations. Usually, Montecarlo schemes involve several thousands of runs, which are resource intensive. Although this result is partial and bounded to this specific exercise, an analysis in a real case should be performed and convergence computed in the same fashion; if a result similar to this one is found, then the methodology can be stated as efficient for computing both central moments. As a remark, FOSM requires a number of finite element program runs at least equal to the number of elements. Then, for the dense mesh (800 elements), Montecarlo simulation was found to be more efficient than FOSM.

5.2 Assessment of FOSM and Montecarlo Simulation


Figure 5.5: Mean response spectra for case 1. On the left, maximum surface response; on the right, mean surface response. Natural period in seconds, spectral acceleration in fraction of gravity. N = 250 Montecarlo simulations. "Base" denotes the response spectrum of the outcrop motion.


Agreement between FOSM and Montecarlo is remarkable. The average difference between
FOSM and Montecarlo estimates, defined as the ratio of their absolute difference and the latter
(Montecarlo), is less than 2.5% for all periods considered (ranging between 0 and 4s). Top
differences are observed around values less than 0.1s and at resonance with the seismic input
(around 0.35 and 0.5 seconds), where ratios between 7.5 and 9% are observed.
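For illustration, the relative-difference measure used above (absolute FOSM-Montecarlo difference normalized by the Montecarlo value, averaged over periods) can be written as follows; the spectral values are hypothetical, not thesis data:

```python
def relative_difference(fosm, montecarlo):
    """Mean of |FOSM - MC| / MC over all spectral ordinates, taking the
    Montecarlo estimate as the reference value."""
    ratios = [abs(f - m) / m for f, m in zip(fosm, montecarlo)]
    return sum(ratios) / len(ratios)

# Hypothetical spectral ordinates at three periods (not thesis data):
fosm = [0.50, 0.82, 0.61]
mc   = [0.52, 0.80, 0.60]
err = relative_difference(fosm, mc)   # about 0.027, ie 2.7%
```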

It has to be considered that FOSM is by itself an approximation, so an “exact” match is of course
out of the question. In general, results are promising in the sense that two completely different
methods yield comparable results, showing the quality of the methodology
framework.

Differences for mean value estimates are marginally lower, as expected due to the averaging
effect of taking the mean of surface response; anyhow, they are less than 15% lower than the
differences computed for their maxima counterparts. As will be shown later, due to the specific
geometrical characteristics of this exercise, differences between mean and maximum surficial
response are negligible.


Figure 5.6: Coefficient of variation (COV) of spectral acceleration for case 1. On the left, maximum
surface response; on the right, mean surface response. N=250 Montecarlo simulations.

Although FOSM and Montecarlo coefficient of variation (COV) estimates follow the same
pattern (ie the FOSM estimate plot follows the same trends as the Montecarlo sampling), they are more
scattered; now the mean difference over the whole period range is close to 30% for both mean and
maximum. Errors less than 2% and bigger than 50% can be observed altogether for both
cases. In general, the difference between mean and maximum surficial quantities is marginal,
compared to the overall spread between the FOSM and Montecarlo estimates compared alongside.

Differences arise for several reasons: sensitivity to model constraints (as noticeable
disagreement is observed for periods below 0.1s), and the numerical estimation of the gradient, which
also increases the uncertainty.

Therefore, despite the increased difference between the methods, in overall terms estimates
are reasonable, if the increased variability built into FOSM estimates is taken into account. As
stated, a single forward derivative is performed, and better agreement might be achieved if a
backwards calculation is also considered. Finally, approximation of the target function
becomes more critical, as the derivatives are first order estimates; then a variability of 30% is
reasonable.
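The gradient issue mentioned above can be illustrated with a one-dimensional stand-in for the finite element response: a forward difference carries first order error, while adding the backwards evaluation gives a central difference with second order error. The cubic g below is a placeholder, not the actual model:

```python
def forward_diff(g, x, h=1e-5):
    """Single forward derivative: first order accurate, one extra run."""
    return (g(x + h) - g(x)) / h

def central_diff(g, x, h=1e-5):
    """Forward plus backward evaluation: second order accurate."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

g = lambda x: x ** 3      # stand-in for the response function; g'(1) = 3
fwd = forward_diff(g, 1.0)
ctr = central_diff(g, 1.0)
```

For the finite element model each extra evaluation is a full program run, which is why the cheaper forward scheme is attractive despite its larger error.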


Figure 5.7: Spectral acceleration, with variability, computed for case 1. On the right, transfer functions,
defined as the ratio between spectral response on surface and spectral response on the rock
outcrop.

Again, results from both approaches are consistent. Although they differ in their mean and
standard deviation estimations, they follow each other (for example, the FOSM estimate reaches
a maximum at a natural period close to the one observed for the Montecarlo estimate),
validating the overall methodology. Now mean and one standard deviation (both positive and
negative) are depicted for both the maximum ordinates of the response spectra at surface and
the transfer function. Variability increases around the characteristic periods of the strong motion
(where peaks on the acceleration response spectra are observed), which is depicted as “base”.

Selective resonance is observed, as the soil deposit amplifies mostly the frequency content close
to its resonance value. A single shear beam analysis yields a natural period of the layer close
to 0.5s if variability is discarded:

T = 4H/Vs = 120/250 = 0.48 s

Transfer functions follow the same trends, as expected. Results for just the maxima of surface
response are shown; as will be discussed later, differences between mean and maxima
response for this specific exercise are minimal. Now results for cases 2, 3, 4 and 5 will be
discussed altogether.




Figure 5.8: Mean response spectra for cases 2, 3, 4 and 5 from top to bottom. Maximum surficial response
is depicted on the left, mean response on the right, for all cases. Spectral ordinates are given as
fraction of g; natural period ordinates are in seconds. Base refers to the response spectra of the
outcropping motion.

For all cases FOSM and Montecarlo estimates are statistically equivalent. For cases one, two
and three the error ranges between 2% and 2.5%; for case 4 it reaches 4%, and for case 5 it is close to
1.5%. For case 4 it is not clear whether this increment in the relative difference is due to the higher
standard deviation of the field, or simply chance.

Anyhow, 4% is a good fit for engineering purposes. A similar conclusion can be drawn about
the case 5 estimate. It is not completely sure that a better description of the model would lead to a
better agreement between both methodologies, although it is quite expected, as the number of
variables describing the response is increased in the FOSM assessment. But this is just a general idea
rather than a fact.

Differences between maximum surficial response and mean surficial response are marginal
for all cases. Also, “errors” tend to be clustered at natural periods below 0.25 seconds,
which grossly match the higher ordinate in the acceleration response spectra of the
outcropping motion.

In an overall view, results are promising. Achieving such close estimation using two
completely different methodologies is remarkable, and validates in a global sense the framework
on which the methodology is built.

Now, results for standard deviation estimates will be analyzed, for the 5 situations considered,
and explained in chapter 4:



Figure 5.9: Standard deviation for cases 2 and 3. On the right, maxima of surficial response; on the left, mean
response. Natural period in seconds.

Results observed for cases 2 and 3 are somewhat consistent with case 1. Overall variability
ranges between 20 and 35%; for case 2, differences between FOSM and Montecarlo
estimates are smaller, ranging 26% for extremes and 20% for mean surficial response. The
opposite is observed for case 3, in which 35% is found for the mean response and 26% for the extreme
response.

Again, standard deviation estimates using FOSM are helpful. In overall terms, if plotted
against the Montecarlo values, both follow the same trends. Below 0.1s they diverge
substantially, indicating model instability for period values less than this limit.


Causes of such a phenomenon have already been discussed in chapters 3 and 4. Now results for
case 4 will be discussed:


Figure 5.10: COV for maxima (left) and mean (right) surficial response. Case 4

At first glance, the COV figure might be misleading; in general there seems to be no agreement
between FOSM and Montecarlo results, as the mean difference between their estimates
reaches 90 and 85%, roughly twice the values found for cases 1, 2 and 3. This is somewhat
in agreement with expectations, as the variance doubled (and FOSM estimates are linear
estimates of the variance).
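The contrast between comparing COVs and comparing standard deviations comes from the division by the mean; a minimal numerical illustration, with all values made up:

```python
def cov(std, mean):
    """Coefficient of variation: standard deviation normalized by the mean."""
    return std / mean

# Hypothetical estimates: nearly identical standard deviations, but a
# modest shift in the mean inflates the COV disagreement.
std_mc, mean_mc     = 0.20, 0.50
std_fosm, mean_fosm = 0.21, 0.40

cov_mc   = cov(std_mc, mean_mc)       # 0.40
cov_fosm = cov(std_fosm, mean_fosm)   # 0.525
```

Here the standard deviations differ by 5% while the COVs differ by over 30%, which is why the standard deviation plots below give a clearer picture.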

Trends among the differences are masked by spurious random occurrences and by the
normalization by the mean estimates when COV values are compared; a clearer panorama can be achieved if, instead,
standard deviation estimates are plotted:


Figure 5.11: Standard deviation of maxima (right) and mean (left) spectral response. Spectral ordinates
as fraction of g, natural period in seconds.
Chapter 5 Analysis Results

Now trends are clearer; a shift in the estimates is also observed, which can explain why the COV
results look so jagged. Increased variance seems to widen FOSM estimates compared to
Montecarlo simulations, but in general terms the comparison between the computed standard deviations
is fairly good. Both follow the same trends, and in general the methods are comparable. Results
for case 5 are shown as follows:


Figure 5.12: COV case 5. Maxima response (left) mean surficial response (right).

Results found for case 5 match the general remarks made for cases 1, 2 and 3. COV for maxima
surficial response is 34% and for mean response 36% (as for case 2, estimates show greater
differences than for extreme response). This is in agreement with the results for cases 1 and 2.
Again, both estimates follow the same global trends, which is remarkable. It might be said
that, due to the increased number of random variables (because the number of elements becomes
4 times larger), a slightly increased variance is observed (if compared to 20 and 26%).

In overall terms, it might be said that FOSM is a good estimate for the mean response and a fair
one for the standard deviation, especially if the latter is comparably low (coefficients of variation, for
example, less than 30%). It has to be said that the analyses performed were elastic, so
proportionality applies; it is not clear if such behavior will stand for inelastic response.

It is not clear whether sampling has an effect on the convergence of both Montecarlo and FOSM, and
neither whether the domain subdivision scheme does. For mean estimates, agreement is clear (and the mean
estimate using FOSM should always be performed as a check, because it is straightforward: just
evaluate the system behavior at the mean values of the random descriptors). Of course, the random
properties of the system have a direct effect on FOSM estimates, as the method propagates uncertainty
from the random descriptors to the system using a linear transform approximation. General trends
found using Montecarlo simulation are also well captured through FOSM analysis, which is
promising. These general remarks are tied to this specific exercise, and more research is
required to generalize them.
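The FOSM procedure described above, evaluating the response at the mean values and propagating input variances through a numerically estimated gradient, can be sketched generically. The linear function g is a stand-in for the finite element model, and uncorrelated random descriptors are assumed:

```python
def fosm(g, means, stds, h=1e-6):
    """First Order Second Moment estimate of the mean and standard
    deviation of g(x) for uncorrelated inputs."""
    mean_est = g(means)                       # response at the mean values
    var = 0.0
    for i, s in enumerate(stds):
        bumped = list(means)
        bumped[i] += h
        grad_i = (g(bumped) - mean_est) / h   # forward finite difference
        var += (grad_i * s) ** 2              # linear variance propagation
    return mean_est, var ** 0.5

g = lambda x: 2.0 * x[0] + 3.0 * x[1]         # stand-in linear response
mu, sigma = fosm(g, [1.0, 2.0], [0.1, 0.2])
# For a linear g the FOSM result is exact: mu = 8.0, sigma = sqrt(0.4)
```

For a nonlinear response the same code gives only a first order approximation, which is consistent with the 30% scatter discussed above.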

5.3 Spatial variation of the response




Figure 5.13: Mean and standard deviation of response for cases 1,2,3,4 and 5 (from top to bottom).
Spectral ordinates as fraction of g. Natural period in seconds.

Mean response seems to be almost the same for all representative locations on surface, as
the mean difference between maxima and spatial average is less than 5%
(defined as the ratio of the difference of maxima and mean responses, divided by the maximum),
if a period range from 0 (PGA) to 4 seconds is considered.

Although specific deviations with values not greater than 15% might be observed around the
characteristic period of the ground (0.48s), and a single divergence of 20% at the ground motion
characteristic period (0.25s) was observed for case 4, for all other cases differences never go
beyond 15%. This specific occurrence is considered to be random, as a trend was not
observed along the data.


Overshooting at these specific periods is a common trend along all cases, and seems to be a
feature independent of the stochastic, modeling and descriptive features. For example,
overshooting does not seem to diminish for the dense finite element mesh. It also seems to be
more critical for periods shorter than the mean natural vibration period of the soil. If a
period range above 0.1 seconds is considered, differences between the mean and maxima envelopes can
be cut by half.

The behavior of the standard deviation of maxima and mean estimates is more complex. The overall
difference (defined in the same manner as for the average) is larger, reaching a mean value of
34% for case 3 and a minimum of 8% for case 1; for cases 2, 4 and 5, values of 23%, 16%
and 27% were observed.

If the period range is limited by taking values between 0.3 seconds (when the maximum
spectral acceleration is observed in the outcropping motion) and 1.5 seconds (value for which
the spectral acceleration is close to half the PGA), deviations between mean and maximum
fall dramatically, reaching 5%, 2.5%, 5%, 8% and 1.8% for cases 1, 2, 3, 4 and 5 respectively.

Therefore, it seems that the most critical parameter related to spatial variability on a flat
surface is the period range, rather than the description of the system, the refinement of the finite
element mesh or the overall variability. It is even noticeable how the biggest variability along the
whole period range, observed for case 2, changes if the trimmed range is
considered, the maximum now being reached for case 3.

Of course, results are bound to this specific exercise and should be reviewed for real flat
surface cases. If the surface layout is not horizontal, it is expected that a different trend will be
observed, but such considerations are left to further research. Also, the question of whether the results
are consistent with displacement spectra should be investigated, as displacement shows an opposite
behavior to acceleration spectra: instead of diminishing with increasing natural
period, it grows in a roughly linear fashion.

Noticeable scattering between mean and maximum acceleration response values was observed
for periods beyond 2 seconds; the question is whether this spread is also observed for displacement
spectra. Again, such findings are left to further research. Also, in this exercise a single strong
motion was considered (the Tolmezzo record of the Friuli earthquake, as stated in chapter four),
raising the question of whether this behavior would also be noticed for different outcropping motions,
especially ones related to far field earthquakes, where the characteristic period might be shifted
to a longer natural period. Such ideas are left to posterior assessments, hopefully on real case
sites.

For the purposes of this specific study, and given the nature of the performed exercise (which might be
constrained to the range 0.3-1.5s), it might be reasonable to say that mean and maxima are
comparable. In order to make the analysis clear, and avoid reviewing an excess of data from now
on, results will refer mostly to maxima of surficial response, but overall assessments can be
extended to the mean response as well.


5.4 Sampling Scheme Assessment

Figure 5.14: Mean spectral acceleration response of maximum values at surface, for three sampling strategies: C1,
a single borehole at the middle; C2, two boreholes, one at each extreme; and C3, three boreholes, two at the
extremes and one at the center. Ordinates as fraction of gravity (g), natural period in seconds.

Figure 5.15: Standard deviation of spectral acceleration response of maximum values at surface for the three sampling
strategies C1, C2 and C3. Spectral ordinates as fraction of g. Natural period in seconds.


Results are interesting. Firstly, differences between sampling strategies one and two, if
compared to three, are remarkable. For period ordinates close to 0.5s, differences can reach
30%, and this is not necessarily reflected in the standard deviation, which overall was higher
for the two-borehole estimate than for the three-borehole sampling scheme.

System response seems quite sensitive to homogeneity of data, as sampling strategies
one and two perform poorly in describing the variability of the field, as stated in chapter 4. Also, for
an increased number of samples the estimates become more distinct, and the smoothing effect
of kriging decreases. Based on the results of this simulation, it seems that sampling strategy
has a clear effect on the estimated response, as it is fundamental to describing overall variability.
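The smoothing behavior of kriging mentioned above follows from the weights produced by the Ordinary Kriging system. A minimal one-dimensional sketch with a spherical variogram follows; the sample positions and variogram parameters are illustrative, and this is not the thesis implementation:

```python
def spherical(h, sill=1.0, rng=100.0):
    """Spherical variogram model (assumed parameters, for illustration)."""
    if h >= rng:
        return sill
    return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging_weights(xs, x0):
    """Weights for estimating the field at x0 from samples at xs.
    System: [gamma_ij, 1; 1, 0] [w; mu] = [gamma_i0; 1]."""
    n = len(xs)
    A = [[spherical(abs(xs[i] - xs[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [spherical(abs(x - x0)) for x in xs] + [1.0]
    return solve(A, b)[:n]   # drop the Lagrange multiplier

# Three "boreholes" at 0, 50 and 100 m, estimate at 40 m:
w = ordinary_kriging_weights([0.0, 50.0, 100.0], 40.0)
```

For an estimate at 40 m the sample at 50 m takes most of the weight, and the weights sum to one, which is the unbiasedness constraint enforced by the Lagrange multiplier; averaging samples this way is exactly the smoothing effect discussed above.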

Also, the effect of natural period is fundamental. Mean response and standard deviation
increase at the fundamental periods of either the soil strata (mean period) or the input
motion. It is quite remarkable how, despite variability, the resonance period of the soil deposit
is kept: both mean and standard deviation increase around the natural period ordinate
of 0.5s, but due to the inherent randomness this period shifts slightly for each case. For example,
for case 1 the natural period (at which both the maximum standard deviation and mean response occur) is
slightly less than 0.5s, matching closely the non-random value of 0.48s, while for the other cases
the period is slightly more than this value. Below, the one standard deviation
envelopes for maxima surficial response are shown. In general, the remarks made before are confirmed:
envelopes diverge at significant period ranges, as their mean and standard deviation
differ. No special trends are observed among the estimates, besides the ones already
stated.
Figure 5.16: Plus and minus one standard deviation acceleration spectral response of maxima values at surface.
Spectral ordinates as fraction of gravity g. Natural period in seconds.

5.5 Effect of Field Variability

Figure 5.17: Mean response of maxima surficial spectral acceleration, cases 2 and 4. Ordinates in fraction
of gravity, natural period in seconds.

Figure 5.18: Standard deviation of maxima surficial spectral acceleration, cases 2 and 4. Ordinates in fraction of
gravity, natural period in seconds.


Figure 5.19: Plus and minus one standard deviation envelopes for maximum spectral acceleration surficial
response. Ordinates as fraction of gravity, natural period in seconds.

Mean response results show an increase for the higher variability case (case 4); unfortunately, a
direct comparison is not rigorous, as the estimates are performed for different realizations of
the random field. So it cannot be stated that higher variability leads to an increased
seismic response. Such a situation depends on the properties of the system and the input
motion. Increased variability might even lead to a lower mean response, contrary to common
sense.

For example, consider a single degree of freedom system with a random natural period. Suppose that
the natural period of such a system has a mean close to the frequency of an input
harmonic motion. In such a case, spread is beneficial, as higher variability leads to averaging in
lower response beyond resonance, compared to a narrower, less variable phenomenon.
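This argument can be checked numerically with the steady-state amplification of a damped SDOF under harmonic load; the 5% damping ratio and the spread values below are illustrative choices, not thesis data:

```python
import random

def amplification(r, zeta=0.05):
    """Steady-state dynamic amplification for frequency ratio r."""
    return 1.0 / (((1 - r * r) ** 2 + (2 * zeta * r) ** 2) ** 0.5)

def mean_amplification(spread, n=20000, seed=1):
    """Mean amplification when the natural period (hence the frequency
    ratio r) is random, with mean at resonance (r = 1) and given spread."""
    rng = random.Random(seed)
    vals = [amplification(max(0.05, rng.gauss(1.0, spread))) for _ in range(n)]
    return sum(vals) / n

narrow = mean_amplification(0.02)   # small spread: close to 1/(2*0.05) = 10
wide   = mean_amplification(0.20)   # large spread: noticeably lower mean
```

With the mean period at resonance, widening the spread pushes most realizations off the resonance peak, so the averaged response drops, which is the counterintuitive effect described above.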

Standard deviation of the response, in contrast, seems to reflect more directly the increase in
the field variability, although not proportionally. Roughly, the overall increase is about 20% for
all natural periods between 0 and 4 seconds. Anyhow, this observation is quite limited due to the
fact that occurrences and samples are different, and therefore a direct matching between both
cases is an extremely rough approximation.

Even so, despite the higher increase in variability, the mean response of case 4 is noticeably
higher than that of case 2, even reaching case 3 results. But any trend statement is highly dubious,
and further research on the topic must be performed.

Finally, the issue of system domain division will be addressed. Again, direct comparison is
indicative at most. Cases two and five do not follow the same occurrence, so samples, and
therefore predicted values, are completely independent. Taking this into account, results are
shown below.

5.6 Effect of Finite Element Mesh

Figure 5.20: Mean response of maxima spectral acceleration on surface, for cases 2 (coarse finite element
mesh) and 5 (refined finite element mesh). Spectral ordinates as fraction of gravity. Natural period
in seconds.

Figure 5.21: Standard deviation of maximum spectral acceleration surficial response for cases 2 and 5. Spectral
ordinates as fraction of gravity, natural period in seconds.


Figure 5.22: Plus and minus one standard deviation envelopes of maximum surficial spectral acceleration.
Spectral ordinates as fraction of gravity, natural period in seconds.

Results in general are in agreement with observations made through this chapter, but a clear
trend could be observed. Peak response around 0.1 second is clearly noticed in the fine mesh,
although sizable uncertainty is also bound to it. This way, remarks related to the period range
of the analysis are somewhat present, showing in effect how a coarse mesh can mask
hidden trends related to low periods.

About maxima response, results are not conclusive. It has to be noted that spectral
accelerations over 5g were also achieved in the coarse mesh, even for both simulated
random fields (cases 3 and 4). It seems that the increased response obeys rather to a better
statistical depiction of the field than to an inherent property of the mesh itself; by
including more elements along the vertical direction, a better depiction of the behavior along
depth is achieved, and the field becomes somewhat less homogeneous, allowing for a certain
increase in the response. This is a global statement and has to be checked for real cases.

Therefore, the low response in cases 1 and 2 could be due to the smoothing characteristics of
kriging, especially if samples are allocated at distances longer than range values, and not
due to the limitations of the coarse mesh. It might be valid to state that dimensioning of the
elements should take into account range values in both directions, but no specific
recommendations can be set, as this analysis is limited to this specific single case.

Another interesting effect is the fallout in variance after the 0.1s peak is achieved. It is not
completely clear why this effect is observed. Maybe the increased refinement of the mesh
makes the model more stable at the resonance period, and results become less scattered. But this
single analysis cannot yield a definite conclusion. More research has to be performed before
making any definite statement.

In general terms, it has been shown that the proposed methodology works. It allows for the
specification of several characteristics of the site field, sampling schemes and modeling, which
results in flexibility. Now, validation through real cases and further random field modeling is
advisable. Such a task has been left for further research.

Chapter 6 Final Remarks

6 FINAL REMARKS
A methodology to consider the effects of spatial variability in two dimensional local site
response has been developed, by modeling the site as a homogeneous field on which
Ordinary Kriging is employed to estimate shear wave velocities at unsampled locations, based
on 3D field test data. Analysis is constrained to a plane strain section.

Soil shear modulus is correlated with confining stress, especially for sands but also, to a lesser
degree, for clays (Stokoe et al 1999). It is then expected that shear wave velocity increases
with depth as overburden becomes higher. Because of this fact, one of the most significant
improvements, which could greatly extend the scope of application of this work, would be
including Universal Kriging, allowing Kriging among non-homogeneous random
fields (ie with different mean shear wave velocity at different positions).
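Short of full Universal Kriging, a common simplified workaround is to remove a deterministic Vs-depth trend by least squares and krige only the residuals as a homogeneous field. A sketch of the detrending step, with illustrative borehole data (not from this work):

```python
def linear_trend(depths, vs):
    """Least squares fit of vs = a + b * depth."""
    n = len(depths)
    mx = sum(depths) / n
    my = sum(vs) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(depths, vs))
         / sum((x - mx) ** 2 for x in depths))
    a = my - b * mx
    return a, b

# Illustrative borehole data: shear wave velocity increasing with depth.
depths = [2.0, 5.0, 10.0, 20.0, 30.0]
vs     = [160.0, 175.0, 205.0, 255.0, 310.0]
a, b = linear_trend(depths, vs)
# Residuals (vs minus trend) would then be treated as a homogeneous field:
residuals = [y - (a + b * x) for x, y in zip(depths, vs)]
```

Universal Kriging solves the trend and the correlated residual jointly, so this two-step version is only an approximation of that improvement.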

Also, analysis of real cases should be performed, in order to set general conclusions about
sampling strategies, overall variability and finite element meshing, as the results obtained alongside
the simulations performed in this work are strictly limited to the proposed case data.

Based on the case analysis set up for this research, it has been found that the statistical description
of the field through analytical variograms is fundamental for two dimensional stochastic
surface response assessment. In order to achieve a reasonable analysis it is advisable to have
enough samples, distributed more or less evenly across the site, to make a consistent fit to
a positive definite variogram functional. Results obtained from the simulations performed
in this research work showed that the kriging error rose quickly after one fourth of the range,
meaning that samples covering this span are recommended. But even if sampling is limited, at
least three measurement points should be taken (two at the extremes and one at the center), as
single sampling at the borders has proven too coarse, yielding results with considerable
dispersion (represented by the standard deviation of the kriged shear wave velocity values) and a
rough description of surface response (both high standard deviation values and different mean
estimates if compared to the three borehole sampling).
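The rapid growth of kriging error with sample spacing can be read directly off the spherical variogram model: at one quarter of the range the variogram has already climbed to about 37% of the sill. A minimal check, with a unit sill and a 100 m range assumed purely for illustration:

```python
def spherical(h, sill=1.0, rng=100.0):
    """Spherical variogram: dissimilarity as a function of separation h."""
    if h >= rng:
        return sill
    return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

quarter = spherical(25.0)    # h = range/4 -> about 0.367 of the sill
half    = spherical(50.0)    # h = range/2 -> 0.6875 of the sill
full    = spherical(100.0)   # h = range   -> the full sill
```

Since the kriging variance grows with the variogram values between the target and the samples, spacings much beyond a quarter of the range quickly leave little information to interpolate from.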


A noticeable relationship was observed between the high frequency surface response and
the refinement of the mesh, as low period components of the wave field are better simulated.
However, it is not clear whether a better domain subdivision (ie a smaller mesh element size) would lead
to a more precise assessment, as the artificial increase of the range of response has to be
weighted; it involves less reliable low frequency strong motion components which can also be
subject to further uncertainty as, for example, the frequency dependency of damping becomes critical for values less
than 10Hz, and might not be properly dealt with in a time step formulation.
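The element-size constraint underlying these remarks is the Kuhlemeyer and Lysmer (1973) rule of thumb, which limits element dimension to a fraction (commonly one eighth to one tenth) of the shortest wavelength to be transmitted. A sketch, with illustrative velocity and frequency values:

```python
def max_element_size(vs_min, f_max, nodes_per_wavelength=8):
    """Largest admissible element dimension for a given minimum shear
    wave velocity (m/s) and maximum frequency (Hz) to be transmitted."""
    wavelength = vs_min / f_max          # shortest wavelength in the model
    return wavelength / nodes_per_wavelength

# Illustrative values: Vs = 250 m/s, frequencies up to 10 Hz.
h = max_element_size(250.0, 10.0)        # 3.125 m
```

Pushing the mesh to transmit higher frequencies therefore shrinks the elements quickly, which is the cost that has to be weighted against the reliability of those high frequency components.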

Variability of the overall field shear wave velocity (represented by the stationary standard
deviation of the whole field) seems to increase the standard deviation of surface response, but
the effect is not proportional. Also, it was noticed that, despite the spatial variability beyond the
vertical direction, the 1D deterministic period estimate of the soil layer, taken as four times
the travel time, holds. This assessment was performed by comparing spikes on the computed
transfer functions (for the average of surface locations), and taking into account the periods
where these features showed up. Differences with the mentioned estimate proved to be
minimal (ie less than 10%).
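The four-times-the-travel-time estimate generalizes T = 4H/Vs to a layered deposit; a sketch, using the single-layer values consistent with the 0.48s figure quoted in chapter 5 (the two-layer call is purely illustrative):

```python
def natural_period(thicknesses, velocities):
    """1D estimate of the fundamental period of a soil deposit:
    four times the vertical shear wave travel time."""
    travel = sum(h / v for h, v in zip(thicknesses, velocities))
    return 4.0 * travel

# Single 30 m layer with Vs = 250 m/s:
T = natural_period([30.0], [250.0])   # 0.48 s
```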

Again, it has to be recalled that the results from this research are taken from the simulations
performed, requiring further validation, especially for real data cases. Also, in-depth
assessments require sizable judgment by an expert, as critical decisions regarding the
definition of the finite element model and the stochastic structure of the site (represented by
the analytical variogram) have to be made with care and with knowledge of the complex
phenomena involved. Even a sensitivity analysis is not out of order.

In general, the application is versatile, efficient and clear, and allows for several adjustments and
sensitivity analyses; it is therefore ready to be applied to real cases, as long as a well prepared and
efficient sampling strategy is performed along a profile. It would also be interesting to apply it
to specific experimental sites, where extensive research on the soil has been performed and,
in conjunction, earthquake occurrences have been recorded, so that it can be weighted whether the partial
findings of this research can be generalized.


REFERENCES
AASHTO LRFD [2007] Bridge Design Specifications. American Association of State Highway and
Transportation Officials.
Abrahamson N., Silva W. [1997] “Empirical response spectral attenuation relations for shallow crustal
earthquakes” Seismological Research letters. Vol 68 No 1 pp 94-127.
Arroyo, O., Sanchez-Silva, M. [2005] “Comparing target spectral design acceleration values by using
different acceptability criteria ” Structural Safety, Vol. 27, No. 1, pp. 73-91.
ASCE-7. [2005] Minimum design loads for buildings and other structures, American Society of Civil
Engineers.
Auvinet, G. [2006] “Some aspects of Rion-Antirion bridge foundation design”. Proceedings of the
International Society for Soil Mechanics ad Geotechnical Engineering, Chile Touring Lecture
Santiago de Chile, Chile.
Baecher, G. Christian, J [2003] Reliability and statistics in geotechnical engineering, John Wiley &
Sons Ltd.
Bardet, J., Ichii, K., Lin Ch [2000]. EERA: Equivalent-linear earthquake site response analyses of
layered soil deposits. University of southern California. Department of civil engineering. Los
Angeles, USA.
Bleines, C [2004] “Successfully Applying Geostatistics” Oceanography Today No 2 pp 24-25
Chambers, R., Yarus J., Hird, K. [2000] “Petroleum Geostatistics for non geostatisticians”. The
leading edge Vol 19 No 5. Pp 474-479.
Chopra, A. [1995] Dynamics of structures Prentice Hall, Upper Saddle River, New Jersey. USA.
Clough, R., Penzien, J. [1993] Dynamics of structures, McGraw-Hill Book Company. USA.
Christopoulos C., Filiatrault A. [2006] Principles of passive supplemental damping and seismic
isolation IUSS Press, Pavia, Italy.
Conte, J.[2006] READER SE 224 Structural Reliability and Risk Analysis: University of California at
San Diego, San Diego. USA
Conte, J.[2006] READER SE 206 Random Vibrations. University of California at San Diego, San
Diego USA.


Dawson, K,. Baise, L., [2005] “Three dimensional liquefaction potential analysis using geostatistical
interpolation” Soil dynamics and earthquake engineering” Vol 25, No 1, pp 369-381.
Deutsch, C., Journel, A. [1997] Geostatistical software library and user’s guide. Oxford University
Press, Oxford UK.
FEMA 450 [2003] Nehrp Recommended provisions for seismic regulations for new buildings and
other structures (FEMA 450) Building Seismic Safety Council – Federal Emergency Management
Agency. Washington DC. USA.
EUROCODE 8 [1998] Design of Structures for Earthquake Resistance. European Committee for
Standardization CEN.
Graff, K., [1993] Wave Motion in Elastic Solids Dover Publications INC, New York, USA.
Hudson, M., Idriss, I., Beikae, M. [1994] QUAD4M: a computer program to evaluate the seismic
response of soil structures using finite element procedures and incorporating a compliant base.
National Science Foundation, Washington DC, USA. (revision 2003)
Idriss I, Sun J. [1992] A computer program for conducting equivalent-linear seismic site response
analyses of horizontally layered soil deposits. Program modified based on the original SHAKE
program published in 1972 by Schnabel, Lysmer & Seed. Center for geotechnical modeling,
department of civil & environmental engineering, University of California, Davis, USA.
Isaaks, E., Srivastava, R. [1989] Applied Geostatistics. Oxford University Press. New York, USA,
Oxford UK.
Ishihara K., [1996] Soil Behavior in earthquake geotechnics. Oxford Science Publications. New York,
USA
Kawase, H, [1996] The cause of the damage belt in Kobe “the basin-edge effect” constructive
interference of direct s-wave with the basin-induced diffracted/Rayleigh waves. Seismological
Research Letters, Vol 67, No 5.
Kramer S., [1996] Geotechnical Earthquake Engineering. Prentice Hall, Upper Saddle river, New
Jersey USA.
Kuhlemeyer, R. L., & Lysmer, J [1973]. Finite Element Method Accuracy for Wave Propagation
Problems, J. Soil Mech. & Foundations. Div. ASCE, 99 (SM5), pp 421-427.
Lai, C. G., Strobbia, C., and Dall'ara, A. [2008] "Terremoto di Progetto e Analisi di Risposta Sismica
Stocastiche nei Territori Toscani della Garfagnana e Lunigiana" IUSS Press, Report No. 2008/01,
pp. 210 (in Italian).
Newland D. [1993] Random vibrations, spectral and wavelet analysis. Longman Scientific &
Technical. Harlow, Essex. England.
Nour et al. [2004] Finite element model for the probabilistic seismic response of heterogeneous soil
profile. Soil Dynamics and Earthquake Engineering. Vol 23, No 1. Pp 331-348.
Oliver MA, Kharyat AL [2001]. “ A Geostatistical investigation of the spatial variation of Radon in
soil” Computers and Geosciences. Vol 27, No 4 pp 939-957.


Otani, S [2006] “Nonlinear Earthquake Response Analysis of Reinforced Concrete Buildings” class
notes of Nonlinear analysis of reinforced concrete buildings course at ROSE SCHOOL, Pavia,
Italy.
Pender, S [1996] “Earthquake Resistant Design of Foundations” class notes of course Earthquake
Resistant Design of Foundations at ROSE SCHOOL, Pavia, Italy.
Prieto, J., Ramos-C A. [2006] "Influence of uncertainty of soil damping and shear modulus degradation curves of deep deposits on the earthquake mitigation cost." 100th Anniversary Earthquake Conference commemorating the 1906 San Francisco Earthquake. San Francisco, USA.
Phoon, K. [2008] Reliability-Based design in geotechnical engineering. Taylor and Francis, London,
UK New York, USA.
Priestley N., Calvi, G., Kowalsky M. [2007] Displacement-Based design of structures IUSS Press,
Pavia, Italy.
Rota M. [2007] "Estimating uncertainty in 3D models using geostatistics." Individual Study, European School for Advanced Studies in Reduction of Seismic Risk (ROSE School), University of Pavia, Italy.
Sanchez H. [2008] Master of Science Thesis. European School for Advanced Studies in Reduction of
Seismic Risk (ROSE School), University of Pavia.
Sanchez-Silva, M. [2004] Introducción a la confiabilidad y la evaluación de riesgos: teoría y aplicaciones en ingeniería. Ediciones Uniandes, Bogotá, Colombia (in Spanish).
Slejko et al. [1999] "Stress field in Friuli (NE Italy) from fault plane solutions following the 1976 main shock." Bulletin of the Seismological Society of America, Vol 89, No 4, pp 1037-1052.
Smolka, A. [2007] Earthquake review 1994-2006. Schadenspiegel special topic: Earth, when the forces of nature become a danger. Munich Re Group, München, Germany.
Stokoe K, Darendeli M. Adrus, R. Brown L. [1999] “Dynamic Soil Properties: Laboratory, Field and
Correlation Studies,” Theme Lecture, Second International Conference Earthquake Geotechnical
Engineering, Vol. 3, Lisbon, Portugal, June, 1999, pp. 811-845.
TOPICS. [2000] "Great Natural Catastrophes 1950-1999: economic and insured losses with trends." Topics 2000. Natural catastrophes – the current position. Munich Re Group, Special Issue, Millennium Edition. München, Germany.
Thompson, E., Baise, L., Kayen, R. [2004] Spatial correlation of shear-wave velocity in the San Francisco Bay Area sediments. Soil Dynamics and Earthquake Engineering, Vol 27, No 1, pp 144-152.
Verruijt A. [2008] Soil Dynamics. Delft University of Technology, Papendrecht, The Netherlands.


A. USER BASIC REFERENCE


Two main Matlab© applications have been developed to perform 2D stochastic site response analysis following the specifications and constraints explained earlier in this report. maine.m performs Monte Carlo simulation using Latin hypercube sampling, while FOSM.m performs a First Order, Second Moment (FOSM) reliability assessment through a forward-gradient numerical computation. Both applications employ the QUAD4M finite element code as the core application to perform each simulation.
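The Latin hypercube sampling mentioned above can be sketched as follows. This is a hedged Python illustration of the idea for a single normally distributed variable, not the actual maine.m implementation; the function name and numeric values are hypothetical.

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n_samples, mean, std, rng=None):
    """Latin hypercube sample of one normal variable, N(mean, std).

    The probability range [0, 1) is split into n_samples equal strata;
    one uniform value is drawn inside each stratum, the strata are
    shuffled, and each value is mapped through the normal inverse CDF.
    Unlike plain random sampling, every probability stratum (including
    the tails) is represented exactly once, even for small n_samples.
    """
    rng = rng or random.Random()
    u = [(k + rng.random()) / n_samples for k in range(n_samples)]
    rng.shuffle(u)
    return [NormalDist(mean, std).inv_cdf(p) for p in u]

# Example: 100 stratified draws of a shear wave velocity ~ N(250, 30) m/s
vs = latin_hypercube_normal(100, 250.0, 30.0, random.Random(1))
```

Because each of the 100 probability strata contributes one value, the sample mean stays very close to the target mean with far fewer simulations than unstratified Monte Carlo would need.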

A.1 Software and Hardware Requirements


Microsoft Excel© and Matlab© are explicitly required to run both applications, maine and FOSM. A Windows environment is also a must; both Windows Vista and Microsoft Windows Server 2003 R2 are able to run the program.

Both applications were benchmarked on a computer with an Intel Core 2 CPU T5200 with a clock speed of 1.6 GHz and 1 GB of RAM. In such conditions, a 200-element simulation took roughly one minute, while an 800-element simulation (dense mesh; for more details please refer to chapter 4) took approximately 5 minutes, following the conditions and limitations stated in chapter 4. Microsoft Excel© 2003 was installed on the benchmark machine. It is not certain whether the application would work with Microsoft Excel© 2007 without modifications.

A.2 Installation
Installation of both applications is straightforward: copy the directory BQUAD, provided with the installation, to the C: drive. Avoid editing data inside the C:\BQUAD directories, as supporting applications and file formats are required for the correct behaviour of the application. In case of malfunction, simply repeat the installation procedure.


A.3 Maine Routine Input Parameters


The maine routine requires the following data as input in the matlab.m file:

• H: Height of the finite element mesh

• L: Length of the finite element mesh

• NH: Number of elements along height

• NL: Number of elements along length

• N: Number of simulations

• T: Period range for spectral ordinate calculations; by default a period range between 0 and 4 seconds is considered, taking ordinates every 0.05 seconds.
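For reference, the default period discretization amounts to 81 spectral ordinates. The following Python sketch mirrors the input list above; the numeric values chosen for the mesh and the number of simulations are illustrative assumptions, not taken from the report.

```python
# Illustrative input block mirroring the parameter list above.
H, L = 30.0, 200.0   # height and length of the finite element mesh [m]
NH, NL = 10, 40      # number of elements along height and length
N = 100              # number of simulations
dT = 0.05            # spectral ordinate spacing [s]
T = [i * dT for i in range(81)]   # 81 periods: 0.00, 0.05, ..., 4.00 s
dx, dy = L / NL, H / NH           # resulting element dimensions [m]
```

The element dimensions dx and dy are what must be checked against the wavelength-based accuracy limits of Kuhlemeyer & Lysmer [1973] discussed in chapter 4.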

The maine routine calls several executable programs which are employed throughout the assembly of the QUAD4M input file. These are: EqRecordExcel.exe, ExelFileEnsembler.exe, and ExcelFileInput.exe. EqRecordExcel.exe launches a user interface; upon loading, the following procedure has to be performed:

• Open an acceleration record: select the File tab on the display menu, then the Open Accelerogram option; afterwards a standard Windows open-file dialog will appear, in which the user can select any accelerogram file on the PC.

• Afterwards the text file will appear on the display. By default the application has been set to follow the PEER NGA format, but any other format can be accommodated by changing the number of header lines and the number of acceleration values per line. Acceleration values have to be input as fractions of gravity. When done, the user shall press the OK button.

• Then a second set of dialog boxes will appear; based on the PEER NGA standard, the number of samples and the time interval are read from the heading of the acceleration record file. If another format is employed, or the values do not correspond to the actual ones, the user can input them by replacing the shown amounts. Again, the user has to click the OK button.

• A second standard Windows dialog box will appear. Now the user has to search for and select the Excel file in which the field data has been set up, according to the format shown later. Once this data has been provided, the user is allowed to set the unit system, effective cycle ratio, PGA scale factor (ratio of the desired peak PGA to the maximum PGA of the acceleration record), base (rock) properties, soil properties, and modulus degradation and equivalent damping curves. Some editable values are set as defaults. To continue the analysis, the user has to press OK again.


If the data has been defined properly, a graph showing the nodes of the finite element mesh will be displayed in Matlab©.

To perform a successful analysis, a variogram functional shape has to be input as a function inside the file h.m, located in directory C:\BQUAD\Kring. It has to follow a positive-definite functional shape.

In the simulations performed in this research, a parameterized variogram function was defined (the sill, range and nugget variables have to be input in function kRGdef.m, located inside C:\BQUAD\Kring), although any changes are left to future users' discretion.
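A classical positive-definite choice is the spherical semivariogram model. The Python sketch below illustrates its nugget-sill-range parameterization; it is an example of an admissible functional shape, not the code shipped in h.m or kRGdef.m.

```python
def spherical_variogram(h, nugget, sill, range_):
    """Spherical semivariogram, a standard positive-definite shape:
    gamma(0) = 0, a nugget jump at the origin, cubic growth up to the
    range, and a constant sill beyond it."""
    if h == 0.0:
        return 0.0
    if h >= range_:
        return sill
    r = h / range_
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)
```

For example, with a nugget of 10, a sill of 100 and a range of 50 m, the semivariance at a 25 m lag is 71.875, and any lag beyond 50 m returns the full sill.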

A.4 Field Data Excel File Input


A demo input data field is given in Annex B. A simple color scheme has been implemented:

• Pale yellow: heading cell. Editable; any change won't impair the application, but it is advisable to leave these cells as given, for further reference.

• Pale blue: coordinate cell. Editable; required to run the application. In the blue cells, insert the coordinates of the borders of the finite element mesh in space, following arbitrary, user-defined X and Z axes. Point O denotes the top-left corner of the mesh; Point F denotes the top-right corner of the mesh on display. Setting negative coordinates is not advisable.

• Pale green: arbitrary; values input here are not considered in the simulation. As the defined mesh is rectangular, values along vertical coordinates are not considered.

Input fields are the following:

• Id: An ID number identifying the sample

• X: X coordinate of the sample (m in SI, ft in the English system)

• Y: Y coordinate of the sample (m in SI, ft in the English system)

• Z: Z coordinate of the sample (m in SI, ft in the English system)

• Vs: Shear wave velocity of the sample (m/s in SI, ft/s in the English system)

A note about the coordinate system: in the sample input field, coordinates are set in a 3D Cartesian coordinate system in which the Z and X directions are set arbitrarily by the user, but the Y direction is always vertical. Y coordinates are always positive, meaning that the bottom of the soil strata (bedrock) has coordinate 0; the surface has a positive Y coordinate equal to the depth of the analysis site.

A3
Appendix A User Basic Refference

The coordinate system in the input sample file does not coincide with the coordinates of the mesh, as the finite element model is two-dimensional, even though kriging is performed taking into account all spatial Cartesian coordinates. The coordinates of the mesh are taken along the section cut which defines the 2D model, but for both systems (sample and model) the Y direction follows the orientation stated before (always positive, with the rock base at coordinate 0).
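Since boreholes are usually logged as depth below the surface, while the input file expects Y measured upward from bedrock, the conversion is simply Y = site depth minus sample depth. A minimal sketch (the function name is hypothetical, not part of the application):

```python
def depth_to_model_y(sample_depth, site_depth):
    """Convert a borehole sample depth (measured downward from the
    surface) into the model Y coordinate (measured upward from the
    bedrock, which sits at Y = 0)."""
    y = site_depth - sample_depth
    if not 0.0 <= y <= site_depth:
        raise ValueError("sample lies outside the soil column")
    return y
```

For a 30 m deep soil column, a sample logged 28.5 m below the surface is entered with Y = 1.5 m.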

A.5 Output Results


The main data output from function maine.m are the variables SaEx and SaMn, which contain all the simulations performed at the user's request. In the first column, spectral ordinates are shown; afterwards, the specific generated samples are displayed. SaEx includes the maxima envelope of the data, while the SaMn array stores the mean response of selected points at the ground surface. Currently, statistics are computed for 5 equally spaced points along the surface. The number of points taken into account can be increased or decreased by changing the parameter ct (number of categories) in function accdata.m, located in directory C:\BQUAD\BASE.
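Conceptually, SaMn and SaEx are period-by-period statistics taken over the N simulated spectra. A minimal Python sketch of that reduction (the matrix layout and function name are assumptions, not taken from the actual code):

```python
def surface_spectrum_stats(spectra):
    """Column-wise statistics over simulated response spectra: given one
    list of spectral ordinates per simulation, return the period-by-period
    mean (as stored in SaMn) and the maxima envelope (as in SaEx)."""
    n = len(spectra)
    mean = [sum(col) / n for col in zip(*spectra)]
    envelope = [max(col) for col in zip(*spectra)]
    return mean, envelope

mean, env = surface_spectrum_stats([[1.0, 2.0], [3.0, 4.0]])
```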

A.6 FOSM.m Routine


FOSM.m, located at C:\BQUAD\BASE, uses the same input parameters as the maine.m routine, except that, instead of the number of simulations, the user has to define the variable dxi, which sets the increment of the shear wave velocity at which the numerical partial derivative of each spectral value is calculated.

The FOSM routine directly outputs mean and standard deviation estimates of spectral ordinates in the following variables:

• MnSaMn: Mean estimate of the mean surficial response

• MnSaEx: Mean estimate of the maxima envelope of surficial response

• StdRsMn: Standard deviation estimate of the mean surficial response

• StdRsEx: Standard deviation estimate of the maxima envelope of surficial response
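The forward-gradient computation behind these estimates can be sketched as follows. This is a hedged Python illustration of the First Order, Second Moment method for independent inputs; the real FOSM.m drives QUAD4M simulations rather than the closed-form model used here, and the function names are hypothetical.

```python
def fosm(model, means, stds, dxi=1.0):
    """First Order, Second Moment estimate of a scalar response.

    The mean is approximated by evaluating the model at the mean input,
    and the variance by propagating input variances through forward-
    difference partial derivatives:
        var(Y) ~= sum_i (dY/dx_i)^2 * var(x_i)
    """
    y0 = model(means)
    var = 0.0
    for i in range(len(means)):
        x = list(means)
        x[i] += dxi                      # perturb one shear wave velocity
        dy_dxi = (model(x) - y0) / dxi   # forward-difference gradient
        var += (dy_dxi * stds[i]) ** 2
    return y0, var ** 0.5
```

Each perturbed evaluation corresponds to one extra QUAD4M run, so FOSM costs one run per input variable plus one at the mean, instead of the N runs of a Monte Carlo campaign.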

A.7 Example File Input:

Id       X      Y     Z   Vs
Point O  0      30    0
Point F  200    30    0
1        28.5   1.5   0   246
1        28.5   4.5   0   282
1        28.5   7.5   0   336
1        28.5   10.5  0   239
1        28.5   13.5  0   204
1        28.5   16.5  0   296
1        28.5   19.5  0   224
1        28.5   22.5  0   226