
DISS. ETH NO. 18781

CONSTRUCTION AND APPLICATION OF BAYESIAN PROBABILISTIC NETWORKS FOR EARTHQUAKE RISK MANAGEMENT
A dissertation submitted to ETH ZURICH for the degree of Doctor of Sciences

presented by
YAHYA YILMAZ BAYRAKTARLI
Dipl.-Ing., University of Karlsruhe (TH)
born 03.02.1972

citizen of Germany

accepted on the recommendation of
Professor Michael H. Faber, examiner
Professor Ton Vrouwenvelder, co-examiner
Martin Bertogg, co-examiner

2009


To Esra, Selma and Mihri


Acknowledgements
I appreciate the support provided by the Swiss National Science Foundation (SNF) as part of the interdisciplinary research project "Management of Earthquake Risks using Condition Indicators" (MERCI). I would like to thank Professor Faber for his valuable supervision. He gave me the freedom to search for my way and the guidance to find it. I would also like to thank Professor Vrouwenvelder and Mr. Bertogg for acting as co-referees. The many discussions with Ufuk Yazgan, Oliver Kübler, Jens Ulfkjaer, Jack Baker, Kazuyoshi Nishijima and Matthias Schubert contributed to this work, for which I am very thankful. I would also like to thank the MERCI research group for the many fruitful discussions. Many thanks to my friends and colleagues at the Institute of Structural Engineering for the pleasant atmosphere, the shared activities and the Friday games of soccer. My deepest gratitude goes to my wife Esra and my daughters Selma and Mihri, especially for the many working weekends towards the end of this dissertation.

Zurich, December 2009

Yahya Y. Bayraktarli


Abstract
Developments in earthquake engineering research during the last decades provide, within acceptable limits, reliable design of new structures against earthquakes. The existing building stock, however, still represents a significant risk in regard to the safety of people as well as to the economic assets of society. Assessing the earthquake risk for buildings necessitates the consideration of uncertainties arising from the seismic hazard, site effects, structural response, and immediate and indirect consequences. Furthermore, additional uncertainties emerge when the earthquake risk is analyzed for portfolios of buildings, e.g. cities. A city is more than a conglomeration of individual buildings, since interdependencies exist between its various elements. These "system effects" are investigated in order to set up a framework for capturing the complex consequences of natural catastrophes.

First, a system-theoretic definition of a city considering functional and hierarchic dependencies between its elements is given. Afterwards, a framework for risk assessment is proposed. Both perspectives are jointly considered through the application of Bayesian probabilistic networks (BPNs). The elements within a BPN comprise the set of parameters considered within the risk analysis problem. The joint probability distribution of these parameters would be of the highest value to the analyst, but only in very rare cases can it be established directly. BPNs constitute a very efficient way of representing the joint probability distribution by exploiting conditional dependencies.

BPNs are constructed for the modules of earthquake risk analysis: seismic hazard, structural damage, soil response and consequence assessment. The application of these BPN models is illustrated by four examples considering a portfolio of 5-story buildings in a city in Turkey. The first example considers the decision problem of whether or not to retrofit a class of structures in a city. The second example illustrates that the framework also facilitates portfolio loss estimation. The proposed framework allows for a consistent representation of the effect of dependencies (e.g. common events or common models) in the estimation of losses for important buildings (e.g. hospitals); it is shown that the inclusion of such effects may have a significant impact on portfolio loss estimation. The third example illustrates the application of BPNs for updating seismic fragility curves based on data from post-earthquake building inspections. The fourth example discusses the assessment of consequences using a newly developed concept of robustness.

The proposed framework has several merits: it enables a consistent representation of uncertainties, accounts for the crucial effects of dependencies, and facilitates risk updating based on new information.


Kurzfassung
Recent advances in earthquake engineering research enable the reliable design of new structures. The existing building stock, however, still represents a significant risk both to the safety of people and to the economic assets of society. The earthquake risk analysis of structures requires the consideration of the uncertainties in the seismic hazard, in the site effects, in the structural response, and in the direct and indirect consequences. Moreover, further uncertainties arise when the earthquake risk is analyzed for a portfolio of structures. A city is more than a collection of individual structures, since interdependencies exist between its elements. These "system effects" are also investigated in order to establish a theoretical framework for capturing the complex consequences of natural hazards.

First, a system-theoretic definition of a city is given, taking functional and hierarchical dependencies between its elements into account. Subsequently, a theoretical framework for risk analysis is introduced. Both perspectives are considered through the application of Bayesian networks. The elements of a Bayesian network represent the variables explicitly considered within a risk analysis problem. The joint probability distribution of these variables can be evaluated efficiently with Bayesian networks. For the analyst, this joint probability distribution is of great value, yet it can be determined analytically only in the rarest of cases.

Bayesian networks are developed for the seismic hazard, soil response, structural damage and consequence modules of an earthquake risk analysis. The application of these modules is illustrated with four examples based on a portfolio of reinforced concrete structures in a city in Turkey. The first example treats a decision problem concerning the seismic retrofitting of a class of structures. The second example illustrates how the loss exceedance curve of a portfolio is computed with Bayesian networks; it is shown that considering dependencies (e.g. in the events or in the models) has a significant influence on the result. The third example shows how Bayesian networks are used to update seismic fragility curves based on damage inspections after an earthquake. The fourth example discusses the application of a newly developed robustness concept to the assessment of consequences.

The framework presented here has several advantages. With the proposed method, a consistent treatment of uncertainties and dependencies is possible. A further advantage is that individual risks can be systematically updated as new information becomes available.


Contents

1 Introduction
  1.1 Problem definition
  1.2 Objectives
  1.3 Scope and outline of the dissertation

2 Fundamentals
  2.1 City as a system
  2.2 Decision theory
  2.3 Uncertainty, probability, utility and risk

3 Methodologies
  3.1 Existing earthquake loss estimation methodologies
  3.2 Proposed framework

4 Models
  4.1 Seismic hazard
  4.2 Soil failure
  4.3 Structural damage
  4.4 Consequences
  4.5 Verification and validation of the models

5 Examples
  5.1 Example 1: Decision for retrofitting structures
  5.2 Example 2: Assessment of seismic risk
  5.3 Example 3: Update of fragility curves
  5.4 Example 4: Index of robustness

6 Conclusions

References
A BPN Algorithms
B Soil Parameters
C Software Codes

1 Introduction
1.1 Problem definition
Efficient management and consistent quantification of natural and man-made risks is increasingly becoming an issue of societal concern. Sustainable and consistent societal decision making requires that a framework for risk management is developed which, at a fundamental level, allows for the comparison of risks from different natural hazards, such as the comparison of risks due to earthquakes with risks due to flooding or droughts. The recent series of earthquakes in Turkey (1999), Taiwan (1999), India (2001) and Sumatra (2006) not only led to a greater awareness of seismic hazards, but also highlighted the difficulties involved in an efficient decision making process in such situations, especially in less developed countries. Consistent and quantitative risk assessment tools for buildings and infrastructure in seismically active areas are urgently needed to ensure an efficient decision making process that facilitates the optimal allocation of available economic resources for the management of risks before and after an earthquake.

The developments in earthquake engineering research during recent decades provide, within acceptable limits, reliable design of new structures against earthquakes. In most countries such methods are implemented in the best practice design of new structures. The existing building stock, however, still represents a significant risk in regard to the safety of people as well as to the economic assets of society. Assessing the earthquake risk for existing buildings necessitates the consideration of uncertainties from the earthquake source mechanism, the site effects and the structural response to the immediate and indirect consequences. Furthermore, additional uncertainties emerge when the earthquake risk is analyzed for groups of buildings and infrastructure elements, e.g. cities or portfolios of buildings. A city is more than a conglomeration of individual buildings, since interdependencies between its elements exist. These "system effects" need to be investigated in order to set up a framework for capturing the complex consequences of devastating events such as natural catastrophes.

Significant efforts have been devoted in the past to assessing the seismic risk related to existing structures subject to earthquake hazards. One of the first major projects concerning the assessment of seismic risk is the ATC-13 (1985) project. The ATC-13 report provides a set of vulnerability functions in the form of damage probability matrices to be used in the assessment of the seismic vulnerability of a stock of structures located within the same region (Whitman et al., 1973). In the 1990s, the period designated as the International Decade for Natural Disaster Reduction (IDNDR) by the UN (1987), a number of seismic risk assessment studies were carried out around the world.

One of the major projects within this period is the RADIUS (Risk Assessment Tools for Diagnosis of Urban Areas against Seismic Disasters) project (GHI, 2004). In this project, practical tools for a preliminary estimation of possible damage scenarios and for the preparation of a risk management plan were developed for nine case study cities. Another important earthquake loss estimation methodology was developed by the Federal Emergency Management Agency through the National Institute of Building Sciences: the HAZUS methodology (Whitman et al., 1997). Details of the HAZUS methodology are summarized in Kircher et al. (1997b) and Kircher et al. (1997a). The HAZUS methodology is implemented in the software package HAZUS99-SR2. HAZUS originally contained very general methods for estimating earthquake losses on a regional scale. With the addition of the Advanced Engineering Building Module to HAZUS99-SR2, building-specific damage estimation studies became possible (HAZUS, 2001). An overview of other existing loss estimation methodologies is given in Section 3.1.

The framework proposed in this dissertation, together with Bayesian probabilistic networks (BPNs) for earthquake risk management, has, unlike the aforementioned methodologies, the advantage of forming a basis for consistently integrating all aspects affecting the damage to a stock of structures located within a region subjected to the same earthquake exposure. The uncertainties which influence the functional chain of an earthquake, from the source mechanism, the site effects, the structural response and the immediate consequences (damage) to the indirect consequences, can be handled consistently, and with new information a consistent actualization or updating of the results can be performed. The latter facilitates the extension of the methodology to decision problems related to risk management during and after earthquakes. BPNs are constructed for the modules of earthquake risk analysis: seismic hazard, structural damage, soil response and consequence assessment. The application of these BPN models is illustrated by four examples considering a portfolio of reinforced concrete structures in a city located close to the western part of the North Anatolian Fault in Turkey. The first example considers the decision problem of whether or not to retrofit a specific class of structures in a city. The second example illustrates that the framework also facilitates the assessment of the portfolio loss exceedance probability distribution function. The proposed methodology allows for a consistent representation of the effect of dependencies in the estimation of losses for important buildings (e.g. hospitals). It is shown that the inclusion of such effects may have a very significant impact on portfolio loss estimates. Based on data from post-earthquake building inspections in Adapazari after the 17 August 1999 Kocaeli Mw 7.4 earthquake, the application of BPNs for updating one of the main models, the building fragility curves, is illustrated in the third example. The fourth example discusses the assessment of consequences using a newly developed concept of robustness.

The proposed framework has several merits compared to traditional schemes for risk assessment in the context of large scale natural hazards. Traditional schemes generally assess risks for individual hazards on an object by object basis. By using the proposed framework a consistent representation of uncertainties and of the crucial effects of dependencies is possible.
Furthermore, risk updating through new information is facilitated.


1.2 Objectives
The dissertation aims to develop an indicator-based generic risk assessment framework for the consistent, quantitative and rational management of earthquake risks. The proposed framework is designed for decision-makers responsible for the safety of personnel, the environment and the assets of a larger area such as e.g. a city. The framework is generic in the sense that it is formulated in terms of observable characteristic descriptors (indicators). It can thus easily be adapted to the characteristics of a specific region or city. The main emphasis is on the risks due to potential failures and collapse of buildings as well as infrastructure systems such as hospitals. An important feature of the proposed framework is that it provides decision support on how to optimize investments into risk reducing measures before and after an earthquake. A key element in the proposed framework is the quantification of the effect of various types of indicators on the risks. These indicators may have very different characteristics and necessitate that different types of expertise are integrated. An example of such an indicator is the information about the structural design codes applied for the design of a group of structures. Another indicator could be the characteristics of the soil, implicitly describing earthquake related failures, e.g. liquefaction. In addition, more traditional indicators of damage in structures, such as interstory drift ratios, are utilized.

The decision-making process with regard to the efficient and targeted allocation of resources for the purpose of reducing and/or mitigating earthquake risks for cities or regions is complex due to the broad variety of structures, possible damage states and the numerous uncertainties prevailing within the problem area. This dissertation attempts to establish a framework for risk assessment in such situations. Risks are thereby quantified, and efficient risk-reducing or mitigating activities are identified in two distinct situations: before and after an earthquake. The proposed framework is generic and facilitates the assessment of risks at different levels of accuracy as appropriate under the particular circumstances. A further benefit is achieved by the fact that the proposed framework is adaptable to other types of risk assessment and management problems, which allows for integral risk analysis considering all relevant risks in a given geographical region.

The main objectives of the dissertation include:

Construction and application of BPN models in individual disciplines
- BPNs for seismic hazard assessment.
- BPNs for spatial modeling of soil failures.
- BPNs for structural response and damage assessment.
- BPNs for consequence assessment.

Systematic application of BPNs to cities
- Establishment of a risk assessment framework and a loss estimation methodology.
- Construction of a GIS-based tool for the integration of individual BPN models.
- Discussion of "system effects" when applying the methodology to cities.


1.3 Scope and outline of the dissertation


Scope

The dissertation proposes a framework for risk assessment based on a system-theoretic definition of a city considering functional and hierarchic dependencies between its elements. The application of Bayesian probabilistic networks (BPNs) within this framework is discussed. The elements within a BPN comprise the set of parameters considered within the risk assessment and management problem. The joint probability distribution of these parameters would be of the highest value for the analyst, but only in very rare cases is it possible to establish the joint probability distribution directly. BPNs constitute a very efficient way of representing the joint probability distribution by taking into account conditional dependencies.

Earthquake risk assessment and management requires the integration of several related disciplines. The dissertation is hence written in the framework of the multidisciplinary project Management of Earthquake Risks using Condition Indicators (MERCI, www.merci.ethz.ch), which comprised research groups from five institutes in two departments at ETH Zurich. Furthermore, the application to building portfolios or cities imposes additional difficulties. The dissertation focuses on these "system effects". It does not aim to provide novel methods for defining seismic hazard, treating soil behavior or assessing structural damage. At these process oriented levels state-of-the-art methods are applied in this dissertation. The applied state-of-the-art methods are introduced with the main focus on their "translation" into BPNs and on the additional modeling necessary for this (see Sections 4.1 to 4.4). The dissertation also does not aim to provide new algorithms or computer codes for the construction and evaluation of BPNs. The performance of the several existing commercial, free and open-source software packages was also not the focus. Therefore, one quality assured software package was chosen and the main focus of the dissertation is pursued with this software package. The dissertation should provide the reader with the advantages and disadvantages of the construction and application of BPNs, in a structured way, for earthquake related problems in cities. Decision-makers, risk analysts and risk managers as well as specialists in the relevant disciplines are the main readership of the dissertation.
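As a minimal numerical sketch of how a BPN exploits conditional dependencies to represent a joint probability distribution, the example below factorizes a hypothetical three-variable chain; the variable names, state discretization and probability values are invented for illustration and are not taken from the models developed in this dissertation.

```python
import numpy as np

# Hypothetical chain: Magnitude -> GroundMotion -> Damage, each with two
# states (low, high). The joint distribution factorizes as
# P(M, G, D) = P(M) * P(G | M) * P(D | G), so only small conditional
# probability tables need to be specified.
p_m = np.array([0.9, 0.1])                  # P(M)
p_g_m = np.array([[0.8, 0.2],               # P(G | M = low)
                  [0.3, 0.7]])              # P(G | M = high)
p_d_g = np.array([[0.95, 0.05],             # P(D | G = low)
                  [0.40, 0.60]])            # P(D | G = high)

# Assemble the full joint table from the factorization (only for checking;
# inference engines work directly on the factorized form).
joint = p_m[:, None, None] * p_g_m[:, :, None] * p_d_g[None, :, :]
assert np.isclose(joint.sum(), 1.0)

# Marginal probability of high damage, obtained by summing out M and G.
print("P(Damage = high) =", joint[:, :, 1].sum())
```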
Outline

Chapter 2 gives a thorough definition of cities from a system-theoretic perspective, and the basics of decision theory. Existing methodologies dealing with large-scale earthquake loss estimation are discussed in Chapter 3. The strengths and weaknesses of these existing methodologies are discussed and the main characteristics required for an efficient integral methodology are derived. Different methods are considered and a framework is proposed. The application of the proposed framework is prepared in Chapter 4 by adapting state-of-the-art earthquake-related modules to the proposed methodology. Using the four example applications outlined in Chapter 5, the pros and cons are discussed in Chapter 6.

2 Fundamentals
2.1 City as a system
According to the predictions of demographers, the fraction of urban dwellers will rise from the current level of 50 percent to 80 percent of the world's total population by 2030. Urbanization is not simply a process of accumulation of people in towns and cities; it also involves distinctive ways of life, subcultures and patterns of individual and social interaction. Many of the world's economic, social and political processes are connected between the world's towns and cities (Knox and Marston, 2006). The very first known impulse leading to urbanization was the first agricultural revolution. Cities in Mesopotamia and the Nile valley grew up around 3500 B.C. Later, cities appeared in the Indus Valley by 2500 B.C., in Northern China by 1800 B.C., in central and south-western North America by 600 B.C. and in Andean America around 800 A.D. The original Middle Eastern hearth gave rise to successive generations of city-empires, e.g. Athens, Rome and Byzantium. These European cities almost collapsed during the early Middle Ages. Interestingly, elaborate systems of cities developed from them into what have today become centers of the global economy. Through colonization the Europeans became the leaders of the rest of the world's economies and societies. Colonial city systems were established in Latin America by the Spanish and Portuguese conquerors. The Spaniards founded their cities mostly in strategically important defensible sites such as hilltops and channeled their growth in relation to the population they controlled. The Portuguese founded their colonies with commercial rather than administrative considerations. They chose coastal sites with proper natural harbors or inland areas along navigable rivers. From the late Middle Ages onwards a centralization of political power resulted in the formation of nation-states and the beginning of industrialization. Port cities and Atlantic coast cities took advantage of their location and grew considerably (e.g. London).

The geographical planning of ancient cities was based on a grid system. Key buildings and neighborhood relations were carefully considered in the planning of settlements. In China, Taoist ideas played a great role in city planning. The major streets and the interior layout of buildings were designed to be in perfect harmony with the cosmic energy. In Europe between the 15th and 17th centuries the new wealth and power were reflected in new urban design. Additionally, the advance in military ordnance led to a surge of planned urban redevelopment featuring impressive fortifications. These new constructions resulted in a need for a new design for city centers (e.g. Copenhagen, Karlsruhe and Nancy). The establishment of the important gateway cities also coincided with this period. These cities serve as a link between regions and countries (e.g. Boston, Rio de Janeiro and Cape Town). Some of these cities grew rapidly because of their function in colonial expansion.

Rio de Janeiro grew due to gold mines, Sao Paulo on the basis of coffee and Accra on the basis of cacao, to mention just a few. In the 19th century industrialization led to blossoming cities. These industrial economies needed a large pool of labor, a transportation network, a physical infrastructure of factories and consumer markets. This was the period of fastest urban growth. Migrants from rural areas were attracted by higher wages and greater opportunities in the urban labor market.

Geographers conceptualize urbanization through attributes and functions of city systems. A city system is an interdependent set of urban settlements within a region. It is possible to talk of a German urban system, a European urban system or a global urban system. Every city is part of an interlocking urban system connecting local-regional, national and international scale human geography in a complex network of socio-economic interdependence. Space is organized through hierarchies of cities of different size and functions. Functional differences can be observed within urban system hierarchies. Some cities evolve as general purpose urban centers providing an evenly balanced range of functions. Based on the network for goods and services, cities may be classified as leading world cities (e.g. New York), world cities (e.g. Zurich), major regional world cities (e.g. Bangkok), cities of national influence (e.g. Ankara) and regional cities (e.g. Adapazari/Turkey). This classification is made considering not only the primary nature of the cities (the internal structure), but also the secondary nature, namely the external global network relationships (Taylor, 2004). The influence of this characteristic on the consequences of possible damage, in particular due to natural hazards, should be considered in risk studies.

The city of Adapazari/Turkey, which is the subject of the illustrative examples in the dissertation, may be considered to be a regional city. The consequences of possible earthquakes are assumed to be restricted to the region to which the city belongs, i.e. the potential consequences of an earthquake in Adapazari are not assumed to have any significance outside the region. The city as a system is considered to be comprised of buildings and infrastructural elements. The damage-inducing behavior of these system constituents is assumed to be described by the same principles and models for the different construction periods. Hence a dependency between the individual elements is always present. On the other hand, the damage to the individual elements will also have an influence on the consequences for other elements in the system, and this will be more pronounced when dealing with lifeline elements. Keeping this in mind, we can define a city as a system comprising elements (i.e. buildings and infrastructure elements) which have "common models" and which lead to so-called indirect consequences, i.e. consequences beyond the simple sum of those of all elements of the city. The "common models" may be applied to subclasses of buildings depending on the system representation. The aggregation of losses may be performed assuming a purely linear consequence model (i.e. for residential buildings the damage to one building has no influence on the consequence of the damage to another building) or any kind of nonlinear consequence model (i.e. the collapse of one hospital has a significant influence on the total consequence of the damage to another hospital, as the lack of treatment capacity may lead to additional indirect losses for the considered society).
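The difference between a purely linear aggregation of losses and a nonlinear consequence model can be sketched numerically as follows. The individual losses, the interaction rule and the 50% indirect-loss surcharge are invented assumptions chosen only to make the system effect visible; they are not values used in the dissertation.

```python
# Direct losses of individual portfolio elements (monetary units, invented).
residential_losses = [120_000, 80_000, 150_000]
hospital_losses = [2_000_000, 1_500_000]

# Linear consequence model: the portfolio loss is the plain sum of the
# individual losses; damage to one element does not affect any other.
linear_total = sum(residential_losses) + sum(hospital_losses)

# One possible nonlinear consequence model: if more than one hospital is
# damaged, the lost treatment capacity generates additional indirect losses,
# here modelled as an assumed surcharge on the hospital losses.
indirect_surcharge = 0.5
damaged_hospitals = sum(1 for loss in hospital_losses if loss > 0)
nonlinear_total = linear_total
if damaged_hospitals > 1:
    nonlinear_total += indirect_surcharge * sum(hospital_losses)

print("linear aggregation   :", linear_total)     # 3,850,000
print("nonlinear aggregation:", nonlinear_total)  # 5,600,000
```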



Characteristics of a system

Cities can be regarded as living systems. Their elements grow over time in such a complicated way that they cannot be seen as a mere conglomerate. The rise and decline of cities are mostly related to war, natural catastrophes, disease and fire. The influence of these devastating effects on cities has been the topic of much research work. Recent catastrophes indicated in almost all cases an underestimation of the consequences. Not only well before the events, but also after the events took place, the consequence estimation fell well short of being realistic. For this reason, the dissertation adopts a two-way approach: a holistic definition of the subject (here, the city) from a system-theoretic perspective, and an analytical perspective in which the problem is broken down within a risk assessment framework. After a short excursion into the fundamentals of general systems theory, the "system" city is formulated in its broadest sense using systems theory.

In science theory the "systematic way" is often understood as a counter-current to a splitting of the sciences into highly specialized areas, which may be the main cause for the lack of communication between experts. A system is, in its broadest sense, a functional relationship between its constituent parts, through which the whole acquires existential attributes independent of its parts. The very first ideas in systems theory are found in the works of von Bertalanffy (1968). He formulated these integral ideas when the scientific atmosphere was dominated by the philosophical schools of rationalism. The general system theory can be understood to be a by-product of the philosophy of Kant, who formulated a strict epistemology in his philosophy of sciences. Natural science uses mathematical language in formulating general laws, which can be experimentally verified. A theory in natural science is hence primarily mathematical and should follow logical principles. The general systems theory on the other hand tries to formulate an even broader aspect, especially when it considers "living" systems. Organization, goals and aims gain focus.

The definitions of systems and their classification in the literature are divergent. Rapoport (1986) uses a two-dimensional perspective in explaining this divergence: the analytical-holistic dimension and the descriptive-normative dimension. The analytical method aims to understand an object, phenomenon or process by understanding its constituents. The other side of this perspective is the holistic approach. Here, the "identification" is not based on the piecewise investigation of the constituents, but on the cognitive understanding of the "whole". This divergence can be found in many disciplines of science and philosophy. For example, Euclidean geometry is based on recognizing configurations of fundamental shapes and hence can be seen as a holistic approach. On the other hand, the analytical geometry of Descartes starts from the "point" to investigate geometrical figures. The descriptive approach is more interested in understanding "how" something works, whereas the normative approach deals with the more value-oriented question "for what". For example, descriptive decision theory explains how individuals would make decisions in real situations. On the other hand, normative decision theory defines optimal and rational decisions for certain situations. Following Rapoport (1986), an integral system-theoretic conception for cities considering these two perspectives will be attempted.
In the following, several applications of system-theoretic conceptions will be discussed. Many system conceptions have been proposed, e.g. Fuchs (1972), Klir (1972) and Ropohl (1979).

These can mainly be classified into three categories: the functional system concept, the structural system concept and the hierarchical system concept. In the first, a system is understood as a whole which performs a function. Here, neither the internal parts nor their relations are of importance. The boundaries of the system and the input-output characteristics of its functionality are of central concern. Within the structural system conception the internal parts and their relationships are considered. Here, the parts of the whole have an intrinsic character of being an element of the whole. Hence, they cannot be fully understood independent of the whole, which also imposes complex interdependencies among the elements. Within the system some elements may form another whole, which shows completeness in its structure and functionality. The characteristic of this subsystem is then related to the original system within a certain functionality. Such systems can be defined using hierarchical system conceptions. Depending on the problem at hand, one of these three system conceptions or a combination of them is favored. Examples of the application of the functional system concept can be found in cybernetics (Mesarovic and Takahara, 1975), of the structural system concept in Klir (1972) and of the hierarchical system concept in combination with structural system concepts in Lin (1999). A combination of all three system concepts is given in Ropohl (1979).

Rapoport (1986) assigns three attributes to systems: identity, organization and goal-directedness. Preservation of identity is a fundamental property of a system for two reasons. Changes in the elements or relations among the elements should not prevent something from being recognized as a system. Additionally, if something cannot develop means to preserve its internal organization against disturbances, it ceases to be a system. Organization is the most fundamental property of a system. The structural and functional relations form a complete definition of an organization. Goal-directedness is naturally based on decision theory, since a rational decision is formulated with a set of predefined actions regarding the preferences for reaching goals. An overview of decision theory is given in the next section. These theoretical considerations will be used in defining a city as a system in Chapter 3. After a comprehensive definition, the application to a city will be given. The problem complex will be simplified to such an extent that the illustration in the examples in Chapter 5 is easy to follow, without disregarding the fundamental issues pointed out in this chapter.

2.2 Decision theory


Descriptive decision theory deals with the way people make decisions, without strictly considering the efficiency of their choices. The goals of the actors can be derived from their decisions, as long as they follow certain patterns. The prospect theory of Kahneman and Tversky (1979) describes how people make choices in situations where they have to decide between alternatives. Normative decision theory, on the other hand, attempts to identify the way people should make decisions if they follow certain goals. Given that human beings are entirely rational, normative decision theory predicts their behavior. A different classification can be made when the main items of decision theory are considered: actors and goals. Problems with a single actor and a single goal are the subject of the normative decision theory (Luce and Raiffa, 1957).

Problems with a single actor dealing with multiple goals or criteria belong to operations research (Churchman et al., 1957). Decision problems where multiple actors follow one single goal are investigated in team theory (Marschak and Radner, 1972). Problems with multiple actors, each following their own goal, are the subject of game theory (Neumann and Morgenstern, 1944). In this dissertation, decision problems related to earthquake hazards are discussed. Irrespective of whether decisions are to be made by building owners regarding the retrofit of their building or by a city governor regarding resource allocation, the decision problems belong to the first kind: a single actor has to make decisions following the single goal of having the least cost or the highest benefit. The main point is to identify what the decisions of a rational decision-maker "should" be, rather than to understand how decisions are made. Hence, in the following the normative decision theory for a single actor with a single goal is investigated.

Decision situations can also be divided into decisions under certainty, decisions under uncertainty and decisions under risk. Decisions under certainty lead to predefined results. Decisions can be defined if the actor is able to prioritize his preferences. This is not always simple. Experiments have shown that test persons make contradicting preference prioritizations. A typical example is the intransitivity of preferences: when preferences are compared in pairs, an alternative x may be preferred over y and alternative y over z, but when alternatives x and z are compared in addition, x is not always preferred over z as expected.

In decisions under uncertainty, the decisions cannot be related one-to-one to the true state. A decision may lead to different results depending on the true state, which is independent of the actor. The main characteristic of decisions under uncertainty is that the true state is not known. Several decision principles have been suggested as being applicable for solving decision problems under uncertainty. According to the maximin principle, regardless of which decision is made, the worst case of the unknown true state is assumed to result; the decision alternative with the "best" worst case is to be chosen. The minimax-regret principle represents an optimistic counterpart. To each unknown true state a numerical value is assigned reflecting the degree of preference for each decision alternative. Each of these values is replaced by the algebraic difference between itself and the maximum value related to that unknown true state over the decision alternatives. These differences are a measure of the regret of the decision maker for not having chosen that alternative. Finally, for each decision alternative the maximum regret is determined, and the alternative with the smallest maximum regret is chosen (Savage, 1954). The Hurwicz α-principle combines the above two extremes by assigning a factored linear combination of them. For each decision alternative, the maximum and minimum of the assigned preference numbers are weighted by α and 1 − α respectively, and the linear combination of these two terms is assigned to that decision alternative. The alternative with the maximum value is the optimal decision. Here α is a measure of the optimism of the decision-maker: setting α to zero leads to the maximin principle, setting α to 1 leads to the purely optimistic rule. The fourth decision principle goes back to Laplace. Since the true state is unknown, no preferences are made and equal probabilities are assigned to the true states.
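The four principles can be written down compactly for a decision problem given as a table of preference values over alternatives and unknown true states. The two-alternative, three-state table below is invented purely for illustration and follows the standard textbook definitions of the criteria.

```python
import numpy as np

# Rows: decision alternatives, columns: unknown true states.
# Entries: preference values (higher is better); numbers are invented.
payoff = np.array([[40.0, 10.0, -20.0],    # alternative a1
                   [25.0, 20.0,   0.0]])   # alternative a2

# Maximin: assume the worst state for each alternative, pick the best worst case.
maximin = payoff.min(axis=1).argmax()

# Minimax regret: regret = best value achievable in a state minus the actual
# value; choose the alternative whose largest regret is smallest.
regret = payoff.max(axis=0) - payoff
minimax_regret = regret.max(axis=1).argmin()

# Hurwicz alpha-principle: weight the best and worst outcome of each
# alternative by alpha and 1 - alpha and maximize the combination.
alpha = 0.3
hurwicz = (alpha * payoff.max(axis=1) + (1 - alpha) * payoff.min(axis=1)).argmax()

# Laplace: assign equal probabilities to the true states and maximize the
# resulting expected preference value.
laplace = payoff.mean(axis=1).argmax()

print("chosen alternative (index):", maximin, minimax_regret, hurwicz, laplace)
```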
A rational decision is defined as the one leading to the maximum expected utility. In some cases the pessimistic maximin principle has to be used, as the other principles require a preference prioritization. The assignments above are made independently of information and knowledge. When preferences and probabilities are assigned to the true states based on experience, information or convictions, the situation becomes one of decision-making under risk. Decision-making under risk is based on utility theory. Utility theory provides a formalization of the preferences of the decision-maker.

Von Neumann and Morgenstern (1944) state a set of axioms whose fulfillment allows the assignment of numerical values (utilities) to the preferences of the decision maker. The algebraic order of the utilities reflects the order of the preferences. The decision maker has to choose from alternatives, each of which leads to a certain utility with an assigned probability. The utility weighted by its probability of occurrence, summed over the possible outcomes, is called the expected utility. Von Neumann and Morgenstern (1944) have shown that the optimal decision is the one which leads to the highest expected utility.

Assigning utilities and probabilities to preferences and unknown true states, respectively, is strongly associated with subjectivity. Bayesian decision analysis is based on utility theory, but additionally provides a formal basis for taking subjectivity into account (Raiffa and Schlaifer, 1961). The fundamental assumption of Bayesian decision analysis is the ability of the decision-maker to assign subjective probabilities to all uncertain variables and to assign utilities to all combinations of outcomes of a decision problem. Although the subject is of controversial debate because of the formalization of subjectivity, the theory has been applied to many engineering problems. In Faber (2003) and Kübler (2006) the applicability of Bayesian decision analysis to engineering problems is discussed. A fundamental reference for civil engineering applications of Bayesian decision analysis is Benjamin and Cornell (1970). Bayesian decision analysis has its strength in its ability to combine different aspects when dealing with probabilities. The uncertainty about the true state can be reflected either by a frequentistic view based on data or by a subjective degree of belief. Furthermore, the inclusion of new information in the decision process is systematized.

Bayesian decision analysis can be performed in three different situations depending on the state of information processing. In the prior decision analysis the probabilities of the uncertain states are assigned based on present knowledge. After assigning probabilities and utilities to each possible outcome, the expected utilities are evaluated. Prior decision analysis evaluates the expected utility of each alternative and selects as the optimum the one with the highest expected utility. New information related to the uncertain states can be considered systematically using Bayes' theorem. Here, observations, tests and new rationales can be regarded as new information. The probabilities assigned a priori are updated to posterior probabilities using the new information, and the expected utilities are recalculated using these posterior probabilities. Decision problems considering posterior probabilities are called posterior decision analysis. In the third kind of Bayesian decision analysis the "potential" of new information is assessed, not the new information itself. Prior to carrying out the process of acquiring new information, the possible outcomes are assessed. Systematically, all possible outcomes are evaluated, each forming in itself a posterior decision analysis. The maximum expected utility for each posterior decision analysis is calculated. As the analysis is done prior to the information acquisition process, these kinds of problems are referred to as pre-posterior analysis. The value of information is related to pre-posterior decision analysis.
It is assessed as the difference between the expected utility of the optimal action with the new information and the expected utility without the new information minus the cost of acquiring the information. This so-called Value of Information analysis is extensively described in Raiffa and Schlaifer (1961) and Benjamin and Cornell (1970).
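A compact numerical sketch of prior, posterior and pre-posterior analysis and of the resulting value of information is given below. The two states, two actions, utilities, likelihoods and the cost of the hypothetical inspection are all invented for illustration; only the structure of the calculation follows the definitions given above.

```python
import numpy as np

# Two true states (e.g. structure adequate / deficient) and two actions
# (do nothing / retrofit); all numbers are purely illustrative.
prior = np.array([0.8, 0.2])                    # P(state)
utility = np.array([[  0.0, -100.0],            # u(do nothing | state)
                    [-20.0,  -25.0]])           # u(retrofit   | state)

# Prior analysis: choose the action with the highest prior expected utility.
u_star_prior = (utility @ prior).max()

# An imperfect inspection with two outcomes; rows give P(outcome | state).
likelihood = np.array([[0.9, 0.2],              # P("pass" | state)
                       [0.1, 0.8]])             # P("fail" | state)

# Pre-posterior analysis: for each possible outcome, update the state
# probabilities with Bayes' theorem, solve the posterior decision problem,
# and weight the optimal posterior utilities by the outcome probabilities.
p_outcome = likelihood @ prior
u_star_prepost = 0.0
for z in range(len(p_outcome)):
    posterior = likelihood[z] * prior / p_outcome[z]
    u_star_prepost += p_outcome[z] * (utility @ posterior).max()

inspection_cost = 2.0
value_of_information = u_star_prepost - u_star_prior - inspection_cost
print("expected value of the inspection:", round(value_of_information, 2))
```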


So far, two basic conceptions concerning risk have been considered: probability and utility. In the following, a short overview of these conceptions is given.

2.3 Uncertainty, probability, utility and risk


In decision problems, uncertainty is expressed quantitatively in terms of probabilities. A careful and consistent modeling of these uncertainties is essential for the identification of optimal decisions. Uncertainties can be classified into two categories: aleatory uncertainties and epistemic uncertainties. Aleatory uncertainty, also referred to as randomness, represents the natural variability in the considered phenomenon. Epistemic uncertainty, on the other hand, results from insufficient knowledge of the considered phenomenon. The insufficient knowledge may lie in the description of the models representing the real phenomena or it may lie in the modeling of the random variables. The former is known as model uncertainty, the latter as statistical uncertainty. Epistemic uncertainty can be reduced by improved models and a better data basis, whereas aleatory uncertainty remains unaffected.

The different interpretations of probability can be classified as classical, frequentistic and subjective. The classical interpretation defines the probability of an event as the ratio of the number of cases favorable to it to the number of all possible cases. This conception requires complete knowledge of the considered phenomenon, as all possible cases need to be identified. The frequentistic interpretation is based on counting the favorable cases relative to a large number of observations. It does not require a complete knowledge of the considered cases. The subjective interpretation expresses a degree of belief that a considered event will occur. In complex engineering decision problems the application of only one of these conceptions is rarely possible: either data for a frequentistic interpretation is missing, or experience in some specific cases or a complete physical understanding is lacking. A combination of these three interpretations is required in such cases in particular. The Bayesian perspective constitutes a consistent modeling basis for such complex engineering problems, see JCSS (2001).
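How a subjective degree of belief is combined with frequentistic data in the Bayesian perspective can be sketched with a conjugate Beta-Binomial update. The prior parameters and the observation counts below are invented, and the example assumes the SciPy library is available.

```python
from scipy import stats

# Subjective prior belief about an annual event probability p, expressed
# as a Beta distribution (parameters chosen for illustration, prior mean 0.10).
prior_a, prior_b = 2.0, 18.0
prior = stats.beta(prior_a, prior_b)

# Frequentistic information: in n observed years the event occurred k times.
n, k = 25, 1

# Bayesian updating: with a Beta prior and Binomial observations the
# posterior is again a Beta distribution (conjugacy).
posterior = stats.beta(prior_a + k, prior_b + n - k)

print("prior mean    :", round(prior.mean(), 3))      # 0.100
print("posterior mean:", round(posterior.mean(), 3))  # 0.067
```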


Considering the consequences of earthquakes for a city, the utility can be assumed to be linear with respect to monetary units for the considered range of events. Therefore the risks are assessed based on the expected cost criterion. By analogy, the expected benefit can also be used if the benefits of activities are included in the analysis (Benjamin and Cornell, 1970). The expected cost criterion requires that all consequences of an event are expressed in monetary terms. Hence the loss of human life during a catastrophic event also needs to be quantified. Although still the subject of controversial debate among structural engineers, decision analysts point out that disregarding this would lead to inconsistent decisions (Benjamin and Cornell, 1970). The Life Quality Index (LQI) provides a theoretical basis for the quantification of life saving costs, see Nathwani et al. (1997) and Rackwitz (2001). An outline of the LQI is given in Section 4.4.

So far, the discussion has been about making decisions by comparing the expected costs or benefits of the different decision alternatives. But when the different decision alternatives have different time horizons, a simple comparison would be misleading. For example, when deciding whether to retrofit a building with regard to a possible damaging seismic event, the expenditure for the retrofit at present is compared to a future cost due to a possible damaging seismic event. Since the value of money changes over time, it has to be considered explicitly. Discounting is the process of finding the present value of cash at some future date. The discounted value of a cost or benefit is determined by reducing its value by an appropriate discount rate for each unit of time between the time at which the cash flow is valued and the time of the cash flow. The discount rate is usually expressed as an annual rate. The discount rate is the interest rate at which a central bank lends to commercial banks. Discounting became a major issue in the years after the 1929 market crash. The first formally expressed discounting was done by Williams (1938). The nominal costs or benefits C(t) are reduced by a discount factor δ(t) to obtain the corresponding present value C(t = 0):

C(t = 0) = δ(t) C(t)    (2.1)

The discount factor is given by

δ(t) = 1 / (1 + γ)^t    (2.2)

where γ is the discount rate. As discussed above, all consequences of an event need to be expressed in monetary terms, including life saving costs. This makes the quantification of the interest rate difficult. Corotis (2005) differentiates between private and public investments and argues that the rate of interest can be different for the public and private sectors. In Rackwitz et al. (2005) an intergenerational discounting model is suggested, which takes the rate of economic growth per capita g into account; this is also known as the natural interest rate. Additionally, the rate of pure time preference ρ and the elasticity of marginal consumption ε are considered. The former takes into account that individuals prefer to consume earlier rather than later. The interest rate is given as:

γ = ρ + ε g    (2.3)

This model considers a discounting with γ = ρ + ε g for all present generations and a discounting without the rate of pure time preference, γ = ε g, for future, unborn generations. On the same basis, Nishijima et al. (2007) determine inflation-free public interest rates of about 2% annually, which is significantly lower than the interest rates for private investments.

Closely related to discounting is the notion of the net present value (NPV). The sum of all costs and revenues discounted to their present value represents the NPV. It takes into account all consequences throughout the project's lifetime. A project is evaluated as positive, and hence economically feasible, when the NPV is positive. Different alternatives are compared by referring to their NPV, and the one which yields the highest NPV is generally selected. In principle, the point in time to which the cash flows are discounted is arbitrary; in decision analysis it is chosen as the point in time at which the decision is to be made.
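As a numerical illustration of Equations (2.1) and (2.2) and of the net present value criterion, the sketch below discounts the cash flows of two hypothetical alternatives. The discount rate, costs, expected annual losses and time horizon are invented for illustration only.

```python
def discount_factor(t, gamma):
    """Discount factor delta(t) = 1 / (1 + gamma)^t, cf. Equation (2.2)."""
    return 1.0 / (1.0 + gamma) ** t

def net_present_value(cash_flows, gamma):
    """NPV of (time in years, nominal cash flow) pairs, cf. Equation (2.1)."""
    return sum(discount_factor(t, gamma) * c for t, c in cash_flows)

gamma = 0.02  # assumed annual discount rate

# Alternative A: retrofit now, smaller expected annual losses afterwards.
retrofit = [(0, -500_000)] + [(t, -5_000) for t in range(1, 51)]

# Alternative B: do nothing, larger expected annual losses.
do_nothing = [(t, -40_000) for t in range(1, 51)]

print("NPV retrofit  :", round(net_present_value(retrofit, gamma)))
print("NPV do nothing:", round(net_present_value(do_nothing, gamma)))
# The alternative with the higher (less negative) NPV would be preferred.
```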


In this dissertation, "decision problems under risk" are considered from a normative perspective. In the following, the second concept, risk, is considered in more detail. After the decision problem is formulated, the governing risks need to be assessed. The management of the risks is then based on this assessment. Risk assessment and management form the basis for designing new structures and assessing existing structures, as well as for the planning of inspection and the maintenance of structures.

Risk assessment is the first step in a risk management problem. It is the estimation of the quantitative or qualitative risk related to a specific situation and a recognized hazard. When quantifying risk, two components need to be calculated: the magnitude of the potential loss and the probability that the loss will occur. Expressed in mathematical terms, the risk R_E associated with a particular event E is:

R_E = p_E C_E    (2.4)

where p_E is the probability that the event will occur and C_E is the consequence associated with the event. The consequence can be quantified in a common metric such as currency or some numerical measure of a location's quality of life. In civil engineering it is quantified as the expected consequences associated with activities throughout the service life of the structures, JCSS (2001).

Nathwani et al. (1997) formulated four principles for managing public risk: accountability, maximum net benefit, compensation and life measure. According to accountability, decisions must be open, quantified, defensible, consistent and applied across the full range of hazards. When risks are assessed openly and transparently, the consideration of public preferences is assured. When the risks of the full range of hazards are quantified, they become comparable. Transparency allows unpopular decisions taken by the decision-maker to become defensible. The principle of maximum net benefit is in line with the normative decision theory; only the quantification of immaterial consequences such as environmental impacts and human life still presents difficulties. As discussed above, the Life Quality Index represents a means to overcome this difficulty, see Section 4.4. Some people in society take greater risks through hazardous activities and must therefore be compensated. The benefit of an activity is given if these people are fully compensated and there is still a benefit. The length of life is an indicator of the maximum net benefit to society. A reliable and accepted measure for assessing this is life expectancy. When dealing with the maximum net benefit to society, the effect of a decision on life expectancy needs to be considered.

A structured approach to managing the uncertainties related to a hazard is referred to as risk management. Several strategies are possible, including transferring the risk to another party, avoiding the risk, mitigating the effect of the risk, and accepting part or all of the consequences of the risk. The main objective is to reduce the risks related to a defined domain to a level which is accepted by society. Risk management can be structured into a generic format as illustrated in Figure 2.1 (AS/NZS, 1999). First the context must be established. Important questions, such as who the decision maker is, which other parties may be affected and what the acceptable risk level is, need to be answered. In other words, the system has to be identified in this first step. All relevant risks are then identified, and irrelevant hazards and opportunities are eliminated through a risk screening process. In the next step the risks are analyzed. The analyzed risks are then evaluated for their acceptability. Risks identified as not acceptable have to be treated by mitigation, reduction or transfer.


Throughout the whole risk management process, review and monitoring as well as communication and consultation with the involved parties are required.

Figure 2.1: Risk Management process (AS/NZS, 1999).


3 Methodologies
Loss estimation studies can be categorized as regional loss estimation and building-specific loss estimation studies. This dissertation concentrates mainly on the former. Regional loss estimation deals with the quantification of economic losses for a portfolio of buildings within a geographical region such as a city, county, state or country. Describing the advantages, disadvantages and limitations of existing earthquake loss estimation methodologies requires considerable attention to detail, which is beyond the intent of the dissertation. Thus, the existing methodologies are only briefly discussed here; for details, the reader is referred to the cited literature. After this short overview, the methodology proposed in this dissertation is introduced and its capabilities with regard to the limitations of the existing loss estimation methodologies are discussed. The remaining part of the dissertation deals with the development of the models required for the application of the proposed methodology and its illustration through examples. The limitations of the proposed methodology are also discussed there.

3.1 Existing earthquake loss estimation methodologies


In loss estimation, mathematical models for the contributory elements of the loss-inducing process are used to arrive at estimates of the potential losses that might result from an earthquake. Loss estimation is a quantitative science which uses tools developed in seismology, geology, geotechnical engineering, structural engineering and economics, linked by probability theory and statistics (Khater et al., 2003). Loss estimation quantifies earthquake losses, providing a first step towards a better understanding of the loss contributors, of the alternatives for risk reduction and of the level of acceptable loss. Loss estimation methodologies are needed since the relatively infrequent earthquakes at any particular location do not allow precise knowledge of the impact of future earthquakes. Estimates of the impacts of future events are, however, necessary for the design of new constructions, planning for emergencies and the management of insurance commitments (Scawthorn, 1995).

The topic was first treated by Freeman in the 1930s. In Freeman (1932) the earthquake damages of past events and their relation to insurance were reviewed. In the 1960s, Steinbrugge and coworkers developed scenario loss estimates for major U.S. cities (NOAA, 1972). Algermissen et al. (1972) estimated the damage and losses that would result from major earthquakes in the San Francisco Bay Area within an interdisciplinary group of experts in seismology, geology and structural engineering. In the 1970s and 1980s loss estimation was first treated probabilistically in Whitman et al. (1973) and Scawthorn (1981). In the late 1980s loss estimation methodologies began to be used by insurance and reinsurance companies. The Mw 6.7 Northridge earthquake in 1994 accelerated the research in loss estimation methodologies.


By that time it had become evident that extrapolating loss results from the past as a basis for the expectation of future losses leads to an underestimation of the losses. In the U.S., governmental institutions started to support the development of comprehensive earthquake loss estimation methodologies. The most important of these methodologies is HAZUS.

The earthquake loss estimation methodology HAZUS has been developed by the U.S. Federal Emergency Management Agency (FEMA) to assess the physical, economic and human consequences of earthquakes, hurricanes and floods throughout the U.S. (HAZUS, 2001). HAZUS is a GIS-based tool for use as a U.S. nation-wide decision support tool for policy analysis, emergency response planning and disaster response preparedness at all levels: federal, regional and local. It generates an estimate of the consequences to a city or region of a "scenario earthquake", i.e. an earthquake with a specified magnitude and location. After the initial release of HAZUS97, the model was released three more times: HAZUS99, HAZUS99-SR1 and HAZUS99-SR2. In 2004 a multihazard version, HAZUS-MH, was released, including flood and hurricane hazards besides earthquakes. HAZUS enables analysis at several levels depending on the data set available. At Level 1, national-level data sets are used. At Level 2, local data may be substituted for national data. At Level 3, besides local data, specific analysis tools for studying special conditions such as liquefaction may be implemented by the user. The capabilities of HAZUS include a hazard characterization tool for earthquakes, floods and hurricanes, a damage analysis tool for buildings and lifelines, a casualty and shelter estimation tool and an economic analysis tool.

In the hazard characterization tool for scenario earthquakes with a specified magnitude and location, the probabilistic ground motion data is provided by the U.S. Geological Survey. Historical earthquakes are supplied by several catalogues and databases. HAZUS provides a general building stock database comprising 36 structural types (e.g. steel braced frame, concrete frame with masonry infill) and 28 occupancy types (e.g. single-family dwelling, heavy industry). Each structural type is further subdivided based on the number of stories and the construction period, indicating the foreseen earthquake resistance for that structural type. For each structural type, capacity curves and fragility curves are provided. Capacity curves are used in combination with damping-modified response spectra to determine the peak structural response of the structure according to the ATC-40 (1996) procedure. Fragility curves describe the exceedance probability of different damage states given the peak structural response. The damage to structural and nonstructural systems is described in HAZUS by five damage states: None, Slight, Moderate, Extensive and Complete. The damage probabilities are then calculated using the damage analysis tool. The casualty estimation tool estimates injuries and deaths caused by structural damage. Casualties are estimated in four classes: minor injuries, more severe injuries not requiring hospitalization, injuries requiring hospitalization, and deaths. Casualty estimates are produced for three scenario times: day time, night time and commute time. The economic analysis tool calculates the building, content and inventory costs, business, personal and rental income and disruption costs, and lifeline valuations.
HAZUS also includes models for estimating shelter needs, lifeline damage, debris generation and fire following earthquake.


The RADIUS (Risk Assessment Tools for Diagnosis of Urban Areas against Seismic Disasters) initiative was launched in 1996 by the secretariat of the International Decade for Natural Disaster Reduction (IDNDR 1990-2000) of the United Nations with financial assistance from the Japanese government. The main motivation was the inadequacy of the existing seismic risk assessment and management tools with reference to developing countries. Nine case study cities were selected from 58 applicant cities: Addis Ababa (Ethiopia), Antofagasta (Chile), Bandung (Indonesia), Guayaquil (Ecuador), Izmir (Turkey), Skopje (The former Yugoslav Republic of Macedonia), Tashkent (Uzbekistan), Tijuana (Mexico) and Zigong (China). Technical guidance was provided by three international institutes, namely GeoHazards International (GHI, USA), the International Center for Disaster-Mitigation Engineering (INCEDE)/OYO Group (Japan), and the Bureau de Recherches Géologiques et Minières (BRGM, France). Within an 18-month period, earthquake damage scenarios and seismic risk mitigation action plans were developed based on a methodology developed by GHI for risk management projects in Quito (Ecuador) and Kathmandu (Nepal).

The case studies in RADIUS were carried out in two phases: the evaluation phase and the planning phase. In the evaluation phase, a seismic risk assessment for the city was performed by collecting existing data and estimating the potential damage due to a hypothetical earthquake. The potential damage was estimated in a theoretical and a non-theoretical step. In the theoretical estimation, seismic intensity distributions for the hypothetical event were combined with the building stock inventory and infrastructure using vulnerability functions. In the non-theoretical estimation, opinions from local experts were collected through a series of interviews, allowing the specific characteristics of the city system to be included in the damage estimation. In the planning phase, the results of the first phase were used to develop an action plan for reducing the earthquake risk to the city. Using these case studies, practical tools for earthquake damage estimation were developed, thus providing a basis for similar efforts as a first step of earthquake risk management for other cities in developing countries. The OYO Group developed a computer programme for simplified earthquake damage estimation, helping users to understand the seismic vulnerability of their cities. As input data, population, structure types, soil types and lifeline facilities are required. The tool provides outputs of seismic intensity, structure and lifeline damage and casualties (GHI, 2004).

The RISK-UE initiative supported by the European Commission developed a methodology for creating earthquake risk scenarios with special focus on the distinctive features of European cities, including existing buildings and monuments. The project duration was about 42 months and it ended with a final conference in Nice, France at the end of March 2004. The aim was to produce a standard manual for assessing earthquake risk in urban areas, for use not only by all European countries, but also by countries subject to conditions similar to those found in Europe, e.g. the Mediterranean region.
The initiative comprises two parts: a methodology enabling the assessment of the seismic risk of European cities, and an application of this methodology to seven EU and Eastern European cities, namely Barcelona (Spain), Bitola (Former Yugoslav Republic of Macedonia), Bucharest (Romania), Catania (Italy), Nice (France), Sofia (Bulgaria) and Thessaloniki (Greece). The methodology comprises a state-of-the-art seismic hazard assessment and a systematic inventory typology of the structures at risk, with emphasis on the distinctive features of European cities. Distinctive European urban features particularly concern complex building aggregates in old city centers as well as monuments and historical buildings.


The classification of European buildings resulted in a European Building Typology Matrix (BTM), in which 23 building types were identified, leading to a total of 65 typologies. The vulnerability of the building typologies, which helps to identify the weak points of the infrastructure, is estimated either by assigning a score or vulnerability index or by the capacity spectrum method (ATC-40, 1996). The economic loss scenarios are based on the replacement cost of the buildings; the expected number of casualties and injuries was obtained using the model proposed by Coburn and Spence (2002). The established earthquake scenarios, structured within a GIS, could then be put forward to town councils as a basis for discussion and for drawing up plans of action for systematically reducing earthquake risk (Risk-UE, 2004).

The recently completed LESSLOSS project funded by the European Community is another interdisciplinary approach to earthquake loss estimation. For selected European cities the vulnerability of the building stock was assessed and human casualties and direct economic losses were estimated. The main goal of the project was the assessment of earthquake risks, of the impacts of an earthquake on the environment and on cities, and of emergency and mitigation measures (LESSLOSS, 2005).

In 1995 the Russian Ministry of Emergency Situations was actively engaged in the development of a GIS-based automated system for the estimation of the consequences of severe earthquakes. A global GIS system, EXTREMUM, was developed for forecasting the consequences of destructive earthquakes. The system operates round-the-clock and forecasts (estimates) the consequences of earthquakes all over the world. Since August 2000 EXTREMUM has been used for the common benefit of the world community (Shakhramanjyan et al., 2001).

The displacement-based earthquake loss assessment (DBELA) method is a fully probabilistic framework that incorporates variability both in demand and in capacity parameters. It was developed by Pinho et al. (2002) and Crowley et al. (2004). The main focus of this methodology is the evaluation of the displacement capacity of classes of buildings at various damage limit states.

Pointing to the ever growing research efforts in natural sciences, engineering and social sciences regarding catastrophes, the Alliance for Global Open Risk Analysis (AGORA) started an initiative for open source software development. Current end-to-end risk models have not been designed to respond to emerging knowledge and data; the paradigm of an open platform promises shorter implementation periods for new findings. The open source seismic risk related software (OpenRisk) is being developed by AGORA. OpenRisk estimates the performance of assets such as buildings subjected to earthquakes in terms of economic costs, human safety and loss of use. Besides OpenRisk, AGORA supports the development of several other open source software packages: OpenSHA for seismic hazard, OSRE for risk assessment and MIRISK for decision making (AGORA, 2009).

Beside those listed above, many other earthquake loss estimation methodologies have been developed and successfully applied. No claim is made to provide a complete list of existing earthquake loss estimation methodologies worldwide, nor would this be possible. From the short descriptions of the methodologies, the following properties can be identified.



Integrality and generality

Risk-based decision making requires loss estimation which represents an integral approach. This requires that the interactions between all relevant agents, i.e. technical systems, nature, human beings and organizations, are explicitly considered. An integral approach to risk assessment ensures that significant risk contributors originating from the interactions between the different agents are taken into account (Faber, 2008). All of the aforementioned loss estimation methodologies can be regarded as integral approaches to some degree. Generality, i.e. context independence, is especially important when a loss estimation methodology is to be applied to other regions or to other hazard types. A lack of generality should not categorically be seen as a shortcoming of a loss estimation methodology. Some of the aforementioned earthquake loss estimation methodologies, such as HAZUS, have been developed to be applied to different regions, albeit only within the U.S. Others are designed as projects of finite duration with application in chosen pilot cities, e.g. RADIUS and RISK-UE, and do not claim to be generic by implication.
Modularity

Earthquake loss estimation requires an interdisciplinary approach. Research in individual disciplines such as seismology, soil dynamics and earthquake engineering provides scientific and technological improvements which may render existing loss estimation methodologies obsolete if these improvements are not considered. A modular structure enables the implementation of new models in all the disciplines without the need to redesign the overall methodology. All of the aforementioned earthquake loss estimation methodologies have a modular structure. They comprise modules for seismic hazard, soil response, structural response, damage and losses. The interfaces of these modules are defined and the calculations are executed from end to end. They classify the buildings in the city or region considered into classes and predict, by vulnerability modeling, the portion of each building class falling within a predefined damage class for a specified earthquake demand. The vulnerability methods range from damage probability matrices based on expert opinion, such as that developed by the Applied Technology Council of the U.S. (ATC-13, 1985), through analytically derived fragility curves as in HAZUS, to mechanics-based formulae describing the displacement capacity of the classes of buildings at different damage limit states as in DBELA. The following shortcomings of the aforementioned earthquake loss estimation methodologies can be stated.
Inference

In the aforementioned methodologies the earthquake risk is evaluated in the forward direction. The analyses of the seismic hazard, soil response, structural response, damage and loss assessment are performed in a sequence leading to a quantification of the risk or to a damage scenario. Queries of the kind "What would be the total loss due to an earthquake of magnitude Mw = 7?" can be answered. For a decision-maker it is, however, also valuable to perform diagnostic analysis.


Typical queries are of the kind "Which earthquake magnitudes would lead to complete unavailability of important structures such as hospitals?" The performance of sensitivity analysis is also valuable. Sensitivity analysis refers to analyzing how sensitive a condition (e.g. the probability of collapse of a hospital) is to minor changes. The changes may be variations of the parameters of the model or changes in the diagnostic evidence. None of the existing loss estimation methodologies represents the prevailing parameters explicitly together with their uncertainties. Explicit consideration of the parameters would enable inference in both directions.
Updateability

The main engineering task is to represent reality by means of models. These models provide a means for understanding the problem. The continuous adaptation of the models to reality is a major challenge. In the Bayesian understanding the constructed models represent the current state of knowledge, and with any incoming information, which in the present case could be a damaging or non-damaging earthquake, the models can be updated. The existing loss estimation methodologies consider the knowledge from new earthquakes in some way by remodeling, but none of them provides a framework for systematic updating, even though the underlying modules are continuously improved and replaced, as in the case of HAZUS. The knowledge and information basis is not very broad when assessing earthquake risks: damaging earthquakes are not very frequent, data on the built environment is usually very scarce, and research on the occurrence of earthquakes and their effects on soils and structures is ongoing. Hence, the processing of knowledge and data from any source, from scientifically verified knowledge and statistically representative data to experience and expert opinion, is necessary. Updating is facilitated especially when the problem complex is modeled explicitly by observable characteristic descriptors called indicators. A Bayesian perspective in loss estimation modeling provides a sound and thorough means for this. None of the aforementioned earthquake loss estimation methodologies considers such an integral approach to knowledge.
Dependencies

The lack of explicit consideration of the dependencies between the prevailing parameters is another shortcoming of the existing methodologies. Disregarding the dependencies leads to suppressing important system effects when loss estimation is applied to portfolios of buildings. Statistical dependency may be appropriately represented through correlation. Functional dependency or common cause dependency is appropriately represented through hierarchical probabilistic models.
Multi-detailing

A decision is an allocation of resources. Considering risks due to earthquakes, the decision-makers are to some degree all individuals within the earthquake prone region. The decision-maker is an authority or person who decides on the allocation of the available resources and takes responsibility for the consequences of the decisions on others.


Hence, the formulation of the decision problem depends on the decision-maker. Different decision-makers will have different preferences and objectives. The following decision-making levels can be distinguished: private owners, e.g. individuals; local authorities, e.g. municipalities; national authorities, e.g. governmental agencies; international private companies, e.g. insurance and reinsurance companies; and supranational authorities, e.g. the United Nations.

For individuals and private owners, information about the earthquake risk is important as it allows the adaptation of behavior and activities in order to minimize risks. Information on risk also makes it possible to consider risk transfer to third parties at affordable costs. Local and national authorities deal with earthquake risks from the perspective of resource allocation. Here, neither the risks for one group of persons nor the risks associated with individual hazard processes should be considered in isolation. In a holistic perspective, the risks due to all prevailing hazard types must be considered before decisions can be made. At a national level, codes and regulations concerning the design of structures and the use of land can be based on the assessed overall risks. International private companies accept risk for other stakeholders and help in reducing the impact of earthquakes at a cost. Knowledge of portfolio risks, including the statistical characteristics of their magnitude, dispersion and influencing factors, is important for setting the premiums and for diversifying or uncoupling potential losses. Supranational authorities provide and allocate economic and knowledge resources for the purpose of reducing risks, especially in countries with insufficient capacity to do so themselves (Faber et al., 2007). None of the aforementioned loss estimation methodologies can be applied by different decision-makers with different levels of detail.

The framework proposed in this dissertation aims to represent the problem complex of earthquake risk by explicitly modeling the prevailing parameters, to explicitly model the dependencies among the parameters in order to consider system effects, and to provide a framework for systematic updating. Comparable to the aforementioned methodologies, it uses a modular structure in modeling seismic hazard, soil response, structural response and damage and loss assessment. The main difference is the explicit modeling of the prevailing parameters, especially those which can be observed and hence are denoted as indicators, using so-called Bayesian probabilistic networks. The following section describes the proposed methodology of the "indicator-based large-scale risk assessment framework" along with the main tool for its evaluation, the Bayesian probabilistic networks.

3.2 Proposed framework


Risk assessment is the first step in quantifying risks. In this step, risks are analyzed and, through a screening process, limited to their governing constituents. Managing the risk within a decision-making process requires a sound and reliable risk assessment. Assessing and managing earthquake risks should be seen relative to the occurrence of the seismic event, i.e. before and after the earthquake event occurs. This is necessary as the decision alternatives and the problem context or boundary conditions change over the corresponding time frame. Before an earthquake occurs, the issue of concern is the allocation of resources for optimal preventive measures such as adequate strengthening or renewal of the built environment.


Figure 3.1: Decision situations for management of earthquake risks (BEFORE an earthquake: optimal allocation of available resources for risk reduction, loss estimation; AFTER: condition assessment and updating of reliability and risk, loss assessment).

The estimation of the expected losses due to a potential earthquake is another major interest. After an earthquake the issue is to estimate the losses incurred and to assess the structural condition of the buildings and lifelines. The knowledge gained from the effects of the event which has occurred is then used to update the underlying models used to assess the losses, especially the models for assessing the behavior of soils and structures. After an earthquake, the situation is comparable to the situation before an earthquake. Figure 3.1 illustrates the different decision situations.

The identification of optimal decisions could be performed by means of traditional cost/benefit analysis if all constituents of the decision problem were known with certainty. As the present understanding of the physical phenomenon "earthquake", its spatio-temporal occurrence, its effects on soils and structures as well as its financial effects is far from perfect, the decision problems are subject to significant uncertainties. However, the risks associated with the different decision alternatives can be assessed. The assessed risks can then be used to rank the decision alternatives consistently.

This dissertation aims to establish a framework for assessing earthquake risks in a region. The framework attempts to meet the identified modeling basis discussed in Section 3.1:
1. Integrality and generality
2. Modularity
3. Inference
4. Updateability
5. Dependencies


6. Multi-detailing
In the following, the risk assessment framework, Bayesian probabilistic networks and the details of the modeling basis are outlined.
Risk assessment framework for systems of cities

A system can be considered as an ensemble of interrelated constituents (or assets): buildings, components, lifelines, human beings and the environment. The individual constituents and their interrelations define the characteristics of the system. In describing a system, a spatial and temporal representation of its constituents is required, including the interrelations between all relevant exposures (hazards), the assets and the possible consequences. Direct consequences are related to damage to the individual constituents of the system. Indirect consequences are any consequences occurring beyond the direct consequences, i.e. associated with losses in regard to system effects.

Modeling a system always requires the choice of an appropriate level of detail or scale. The choice depends on the characteristics of the system and on the spatio-temporal characteristics of the consequences. A decision-maker responsible for one or a few buildings in a city will probably choose a more refined model for evaluating the structural behavior of the buildings due to earthquakes than a decision-maker responsible for hundreds of buildings. The representation of a system should accommodate the collection of information about the individual constituents. An update of the system performance in regard to the response to external effects such as earthquakes is thus possible.

The system is modeled based on the available knowledge of the individual constituents. The mapping of reality, which is nothing but the modeling process, is never based on perfect knowledge. The physical phenomenon earthquake, its effect on the built environment and the generation of losses are all subject to significant uncertainty. Hence the consistent treatment of knowledge and uncertainty plays a key role. The consistent representation of knowledge and uncertainty justifies the integration and aggregation of risk estimates obtained for different assets and for individual hazards (Faber, 2008).

Figure 3.2: Illustration of risk assessment framework (exposure, vulnerability and robustness levels, with associated risk indicators and risk reduction measures).

Risk assessments may be facilitated by considering the generic representation illustrated in Figure 3.2. In this framework three levels are distinguished: exposure, vulnerability and robustness. The risk assessment framework allows the use of any type of risk indicators in regard to the exposure, vulnerability and robustness of the considered system. Risk indicators are any observable or measurable characteristics of the system or its constituents containing information about risk. The possibilities for collecting additional information in regard to the uncertainties associated with the risk indicators can be considered as comprising the total set of measures for risk reduction. The risk reduction measures may be considered as decision alternatives. Risk reduction measures can be implemented at different levels in the system representation, in regard to exposure, vulnerability and robustness (Figure 3.2).

Exposure can be considered to be an indicator of the hazard potential for a given object or system of consideration. Considering earthquakes, the exposure EX is an inherently uncertain phenomenon with probabilistic characteristics usually provided in terms of earthquake intensities and corresponding return periods. The vulnerability of a system, assessed through the term P(D|EX)C_D, can be considered as the ratio between the risks due to direct consequences and the total value of the considered asset, considering all events in a specified time frame. Considering an earthquake event, vulnerability is associated with significant uncertainty and is appropriately described by a probability distribution of different damage states of structures conditional on the exposure event, e.g. the earthquake intensity. Robustness, assessed through the complement of the term P(C_{ID}|D,EX)C_{ID}, can be considered as the ratio between the risks due to direct consequences and the sum of the risks due to direct and indirect consequences. Considering again the event of an earthquake, robustness is associated with the conditional probability of losses of various degrees conditional on the exposure and a given damage state. For a system corresponding to a city or region, societal losses including loss of lives as well as economic losses may depend strongly on the specific time of the year, week and day when an earthquake occurs. In this way the robustness of the system exposed to an earthquake will also be dependent on the specific time when an earthquake takes place.

Whereas the uncertainty modeling associated with the assessment of exposure and vulnerability may be based on well established frameworks such as the Probabilistic Model Code of the Joint Committee on Structural Safety (JCSS, 2001), the uncertainties involved in the assessment of the robustness are generally subject to significant epistemic uncertainty. One of the reasons for the significant subjective element of uncertainty in consequence modeling is the uncertainty of the decision maker in regard to the appropriateness and completeness of the applied consequence assessment models. However, in correspondence with the risk assessment framework outlined above and consistent with Faber and Maes (2003), it is proposed that risk assessment be performed on the basis of the following expressions for the risk R, or equivalently the expected direct consequences C_D and indirect consequences C_{ID}:

R = E[C_D + C_{ID}]                                                              (3.1)
E[C_D] = \int\int C_D \, p(D|EX) \, p(EX) \, dD \, dEX                           (3.2)
E[C_{ID}] = \int\int\int C_{ID} \, p(C_{ID}|D,EX) \, p(D|EX) \, p(EX) \, dC_{ID} \, dD \, dEX   (3.3)
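Equations 3.1 to 3.3 can be illustrated with a small numerical sketch in which exposure, damage and indirect consequences are discretised into a few states. All numbers below are hypothetical and serve only to show how the expectations and the vulnerability and robustness ratios defined above are evaluated; the inner integral over C_{ID} is replaced by its conditional expectation.

```python
import numpy as np

# Hypothetical discretisation (illustrative values only)
p_EX = np.array([0.90, 0.08, 0.02])           # P(EX): no event, moderate, strong
# P(D | EX): rows = exposure states, columns = damage states (none, moderate, severe)
p_D_EX = np.array([[0.99, 0.01, 0.00],
                   [0.60, 0.30, 0.10],
                   [0.20, 0.40, 0.40]])
c_D = np.array([0.0, 0.2, 1.0])               # direct consequences per damage state
                                              # (as fraction of the asset value)
# E[C_ID | D, EX]: expected indirect consequences given damage and exposure
c_ID_D_EX = np.array([[0.00, 0.05, 0.50],
                      [0.00, 0.10, 1.00],
                      [0.00, 0.20, 2.00]])

# Equation 3.2: E[C_D] = sum_EX sum_D c_D * P(D|EX) * P(EX)
E_CD = np.sum(p_EX[:, None] * p_D_EX * c_D[None, :])
# Equation 3.3, with C_ID integrated out to its conditional expectation
E_CID = np.sum(p_EX[:, None] * p_D_EX * c_ID_D_EX)
# Equation 3.1
R = E_CD + E_CID

asset_value = 1.0
vulnerability = E_CD / asset_value             # ratio of direct risk to asset value
robustness = E_CD / (E_CD + E_CID)             # complement of the indirect-risk ratio

print(f"E[C_D]={E_CD:.4f}, E[C_ID]={E_CID:.4f}, R={R:.4f}")
print(f"vulnerability={vulnerability:.4f}, robustness={robustness:.4f}")
```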




Earthquake risk assessment framework

Based on Equation 3.1 and consistent with the PEER performance-based earthquake engineering framework (Cornell and Krawinkler, 2000), the following expressions are elaborated in the proposed methodology:

E[C_D] = \int\int\int\int C_D \, dP(D|EDP) \, dP(EDP|IM) \, dP(IM|SE) \, p(SE) \, dSE                         (3.4)
E[C_{ID}] = \int\int\int\int\int C_{ID} \, dP(C_{ID}|D) \, dP(D|EDP) \, dP(EDP|IM) \, dP(IM|SE) \, p(SE) \, dSE   (3.5)

where C_D and C_{ID} are the direct and indirect consequences, D is the damage state, EDP is the engineering demand parameter (e.g. interstory drift ratio, dissipated energy), IM is the intensity measure (e.g. spectral displacement, peak ground response, magnitude) and SE is the seismic event. In the following sections the components of Equations 3.4 and 3.5 are further explained.

Exposure is represented by the probability that a specific ground motion intensity measure, IM, exceeds a specific value at a given site in a specific time interval. This probability is also denoted as the seismic hazard at the site. In Equation 3.4, the terms dP(IM|SE) and p(SE) are related to the assessment of the seismic hazard. The main sources of uncertainty involved in evaluating the seismic hazard at a given site arise from the definition of seismic sources, recurrence relationships, attenuation relationships, local site effects and soil-structure interaction effects. Seismic sources are points, lines or areas with a uniform level of seismic activity and are defined on the basis of geological, geophysical and seismological data. Recurrence relationships are defined for each source zone according to the assessed recurrence frequencies of earthquakes with different magnitudes. Attenuation relationships provide estimates of the major characteristic parameters of a strong ground motion (e.g. peak ground acceleration, acceleration response spectra) at a given site due to a seismic event with given major characteristics, such as magnitude, distance and faulting mechanism. A review of the major developments in the field of seismic hazard analysis is available in Atkinson (2004). In particular, probabilistic approaches for estimating the seismic hazard provide a structured framework for explicit quantification of the uncertainties involved. Probabilistic Seismic Hazard Analysis (Cornell, 1968) is one of the major approaches in this field. It should be noted that explicit quantification of the uncertainties involved in the estimation of seismic hazards in turn provides the possibility of evaluating the sensitivity of the results to the various uncertain parameters. Such information forms a valuable basis for allocating the available resources in order to gain more information for the improvement of hazard estimates. As a result of the modular approach followed in this study, a number of seismic hazard models with varying levels of detail can be utilized.
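Once all variables are discretised, the chain of conditional probabilities in Equations 3.4 and 3.5 can be evaluated by simple matrix products: each term becomes a (conditional) probability table and marginalisation reduces to matrix-vector multiplication. The sketch below illustrates this for the direct consequences only; the tables are hypothetical placeholders, not calibrated models.

```python
import numpy as np

# Hypothetical discrete tables (each row conditions on one parent state and sums to 1)
p_SE = np.array([0.95, 0.04, 0.01])                     # p(SE): no / moderate / large event
P_IM_SE = np.array([[0.98, 0.02, 0.00],                  # P(IM | SE)
                    [0.30, 0.50, 0.20],
                    [0.05, 0.45, 0.50]])
P_EDP_IM = np.array([[0.95, 0.05, 0.00],                 # P(EDP | IM)
                     [0.40, 0.45, 0.15],
                     [0.10, 0.40, 0.50]])
P_D_EDP = np.array([[0.99, 0.01, 0.00],                  # P(D | EDP)
                    [0.50, 0.40, 0.10],
                    [0.10, 0.40, 0.50]])
c_D = np.array([0.0, 50.0, 500.0])                       # direct consequences per damage state

# Marginalisation corresponds to chained matrix-vector products
p_IM  = p_SE @ P_IM_SE                                   # P(IM)
p_EDP = p_IM @ P_EDP_IM                                  # P(EDP)
p_D   = p_EDP @ P_D_EDP                                  # P(D)

E_CD = p_D @ c_D                                         # Equation 3.4, discretised
print("P(D) =", np.round(p_D, 4), " E[C_D] =", round(E_CD, 2))
```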


The vulnerability of a structural system is expressed through the probability that a specific level of damage, D, occurs when the structure is subjected to a specific loading intensity, IM. In Equations 3.4 and 3.5, the terms dP(D|EDP) and dP(EDP|IM) are directly related to the assessment of the seismic vulnerability of a structure. The methods for obtaining the vulnerability functions for structures can mainly be categorized as: methods based on expert opinion, methods based on observed damage distributions after earthquakes and methods based on structural response analysis. A review of the major approaches related to the seismic vulnerability assessment of structures is available in Porter (2003). Methods based on structural response analysis, such as Mosalam et al. (1997) and Singhal and Kiremidjian (1997), provide the advantage of investigating the effect of individual parameters on the structural response in an analytical manner. Due to the term dP(D|EDP), the dependency between D and EDP also plays an important role in the evaluation of the usefulness of EDPs. In many of the available vulnerability assessment studies this link is established based on empirical methods, such as experiments and post-earthquake damage observations. The updatability of BPNs makes them an efficient tool, provided that the selected EDPs can be updated.

Robustness, i.e. the complement of dP(C_{ID}|D), is an indicator of the indirect consequences C_{ID} due to the damages D of the system under consideration. Considering seismic events, robustness is associated with the conditional probability of losses of various degrees conditional on the damage state. For a city, societal losses including loss of lives as well as economic losses depend strongly on the specific time of the year, week and day when an earthquake occurs. To ensure consistent decision-making, the uncertainty of any type of economic impact due to earthquakes has to be included. These economic impacts are direct economic losses due to damaged structures, their contents and lifelines (Geipel, 1990), indirect economic losses due to business interruption (Benson and Clay, 2004), loss of revenues and increases of costs in the public sector, expenses and losses of individuals, and loss of household incomes due to death, injury or job interruption (NRC, 2004). The loss of individuals may be quantified by the Societal Life Saving Cost (SLSC) (Rackwitz, 2006) or by the Implied Cost of Averting Fatalities (ICAF) (Skjong and Ronold, 1998), which are derived on the basis of the Life Quality Index (Nathwani et al., 1997).
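To make the vulnerability term dP(D|EDP) discussed above concrete, the sketch below evaluates a set of lognormal fragility curves, a common analytical form rather than the specific curves used later in this dissertation, and converts the exceedance probabilities of the damage states into a conditional table over mutually exclusive damage states. The medians and dispersions are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical lognormal fragility parameters per damage state threshold
medians = np.array([0.5, 1.5, 4.0])      # median EDP (e.g. interstory drift in %) for
betas   = np.array([0.4, 0.4, 0.5])      # slight, moderate, severe damage; log-standard deviations

def damage_state_probs(edp):
    """P(exceeding each damage state) converted to P(being in each mutually exclusive state)."""
    p_exceed = norm.cdf(np.log(edp / medians) / betas)
    # states: none, slight, moderate, severe
    p_state = np.empty(4)
    p_state[0] = 1.0 - p_exceed[0]
    p_state[1:3] = p_exceed[:2] - p_exceed[1:]
    p_state[3] = p_exceed[2]
    return p_state

for edp in [0.3, 1.0, 3.0]:              # discretised EDP states
    print(edp, np.round(damage_state_probs(edp), 3))
```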
Bayesian probabilistic networks

The proposed framework allows for the utilization of any type of quantifiable characteristic, denoted as indicator, in regard to the exposure, vulnerability and robustness of the considered system. Within the proposed framework, risk reduction measures can also be implemented at different levels in the system representation, in regard to exposure, vulnerability and robustness (Figure 3.2). The hierarchical structure of risk assessment can effectively be mapped by modern risk assessment tools such as Bayesian probabilistic networks (BPN) and influence diagrams. In this subsection the main concepts for the construction of BPNs are introduced. The most important algorithms for the evaluation of BPNs are given in Appendix A.

BPNs constitute a flexible, intuitive and strong model framework for Bayesian probabilistic analysis. They were originally developed in the field of artificial intelligence as an extension to predicate logic based on deterministic production rules. The advantage is that the variables may have values other than binary states and that the relations among variables need not be deterministic. Despite this modeling power, the popularity of BPNs as a modeling tool did not increase until efficient inference algorithms were developed (Lauritzen and Spiegelhalter, 1988; Jensen et al., 1990).


Bayesian probabilistic networks, alternatively referred to as belief networks or probabilistic causal networks, have become popular during the last two decades in the research areas of artificial intelligence, probability assessment and uncertainty modeling (Pearl, 1988). The ideas and techniques have also gained recognition in other engineering disciplines and natural sciences, especially in problems involving high complexity and large uncertainties, see also Faber et al. (2002). An application of causal networks for the purpose of aiding technicians to assess historical buildings subject to earthquake hazards is given in Salvaneschi et al. (1997). As an example, the seismic vulnerability is evaluated by modeling the available knowledge in the form of logical trees in Miyasato et al. (1986), Ishizuka et al. (1981) and Pagnoni et al. (1989). Furthermore, in Zhang and Yao (1988) conceptual networks and frames are applied to map observable information into damage states. In Friis-Hansen (2000) BPNs were developed as a decision support tool in marine applications. The description and assessment of natural hazards and the quantification of their related risks appears to be a problem for which BPNs can be a helpful tool. In Bayraktarli et al. (2005) BPNs are applied to assess the earthquake risk for single structures, in Bayraktarli et al. (2006) to assess the earthquake risks for cities, in Bayraktarli and Faber (2007) for value of information analysis, in Straub (2005) to natural hazards risk assessment in general, and in Grêt-Regamey and Straub (2006) BPNs are linked to a Geographic Information System (GIS) for natural hazards risk assessment.

BPNs are designed as a knowledge representation of the problem domain, explicitly encoding the probabilistic dependence between the variables in the model. Model building intuitively focuses on causal relationships between variables. Hence a BPN automatically reveals the analyst's understanding of the problem. This enables validation of the models and communication between different parties, which is especially useful when dealing with interdisciplinary problems. In probabilistic terms a BPN represents the joint probability density function of all variables explicitly considered in the problem. Considering causal relations among the variables leads to the most compact representation of the joint probability density function (Jensen, 2001). The outcome of the compilation of the BPN is the marginal probability distribution of all variables. An important feature of BPNs is that they allow easy inference based on observed evidence. Observing one variable in the domain, the probability distributions of the remaining variables in the model are easily updated. Furthermore, any of the variables in the BPN can be conditioned to a certain state and the probability distributions of the remaining variables in the model can be evaluated. The difference from updating lies in the fact that, through conditioning, a BPN is adjusted to a new configuration. Several program packages exist for constructing and evaluating BPNs, such as Hugin (2008) or Netica (2008). Many non-commercial packages are also available, e.g. GeNIe-Smile (2008) or WinBUGS (2008). Throughout the dissertation the inference engine of Hugin (2008) has been used.

The basic features of BPNs may be described by the following steps:
- Formulation of the causal interrelationships of the events leading to the events of interest (consequences). This is graphically shown in terms of nodes (variables) connected by arrows. Variables with ingoing arrows are referred to as children; variables with outgoing arrows are referred to as parents.
- Assigning to each variable a number of discrete, mutually exclusive states.
- Assigning probability structures (tables) for the states of each of the variables (conditional probabilities in cases where the variables are children).
- Assigning consequences corresponding to the states represented by the BPN.
Having developed the BPNs, the required probability tables and the consequences, risk assessment and decision analysis are straightforward.
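The construction steps above can be reproduced with any of the packages mentioned. As an illustration, the sketch below uses the open-source Python library pgmpy (not the Hugin engine employed in this dissertation; in older pgmpy versions the model class is called BayesianModel) to build the serial network M -> S -> D with coarse, hypothetical probability tables.

```python
from pgmpy.models import BayesianNetwork            # named BayesianModel in older pgmpy versions
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: Magnitude -> Spectral displacement -> Damage
model = BayesianNetwork([("M", "S"), ("S", "D")])

# Hypothetical (conditional) probability tables; columns follow the parent states
cpd_m = TabularCPD("M", 3, [[0.70], [0.25], [0.05]])                  # small / moderate / large
cpd_s = TabularCPD("S", 2, [[0.9, 0.5, 0.2],                          # low spectral displacement
                            [0.1, 0.5, 0.8]],                         # high spectral displacement
                   evidence=["M"], evidence_card=[3])
cpd_d = TabularCPD("D", 3, [[0.95, 0.30],                             # immediate occupancy
                            [0.04, 0.50],                             # life safety
                            [0.01, 0.20]],                            # collapse prevention
                   evidence=["S"], evidence_card=[2])

model.add_cpds(cpd_m, cpd_s, cpd_d)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["D"]))                            # marginal damage distribution
print(infer.query(["M"], evidence={"D": 2}))         # diagnostic query given severe damage
```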
Elements in Bayesian probabilistic networks

After analyzing the problem at hand, the governing parameters need to be identified. This is always carried out considering the needs of the analysis, which determine the level of detail. The risk assessment framework introduced above makes it possible to perform this step in a structured manner. The next step is to identify the dependencies among the parameters. The easiest way to construct these dependencies is to rely on causality. A correct model based on causality reduces the number of connections among the parameters. It is, however, not easy to base modeling on causality, as causal relations are not always obvious. Causality is not a well understood concept. The debate in philosophy after David Hume is whether a causal relation is a property of the real world or a concept in our minds helping us to organize our perception of the world. Putting the philosophical debate aside, a more practical reason may hamper our reliance on causality: the acquisition of the conditional probability tables for a correct causal model may not be possible, as the information may only be available for other types of conditional probabilities.

Although it is an advantage, the structure of a Bayesian probabilistic network need not necessarily reflect cause-effect relations. The only requirement is that the so-called directional separation (d-separation) property of the network holds true for the domain modeled. It is a property of directed graphs which may be used to identify irrelevant information for specific queries in a Bayesian probabilistic network or influence diagram. Two nodes of a network are d-separated if they are conditionally independent given a specified set of nodes. If P(A|B,C) = P(A|C), then A and B are conditionally independent, or d-separated, given C. It is important to note that A, B and C may be variables or sets of variables. Having defined the main construction rules for Bayesian probabilistic networks, it is now appropriate to consider the three main types of connections found in a Bayesian probabilistic network. The d-separation principle will also be discussed for these main types.

The BPNs given in this section and in Appendix A, where the most important algorithms for evaluating BPNs are given, are constructed for a very crude earthquake-related problem. The characteristic of an earthquake is given only by its magnitude (M). The ground motion intensity parameters are characterized by peak ground acceleration (G) and spectral displacement (S). Liquefaction (L) is considered as one type of soil-related failure.


Structural damage (D) is assumed to be caused by either liquefaction or spectral displacement. In formulating a decision problem, the risk reduction measures are considered as alternative actions (A). The structural damage and the risk reduction measures may result in costs (C).

In many engineering problems, the objective is to model physical phenomena which are inherently continuous. The consideration of continuous variables is not directly possible with BPNs, as the associated algorithms are tailored to handle discrete variables. Attempts to include continuous variables in BPNs were made by Pearl (1988), Lauritzen (1992) and Alag and Agogino (1996). The restriction to discrete variables is not a severe limitation, because it is state-of-the-practice to consider some main variables, such as earthquake magnitude and structural damage, in discrete states anyway. In seismic hazard calculations the earthquake magnitude is mostly evaluated in steps of 0.5, or at most to one significant digit. The damage to structures is mostly considered in classes such as "immediate occupancy", "life safety" or "collapse prevention" (FEMA 356).

The approximation of the continuous space of a variable of a BPN is known as discretisation. The continuous space is hereby subdivided into a set of bins or intervals. Discretisation may also be understood as a categorization or classification of a data set. The most important point in a discretisation is that the continuous function or the given data set is compactly represented without disregarding its most important properties. The discrete states should represent mutually exclusive ranges. Discretisation of a data set can be made according to equidistant split-points, equal frequency or supervision. In the equidistant split-points approach the range is split into intervals of equal length. This approach gives reasonable accuracy when the uncertainty of the variable is very large. When the variable at hand has a very low uncertainty, the equal frequency approach may be applied. An equal number of data points is targeted in each interval, so that a fine discretisation of the dense part of the distribution and fewer, longer intervals in the sparse regions are achieved. In a supervised discretisation the number and lengths of the intervals are chosen iteratively until the histogram of the data set is reasonably represented. This step is in principle comparable to the choice of the number of bins when drawing the histogram of a data set in a statistical analysis. A BPN may include variables which follow a known distribution. The task is then to determine the lengths of the individual intervals so that the discretised distribution represents the probability distribution in a reasonable way. The discretisation approaches are the same as given above. The choice of the discretisation principle is determined by the purpose of the study and the characteristics of the probability distribution considered. For a thorough discussion of this subject, see Friis-Hansen (2000).

In Figure 3.3 a very simple BPN with the two nodes magnitude (M) and structural damage (D) is illustrated, where magnitude has an influence on structural damage. In BPN terminology the magnitude node is denoted as a parent node and the structural damage node as a child node.
Figure 3.3: Simplest BPN (M → D).


For each node a probability table is specified. Parent nodes have by definition unconditional probability tables. The magnitude node M may be discretised, for example, into categorical states: small, moderate and large magnitudes, or into any other number of discrete states: M<5, 5<M<6, 6<M<7 and M>7. In any case the probabilities of the mutually exclusive states must total 1. A child variable requires a conditional probability table, assigning a probability to each of the mutually exclusive discrete states of the child node given each state of the parent node. In the example given, the structural damage node may be discretised into three states: "immediate occupancy", "life safety" and "collapse prevention". The probabilities of being in each of these damage states need to be assigned for each of the given magnitude ranges. The same principles apply analogously when there is more than one parent node and/or child node in a BPN.
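The sketch below writes out this arithmetic for the two-node network of Figure 3.3, with the four magnitude states and three damage states mentioned above. The table entries are hypothetical and only illustrate how the prior, the conditional probability table, the marginal damage distribution and a Bayesian back-calculation fit together.

```python
import numpy as np

m_states = ["M<5", "5<M<6", "6<M<7", "M>7"]
d_states = ["immediate occupancy", "life safety", "collapse prevention"]

p_M = np.array([0.80, 0.15, 0.04, 0.01])          # unconditional table of the parent node M

# Conditional probability table P(D | M); one row per magnitude state, each row sums to 1
P_D_given_M = np.array([[0.98, 0.02, 0.00],
                        [0.90, 0.08, 0.02],
                        [0.60, 0.30, 0.10],
                        [0.30, 0.40, 0.30]])

# Marginal damage distribution: P(D) = sum_M P(D|M) P(M)
p_D = p_M @ P_D_given_M
print(dict(zip(d_states, np.round(p_D, 4))))

# Diagnostic (backward) inference with Bayes' theorem:
# P(M | D = collapse prevention) = P(D=cp | M) P(M) / P(D=cp)
p_M_given_cp = P_D_given_M[:, 2] * p_M / p_D[2]
print(dict(zip(m_states, np.round(p_M_given_cp, 4))))
```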
Serial, diverging and converging connections

A typical serial connection is given in Figure 3.4. Magnitude (M) has an influence on spectral displacement (S), which in turn has an influence on damage (D). Evidence (or an observation) on M will influence the certainty of S, which then influences the certainty of D. Similarly, evidence on D will influence the certainty of M through S. But if the state of S is known, then M and D become independent. In this case it can be said that M and D are d-separated given S. When information on the spectral displacement can be retrieved, the estimation of the structural damage no longer depends on the earthquake magnitude, given that the BPN in Figure 3.4 correctly reflects the situation.

In a typical diverging connection (Figure 3.4, middle) evidence may be transmitted between the children if the state of the common parent is not known. In this case it can be said that peak ground acceleration (G) and S are d-separated given M. Although not relevant in practical situations, where the magnitude of an event is mostly known before any other information, the following holds: if the magnitude of an earthquake is not known, information on the spectral displacement indicates something about the peak ground acceleration; but when the magnitude is known, information on the spectral displacement gives no additional information, given that the BPN in Figure 3.4 correctly reflects the situation.

In converging connections (Figure 3.4, right) evidence is transmitted between the parents when the child node or one of its descendants receives evidence. In this case it can be said that liquefaction (L) and S are d-connected (the opposite of d-separated) given D. If the state of damage is known, information on liquefaction indicates something about the spectral displacement.
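These (in)dependence statements can be checked numerically. The sketch below uses the converging connection L → D ← S with hypothetical tables and shows that L and S are independent as long as D is unobserved, but become dependent (d-connected) once evidence on D is entered.

```python
import numpy as np

p_L = np.array([0.1, 0.9])             # liquefaction: yes / no
p_S = np.array([0.3, 0.7])             # spectral displacement: high / low
# P(D = severe | L, S), hypothetical values
p_severe = np.array([[0.90, 0.60],     # L = yes, S = high / low
                     [0.50, 0.05]])    # L = no,  S = high / low

# Joint P(L, S, D=severe); L and S are marginally independent by construction
joint = np.outer(p_L, p_S) * p_severe

# Without evidence on D, information on S does not change the belief about L
print("P(L=yes)           =", p_L[0])
print("P(L=yes | S=high)  =", p_L[0])      # unchanged: L and S are d-separated

# With evidence D = severe, condition the joint on the observed damage
p_L_given_severe        = joint.sum(axis=1) / joint.sum()
p_L_given_severe_S_high = joint[:, 0] / joint[:, 0].sum()
print("P(L=yes | D=severe)         =", round(p_L_given_severe[0], 3))
print("P(L=yes | D=severe, S=high) =", round(p_L_given_severe_S_high[0], 3))
# The two numbers differ: knowing S=high 'explains away' part of the damage,
# so L and S are d-connected given evidence on D.
```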
Sample BPN and sample influence diagram

BPNs of any size can be constructed using the three main connection types. A sample BPN is given in Figure 3.5. This is the BPN used for explaining the existing algorithms for evaluating BPNs given in Appendix A. BPNs extended to solve decision problems are also known as influence diagrams. In influence diagrams two additional node types are used: decision nodes (usually symbolized by rectangles) and utility nodes (usually symbolized by diamonds), see Figure 3.5.


Figure 3.4: Serial (M → S → D), diverging (G ← M → S) and converging (L → D ← S) connections.

A decision node comprises the alternative actions considered by the decision-maker. The parents of a decision node define the information available at the point of the decision. From this it follows that when more than one decision node exists in an influence diagram (a typical sequential decision problem), the decision nodes need to be connected by arrows and the decision analysis needs to be performed consecutively following the arrows. Utility nodes may be conditional on probabilistic and/or decision nodes, but do not have child nodes. The corresponding table contains utilities, which quantify the decision-maker's preference for each configuration, rather than probabilities. The rational basis for decision-making is established by comparing the expected utilities of the action alternatives considered in the decision nodes. The construction and evaluation of an influence diagram follows closely that of BPNs. Most of the program packages available for BPNs can also be used for influence diagrams. Besides the marginal probability distributions of all variables in the domain, the expected utilities of all decision alternatives are also evaluated.
Figure 3.5: Sample BPN (left) and sample influence diagram (right), with nodes M, G, S, L and D; the influence diagram additionally contains the decision node A and the utility node C.
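The evaluation of an influence diagram can be sketched in a few lines: for the decision node A (e.g. retrofit or do nothing) the expected utility is obtained by weighting the utilities in node C with the damage probabilities conditional on the chosen action. All numbers below are hypothetical.

```python
import numpy as np

actions = ["do nothing", "retrofit"]
# P(D | A): damage distribution (immediate occupancy, life safety, collapse prevention)
P_D_given_A = np.array([[0.85, 0.10, 0.05],
                        [0.95, 0.04, 0.01]])
# Utility node C: negative of the action cost plus the expected damage cost
damage_cost = np.array([0.0, 200.0, 1000.0])
action_cost = np.array([0.0, 50.0])

expected_utility = -(action_cost + P_D_given_A @ damage_cost)
for a, eu in zip(actions, expected_utility):
    print(f"{a:12s}  E[utility] = {eu:8.1f}")
print("optimal action:", actions[int(np.argmax(expected_utility))])
```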

The proposed framework in regard to the modeling basis

Using the sample BPN introduced above, the capability of the proposed framework with regard to the six identified aspects will be discussed.


Integrality/generality

The proposed framework is applied using indicators representing exposure, vulnerability and robustness. Consistent modeling at all levels is ensured by the use of BPNs. The indicators may be identified and quantified for specific or general situations.

Inference

BPNs have the advantage of allowing inference based on observed evidence. It is not necessary for all variables explicitly considered in the BPN to be observable. The model can be updated in line with the observations using Bayes' theorem. Considering Figure 3.6, any evidence received on any of the variables in the BPN can be used to update the probability distribution of any other variable. For example, a site may be observed to indicate liquefaction. In this case the node liquefaction receives certainty for the corresponding state "liquefaction is observed" and the other variables are updated following Bayes' theorem. This feature allows the BPN model to be exploited to answer queries and to investigate different scenarios.
Figure 3.6: Proposed framework with regard to integrality/generality (the exposure, vulnerability and robustness levels mapped onto the nodes M, G, S, D and C of the sample BPN).

Modularity

Modularity enables the easy adaptation of alternative methods and models to the integral model. The application of the framework using BPNs has the advantage that dependencies are explicitly modeled. A part of the integral BPN may be considered as a module as soon as it is separated from the other parts of the BPN. Considering Figure 3.7, the peak ground acceleration (G) and liquefaction (L) may be considered as a stand-alone module. Any model in geotechnical earthquake engineering providing an estimate of liquefaction given the peak ground acceleration can be applied to quantify this module. The only point where caution must be exercised is the interface with the other modules, which is ensured by using the same discrete states for the interface nodes, here G and L.


Figure 3.7: Proposed framework with regard to modularity (seismic hazard, soil failure, structural damage and consequences modules of the sample BPN).

Updateability

One of the strengths of the proposed framework is its ability to systematically update any parameter in the model. After an earthquake, data in the form of reports on the damage to buildings in an affected area may be available. The state of damage of each building in the city is then known with certainty. Any model parameter can be updated with this information (Figure 3.8).
Figure 3.8: Proposed framework with regard to updating (information on structural damage after an earthquake of magnitude M is used to update the model).
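As a minimal illustration of the updating scheme sketched in Figure 3.8 (and independent of the specific BPN models developed later in this dissertation), suppose the probability that a building of a given class reaches a severe damage state under a given intensity is modeled with a Beta prior; post-earthquake inspection reports giving the numbers of damaged and undamaged buildings then update this parameter in closed form. All numbers are hypothetical.

```python
from scipy.stats import beta

# Hypothetical Beta prior on the damage probability p for one building class and intensity
a0, b0 = 2.0, 18.0                      # prior mean 0.10

# Hypothetical inspection data after an earthquake of that intensity
n_inspected, n_damaged = 50, 9

# Conjugate Bayesian update: Beta(a0 + damaged, b0 + undamaged)
a1, b1 = a0 + n_damaged, b0 + (n_inspected - n_damaged)

prior_mean = a0 / (a0 + b0)
post_mean = a1 / (a1 + b1)
lo, hi = beta.ppf([0.05, 0.95], a1, b1)
print(f"prior mean {prior_mean:.3f} -> posterior mean {post_mean:.3f}")
print(f"90% credible interval: [{lo:.3f}, {hi:.3f}]")
```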

Dependencies and hierarchical modeling

Explicit modeling of dependencies is one of the strengths of BPNs. For instance, research on the correlation of the two ground motion intensity parameters in Figure 3.9 may indicate their dependency. This is considered by an arrow from G to S. The probability table of the node with the incoming arrow needs to be changed accordingly.


Another important feature of BPNs is their suitability for hierarchical modeling. Not all variables within a model display only local dependencies; there may be parameters which influence variables globally. For example, when a BPN as given in Figure 3.5 is applied not just to one single building but to a portfolio of structures, the node magnitude, comprising the size and occurrence probability of an earthquake, affects all buildings. Hence the node magnitude is treated as a hyper-parameter. The state of the node magnitude has to be the same for all buildings considered. This is especially important when the individual damage nodes are combined in a joint cost node (C), as illustrated in Figure 3.9.
Figure 3.9: Proposed framework with regard to dependencies and hierarchical modeling (the magnitude node M acts as a common hyper-parameter for all buildings in the portfolio).
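The effect of the hyper-parameter can be demonstrated with two identical buildings whose damage states are conditionally independent given the magnitude. Marginalising over a common magnitude node (the correct hierarchical model) gives a different probability of joint damage than multiplying the two marginal damage probabilities (dependence ignored). The numbers are hypothetical.

```python
import numpy as np

p_M = np.array([0.9, 0.09, 0.01])            # small / moderate / large event
p_dmg_given_M = np.array([0.01, 0.20, 0.80]) # P(building damaged | M), same for both buildings

# Correct hierarchical model: buildings are conditionally independent given M
p_both_hier = np.sum(p_M * p_dmg_given_M**2)
# Dependence ignored: product of the two marginal damage probabilities
p_dmg = np.sum(p_M * p_dmg_given_M)
p_both_indep = p_dmg**2

print(f"P(both damaged), common magnitude modeled : {p_both_hier:.5f}")
print(f"P(both damaged), dependence ignored       : {p_both_indep:.5f}")
```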

Multi-detailing level

In principle the same BPN can be applied at different decision-making levels, as illustrated in Figure 3.10. An important point is to identify the hyper-parameters and to account for them by hierarchical modeling.
Figure 3.10: Proposed framework with regard to application at multiple detailing levels, e.g. private owner (left, single building) and insurer (right, portfolio of buildings 1 to n).


4 Models
The earthquake risk is dissected into its constituent steps (Figure 4.1). Earthquake risk begins with rupture along a fault surface. Phenomena related to the rupture process are summarized under "source". The rupture process results in a number of earthquake hazards. The most fundamental one is faulting, i.e. the surface expression of the differential movement of blocks of the Earth's crust. Faulting is typically a long, narrow feature affecting a geographically small area. A much greater area is affected by ground shaking, typically the primary hazard due to earthquakes. The seismic waves resulting from the rupture process propagate along the "path", reflecting and refracting at boundaries between different rock types. The transfer of seismic waves from hard, deep soils to softer, superficial soils may result in a focusing of seismic energy and hence higher levels of ground shaking. These are known as "site" effects, which may also result in soil failure. Depending on the earthquake and site characteristics, liquefaction, other forms of soil failure, tsunamis or other types of hazards may be significant sources of damage. Buildings, infrastructure or other structures may not fully resist these hazards and sustain some degree of damage. Damage to the structural components of the buildings can vary from minor cracking to collapse, and/or the contents of the building may be severely damaged. Structural damage has consequences, i.e. losses. Primary losses are loss of life or injury, financial losses (e.g. repair or replacement cost of the buildings) or loss of function (e.g. unavailability of hospitals or emergency services). These primary losses lead to secondary losses, such as loss of revenues resulting from business interruption.

Figure 4.1: Earthquake loss process (source, path and site, with sediment layers over bedrock, leading to the seismic hazard, soil failure, structural damage and consequences).


The application of the proposed framework is prepared in this chapter by adapting state-of-the-art earthquake-related models to the proposed methodology. BPN models for seismic hazard, soil failure, structural damage and consequence assessment are developed. These BPN models form the basis of the BPN models in Chapter 5, where four examples illustrate the application of the framework.

4.1 Seismic hazard


Phenomena related to source, path and site (see Figure 4.1) are considered within seismic hazard studies. State-of-the-art seismic hazard studies calculate the probability of occurrence of a given level of a ground motion intensity parameter within a given time period due to an earthquake, using earth science models for the characteristics of earthquakes in the region of interest. Uncertainty about the causes and effects of earthquakes and about the seismic characteristics of potentially active faults leads to uncertainties in the input parameters for the seismic hazard analysis. Cornell (1968) proposed a mathematical approach for systematically incorporating these uncertainties when calculating the probability of exceeding some level of ground shaking at a site. The methodology is known as Probabilistic Seismic Hazard Analysis (PSHA) and comprises, in summary, four steps:
1. Identification of all earthquake sources capable of producing ground motions, specifying the uncertainty in location. The source-to-site distance is used to characterize the decrease in ground motion as it propagates away from the earthquake source. As a distance measure, the Joyner-Boore distance is used throughout the dissertation; it is the closest horizontal distance to the surface projection of the rupture area.
2. Characterization of the temporal distribution of earthquake recurrence, specifying the uncertainty in size and time of occurrence. The moment magnitude scale, Mw, is used to measure the size of earthquakes in terms of the energy released. Unlike the local magnitude scale, ML, which is also known as the Richter scale, there is no upper limit on the highest measurable magnitude, nor are there problems in measuring earthquake sizes at large distances. Throughout the dissertation the size of an earthquake is characterized by the moment magnitude, Mw; for brevity it is, however, simply termed "magnitude".
3. Prediction of the resulting ground motion intensity as a function of location and magnitude.
4. Combination of the uncertainties in location, magnitude, time and ground motion intensity, using the total probability theorem.

In more advanced seismic hazard studies two types of uncertainty, namely aleatory variability and epistemic uncertainty, are distinguished. The uncertainty in the size, location and time to the next earthquake and the resulting ground motion is considered to be inherent in the natural physical process and indifferent to changes in our knowledge. Hence it is called aleatory variability or sometimes also randomness. Epistemic uncertainty results from our imperfect knowledge of earthquakes and can be reduced with a better knowledge basis and additional data (NRC, 1997).


The combination of the aleatory variability in location, size, time and ground motion intensity yields a single hazard curve (i.e. a curve reporting annual rates of exceedance for varying levels of ground motion intensity). Considering different assumptions, hypotheses, models and parameter values for the location, size and time to the next earthquake and for the predicted ground motion yields a suite of hazard curves representing the epistemic uncertainty. These epistemic uncertainties are typically organized and displayed by means of logic trees (Kulkarni et al., 1984; Coppersmith and Youngs, 1986). The quantification and treatment of these epistemic uncertainties is carefully considered in the SSHAC Level 4 methodology, where a systematic and well-balanced integration of experts was a central issue (NRC, 1997; Stepp et al., 2001; Abrahamson et al., 2002).

In this section the application of the state-of-the-art PSHA methodology using Bayesian probabilistic networks (BPN) is considered (Bayraktarli et al., 2009). After illustrating a generic application, aspects regarding the correlation of several ground motion intensity measures, the incorporation of epistemic uncertainties and non-Poisson earthquake recurrence are discussed. In traditional PSHA studies, the primary analysis output is the annual frequency of exceedance of some level of ground shaking at a particular site. In contrast, the main output of the BPN-based seismic hazard calculation is the probability distribution of the maximum ground motion intensity measurement observed during a certain time frame. The two output formats have a one-to-one relationship. The transformation from one to the other is introduced because these distributions are needed in the examples given in Chapter 5.
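The one-to-one relationship between the two output formats follows from a Poisson model of earthquake occurrences (assumed here for illustration; the non-Poisson recurrence models discussed later do not obey this simple relation): an annual exceedance rate for a given intensity level translates into the probability that the maximum intensity within a time frame exceeds that level. A short sketch:

```python
import numpy as np

def prob_exceedance(annual_rate, t_years):
    """P(intensity level exceeded at least once in t years), Poisson occurrences assumed."""
    return 1.0 - np.exp(-annual_rate * t_years)

def annual_rate(p_exceed, t_years):
    """Inverse transformation: annual exceedance rate from the exceedance probability."""
    return -np.log(1.0 - p_exceed) / t_years

# Example: the classical design level of 10% exceedance probability in 50 years
lam = annual_rate(0.10, 50.0)
print(f"rate = {lam:.5f} per year (return period of about {1/lam:.0f} years)")
print(f"check: P = {prob_exceedance(lam, 50.0):.3f}")
```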
4.1.1 A Bayesian probabilistic network for a generic seismic source

The application of the BPN approach for seismic hazard analysis is described using a generic line source as specified in Kramer (1996) (Figure 4.2a). Line sources are tectonic faults capable of producing earthquakes of different sizes. In cases where individual faults cannot be identified, the earthquake sources may be described by area zones. The application of the approach to area sources is analogous to the line source described here. To predict the ground shaking at a site, the distribution of distances from the earthquake epicenter to the site of interest is necessary. The seismic sources are defined by epicenters assumed to have equal probability. For a line source these equal-probability locations fall along a line; for a point source they coincide in a single point; in other cases area sources are postulated. Using the geometric characteristics of the source, the distribution of distances can easily be calculated for a chosen number of discrete states (Figure 4.2b). Gutenberg and Richter (1944) observed that the distribution of earthquake sizes in a region generally follows a distribution given by:

$\log \lambda_m = a - b\,m$ (4.1)

where λ_m is the rate of earthquakes with magnitude greater than m, and a and b are constants (Figure 4.2c). a and b are generally estimated using statistical analysis of historical observations; a indicates the overall rate of earthquakes in a region, and b the relative ratio of small to large magnitudes.


The Gutenberg-Richter recurrence law mentioned above is sometimes applied with a lower and an upper bound. The lower bound is represented by a minimum magnitude mmin below which earthquakes are ignored due to their lack of engineering importance. The upper bound is given by the maximum magnitude mmax that a given seismic source can produce. Denoting the lower and upper magnitudes of a bin by mL and mU respectively, Equation 4.2 can be used to compute the probability that the earthquake magnitude falls between these bounds (Figure 4.2d).
Figure 4.2: Specification of the generic line source and discrete probabilities of distance (a, b), Gutenberg-Richter recurrence law and discrete probabilities of magnitude (c, d), and a sample ground motion prediction equation and discrete probabilities for 50 states of epsilon (e, f).

$P(m_L \leq M \leq m_U \,|\, m_{min} \leq M \leq m_{max}) = \dfrac{\lambda_{m_L} - \lambda_{m_U}}{\lambda_{m_{min}} - \lambda_{m_{max}}}$ (4.2)
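As an illustration of Equations 4.1 and 4.2, the following MATLAB sketch discretizes a bounded Gutenberg-Richter law into magnitude bins; the a and b values and the bin edges are illustrative assumptions, not the parameters used later for Adapazari.

```matlab
% Bounded Gutenberg-Richter law: rate of events with magnitude >= m
a = 4.0;  b = 1.0;                  % illustrative recurrence parameters (assumed)
lambda = @(m) 10.^(a - b*m);        % Equation 4.1 in rate form

m_min = 4.2;  m_max = 7.3;          % engineering lower bound and source maximum
edges = linspace(m_min, m_max, 11); % 10 magnitude bins, as in the generic example

% Equation 4.2: probability of each bin given m_min <= M <= m_max
p_M = (lambda(edges(1:end-1)) - lambda(edges(2:end))) ./ ...
      (lambda(m_min) - lambda(m_max));

disp([edges(1:end-1)' edges(2:end)' p_M']);        % bin bounds and probabilities
fprintf('sum of bin probabilities: %.4f\n', sum(p_M));
```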

The ground shaking resulting from an earthquake at any site can be characterized by ground motion intensity measures, e.g. the maximum value of the acceleration time history, called the peak ground acceleration (PGA). Alternatively, the maximum value of the acceleration, velocity or displacement response of a single-degree-of-freedom system (SDOF) can be used as a ground motion intensity measure. The SDOF is represented by a mass and a stiffness, characterizing its fundamental period or frequency. The maximum value of the response of the SDOF is termed the spectral value at the fundamental period of that SDOF. Earthquakes in a source result in different ground motion intensity measures at the surface depending on the magnitude, source-to-site distance, faulting mechanism, near-surface site condition, etc. Ground motion prediction equations, also referred to as attenuation functions, are


generally developed using statistical analysis of measurements from past earthquakes (Figure 4.2e). As an example, the ground motion prediction equation proposed by Boore et al. (1997) is given:

$\ln Y = b_1 + b_2 (M - 6) + b_3 (M - 6)^2 + b_5 \ln \sqrt{r_{jb}^2 + h^2} + b_V \ln\!\left(\dfrac{V_S}{V_A}\right)$ (4.3)

where Y is the predicted mean of the ground motion intensity parameter, M is the moment magnitude, r_jb is the Joyner-Boore distance and V_S is the average shear wave velocity over the top 30 m. The coefficients b_1, b_2, b_3, b_5, h, b_V and V_A were determined by regression and are provided in tabulated form for certain fundamental periods. As there is significant scatter in the measured ground motion intensities, the standard deviation of the overall regression, σ_lnY, is also provided for each fundamental period (Figure 4.2e). The distribution of the ground motion intensity measure can then be calculated by adding a normalized residual (i.e., a factor times σ_lnY) to the predicted mean. This standard normal distributed factor is often denoted as epsilon, ε. A BPN for the generic line source is constructed by conditioning the ground motion intensity parameter (here, the spectral displacement (SD)) on magnitude M, distance R and epsilon ε. Equation 4.2 is used to calculate the probability distribution of magnitudes for 10 bins of equal width. Using simple geometric considerations, the distribution of the distance of the site to the generic line source is also calculated for 10 bins. The distribution of the standard normal distributed epsilon is discretized into 50 bins. The spectral displacement node is also discretized into 10 bins. For each combination of the 10 magnitudes, 10 distances and 50 epsilons, the corresponding spectral displacement value is calculated using the ground motion prediction equation (Boore et al., 1997). A vector with as many entries as there are spectral displacement bins receives an entry of unity in the bin into which the calculated ground motion intensity value falls. The 5000 such vectors (10 x 10 x 50) form the conditional probability table of the spectral displacement. Having constructed the structure of the BPN and the corresponding probability tables, the BPN can be evaluated to yield the marginal distribution of any parameter in the network. These probability distributions may then be used to calculate the joint distribution of all or any subset of the parameters in the BPN using simple statistical calculation schemes. In Figure 4.3 the BPN for the seismic hazard analysis of the generic line source is illustrated together with the marginal distribution of the spectral displacement. Once constructed, the BPN can be used to find the conditional distribution of any parameter, given knowledge of the state of any other parameter in the BPN. In Figure 4.3 the distribution of the spectral displacement is given for the situation that the magnitude is known to be 5.5 and the distance to be 60 km. The software code for the construction and evaluation of the BPN is given in Appendix C. The information about which earthquake scenarios are most likely to produce a specific level of ground motion intensity can be retrieved from a PSHA computation through a process known as deaggregation (McGuire, 1995). Using the constructed BPN and by instantiating, i.e. by assigning certainty to a state of a node, the conditional probabilities of the other nodes or the joint probability of any node combination can easily be retrieved. A sample deaggregation result for the magnitude-distance deaggregation, given a SD (T=0.5 s) between 4 mm and 5 mm, is given in Figure 4.4. To verify the constructed BPN, the same magnitude-distance deaggregation is computed using a traditional PSHA analysis procedure (Figure 4.4).
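The following MATLAB sketch indicates how such a conditional probability table can be assembled; the ground motion model is replaced here by a generic placeholder function gmpe_ln_sd with assumed coefficients, since the tabulated Boore et al. (1997) coefficients are not reproduced here, and the bin definitions are likewise illustrative.

```matlab
% Discretized parent nodes (representative values of the bins, assumed)
M    = linspace(4.35, 7.15, 10);                 % magnitude bin midpoints
R    = linspace(27, 87, 10);                     % distance bin midpoints [km]
epss = sqrt(2) * erfinv(2*((1:50) - 0.5)/50 - 1);% 50 equal-probability epsilon states

sd_edges = [0 1 2 3 4 5 6 10 40 70 100];         % SD bin edges [mm], as in Figure 4.3

% Placeholder ground motion model: mean and std of ln(SD in mm); illustrative only
gmpe_ln_sd = @(m, r) deal(1.5 + 1.2*(m - 6) - 1.1*log(r/10), 0.6);

cpt = zeros(numel(sd_edges) - 1, numel(M), numel(R), numel(epss));
for i = 1:numel(M)
  for j = 1:numel(R)
    [mu, sigma] = gmpe_ln_sd(M(i), R(j));
    for k = 1:numel(epss)
      sd  = exp(mu + epss(k)*sigma);             % SD realisation [mm]
      bin = find(sd >= sd_edges(1:end-1) & sd < sd_edges(2:end), 1);
      if isempty(bin), bin = numel(sd_edges) - 1; end   % clip to last bin
      cpt(bin, i, j, k) = 1;                     % deterministic child given its parents
    end
  end
end
```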


Figure 4.3: BPN for a generic seismic line source (a), discrete probabilities of SD (T=0.5 s) evaluated using the BPN for the probability distributions given in Figure 4.2 (b, c).

The small discrepancies in the probabilities arise from differences in the discretisation schemes between the traditional PSHA analysis and the BPN.
4.1.2 Incorporating correlation of ground motion intensity parameters

Reliability assessments that attempt to simultaneously consider structural and geotechnical failures are currently not practicable, because structural and geotechnical responses are generally predicted using different ground motion intensity measures, and the tools are not available for determining a probabilistic characterization of the joint occurrence of these parameters (Baker, 2007). Structural response (and structural failure) is often predicted using the elastic spectral displacement (SD) (Pinto et al., 2004). Liquefaction failure, on the other hand, is typically predicted using the peak ground acceleration (PGA) (Cetin et al., 2004; Youd et al., 2001). The correlation of the ground motion intensity parameters PGA and SD(T), which is used in developing the BPN models, is derived based on Baker (2007). First a large set of recorded ground motions is selected. For each recorded ground motion (a vector of PGA and spectral parameters for given fundamental periods, T), the corresponding ground motion intensity measures are calculated using ground motion prediction equations. For SD the ground motion prediction equation of Abrahamson and Silva (1997) and for PGA the ground motion prediction equation of Boore et al. (1997) are used.


Figure 4.4: Deaggregation by magnitude and distance for SD= 4.4mm using traditional PSHA (left), and for 4 mm<=SD<5 mm using BPN (right).

Once the mean and standard deviation of the intensity for a given ground motion are computed, a normalized residual, ε, indicating the number of standard deviations by which the given observation lies away from the mean prediction, can be evaluated:

$\varepsilon = \dfrac{\ln x - \mu_{\ln X}}{\sigma_{\ln X}}$ (4.4)

where x is the observed ground motion intensity (defined using one of the above parameters and here denoted as X), μ_lnX is the predicted mean value of the logarithm of that intensity (given magnitude, distance, etc.) and σ_lnX is the predicted standard deviation of the log intensity. These "epsilons" represent the record-to-record aleatory variability that is not captured by the ground motion prediction equation. This variability is explicitly considered in probabilistic assessments such as probabilistic seismic hazard analysis (PSHA). A given ground motion will have a different ε value for each ground motion intensity measure considered, and it is the correlation among these different ε values that must be considered if a seismic reliability analysis is to be performed (Baker and Cornell, 2006). Using this approach, empirical correlation coefficients are computed for the large database of ε values. The following piecewise linear equation provides a good fit to the observed values, as a function of the fundamental period of a single-degree-of-freedom system, T:

$\rho_{PGA,SD(T)} = \begin{cases} 0.500 - 0.127 \ln T & \text{if } 0.05 \leq T < 0.11 \\ 0.968 + 0.085 \ln T & \text{if } 0.11 \leq T < 0.25 \\ 0.568 - 0.204 \ln T & \text{if } 0.25 \leq T < 5.00 \end{cases}$ (4.5)


The correlation of the ground motion intensity parameters PGA and SD(T) is considered in the BPN by conditioning the epsilon values on each other. ε_PGA is discretized from the standard normal distribution into 10 states. ε_SD(T), on the other hand, follows a conditional normal distribution with a mean of ρ_PGA,SD(T) · ε_PGA and a standard deviation of √(1 − ρ²_PGA,SD(T)), and is also discretized into 10 states. Since the correlation coefficient depends on the fundamental period, the node ε_SD is also dependent on the fundamental period. The extended BPN is given in Figure 4.5 and a sample output in the form of a magnitude-distance deaggregation and a PGA-SD deaggregation is given in Figure 4.6.
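A minimal MATLAB sketch of this conditioning step is given below; the state discretization and the use of the conditional normal distribution follow the description above, while the chosen fundamental period is only an assumed example value.

```matlab
% Correlation coefficient between epsilon_PGA and epsilon_SD(T), Equation 4.5
T = 0.5;                                   % fundamental period [s] (assumed example)
if T < 0.11
  rho = 0.500 - 0.127*log(T);
elseif T < 0.25
  rho = 0.968 + 0.085*log(T);
else
  rho = 0.568 - 0.204*log(T);
end

% 10 equally spaced states for epsilon_PGA, bin edges for epsilon_SD
e_pga = linspace(-3.5, 3.5, 10);
e_sd_edges = linspace(-5, 5, 11);
Phi = @(z) 0.5*(1 + erf(z/sqrt(2)));       % standard normal CDF

% Conditional probability table: P(eps_SD in bin | eps_PGA state)
cpt = zeros(numel(e_sd_edges)-1, numel(e_pga));
for j = 1:numel(e_pga)
  mu  = rho * e_pga(j);                    % conditional mean
  sig = sqrt(1 - rho^2);                   % conditional standard deviation
  z   = (e_sd_edges - mu) / sig;
  cpt(:, j) = diff(Phi(z))';               % probability mass in each bin
  cpt(:, j) = cpt(:, j) / sum(cpt(:, j));  % renormalize for the truncated tails
end
```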

Figure 4.5: BPN considering correlation of ground motion intensity parameters PGA and SD.


Figure 4.6: Deaggregation by magnitude and distance using BPN for PGA=0.05g, SD(T=0.5s)=3mm (left) and contribution by PGA and SD for M=6.8, R=27km (right).


4.1.3 Incorporating model uncertainties

For one particular seismic hazard model (defined by specifying a source model, a recurrence model and a ground motion prediction model) the aleatory variability described by that model is systematically considered. But there is still uncertainty about the best choices for elements of the


seismic hazard model itself. This is now commonly addressed by combining the uncertainties about the various inputs in logic trees (Kulkarni et al., 1984; Coppersmith and Youngs, 1986; NRC, 1997). Each branch of a logic tree represents a set of chosen elements for a seismic hazard model. For each of the seismic hazard models the hazard calculations are performed and a single hazard curve representing ground motion versus annual frequency of exceedance is produced. The relative weighting of each hazard curve is then determined by multiplying the assigned weights in each of the branches. From this set of hazard curves a mean, a median and curves for different fractiles can be defined. The BPN for the seismic hazard model introduced before will now be extended to incorporate model uncertainties. For each of the elements producing branches in a logic tree, a node is introduced into the network and the required dependencies with the existing nodes are set using additional arrows. The simple logic tree shown in Figure 4.7 allows uncertainty in the selection of models for the ground motion prediction equation and the maximum magnitude to be considered. The ground motion prediction equations of Boore et al. (1997) and Abrahamson and Silva (1997) are considered. For illustrative purposes, weights of 0.7 and 0.3 are assigned to these, respectively. At the other level of nodes, again for illustrative purposes, weights of 0.4 and 0.6 are assigned to the maximum magnitudes which the single line source is capable of producing. A sample output of the BPN in Figure 4.7 is given in Figure 4.8.
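The sketch below indicates, under assumed numbers, how the logic-tree weights enter the BPN as prior probabilities of the model nodes and how the unconditional hazard follows by marginalization; the conditional distributions P(SD | model combination) are placeholder values, not results from Figure 4.8.

```matlab
% Prior probabilities of the model nodes (logic-tree weights from Figure 4.7)
w_gmpe = [0.7 0.3];               % Boore et al. (1997), Abrahamson & Silva (1997)
w_mmax = [0.4 0.6];               % Mw = 7.3, Mw = 7.7

% Placeholder conditional distributions P(SD bin | GMPE i, Mmax j), three SD bins
p_sd = zeros(3, 2, 2);
p_sd(:,1,1) = [0.70 0.25 0.05];   p_sd(:,1,2) = [0.65 0.27 0.08];
p_sd(:,2,1) = [0.75 0.20 0.05];   p_sd(:,2,2) = [0.68 0.24 0.08];

% Marginal distribution of SD, summing over the model nodes
p_marg = zeros(3, 1);
for i = 1:2
  for j = 1:2
    p_marg = p_marg + w_gmpe(i) * w_mmax(j) * p_sd(:, i, j);
  end
end
disp(p_marg');                    % unconditional P(SD bin) with branch weights applied
```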

Figure 4.7: Logic tree and corresponding BPN considering modeling uncertainties.

4.1.4 Incorporating time-dependent seismic hazard

Earthquake occurrences are stochastic in nature, both in time and space. Small and medium magnitude earthquakes may occur independently, implying a Poisson model. Large magnitude earthquakes on a particular fault segment, however, should not be independent from each other according to the elastic rebound theory (Reid, 1911). As earthquakes occur in order to release the stress accumulated in a fault, the occurrence of a large earthquake should reduce the chances of the occurrence of a following independent large earthquake in the same source. Paleoseismic studies on fault slip data led to the characteristic earthquake recurrence model (Kramer, 1996). Seismic sources tend to regularly generate earthquakes of similar sizes near the maximum magnitude, known as characteristic earthquakes. This tendency is not observed for smaller earthquakes, which occur more or less randomly. Hence, earthquakes are classified into two


Figure 4.8: Contribution by ground motion prediction equation and maximum magnitude choice for PGA=0.15g, SD=6mm (left) and for PGA=0.25g, SD=5mm (right).

groups: small/medium size earthquakes and characteristic earthquakes. For small and medium size earthquakes a time-independent recurrence model is assumed, and for characteristic earthquakes a time-dependent recurrence model. A review of non-Poisson models is presented in Anagnos and Kiremidjian (1988). The application of BPNs for non-Poisson recurrence models is illustrated using the Brownian Passage Time (BPT) model developed by Matthews et al. (2002) and used in Takahashi et al. (2004). The occurrence of earthquakes of magnitude m constitutes a renewal process with fT(t, m) denoting the probability density function (PDF) of the interarrival times, t. Such a process becomes a Poisson process if fT(t, m) is assumed to be an exponential distribution. For all non-characteristic earthquakes in the source a Poisson model and for characteristic earthquakes a more generalized renewal model is assumed. Setting the origin of time to the most recent occurrence of the characteristic earthquake and denoting the waiting time to the n-th occurrence of the characteristic event by Wn, the conditional PDF of the waiting time to the n-th characteristic event, given that no characteristic earthquake has happened before the start of the operating time of the structures of interest (t = t0), is denoted by fWn(t, m | W1 > t0). In cases where the analysis is performed for structures with a life span much shorter than the mean interarrival time of the characteristic earthquake, it is reasonable to consider the contribution to seismic activity only from the first characteristic earthquake. As the results of this section will be used in Chapter 5 on earthquake risk for cities with residential buildings, the contributions of the 2nd and subsequent events to the rate of activity are neglected (Takahashi et al., 2004). A special case of the renewal process for which the interarrival times are exponentially distributed is the Poisson process. The interarrival times for the non-characteristic earthquakes are modeled by the exponential distribution:

$f_T(t, m) = \lambda(m) \exp[-\lambda(m)\,t]$ (4.6)


where λ(m) is the constant mean occurrence rate of earthquakes greater than m. For characteristic earthquakes, the interarrival times are modeled by the Brownian Passage Time (BPT) distribution. This model requires two parameters: the mean recurrence interval (μ) and the aperiodicity (α) of the mean period between events (coefficient of variation):

$f_T(t) = \sqrt{\dfrac{\mu}{2\pi\alpha^2 t^3}}\; \exp\!\left[-\dfrac{(t-\mu)^2}{2\mu\alpha^2 t}\right]$ (4.7)

Since the Poisson process is memoryless, the conditional distribution of the waiting time to the first event, given that no event has occurred prior to the operating time of the structure, remains exponential with the time origin shifted to t0:

$f_{W_1}(t, m \,|\, W_1 > t_0) = \lambda(m) \exp[-\lambda(m)(t - t_0)]$ (4.8)

The conditional PDF for the first characteristic earthquake, given that no characteristic earthquake has occurred prior to the operating time of the structure, t0, is given by:

$f_{W_1}(t, m_{char} \,|\, W_1 > t_0) = \dfrac{f_{W_1}(t, m_{char})}{1 - \int_0^{t_0} f_{W_1}(\tau, m_{char})\, d\tau}$ (4.9)

For the generic line source a mean recurrence time of 100 years and an aperiodicity of 0.5 are assumed. It is further assumed that the last characteristic earthquake occurred 10 years ago. A period of 50 years from now is considered, as this is the residential building lifespan. Figure 4.9 illustrates the rate of occurrence of the next characteristic earthquake. This time-varying rate is then combined with the Gutenberg-Richter recurrence relation (Figure 4.9, right). For each year of the building lifespan, the rate of occurrence of the characteristic earthquake (λ_mchar) is equal to the time-varying rate, whereas the rate of smaller and medium size earthquakes is constant (following the Poisson model). The distribution of magnitudes is then computed using Equations 4.10 and 4.11 for each of the 50 years. The mean occurrence rate of the characteristic earthquake for each year of the lifespan of the building is read from the conditional BPT distribution and assigned to the magnitude-rate of exceedance curve. Equation 4.10 can be used to compute the probability of the magnitudes other than the characteristic earthquake for each year.

$P(m_L \leq M \leq m_U \,|\, m_{min} \leq M \leq m_{max} \vee M = m_{char}) = \dfrac{\lambda_{m_L} - \lambda_{m_U}}{\lambda_{m_{min}} - \lambda_{m_{max}} + \lambda_{m_{char}}}$ (4.10)

The probability of a characteristic earthquake occurring in that year, given that there is an earthquake larger than a minimum magnitude, can be computed using Equation 4.11.

$P(M = m_{char}) = \dfrac{\lambda_{m_{char}}}{\lambda_{m_{min}} - \lambda_{m_{max}} + \lambda_{m_{char}}}$ (4.11)
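A compact MATLAB sketch of this yearly updating is given below; it evaluates the BPT density of Equation 4.7, conditions it on survival up to t0 (Equation 4.9), and combines the resulting yearly characteristic occurrence probability with assumed Gutenberg-Richter rates via Equations 4.10 and 4.11. The recurrence parameters follow the generic example, while the a and b values are illustrative assumptions.

```matlab
mu = 100;  alpha = 0.5;  t0 = 10;            % BPT parameters and years since last event
f_bpt = @(t) sqrt(mu./(2*pi*alpha^2*t.^3)) .* exp(-(t - mu).^2 ./ (2*mu*alpha^2*t));

% Equation 4.9: condition on no characteristic event during the first t0 years
F_t0   = integral(f_bpt, 0, t0);
f_cond = @(t) f_bpt(t) / (1 - F_t0);

% Gutenberg-Richter rates for the non-characteristic magnitude bins (assumed a, b)
a = 4.0;  b = 1.0;  lambda = @(m) 10.^(a - b*m);
m_min = 5.0;  m_max = 7.25;
edges = [5.0 5.5 6.0 6.5 7.0 7.25];          % five non-characteristic bins

for year = 1:50                              % building lifespan
  t = t0 + year;                             % time since last characteristic event
  lam_char = integral(f_cond, t - 1, t);     % characteristic occurrence rate in this year
  denom  = lambda(m_min) - lambda(m_max) + lam_char;
  p_bins = (lambda(edges(1:end-1)) - lambda(edges(2:end))) / denom;   % Eq. 4.10
  p_char = lam_char / denom;                                          % Eq. 4.11
  if year == 1 || year == 50
    fprintf('year %2d: P(char) = %.3f, total = %.3f\n', year, p_char, sum(p_bins) + p_char);
  end
end
```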

For each year of the lifespan of the building a BPN is constructed (Figure 4.10). The probability distribution of the magnitude for that year is assigned to the magnitude node and the distributions of PGA and SD are calculated using the BPNs.


Figure 4.9: Mean occurrence rates using the BPT model for mchar = 7.3 and extended Gutenberg-Richter equation considering characteristic earthquakes.

In Figure 4.11, sample results of the BPNs for t0 = 10 years and t0 = 50 years, and for comparison for a Poisson process for all magnitudes, are given. Only minor changes in the probabilities can be observed for the lower PGA and SD values in the renewal model, as these are caused by the (Poissonian) small magnitude events. In contrast, the probabilities for higher PGA and SD values change more over time as the probability of occurrence of a characteristic earthquake changes. Here, only the methodological issues are discussed and the application is illustrated on a generic example. In the following section an application to a real case is presented, and in Chapter 5 the influence of considering the renewal process on a risk management problem is discussed.

Figure 4.10: BPN considering epistemic uncertainty.

4.1.5 PSHA using BPN for Adapazari, Turkey

The application of PSHA using BPN to a real case is considered for the city of Adapazari. This region in northwestern Turkey has been the site of many severe earthquakes. In Chapter 5 the seismic risk for the city is considered. The city includes the region most affected during the Kocaeli Mw 7.4 earthquake as well as areas that liquefied during the same event (DRM, 2004).


Figure 4.11: Discrete probabilities of PGA and SD evaluated using the BPN in Figure 4.10 assuming a Poisson process for all magnitudes (a, b), assuming a more generalized renewal model for the characteristic earthquake, results given for t0 = 10 years (c, d) and for t0 = 50 years (e, f).

Hence, the output of this section in the form of probability distributions of peak ground acceleration (for liquefaction analysis) and spectral displacement (for structural response analysis) will be used as input for the earthquake risk studies. In Figure 4.12 the northwestern part of Turkey with the city of Adapazari (in red) is illustrated. The spatial distribution of the seismicity, using the earthquake catalogue of the International Seismological Centre (ISC), is given in the top-left figure. The fault segmentation model for the region by Erdik et al. (2004), shown in Figure 4.12 top-right, is used for the characteristic earthquake recurrence model. The characteristic earthquake parameters associated with the segments are given in Table 4.1. For the non-characteristic magnitudes the zonation model proposed by Atakan et al. (2002) is used. There, the earthquake sources are based on a gross zonation taking into account the entire North Anatolian fault zone as a single zone. West of Adapazari the zone is divided into a northern and a southern strand following the general trend of the fault system (Figure 4.12 bottom-left). The relevant source parameters with the areas are given in Table 4.2. Thereby, the regional rate of earthquake activity (the second a value in Table 4.2) is calculated by relating the area of the source lying within the considered 100 km radius to the total area of that source. In Figure 4.12 bottom-right the two zonation models are combined in order to use the fault segmentation model for the characteristic earthquakes and the area sources for the non-characteristic earthquakes. Earthquakes within 100 km of the city center are included in the analysis. The BPN in Figure 4.5 is applied to the example application. The earthquakes are classified into six states according to their magnitudes, 4.75 ≤ Mw < 5.25, 5.25 ≤ Mw < 5.75, 5.75 ≤ Mw < 6.25, 6.25 ≤ Mw < 6.75, 6.75 ≤ Mw < 7.25 and 7.25 ≤ Mw < 7.75, with representative values Mw = 5, 5.5, 6, 6.5, 7 and 7.5. The magnitude range 7.25 ≤ Mw < 7.75


Figure 4.12: Spatial distribution of the seismicity in the region (Atakan et al., 2002) (top-left), seismic zonation models proposed by Erdik et al. (2004) (top-right) and Atakan et al. (2002) (bottom-left), hybrid zonation with the considered area around Adapazari (bottom-right).

Table 4.1: Characteristic earthquake parameters associated with the segments (Erdik et al., 2004).

Segment | Last characteristic EQ | COV | Mean recurrence time [years] | Characteristic magnitude | Time since last characteristic EQ [years]
S1  | 1999 | 0.5 | 140 | 7.2 | 9
S2  | 1999 | 0.5 | 140 | 7.2 | 9
S3  | 1999 | 0.5 | 140 | 7.2 | 9
S4  | 1999 | 0.5 | 140 | 7.2 | 9
S12 | 1967 | 0.5 | 250 | 7.2 | 41
S13 | -    | 0.5 | 600 | 7.2 | 1000
S14 | -    | 0.5 | 600 | 7.2 | 1000
S21 | 1999 | 0.5 | 250 | 7.2 | 9
S22 | 1957 | 0.5 | 250 | 7.2 | 51

is assumed to represent the characteristic earthquakes. The occurrence of events belonging to the first five states is modeled as Poisson events, while the occurrence of characteristic earthquakes classified into the last state is modeled by a non-Poisson renewal model. The probability distributions of the magnitudes are calculated for each year as described in the preceding section. The earthquake distance node is discretized into five states: R = 10 km, 30 km, 50 km, 70 km and 90 km. Simple geometrical considerations, as illustrated above, are used to calculate the probability distributions of the earthquake distance, R.


Table 4.2: Parameters of the area sources.

Segment | Area source | a (source) | a (within 100 km) | b | Characteristic/Max. magnitude
S1  | A1 | 2.14 | 0.98 | 1.12 | 7.2
S2  | A1 | 2.14 | 1.28 | 1.12 | 7.2
S3  | A1 | 2.14 | 1.55 | 1.12 | 7.2
S4  | A1 | 2.14 | 1.49 | 1.12 | 7.2
S12 | A1 | 2.14 | 1.31 | 1.12 | 7.2
S13 | A2 | 2.85 | 2.48 | 1.00 | 7.2
S14 | A2 | 2.85 | 2.61 | 1.00 | 7.2
S21 | A1 | 2.14 | 0.92 | 1.12 | 7.2
S22 | A1 | 2.14 | 1.15 | 1.12 | 7.2
-   | Background | 0.47 | 0.47 | 1.00 | 5.5

The node of the standard normal distributed parameter ε_PGA is discretized into 10 equally spaced states between -3.5 and 3.5. The correlation of the ground motion intensity parameters PGA and SD is considered in the BPN by conditioning ε_SD on ε_PGA. The correlation coefficient is calculated using Equation 4.5 for T = 0.64 s, which is the fundamental period of the 5-story buildings considered in Chapter 5, where the results of these analyses are used. ε_SD follows a conditional normal distribution with a mean of ρ·ε_PGA and a standard deviation of √(1 − ρ²), and is also discretized into 10 states. Evaluating each state of ε_PGA from -3.5 to 3.5 with the corresponding correlation coefficient results in a probability distribution of ε_SD which is flatter than that of ε_PGA. The range of values for the 10 equally spaced discrete states of the node ε_SD is hence taken from -5 to 5. For each of the nine segments in the zonation model and each of the 50 years a BPN as given in Figure 4.5 is constructed. The probability tables of the five nodes other than the magnitude node are constructed with the specification of the segments. For the magnitude node the probability tables are calculated for each year according to Equations 4.10 and 4.11 (see Figure 4.9). Thus 450 BPNs are constructed, which yield a marginal distribution of PGA and SD for each segment and each year through evaluation of the corresponding BPN. In Figures 4.14 and 4.15 sample results for each segment for the years 2018, 2038 and 2058 are given. The distributions of PGA and SD for each segment are also given for the case where the occurrence of all magnitudes is modeled as Poisson events. As the distributions of PGA and SD are calculated conditional on at least one earthquake larger than Mw = 5 occurring, the final results will have to be multiplied by the rate of exceeding Mw = 5 when the output is used in further analyses.
Calculation scheme for seismic hazard model

The discrete probabilities of PGA and SD (Figures 4.14 and 4.15) are calculated using the scheme given in Figure 4.13. The hazard contributions from the individual seismic line sources at the site are assumed to be independent, i.e. rupturing in cascades (one line source immediately after the other), which would produce larger rupture areas, is disregarded.


The BPN in Figure 4.5 is constructed within the software HUGIN. Only the nodes and arrows need to be specified at this step. The number of discrete states in the nodes and the probability tables are controlled in MATLAB. The main file BPN_PSHA_Adapazari.m calculates the probability distributions of PGA and SD for each seismic source as illustrated in Figures 4.14 and 4.15. The file EPS_PGA_5.m discretizes the standard normal distributed parameter ε_PGA. The correlation coefficient between PGA and SD is calculated with the file SD_PGA_cor.m. Given the correlation coefficient, which depends on the fundamental period of the structures considered in the city, the file EPS_SD_5.m discretizes the node ε_SD. For both magnitude-recurrence relationships considered, the time-independent (Poisson) and the time-dependent (non-Poisson) cases, the files EQ_M_5.m and EQ_M_NonPoisson_5.m calculate the discrete probabilities for the states of the node Magnitude. The probability distribution of the distances from the seismic source to the site of interest is calculated in the file Line_EQ_R_5.m. Files calculating the probability distribution of the distances from area and point sources may also be incorporated. For any combination of magnitude, distance and epsilon, given the seismic source, the ground motions at the site are calculated in the files PGA_5.m and SD_5.m. The conditional probability tables of PGA and SD are then specified in the main file by classification of each of these ground motions into the discrete states of the nodes PGA and SD. All files are given in Annex C. The main file evaluates the given BPN with the assigned conditional and unconditional probability tables. The inference engine of HUGIN, which is called from the MATLAB environment, evaluates the BPN. The probability distributions for the time-independent and the time-dependent cases are illustrated in Figures 4.14 and 4.15.


Figure 4.13: Calculation scheme for the BPN in Figure 4.5.



Figure 4.14: For each seismic source the mean occurrence rates using the BPT model for the characteristic earthquakes (first block), discrete probabilities of PGA evaluated using the BPN in Figure 4.5 assuming a Poisson process for all magnitudes (second block), assuming a more generalized renewal model for the characteristic earthquake, results given for T=10 years (third block), for T=30 years (fourth block) and for T=50 years (fifth block).


Figure 4.15: For each seismic source the mean occurrence rates using the BPT model for the characteristic earthquakes (first block), discrete probabilities of SD evaluated using the BPN in Figure 4.5 assuming a Poisson process for all magnitudes (second block), assuming a more generalized renewal model for the characteristic earthquake, results given for T=10 years (third block), for T=30 years (fourth block) and for T=50 years (fifth block).


4.2 Soil failure


Earthquakes generally cause damage to buildings through ground shaking. They may also cause damage locally through ground deformations to pipelines, sewers and buildings. Ground failures range from slope failures in mountains to ground cracking in alluvium-filled valleys (Holzer et al., 1999). In Section 4.1 the probabilities of occurrence of ground shaking intensity parameters within a given time period were estimated. In doing so, the influence of the soil conditions was taken into account within the ground motion prediction equations. However, since extensive liquefaction problems occurred during past earthquakes in the city considered for the examples in this dissertation, the soil failure mechanism liquefaction is explicitly considered. During an earthquake, loosely packed water-saturated sediments near the ground surface may lose their strength and stiffness as a result of pore water pressure increase. This phenomenon, known as liquefaction, may cause serious damage to the built environment, as experienced during the earthquakes in Niigata (1964), Loma Prieta (1989) and Kocaeli (1999) (Geoengineer, 2006). Soil liquefaction may be quantified using deterministic or probabilistic techniques based either on laboratory tests or on empirical correlations of in-situ index tests with field case performance data. The deterministic empirical correlation using the Standard Penetration Test (SPT) blow count proposed by Seed and Idriss (1971) is widely used in practice to evaluate the potential for soil liquefaction. Here, an empirical liquefaction criterion proposed by Cetin et al. (2004) is used. The limit state function is expressed as:

$g(N_{1,60}, CSR_{eq}, M_w, FC, \sigma'_v, \varepsilon_L) = N_{1,60}(1 + 0.004\,FC) - 13.32 \ln CSR_{eq} - 29.53 \ln M_w - 3.70 \ln\!\left(\dfrac{\sigma'_v}{P_a}\right) + 0.05\,FC + 16.85 + \varepsilon_L$ (4.12)

where N1,60 is the SPT blow count, CSR_eq is the equivalent cyclic stress ratio, Mw is the moment magnitude of the earthquake, FC is the fines content, σ'_v is the effective vertical stress, Pa is the standard atmospheric pressure in the same units as σ'_v and ε_L is a random variable representing model uncertainty. CSR_eq is calculated as:

$CSR_{eq} = 0.65\, \dfrac{PGA}{g}\, \dfrac{\sigma_v}{\sigma'_v}\, r_d$ (4.13)

where g is the acceleration of gravity, σ_v is the total vertical stress, and r_d is the nonlinear shear mass participation factor. For r_d, the model of Cetin et al. (2004) is adopted:

$r_d = \dfrac{1 + \dfrac{-23.013 - 2.949\,PGA + 0.999\,M_w + 0.0525\,V_{s,12m}}{16.258 + 0.201\,\exp[0.341(-d + 0.0785\,V_{s,12m} + 7.586)]}}{1 + \dfrac{-23.013 - 2.949\,PGA + 0.999\,M_w + 0.0525\,V_{s,12m}}{16.258 + 0.201\,\exp[0.341(0.0785\,V_{s,12m} + 7.586)]}} + \varepsilon_{r_d}$ (4.14)

$\sigma_{\varepsilon_{r_d}} = \begin{cases} 0.0198\, d^{0.85} & \text{if } d \leq 12\,\text{m} \\ 0.0198 \cdot 12^{0.85} & \text{if } d > 12\,\text{m} \end{cases}$ (4.15)



where Vs,12m is the representative shear wave velocity over the top 12 m of soil at the site, assumed to be deterministically defined as 150 m/s, ε_rd is a Gaussian random variable representing model error and d is the depth of the critical liquefaction susceptible layer. The soil layer with the lowest SPT blow count is assumed to be the critical liquefaction susceptible layer for the entire site. The soil profile may be considered as a series system, and liquefaction occurrence in one layer may cause liquefaction at the site. Cetin et al. (2004) specify the following distributions for the model errors: ε_L has a Gaussian distribution with a mean of zero and a standard deviation of 2.7, and ε_rd is Gaussian with zero mean and standard deviation equal to σ_εrd. The parameters required for the evaluation of liquefaction triggering are thus the SPT blow count Nm, the fines content FC, the soil classification according to the Unified Soil Classification System (USCS), the ground water table GW and the depth of the critical layer d. The last three of these five parameters are required for calculating the total and effective stress within the layer. The joint probability distribution of magnitude (M) and peak ground acceleration (PGA) is obtained from the results of the seismic hazard model; M and PGA are assumed to be perfectly correlated over the city (i.e. one value is assumed throughout the city for M and PGA).
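The following MATLAB sketch illustrates how the probability of liquefaction triggering at a single location can be evaluated from Equations 4.12 to 4.15 by crude Monte Carlo simulation over the two model error terms; the soil parameter values, unit weights, the simple overburden correction and the (M, PGA) pair are assumed example inputs, not values from the Adapazari database.

```matlab
% Assumed example inputs for one location
Mw = 7.5;  PGA = 0.3;                  % magnitude and peak ground acceleration [g]
Nm = 8;  FC = 20;  d = 4;  GW = 1;     % SPT blow count, fines content [%], depths [m]
gamma_soil = 18;  gamma_w = 9.81;      % unit weights [kN/m^3] (assumed)
Pa = 100;  Vs12 = 150;                 % atmospheric pressure [kPa], shear wave velocity [m/s]

sigma_v  = gamma_soil * d;                          % total vertical stress [kPa]
sigma_ve = sigma_v - gamma_w * max(d - GW, 0);      % effective vertical stress [kPa]
N160 = Nm * sqrt(Pa / sigma_ve);                    % simple overburden correction (assumed)

% Model errors of Cetin et al. (2004)
sig_rd = 0.0198 * min(d, 12)^0.85;                  % Equation 4.15
n = 1e5;
eps_rd = sig_rd * randn(n, 1);
eps_L  = 2.7   * randn(n, 1);

A  = -23.013 - 2.949*PGA + 0.999*Mw + 0.0525*Vs12;
rd = (1 + A./(16.258 + 0.201*exp(0.341*(-d + 0.0785*Vs12 + 7.586)))) ./ ...
     (1 + A./(16.258 + 0.201*exp(0.341*(     0.0785*Vs12 + 7.586)))) + eps_rd;   % Eq. 4.14

CSR  = 0.65 * PGA * sigma_v ./ sigma_ve .* rd;                                    % Eq. 4.13
g_ls = N160*(1 + 0.004*FC) - 13.32*log(CSR) - 29.53*log(Mw) ...
       - 3.70*log(sigma_ve/Pa) + 0.05*FC + 16.85 + eps_L;                         % Eq. 4.12

p_liq = mean(g_ls <= 0);
fprintf('Probability of liquefaction triggering: %.3f\n', p_liq);
```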

The evaluation of the probability of liquefaction triggering for a single point thus requires the evaluation of Equations 4.12 through 4.15. The soil database for the region of Adapazari consists of 312 borings (DRM, 2004). Not all borings provide data on all of the five parameters (see Appendix B). For 312 of the borings the SPT blow count of the critical layer is available. The depth of the critical layer is available for 312, the ground water table for 254, the USCS classification for 120 and the fines content for 51 of these borings. For those locations where all of the required parameters are available, the probability of liquefaction may be calculated. In risk assessment and management studies, estimates are also required for locations in the city where no data is available. There are several ways of estimating unknown values at specific locations, which are briefly discussed below (Isaaks and Srivastava, 1989).

Polygonal method

In the polygonal method the sample value that is closest to the point to be estimated is assigned as the unknown value. The polygonal estimator can be regarded as a weighted linear combination that gives all the weight to the closest sample value (Isaaks and Srivastava, 1989).

Triangulation

The polygonal method may result in sharp discontinuities, as with changing distance a different sample with a different value may become the closest sample. Discontinuities in the estimated values are generally not desirable. Real values may show discontinuities, but the discontinuities mentioned arise from the procedure itself and have nothing to do with reality. The triangulation method overcomes this problem by fitting a plane through the three closest samples and assigning the unknown value of the location by substituting the appropriate coordinates.

Local sample mean

The polygonal method uses only the nearest sample and the triangulation method the closest three samples to the location to be estimated. The local sample mean method considers all nearby samples by assigning equal weight to them. This estimate is heavily influenced by extreme values and hence may result in much higher estimates than the aforementioned two methods.
Inverse distance methods

The local sample mean method naively gives equal weight to all nearby samples, disregarding their distance to the point to be estimated. An intuitive alternative, which also follows the first law of geography ("near things are more closely related than distant things"), assigns to each sample a weight inversely proportional to its distance from the point to be estimated. A general formulation of the estimator for the unknown point can be written as:

$\hat{x} = \dfrac{\sum_{i=1}^{n} \dfrac{x_i}{d_i^{\,p}}}{\sum_{i=1}^{n} \dfrac{1}{d_i^{\,p}}}$ (4.16)

where di are the distances from each of the n sample locations to the point being estimated, xi are the sample values and p is a power factor. With decreasing p the weights assigned to the samples become more uniform. The local sample mean is the extreme case where p approaches 0, i.e. equal weight is assigned to all nearby samples. The polygonal method is the other extreme, where p approaches infinity, i.e. all the weight is assigned to the closest sample. Generally p is chosen as 2 due to efficiency in the calculations (Isaaks and Srivastava, 1989). The appropriateness of these deterministic methods for estimating values at several locations at the same time can be judged based on a number of different criteria: the similarity of the distribution of the estimates to the distribution of the true values, the proximity of their means or medians, and the proximity of the variability of the estimates to the true variability. The choice of criteria depends on the application. Different methods may be more suitable for different estimation criteria.
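A minimal MATLAB implementation of the inverse distance estimator of Equation 4.16 is sketched below; the function name and the sample data in the usage comment are arbitrary illustrative choices.

```matlab
function x_hat = idw_estimate(xy_samples, values, xy_target, p)
% Inverse distance weighted estimate (Equation 4.16) at one target location.
% xy_samples: n-by-2 sample coordinates, values: n-by-1 sample values,
% xy_target: 1-by-2 target coordinates, p: power factor (commonly 2).
  d = sqrt(sum((xy_samples - xy_target).^2, 2));   % distances to the target
  if any(d == 0)                                   % target coincides with a sample
    x_hat = mean(values(d == 0));
    return;
  end
  w = 1 ./ d.^p;                                   % inverse distance weights
  x_hat = sum(w .* values) / sum(w);
end

% Illustrative usage (arbitrary data):
%   xy = [0 0; 100 0; 0 100];  v = [10; 14; 8];
%   idw_estimate(xy, v, [30 40], 2)
```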
Kriging

Kriging is a geostatistical approach to modeling. The theory behind kriging was developed by Matheron (1963) based on the work of Krige (1951). Instead of weighting nearby samples by factors depending only on distance, kriging relies on the spatial correlation structure of the data to determine the weighting factors. Kriging aims to minimize the standard deviation of the difference between the estimated and the true value. Different types of kriging exist. Simple kriging uses the assumption of a known and constant mean value for the parameter to be estimated. In ordinary kriging the mean value is assumed to be constant and unknown. Universal kriging assumes a linearly varying mean value. Indicator kriging uses indicator functions instead of the process itself. The spatial correlation can be described by variograms, covariograms or correlograms (Akin and Siemens, 1988). In principle the same information is given in all three representations. The


means most commonly used to represent spatial dependency in geostatistics is the variogram. The variogram is equal to the variance of the increment in data points separated by a distance h. The notation "semivariogram", denoted by γ(h), is also adopted, where one half of the variance of the increment is used:

$\gamma(h) = \dfrac{1}{2}\, \mathrm{var}[Z(u) - Z(u + h)]$ (4.17)

where Z(u) is the distribution of the random variable at location u. The vector distance h accounts for both length and direction. For each distance and direction, pairs of samples are formed and half of the variance of the differences in the data pairs is calculated. Repeating this for multiple distances and directions provides data points which, when plotted in a semivariance versus distance plot, yield the empirical semivariogram. This empirical semivariogram can be used to assign an analytical formulation of the semivariogram for computational convenience. The covariogram represents the expected value of the covariance given the vector distance h:

$C(h) = E\big[(Z(u) - E[Z(u)])(Z(u + h) - E[Z(u + h)])\big]$ (4.18)

With second-order stationarity the relation between the covariogram and the semivariogram is:

$\gamma(h) = \gamma(\infty) - C(h) = C(0) - C(h)$ (4.19)

A covariogram normalized by the variances of Z(u) and Z(u + h) is referred to as a correlogram. The empirical semivariograms for the parameters Nm, FC, GW and d are estimated based on the data given in Appendix B. Figure 4.16 illustrates the empirical semivariogram for Nm. Here, a semivariogram function referred to as an exponential model was chosen for Nm:

$\gamma(h) = C \left(1 - \exp\!\left(-\dfrac{3h}{a}\right)\right)$ (4.20)

where C is the sill and a is the range of the semivariogram. The sill is the semivariance value at which the variogram levels off. The range is the distance at which the semivariogram reaches the sill value; beyond the range, correlation is assumed to vanish. The empirical semivariogram for Nm is modeled by an exponential model with a sill value of 0.85 and a range of 5000 m; this exponential model is also plotted in Figure 4.16. The spatial characteristics of the other parameters FC, GW and d are analogously modeled by an exponential semivariogram with a sill of 0.9 and a range of 5000 m.
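The sketch below indicates how an empirical semivariogram can be computed from scattered borehole data and compared with the exponential model of Equation 4.20; the coordinates and values are synthetic placeholders, while the sill and range are the values quoted above for Nm.

```matlab
% Synthetic sample data standing in for the normalized SPT blow counts
rng(1);
n  = 200;
xy = 6000 * rand(n, 2);                   % sample coordinates [m]
z  = randn(n, 1);                         % normalized (zero-mean, unit-variance) values

% Pairwise separation distances
D = sqrt((xy(:,1) - xy(:,1)').^2 + (xy(:,2) - xy(:,2)').^2);

% Empirical semivariogram: half the mean squared difference per distance bin
lag_edges = 0:250:6000;
nlag = numel(lag_edges) - 1;
gamma_emp = nan(nlag, 1);
for k = 1:nlag
  [i, j] = find(D > lag_edges(k) & D <= lag_edges(k+1));
  keep = i < j;                           % count each pair only once
  if any(keep)
    gamma_emp(k) = 0.5 * mean((z(i(keep)) - z(j(keep))).^2);
  end
end

% Exponential semivariogram model, Equation 4.20, with sill 0.85 and range 5000 m
C = 0.85;  a = 5000;
h = 0:50:6000;
gamma_mod = C * (1 - exp(-3*h/a));

plot((lag_edges(1:end-1) + lag_edges(2:end))/2, gamma_emp, 'x', h, gamma_mod, '-');
xlabel('Separation distance [m]');  ylabel('Semivariance');
```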
Stochastic simulation

In the interpolation algorithms briefly described above, the goal is to provide a best estimate of the variable without specific regard to the resulting spatial characteristics of the estimates taken together. For instance, in kriging a set of local values is provided with emphasis on local accuracy. Furthermore, only an incomplete measure of this local accuracy is provided, as joint accuracy is not considered when several locations are under consideration.


Figure 4.16: Empirical semivariogram (crosses) with an exponential model (line) for Nm .

In stochastic simulation the missing soil variables are simulated, accounting for the uncertainty of the variables, for their spatial dependency and for the observed values obtained in the borings. In the region of Adapazari, Turkey, the soil properties are required for each point of a grid of 100 x 150 elements with a cell size of 100 m x 100 m. Using a sequential Gaussian simulation approach (Deutsch and Journel, 1997) a set of random fields is constructed. Using the semivariograms of Nm, FC, GW and d, random fields of these parameters are modeled. All of these random fields are used in a crude Monte Carlo scheme to calculate the probability of liquefaction for each grid point in the region using Equations 4.12 to 4.15. Details on this approach are given in Baker and Faber (2008). An open source software code for the sequential Gaussian simulation procedure is available as part of the geostatistical software library GSLIB (Deutsch and Journel, 1997). For each of the (M, PGA) combinations the probability of liquefaction is calculated for each grid point. The probability of liquefaction for each grid point is implemented in the GIS platform, from where the corresponding probabilities are incorporated into the BPN for the soil response model (Figure 4.18).
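The following MATLAB sketch simulates an unconditional, spatially correlated standard Gaussian field on a coarse grid by Cholesky factorization of the exponential covariance implied by Equation 4.20; it is a simplified stand-in for the sequential Gaussian simulation performed with GSLIB, and the grid size is reduced here purely to keep the covariance matrix small.

```matlab
% Coarse grid (a reduced stand-in for the 100 x 150 grid with 100 m cells)
nx = 30;  ny = 20;  dx = 100;                 % grid dimensions and cell size [m]
[X, Y] = meshgrid((0:nx-1)*dx, (0:ny-1)*dx);
pts = [X(:) Y(:)];

% Exponential covariance consistent with the semivariogram model (unit sill, range a)
a = 5000;                                     % range [m]
D = sqrt((pts(:,1) - pts(:,1)').^2 + (pts(:,2) - pts(:,2)').^2);
Cov = exp(-3*D/a);

% Simulate one standard Gaussian random field via Cholesky factorization
L = chol(Cov + 1e-8*eye(size(Cov)), 'lower'); % small jitter for numerical stability
rng(2);
field = reshape(L * randn(size(pts, 1), 1), ny, nx);

imagesc((0:nx-1)*dx, (0:ny-1)*dx, field);  axis equal tight;  colorbar;
title('Simulated correlated Gaussian field (standardized soil parameter)');
```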

Calculation scheme for soil failure model

The calculation scheme for the soil liquefaction modeling is given in Figure 4.19. For each grid point (100 x 150 elements with a dimension of 100 m x 100 m) the depth, the soil class according to USCS, the fines content and the SPT blow count of the critical layer, and the ground water level are simulated assuming independence between the parameters. The simulations are performed using the code SGSIM from the program package GSLIB. The critical layer is defined as the layer with the lowest SPT blow count. The input files for simulating the five parameters, Depth.par, USCS.par, GW.par, FC.par and Nm.par, are given in Annex C. The output files of the simulations are read into MATLAB with the files Depth_field.m, USCS_field.m, GW_field.m, FC_field.m and Nm_field.m. Using these parameters, the stress, the effective stress and the shear wave velocity are calculated for each simulation and each grid point using the MATLAB files Sigma_field.m, Sigmaeff_field.m and Vs_field.m, respectively.


Figure 4.17: Conditional simulations of a set of soil parameters (top) and probability of liquefaction for a given magnitude of 7.5 with a PGA of 0.1g (bottom).


Figure 4.18: Bayesian probabilistic network for the soil response model.

The cyclic stress ratio and the corrected SPT blow count for each simulation and each grid point are calculated by the MATLAB files CSR_field.m and N1_60_field.m, respectively. Using the simulated and calculated fields, the probability of liquefaction for each grid point is calculated for each pair of magnitude and PGA by the file Liq_field.m. All files are given in Annex C.


Figure 4.19: Calculation scheme for the BPN in Figure 4.18.


4.3 Structural damage


The risk due to earthquakes results mainly from their effect on the built environment. The built environment may suffer damage and losses due to earthquake-induced ground failure. One of the most important types of such failures, liquefaction, is considered in Section 4.2. In this section, the response of the structures to earthquakes and the damage which may occur is considered. Based on state-of-the-art methods in structural response calculation and damage assessment, the construction of BPNs is discussed. Finally, a BPN is set up for further use in the examples of Chapter 5. Two terms related to seismically induced damage to structures are distinguished: vulnerability and fragility. Seismic vulnerability defines loss as a function of a ground motion intensity parameter, whereas seismic fragility curves define the probability of being in a damage state as a function of a ground motion intensity parameter. A fragility function can, for example, provide the probability that a building will collapse given a ground motion intensity measure. Analogously, a vulnerability function would provide the direct damage factor (repair cost divided by replacement cost, as discussed in Chapter 3), given the intensity of ground shaking. A better modularization is attained when the consequences are excluded from the structural response assessment. This is achieved when the structural damage is covered using seismic fragility curves. There are three fundamental approaches to developing seismic fragility curves:

- statistical
- expert opinion
- analytical

The statistical method uses loss data from past earthquakes. The data include the loss and the intensity of ground motion experienced by individual buildings or classes of buildings. The seismic vulnerability function is estimated by regression analysis of the loss and the ground shaking intensity. With the expert opinion approach no data about historical loss or detailed structural characteristics is required. The main problem is to find experts who are willing to judge the loss values concerned. Eliciting expert opinion may result in disagreements leading to unacceptably high uncertainties, heuristic biases or vulnerability functions that are too low or too high. With the analytical method vulnerability functions are generally estimated in three steps: structural analysis, damage analysis and loss analysis. Structural analysis estimates the response of the structure to a ground motion intensity, in terms of internal forces and deformations. The structural response parameters are then classified into performance levels calibrated through laboratory testing or past earthquake observations. Finally, the uncertainty of the ground motion intensity parameters leading to the same performance levels is modeled, resulting in analytical fragility curves. The present dissertation discusses within a Bayesian perspective the construction and application of BPNs for seismic risk in a city. Since multiple parameters may be handled in a large scale risk analysis, the analytical method is chosen for estimating the fragility curves. The fragility curves estimated for the dissertation provide a prior estimate in the risk analysis in Chapter 5.


There, it will also be outlined how these prior estimates may be updated within a Bayesian framework using damage data from earthquakes. The focus of the dissertation is not to provide seismic vulnerability curves for different structure classes in a city. Therefore a rather rigorous approach is chosen for the earthquake demand modeling, the structural modeling and the damage assessment. The proposed framework allows the incorporation of alternative seismic fragility curves.

Earthquake demand modeling

Demand is simulated by synthetically generated ground motions representing probable earthquakes in the western part of the North Anatolian fault system. For the earthquake demand modeling a set of acceleration time histories is generated. The ground motion prediction equation proposed by Boore et al. (1997) is used for estimating the pseudo-acceleration response spectra for the random horizontal component at 5% damping. A total of 16 combinations of magnitude Mw (5.5, 6.5, 7.0, 7.5) and epicentral distance (10, 20, 40, 80 km) for one site class (sand) are selected and, depending on the fundamental periods of the structures, the spectral displacement (SD) and peak ground acceleration (PGA) values are estimated. Using a version of SIMQKE (Gasparini and Vanmarcke, 1976) modified by Lestuzzi (2000), 20 sample accelerogram time histories are generated for each of these (Mw, R) pairs, resulting in 320 simulations (Figure 4.20). These 320 simulations are then used in the vulnerability model (Bayraktarli et al., 2005).
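For illustration, the MATLAB sketch below computes the spectral displacement of a linear, 5% damped single-degree-of-freedom oscillator from one acceleration time history using the Newmark average-acceleration method; the input accelerogram is only a synthetic placeholder, not one of the 320 SIMQKE simulations.

```matlab
% Placeholder accelerogram (modulated white noise); in the study these would be SIMQKE records
dt = 0.01;  t = (0:2499)'*dt;                       % 25 s at 0.01 s time steps
ag = 1.0 * randn(size(t)) .* exp(-((t - 8)/5).^2);  % ground acceleration [m/s^2]

T  = 0.64;  zeta = 0.05;                            % fundamental period [s], damping ratio
wn = 2*pi/T;  m = 1;  k = m*wn^2;  c = 2*zeta*m*wn; % SDOF properties (unit mass)

% Newmark average-acceleration integration (gamma = 1/2, beta = 1/4)
gam = 1/2;  bet = 1/4;
u = zeros(size(t));  v = u;  a = u;
a(1) = (-m*ag(1) - c*v(1) - k*u(1)) / m;
kh = k + gam/(bet*dt)*c + m/(bet*dt^2);
for i = 1:numel(t)-1
  dp = -m*(ag(i+1) - ag(i)) + (m/(bet*dt) + gam/bet*c)*v(i) ...
       + (m/(2*bet) + dt*(gam/(2*bet) - 1)*c)*a(i);
  du = dp / kh;
  dv = gam/(bet*dt)*du - gam/bet*v(i) + dt*(1 - gam/(2*bet))*a(i);
  da = du/(bet*dt^2) - v(i)/(bet*dt) - a(i)/(2*bet);
  u(i+1) = u(i) + du;  v(i+1) = v(i) + dv;  a(i+1) = a(i) + da;
end

SD  = max(abs(u)) * 1000;                           % spectral displacement [mm]
PGA = max(abs(ag)) / 9.81;                          % peak ground acceleration [g]
fprintf('SD(T=%.2f s) = %.1f mm, PGA = %.2f g\n', T, SD, PGA);
```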


Figure 4.20: Target acceleration response spectrum calculated using ground motion prediction equation by Boore et al. for Mw=5 and R=30km (bottom), simulated earthquake time history (top).



Structure types

Assessment of the performance of an individual structure is an intricate task in itself. Therefore, in investigating large building stocks it is inevitable that the process be simplified by grouping the structures which are expected to have a similar seismic performance. Each building is, on the other hand, unique. The properties which dominate the seismic performance need to be specified and generic building types containing those properties should be established. The uncertainty induced in reducing the unique structure to the generic structure type needs to be considered. The identified properties are, in the notation of the dissertation, referred to as indicators. By doing so, the seismic performance of each structure can be estimated. Common classification schemes define structure classes by various combinations of use, year of construction, construction material, lateral load-resisting system, number of stories and applied building code. One of the most comprehensive classifications of structures with regard to seismic behavior was developed by the Applied Technology Council (ATC-13, 1985). The structures in California were classified into 78 classes of industrial, commercial, residential, utility and transportation structures. The classification was mainly based on construction material, lateral load-resisting system and number of stories. Similarly, HAZUS (2001) provides 36 structural classes covering about 99% of all US structures. The main classification criteria in HAZUS (2001) are the number of stories, the seismic design level and the construction type. The classification of the structures in Adapazari, Turkey, the city under consideration in this dissertation, is based on a similar approach. The city center is assumed to be bounded in the west by the river Cay, in the east by the river Sakarya, in the north by Ankara Avenue and in the south by the Istanbul-Ankara highway. In total there are 22492 buildings in the city (one- to six-story reinforced concrete moment resisting frames and one- to two-story masonry structures). The structural performance of the buildings is assumed to be adequately represented by generic reinforced concrete frames (Figure 4.21). The construction year of the individual buildings, which indicates the seismic design code the structures were originally designed for, and the occupancy class of the building (e.g. hospitals) are also considered in designing the generic frames. For illustration purposes it is assumed that all buildings constructed before 1980 are designed according to the Turkish Standards for reinforced concrete structures TS500-1975 (1974) without taking into account lateral loads. The buildings constructed after 1980 are assumed to be designed according to the Turkish Standards for reinforced concrete structures TS500-1984 (1983) and the Turkish Seismic Code (TDY, 1975). These generic structures constitute the structures whose structural performances are further investigated. The proposed framework in this dissertation is illustrated using one structure class: 5-story reinforced concrete moment resisting frames. The naming convention for the structure class is as follows: "Typ5" for 5-story, "NR"/"R" for not retrofitted/retrofitted, "O"/"N" for constructed before 1980 (old)/constructed after 1980 (new), "Res"/"Hos" for residential/hospital use. E.g. Typ5-R-O-Res is the structure class "5-story retrofitted residential building constructed before 1980".
Structural damage assessment

For the calculation of the structural response the open source finite element program OpenSees is applied (OpenSees, 2008). OpenSees is used to perform nonlinear dynamic analysis of the structures.


[Figure 4.21 drawing: generic frame with five bays of 5000 mm and story heights of 3000 mm. Design according to TS500-1975: beams 300x500 (11Ø16), columns 500x500 (24Ø16), retrofitted columns 700x700 (40Ø16), stirrups Ø8@120-150. Design according to TS500-1984 and TDY 1975: beams 300x500 (13Ø20), columns 500x500 (32Ø20), retrofitted columns 700x700 (48Ø20), stirrups Ø10@75.]

Figure 4.21: Generic frame representative for the 5-story reinforced concrete moment resisting frames (residential use); construction year before 1980 (top right), after 1980 (bottom right).

The structures are subjected to the 320 ground motion time histories and the maximum inter-story drift ratio (MIDR) is calculated using OpenSees. The beams and columns are modeled with non-linear beam-column elements, which are based on the non-iterative force formulation and consider the distribution of plasticity along the element. The reinforced concrete cross sections are modeled using a fibre cross section model. Rigid diaphragms are used to model the slabs. The concrete material is modeled after Park et al. (1972) with degraded linear unloading/reloading stiffness, based on the work of Karsan and Jirsa (1969), and assuming no tensile strength. The confined and unconfined concrete is modeled by applying two different sets of material parameters. The damping is modeled as Rayleigh damping, i.e. as a combination of the mass matrix and the stiffness matrix at the current state. Masses are lumped in the rigid diaphragms. All material parameters are assumed to be deterministic. The columns at ground level are fixed for all degrees of freedom. Ground acceleration is applied in the y-direction. The solution algorithm is of the Newton type with a convergence criterion expressed in terms of the norm of the displacement increment vector. The integrator is of the Newmark type. The time step is set to 0.01 s and the length of the time series is 25 s, resulting in 2500 time steps for each series. It takes about 500 s to calculate one time series for the example structure on an Intel Pentium PC.

Adapting the performance limit states given in Huo and Hwang (1996), the MIDRs are classified into three damage states (Green-Safe, Yellow-Limited use and Red-Unsafe). MIDRs larger than 1.0 are classified as Red-Unsafe, between 0.5 and 1.0 as Yellow-Limited use, and smaller than 0.5 as Green-Safe. This damage classification is in line with the ATC-20 (1989) methodology, which specifies the procedure for post-earthquake inspections of buildings. The fragility of a structural system is modeled, following the state of practice, using a lognormal distribution. The justification for this lies in the fact that MIDR, and hence SD, is the result of a multiplicative influence of several parameters. Furthermore, the distribution of the input parameters for a certain limit state is skewed to the right and no negative parameters are allowed. Each fragility curve is characterized by the logarithmic median (λ) and the logarithmic standard deviation (ζ) of the ground motion intensity parameter (e.g. spectral displacement). The probability of being in or exceeding a given damage state, D, is modeled as a cumulative lognormal distribution:

P(D \mid S_d) = \Phi\left(\frac{\ln S_d - \lambda}{\zeta}\right)    (4.21)

where Φ(·) is the standard normal cumulative distribution function, λ is the logarithmic median of the capacity, and ζ is the logarithmic standard deviation of the capacity. The estimation of the parameters is performed using the maximum likelihood method (Table 4.3). The fragility curves are given in Figure 4.22.

Table 4.3: Parameters of the fragility curves.

Structure class    λ (Yellow)   ζ (Yellow)   λ (Red)   ζ (Red)
Typ5-NR-O-Res      3.758        0.390        4.215     0.346
Typ5-NR-N-Res      3.742        0.226        4.337     0.235
Typ5-R-O-Res       3.806        0.347        4.277     0.289
Typ5-R-N-Res       3.845        0.299        4.328     0.265
Typ5-NR-N-Hos      3.873        0.372        4.314     0.272
Typ5-R-N-Hos       4.029        0.323        4.384     0.222
HAZUS C3M          2.906        0.900        4.515     0.900
HAZUS C1M          3.640        0.680        5.432     0.680

The uncertainty of the distribution is explicitly considered in the model (node Model uncertainty) using a discretisation with three states. Given the ground motion intensity parameter as spectral displacement from the attenuation model, the probabilities of being in a particular damage state are read from the fragility curves for the states of the node SD. These probabilities form the conditional probability table of the node Damage. The BPN for the vulnerability model is given in Figure 4.23.
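To make Equation 4.21 and Table 4.3 concrete, the following minimal sketch evaluates the probabilities of reaching or exceeding the two damage states for a given spectral displacement, using the fitted parameters of the Typ5-NR-O-Res class, and derives the discrete damage-state probabilities used in the conditional probability table of the node Damage. The function names and the example Sd value are illustrative only.

from math import log, sqrt, erf

def lognormal_cdf(x, lam, zeta):
    # P(X <= x) for a lognormal variable with logarithmic median lam and log-std. dev. zeta
    return 0.5 * (1.0 + erf((log(x) - lam) / (zeta * sqrt(2.0))))

# Fitted parameters for Typ5-NR-O-Res from Table 4.3
LAM_YELLOW, ZETA_YELLOW = 3.758, 0.390
LAM_RED, ZETA_RED = 4.215, 0.346

def damage_state_probabilities(sd_mm):
    # Discrete probabilities of the states Green / Yellow / Red given Sd in mm (Eq. 4.21)
    p_yellow_or_worse = lognormal_cdf(sd_mm, LAM_YELLOW, ZETA_YELLOW)
    p_red = lognormal_cdf(sd_mm, LAM_RED, ZETA_RED)
    return {"Green": 1.0 - p_yellow_or_worse,
            "Yellow": p_yellow_or_worse - p_red,
            "Red": p_red}

print(damage_state_probabilities(60.0))  # Sd = 60 mm, an illustrative value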


[Figure 4.22 plots: probability of reaching or exceeding the damage states "limited use (yellow)" and "collapse (red)" versus spectral displacement Sd (T=0.64 s, 5% damping) from 0 to 300 mm; panels: Typ5 Residential constructed before 1980, Typ5 Residential constructed after 1980, Typ5 Residential constructed before 1980 - Retrofitted, Typ5 Residential constructed after 1980 - Retrofitted, Typ5 Hospital constructed after 1980, Typ5 Hospital constructed after 1980 - Retrofitted, HAZUS C3M - PreCode, HAZUS C1M - HighCode.]

Figure 4.22: Seismic fragility curves for generic 5-story buildings.

[Figure 4.23 diagram: nodes Epsilon Damage, SD, Liquefaction, Structure Class and Damage.]

Figure 4.23: Bayesian probabilistic network for the vulnerability model.



Alternative BPN model

Single-degree-of-freedom (SDOF) systems with modified Takeda hysteresis can be used as substitute systems for representing the dynamic characteristics of the structural class. The dynamic properties of the substitute SDOF systems are assigned according to the relationships proposed by Priestley (1998) and the TDY (1998). The yield displacements and base-shear capacities of the retrofitted buildings are estimated according to the procedure by Dazio (2000). The response of the substitute SDOF systems to the set of acceleration time histories is simulated. It is assumed that the maximum and residual displacements attained by the substitute SDOF systems can be used as an estimate of those attained by the real structures themselves (Figure 4.24, left). In general terms, this assumption has been verified for the maximum displacements of buildings excited dynamically predominantly in their first mode. For the residual displacements, however, it should be noted that the adopted modeling approach has a significant impact on the computed results (Yazgan and Dazio, 2006). It will be assumed that the residual displacements computed for a substitute SDOF system can be used as an estimate of the residual displacements of the buildings in the corresponding structural class. Maximum displacements are known to be well correlated with structural damage. Furthermore, residual displacements are critical for the post-earthquake usability/reparability of a structure. Together, these two structural response parameters can provide a good picture of the seismic performance of a structure. The performance level definitions provided in Vision 2000 (SEAOC, 1995) can be adopted to relate the structural response parameters to damage states.
[Figure 4.24 drawing and diagram: generic frame with five bays of 5000 mm and story heights of 3000 mm together with the substitute SDOF system (left); BPN with nodes SD, Period, Maximum Displ., Residual Displ., Structure Class, Liquefaction and Damage (right).]

Figure 4.24: Generic frame representative for the 5-story reinforced concrete moment resisting frames and the substitute SDOF system (left). Bayesian probabilistic network for the alternative vulnerability model (right).

A BPN explicitly considering residual displacements (Figure 4.24, right) is advantageous, as residual displacements could be measured after an earthquake and the models could be updated with these measurements (Bayraktarli et al., 2006). This requires the availability of efficient ways of measuring the residual displacements with an acceptable precision, e.g. Altan et al. (2001), as well as structural response assessment methods explicitly using residual displacements for damage assessment, e.g. Yazgan (2009). For the further analysis the BPN in Figure 4.23 for seismic vulnerability will be used. Information from damage surveys after earthquakes on the state of damage to structures will be used for updating.

Calculation scheme for structural damage model

The calculation scheme for the seismic fragility curves of the structure classes is given in Figure 4.25. In the calculation scheme, in the description in this section and in the files in Annex C only one structure class is illustrated; the calculations are analogous for the other structure classes considered in this dissertation. The MIDRs for each of the simulated acceleration time histories for the structure class considered are calculated in OpenSees using the files Typ5.Nr.O.Residential.tcl and BuildRCrectSection.tcl. These MIDRs and the corresponding SDs of the simulated acceleration time histories are classified into damage states given the performance limit MIDRs of the damage states. The resulting vectors of SDs within each damage state are assumed to be lognormally distributed and their parameters are estimated using the maximum likelihood method.

[Figure 4.25 flowchart: estimation of the seismic fragility curve; (1) generation of in total 320 acceleration time histories for each (Mw, R) pair; (2) design of the generic structure class; (3) calculation of the 320 MIDRs with the FE code OpenSees (Typ5.NR.O.Residential.tcl, BuildRCrectSection.tcl); (4) classification of the 320 MIDRs and the corresponding SDs of the time histories given the performance limits of the damage states; (5) estimation of the parameters of the lognormal distribution for the set of SDs in each damage state using the maximum likelihood method.]

Figure 4.25: Calculation scheme for the BPN in Figure 4.22.
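A minimal sketch of the last two steps of the scheme is given below: the 320 (MIDR, SD) pairs are grouped into damage states and the lognormal parameters λ and ζ are fitted by maximum likelihood (for a lognormal sample these are the mean and the standard deviation of the log data). The grouping follows the description in the text; the exact estimation procedure used in the dissertation may differ in detail, and the example data at the end are invented.

import math

def damage_state(midr):
    # Performance limits adapted from Huo and Hwang (1996): > 1.0 Red, 0.5-1.0 Yellow, < 0.5 Green
    if midr > 1.0:
        return "Red"
    if midr > 0.5:
        return "Yellow"
    return "Green"

def fit_lognormal_mle(samples):
    # Maximum likelihood estimates of the logarithmic median (lambda) and log-std. dev. (zeta)
    logs = [math.log(x) for x in samples]
    lam = sum(logs) / len(logs)
    zeta = math.sqrt(sum((v - lam) ** 2 for v in logs) / len(logs))
    return lam, zeta

def fragility_parameters(midr_values, sd_values):
    # Group the spectral displacements by damage state and fit a lognormal to each group
    groups = {"Green": [], "Yellow": [], "Red": []}
    for midr, sd in zip(midr_values, sd_values):
        groups[damage_state(midr)].append(sd)
    return {state: fit_lognormal_mle(sds) for state, sds in groups.items() if len(sds) > 1}

midr = [0.2, 0.4, 0.7, 0.9, 1.3, 1.8]   # illustrative MIDRs
sd = [15.0, 30.0, 50.0, 65.0, 110.0, 160.0]  # corresponding SDs in mm (illustrative)
print(fragility_parameters(midr, sd))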


4.4 Consequences
A consistent treatment of risk requires a realistic estimation of consequences. The goal should be to consider all types of consequences which may occur if an adverse event takes place. Besides consequences resulting from the failure of the objects at risk, in Section 3.2 denoted as direct consequences, indirect consequences which go beyond the costs related to the objects also need to be considered. The border between direct and indirect consequences is defined depending on the decision-making level at which the risks are considered and consequently on the system definition.

The consequences of events may be expressed in different units, e.g. in monetary units or in the number of casualties. A consistent treatment of different decision alternatives with different types of consequences is possible when the same units are used. In the present dissertation monetary units are used for all kinds of consequences. This also enables the aggregation of consequences.

Natural hazards generate a variety of socio-economic impacts. These may be classified as structural losses (repair, replacement and retrofit of structures), nonstructural losses (building contents and inventory), direct economic losses (business interruption), indirect economic losses (supply shortages and demand reductions) and demographic consequences (loss of lives, injuries) (Faizian and Faber, 2004; King et al., 1997; Brookshire et al., 1997). Depending on the decision-making level, an inventory of the different consequences must be drawn up to obtain a consistent assessment and management of earthquake risks. For example, a building owner may consider only structural losses as direct consequences and nonstructural losses as indirect consequences, whereas a city governor classifies the structural, nonstructural and direct economic losses as direct consequences, and indirect economic losses and demographic losses as indirect consequences.

The proposed methodology is illustrated on residential and hospital buildings in the city of Adapazari, Turkey. In this context direct consequences comprise structural repair, replacement and retrofitting costs. Loss of lives in these structures due to possible earthquakes is considered as an indirect consequence. For Adapazari, the average unit rebuilding cost is estimated as 175 USD/m2, the average unit retrofitting cost for column jacketing as 250 USD per column and the unit repair cost for the "limited use" damage state as 10% of the unit rebuilding cost (Has-Insaat, 2007).
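The unit costs quoted above translate directly into entries of the Cost node for the direct consequences. The following minimal sketch does this for one building; the column count used for the retrofitting cost, the treatment of the Green state and the example numbers are illustrative assumptions.

UNIT_REBUILD = 175.0          # USD per m2 of story area (Has-Insaat, 2007)
UNIT_REPAIR_FRACTION = 0.10   # repair cost of the "limited use" state as a fraction of rebuilding
UNIT_RETROFIT_COLUMN = 250.0  # USD per jacketed column

def direct_cost(damage_state, story_area_m2, n_columns=0, retrofitted=False):
    # Direct consequences (USD) of one building for a given damage state
    rebuild = UNIT_REBUILD * story_area_m2
    cost = {"Green": 0.0,
            "Yellow": UNIT_REPAIR_FRACTION * rebuild,
            "Red": rebuild}[damage_state]
    if retrofitted:           # the retrofitting cost is incurred regardless of the damage outcome
        cost += UNIT_RETROFIT_COLUMN * n_columns
    return cost

# e.g. a 5-story building with 150 m2 per story and 20 columns (illustrative numbers)
print(direct_cost("Yellow", 5 * 150.0, n_columns=20, retrofitted=True))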
Statistical value of life

The value of goods and services is determined in the free market. Hence, the quantification of most of the above mentioned consequences in monetary units poses no problems. However, the quantification of the loss of lives, which is decisive in decision problems related to safety issues, is the subject of controversial debate. When evaluating the value of life the question is not the value of a single life, but rather the value people are willing to invest in risk reduction measures to save lives. The question has no ethical dimension as long as an undefined and impersonalized life is considered. That is the reason for the wording "statistical life".


Various methods exist for evaluating the statistical value of life (Schubert, 2009). In methods based on compensation, the value of life is estimated through the loss of income of a person who has died. This human capital compensation approach neglects both immaterial damages like pain and suffering as well as material damages to restore the original situation. The approach using compensation granted by the courts as a measure for the value of statistical life is more differentiated (Miller, 1990). When deriving the value of statistical life, the temporal shortage of benefit resulting from psychological and physical impairment is considered (Hofstetter and Hammitt, 2002). Compensation methods are mainly used in the US.

Contingent valuation methods use public opinion polls in which people are asked to estimate the amount they are willing to pay, or willing to accept, for a specific immaterial good in a specific hypothetical scenario. The method was first developed by Ciriacy-Wantrup (1947) and applied for the estimation of the statistical value of life, e.g. in Blaeij et al. (2003). The supporters of this method argue that a direct estimation of the willingness of society to pay is possible (Hanemann, 1994). The opponents point to the fact that different estimates result when the survey is done individually for goods in a set or for the set as a whole (Kahnemann and Kentsch, 1992). Different estimates also result depending on the order in which the goods in a set are presented in the survey. Contingent valuation methods assume that the interviewee acts in a credible, reliable and precise manner.

The theory of revealed preferences was developed by Samuelson (1938). Whereas contingent valuation methods are based on the declaration of people of the amount they would pay or accept, revealed preference methods try to find out how people would behave in such situations. The revealed preference method estimates the best option based on the behavior of the consumers. The fundamental assumption of consumption theory is that consumers make decisions which maximize the utility function (Hicks and Allen, 1934a,b). Revealed preference methods provide a means for deriving utility functions based on consumer behavior. An overview of the applications of this theory to the estimation of the statistical value of life is given in Blaeij et al. (2003). The estimation may be calibrated on the development of wages (Viscusi, 1993), on the development of the property market (Thaler, 1978) or on consumption goods (Miller, 1990).

Observable economic and social indicators are considered in the Life Quality Index (LQI) originally proposed by Nathwani et al. (1997). This method can also be classified among the revealed preference methods, as it is based on observations of the preferences of society reflected in investments. Originally, the LQI was formulated as a social indicator for estimating rational and efficient decision-making in regard to safety. The LQI is consistent with consumption theory and provides a consistent method for estimating the statistical value of life. An efficient life safety activity may be understood as a measure which cost-effectively reduces the mortality or, equivalently, increases the statistical life expectancy. An increase in life expectancy, l, through expenditure on risk reduction measures entails a loss of economic resources, measured by the Gross National Product (GNP) per capita, g, together with the time spent at work, w.
Based on these indicators which are all assessed for a statistical life in a given society, the LQI is formulated:


LQI = g^{q} \, l    (4.22)

where in the optimum q = w/(1 - w). Based on the LQI, Rackwitz et al. (2005) derived the societal life saving cost (SLSC):

SLSC = g \left[ 1 - \left( 1 + \frac{l_r}{l} \right)^{-\frac{1-w}{w}} \right] l_r    (4.23)

where g is the Gross Domestic Product per capita, l is the life expectancy, l_r is the number of life years saved and w is the proportion of life spent at work. In the illustrative examples given in Chapter 5, the loss of lives of individuals is quantified by the Societal Life Saving Cost (SLSC). For Turkey, based on data for 2006 with a life expectancy of 71.5 years and a Gross National Product per capita of 5500 USD, the SLSC is estimated as 150000 USD per fatality.
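As a numerical illustration of Equation 4.23, the sketch below evaluates the SLSC for the Turkish data quoted above. The fraction of life spent at work, w, and the number of life years saved per fatality, l_r, are not stated in the text; the values used here (w ≈ 0.125, l_r ≈ 30 years) are assumptions chosen only to show that the order of magnitude of 150000 USD per fatality is reproduced.

def slsc(g, life_expectancy, life_years_saved, w):
    # Societal life saving cost per Eq. 4.23 (Rackwitz et al., 2005)
    exponent = -(1.0 - w) / w
    return g * (1.0 - (1.0 + life_years_saved / life_expectancy) ** exponent) * life_years_saved

# Turkey, 2006 data: g = 5500 USD per capita, life expectancy 71.5 years;
# w and life_years_saved are assumed values, not given in the text.
print(slsc(g=5500.0, life_expectancy=71.5, life_years_saved=30.0, w=0.125))  # roughly 1.5e5 USD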
Estimation of fatalities

For the illustrative examples given in Chapter 5 fatalities are estimated by determining the lethality ratio for each building class damaged by the earthquake (Coburn and Spence, 2002). The lethality ratio is the ratio of the number of people killed to the number of occupants present in the building. It is estimated from the examination of data from previous earthquakes. Coburn and Spence (2002) give five M-factors for estimating the number of fatalities likely to occur in an earthquake (Figure 4.26). In evaluating the examples, data on the number of stories and the floor area is used to estimate the number of occupants of each building. The data on the number of occupants for each building is then incorporated in the GIS platform and evaluated using the BPN for the consequence model (Figure 4.27).
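A minimal sketch of how the fatality estimate enters the Cost node is given below. The lethality ratios per damage state are placeholders, not the values that follow from the Coburn and Spence (2002) M-factors, and the occupancy density used to convert story area into occupants is an assumption.

SLSC_PER_FATALITY = 150000.0   # USD, Section 4.4

# Placeholder lethality ratios (killed / occupants present) per damage state;
# in the dissertation these follow from the M-factors of Coburn and Spence (2002).
LETHALITY = {"Green": 0.0, "Yellow": 0.0, "Red": 0.1}

def fatality_cost(damage_state, story_area_m2, occupants_per_m2=0.03):
    # Indirect consequences (USD) from fatalities for one building and damage state
    occupants = occupants_per_m2 * story_area_m2   # assumed occupancy density
    fatalities = LETHALITY[damage_state] * occupants
    return SLSC_PER_FATALITY * fatalities

print(fatality_cost("Red", 5 * 150.0))  # illustrative 5-story building with 150 m2 per story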


Figure 4.26: Factors for estimating the number of casualties (Coburn and Spence, 2002).

[Figure 4.27 diagram: nodes Occupancy Class, Damage, Structure Class, Construction Year and Cost.]

Figure 4.27: Bayesian probabilistic network for consequence model.


4.5 Verification and validation of the models


Models provide a means for understanding the real world, rather than for trying to reproduce it. Once a model is set up, it may be used for predicting outcomes of interest, provided that a certain confidence in the underlying model exists. Fundamental criticism has been raised regarding the possibility of accurate quantitative modeling to predict the outcome of natural processes on the Earth's surface (Pilkey and Pilkey-Jarvis, 2007). Oreskes et al. (1994) even argue that the validation of numerical models of natural systems is impossible. Verification means that the calculations within the model are demonstrably correct: given the scale and the level of detail, the model gives the expected output. Models may be verified by comparisons with known solutions (e.g. experimental or analytical) and by cross comparisons between different models. In contrast to verification, validation aims to check the consistency of the model output with relevant observations. Only a validated model may take credit for being a representation of the real system.
Verification and validation of the seismic hazard model

The plausibility of the Probabilistic Seismic Hazard Analysis (PSHA) methodology used here may be checked by verifying the methodology itself or by validating its application in a certain context. The PSHA methodology is a mathematical procedure which has been subjected to verification by standard mathematical theory. The main output of a PSHA is the annual rate of occurrence of various ground motion intensities at a site. Testing a PSHA application thus means determining whether the occurrence rate of ground motion intensities is consistent with the model predictions. This is not possible for hazard estimates at low annual rates, as it would require hundreds or thousands of years of observations. Varying the spatial extent of the region and the time period considered may provide a means of checking: observations of ground motion occurrences over a relatively large region over a time period of tens of years may be used to verify the predictions of the rate of occurrence of these ground motions (McGuire and Barnhard, 1981). This kind of comparison cannot check the consistency of local hazard estimates with local sources of seismicity (NRC, 1988). The BPN model for seismic hazard presented here is verified by cross comparison with known PSHA codes. Due to the limitations imposed by the lack of data, such tests of reasonableness and consistency are the only practical means for evaluating or testing a PSHA.
Verification and validation of the soil failure model

The soil liquefaction potential evaluation in the present dissertation is based on empirical correlations with Standard Penetration Test blow counts. The required soil parameters for these empirical correlations are modeled by taking their spatial variability into account. For different earthquake intensity parameters (moment magnitude and peak ground acceleration) a probability of liquefaction is assigned to each point of the test area.

The soil failure model may be validated using observations of liquefaction events from previous earthquakes. Even though liquefaction events were reported from the 1967 earthquake, comprehensive data on observations only exists for the recent Kocaeli Mw7.4 earthquake of August 17, 1999. In downtown Adapazari a region was investigated and liquefied areas were reported (DRM, 2004). A peak ground acceleration of 0.41g was recorded at the nearby Sakarya accelerograph located in southwestern Adapazari at a distance of 10km. Motions in Adapazari would differ from those at the accelerograph due to ground response effects associated with the soft and deep alluvium. Recorded ground motions at similar conditions suggest a PGA in Adapazari of the order of 0.35g to 0.45g. A direct comparison with liquefaction observations from a single event is not meaningful, as the model outputs are in the form of probabilities. A comparison is nevertheless made by using those grid points for which the model estimates certain liquefaction or certain non-liquefaction, given Mw=7.4 and PGA=0.3 to 0.5g. Figure 4.28 illustrates the investigated area with the observed liquefaction after the Kocaeli Mw7.4 earthquake of August 17, 1999 and the predicted liquefaction. The differences may be due to the different SPT practices in Europe and in the U.S., where the empirical correlation models were developed.


Figure 4.28: Liquefaction observed (left) and predicted (right) in Adapazari during the Kocaeli earthquake in 1999.

Verification and validation of the structural damage model

A verification of the structural damage model may be performed by again referring to the Kocaeli Mw7.4 earthquake of August 17, 1999. Post-earthquake damage survey teams identified buildings with different degrees of damage. The data is summarized as the percentage of damaged buildings in each district (Figure 4.29). For the area under investigation, about 27% of the 5-story buildings constructed before 1980 are reported as heavily damaged. The seismic hazard BPN model in Figure 4.5 is applied with the characteristics of the Kocaeli earthquake of August 17, 1999. The magnitude was Mw7.4; the distance to the activated seismic source S3 was about 50 km. With this information the nodes Magnitude and Distance are conditioned. The resulting probability distributions for PGA and SD are used as input for the soil failure BPN and the structural damage BPN, respectively. The main output of the structural damage BPN is in the form of discrete probabilities for three damage states (no damage, repairable damage, collapse). For the 5-story buildings constructed before 1980 the probability of damage is estimated as 21%; this value includes the two states indicating damage to the structures. There may be several reasons for the difference from the 27% resulting from the post-earthquake survey. The structural models may not be suitable for older structures with low ductility capacity, resulting in an underestimation of the damage. Important structural details which are not modeled, e.g. joints, may also contribute to the difference.

[Figure 4.29 map: numbered districts of Adapazari; legend Structure Types: Typ5_O_Res, Typ5_N_Res.]

Figure 4.29: Collapsed building statistics in Adapazari during the Kocaeli earthquake in 1999.

Verification and validation of the consequence model

The consequence model may also be validated with data from the Kocaeli Mw7.4 earthquake of August 17, 1999. Several studies were conducted to estimate the consequences of the earthquake, with different boundaries for the cities considered and different sets of incorporated consequences. For example, the total loss for housing in all cities affected by the earthquake is estimated by TÜSİAD (Turkish Industrialists' and Businessmen's Association) to be USD 4 billion and by the World Bank (Bibbee et al., 2000) to be USD 1.1 to 3 billion. In the present dissertation the application of the proposed methodology is illustrated with reference to two occupancy and structure classes; a complete practical estimation of the consequences of earthquakes in the region is not pursued. Such an application would however be required to validate the consequence model. The estimated costs given in the two aforementioned studies could be broken down into the structure and occupancy classes considered here. A validation of this type has not been pursued, as it would result in significant bias.


5 Examples
The proposed framework is illustrated on earthquake risk problems for a class of structures located in a test area. First, the chosen test area is introduced: the geology, seismicity, built environment, demography and previous damaging earthquakes within the test area are briefly discussed. Finally, the building portfolio considered in the examples is described.
Choice of the test area

The Kocaeli Mw7.4 earthquake of August 17, 1999 caused serious damage to the city center of Adapazari. Besides damage to the built environment due to severe ground shaking (Figure 5.1), foundation-related failures of buildings such as tilting, overturning and sinking were also experienced (USGS, 1999). The foundation-related failures were due to the soft flood-plain deposits of the Sakarya River on which Adapazari has been built. Furthermore, Adapazari is a city of regional influence following the classification in Chapter 2. Finally, data on both structural and soil-related damage after the recent earthquake in 1999 is available to conduct Bayesian updating of the probabilistic models.
Geology and seismicity of Adapazari

Adapazari is located at the edge of a former lake basin of about 25x40 km2. The lake sediments are overlain by Pleistocene and early Holocene alluvium transported from the mountains to the north and south of the basin. In some areas Quaternary alluvium deposited by the Sakarya River and its tributaries overlies the older lake alluvium (Rathje et al., 2000). The Quaternary alluvium layer reaches a depth of about 15m and is comprised primarily of silt and fine sand. The city is mainly constructed over the very flat Quaternary alluvial sediments of the basin with soft near-surface sediments deposited by the river, making it susceptible to liquefaction-induced damage during earthquakes, as experienced in the 1967 and 1999 events (Gülkan et al., 2003). Many soil profiles are characterized as loose silts and silty sand in the upper 5m overlying clay deposits (Bray and Stewart, 2000; Sancio et al., 2002). The bedrock formation descends sharply towards the north and reaches a depth of about 200m within the city center. The city center of Adapazari is located on the northeastern foothills, to the west of the Sakarya River, which runs from south to north through the basin into the Black Sea, and to the east of the smaller Cark River. The main fault, the North Anatolian Fault, forms the southern boundary; the Düzce Fault forms the southeastern boundary. During the Kocaeli Mw7.4 earthquake in 1999 surface ruptures with displacements of up to 5m were observed along the North Anatolian Fault (Komazawa et al., 2002).

Built environment and demography of Adapazari

Adapazari is an important industrial and agricultural area in the western part of Turkey. It is the capital of the Sakarya Province with a population of 183000, according to the 1997 census. The center of the city lies within the fertile plain described above, formed by the recent fluvial activity of the Sakarya and Cark Rivers. The city center comprises both new and old construction. Adapazari has experienced significant growth during recent decades. The population doubled from 1967 to 1990. During the industrial boom of the early 1980s, the city received immigrants from poorer regions of the country. The pressure to develop new housing to accommodate the new arrivals led to a loosening of the rule of not exceeding the two-story construction limit (Green, 2005). From 1990 to 1997 the population increased by another 10%. The main construction types in the city are 3 to 6-story reinforced concrete frame buildings and older 1 to 2-story timber/brick buildings. The reinforced concrete frame buildings are mostly non-ductile, with large openings in the ground stories for commercial use and masonry infill walls in the upper stories. The foundations are very stiff compared to the foundations of buildings of similar height. They generally consist of an about 30cm thick reinforced concrete mat, stiffened with 30cm wide and 100cm deep grade beams spaced typically about 5m apart in both directions (Sancio et al., 2004). The space between the grade beams is filled with compacted soil and then covered with a thin concrete floor slab. The foundation-related failures which occurred during the earthquake in 1967 are probably the reason for these atypically stiff foundations. The stiffness of the foundations may be the reason why many structures tilted without experiencing significant structural damage during the 1999 earthquake. Many of the buildings with the stiff foundations responded like rigid bodies undergoing significant differential movement, tilt or lateral translation (Figure 5.1).
Damaging earthquakes in Adapazari

Of all the cities affected by the August 17, 1999 Kocaeli earthquake, Adapazari suffered the largest level of gross building damage. The Ministry of Public Works quantified the number of severely damaged or collapsed buildings as 5078, which is 27% of the total building stock in Adapazari. The official loss of life in Adapazari was 2627. The fact that the city center of Adapazari lies only 7km from the ruptured fault, and that the city center is underlain by 200m deep alluvial sediments, contributed to the severity of the building damage. Building damage was concentrated in the central area of the city, whereas relatively less damage was observed in the south, where the bedrock is closer to the surface. Another factor for this heterogeneous distribution of building damage is the greater density of mid-rise buildings (4 to 6 stories) in the center in comparison with the low-rise residential buildings on the outskirts of the city (Bird et al., 2004; DRM, 2004). The building damage can be classified into two failure modes: structural system failures due to excessive ground shaking, and foundation displacements of various forms and levels due to bearing capacity failure. The two failure modes were observed to be mutually exclusive. The observed foundation displacements imply a nonlinear response of the soil, which in turn may have provided a means of natural base isolation for some buildings during the 1999 earthquake (Bakir et al., 2005; Mollamahmutoglu et al., 2003).

In the last century, Adapazari suffered damage from two other major earthquakes associated with the North Anatolian Fault. The first was in 1943 with a magnitude of 6.6 on the Richter scale and an epicenter 10km east of the city; about 70% of the buildings were damaged during the event. The second major earthquake was in 1967 with a magnitude of 7.1 on the Richter scale and an epicenter 27km southeast of the city. The structural damage was not severe during the 1967 earthquake (Bakir et al., 2005; Ambraseys and Zatopek, 1969).
GIS for Adapazari

Adapazari is bounded by the Sakarya River in the east, the Cark River in the west, the main road "Cevre Yolu" in the north and the Istanbul-Ankara Highway in the south. The municipality of Adapazari provided a CAD drawing with the buildings indicated as polygons. The number of stories of each polygon, along with the occupancy, was also given for some of the polygons. Polygons with a size of less than 50m2 were screened out, assuming them to be objects other than buildings. Polygons without an indication of occupancy were assumed to be residential buildings. In this way a GIS database was established comprising a total of 22489 buildings. These 22489 buildings were classified into six occupancy classes (residential, industry, schools, official, sacral and hospital) and six structural classes (1 and 2-story masonry structures, 3 to 6-story reinforced concrete frames); a minimal sketch of this classification step is given after Table 5.1. Table 5.1 summarizes the building types in the test area and Figure 5.2 illustrates the GIS map of the buildings in Adapazari. For illustration purposes the 1246 5-story residential buildings were chosen. Since the seismic design codes applicable in Turkey evolved particularly after 1975, the construction year of the buildings is classified as pre-1980 or post-1980. The generic reinforced concrete bare frames are designed according to TS500-1975 (1974) for the pre-1980 structures and according to TS500-1984 (1983) and the Turkish Seismic Code TDY (1975) for the post-1980 structures (see also Section 4.3). It is assumed that all buildings in the old districts of Adapazari belong to the pre-1980 class (534 buildings) and the remaining buildings belong to the post-1980 class (712 buildings). All hospital buildings are from the post-1980 construction period. Figure 5.3 illustrates the buildings considered.

Table 5.1: Building stock in the test area in Adapazari.

Structure class       Residential  Industry  Schools  Official  Sacral  Hospital  Total
Masonry 1-Story       6932         342       6        13        1       0         7294
Masonry 2-Story       7690         93        27       24        57      0         7891
RC frame 3-Story      3734         14        32       17        3       0         3800
RC frame 4-Story      2100         8         16       20        1       0         2145
RC frame 5-Story      1246         7         4        6         0       5         1268
RC frame 6-Story      90           0         0        1         0       0         91
Total                 21792        464       85       81        62      5         22489
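The classification step referred to above can be outlined as in the following sketch. The attribute names and the simple dictionary records stand in for the actual CAD/GIS attribute tables and are assumptions for illustration only.

def classify_building(polygon):
    # Screen and classify one CAD polygon into occupancy and structure class (None = not a building)
    if polygon["area_m2"] < 50.0:            # objects smaller than 50 m2 are screened out
        return None
    occupancy = polygon.get("occupancy") or "residential"   # missing occupancy -> residential
    stories = polygon["stories"]
    structure = ("Masonry %d-Story" % stories) if stories <= 2 else ("RC frame %d-Story" % stories)
    era = "pre-1980" if polygon.get("old_district", False) else "post-1980"
    return {"occupancy": occupancy, "structure": structure, "era": era}

print(classify_building({"area_m2": 320.0, "stories": 5, "occupancy": None, "old_district": True}))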


Figure 5.1: Building failures in Adapazari (from http://www.sdr.co.jp/damege_tr).


Figure 5.2: GIS map of the city of Adapazari (Occupancy classes).


Figure 5.3: GIS map of the considered buildings.


5.1 Example 1: Decision for retrofitting structures


Two decision alternatives are considered: strengthening the reinforced concrete moment resisting frames by column jacketing, or no action. First the BPN for the seismic hazard (Section 4.1), the BPN for the soil response (Section 4.2), the BPN for the structural response (Section 4.3) and the BPN for the consequence assessment (Section 4.4) are integrated. For the identified decision situation, the possible actions are given in the Retrofit node: no action and retrofitting by column jacketing are considered as decision alternatives. The main BPN for the risk management problem is constructed in this way (Figure 5.4).
[Figure 5.4 diagram: BPN with nodes Magnitude, Distance, Epsilon PGA, Epsilon SD, PGA, SD, Epsilon Damage, Liquefaction, Structure Class, Occupancy Class, Construction Year, Damage, Retrofit? and Cost.]

Figure 5.4: Bayesian probabilistic network for earthquake risk management.

The BPN given in Figure 5.5 is applied to each of the 534 5-story buildings constructed before 1980. First, the specific data for each building concerning the story area and the probability of liquefaction for the given combination of magnitude and peak ground acceleration at the location of the building are incorporated into the BPN from the GIS database. For the present example the structure class is either "Typ5 residential building constructed before 1980" or "Typ5 retrofitted residential building constructed before 1980", depending on the action in the Retrofit node. In the Damage node the fragility curves developed in Section 4.3 are implemented, including their uncertainty, given the state of liquefaction and the spectral displacement level.
Case 1: Time-independent seismic hazard (Poisson)

Nine seismic sources are considered within a radius of 100km from the city center of Adapazari (see Section 4.1). Assuming that only one of the seismic sources is activated during an event, i.e. disregarding the rupture of two or more sources as well as cascading effects, the earthquake risk for each building due to each seismic source is calculated. For each of the seismic sources the distribution of magnitudes and distances is incorporated in Figure 5.5.


As explained in Section 4.1, a minimum magnitude for damaging earthquakes of Mw=5.0 is considered when calculating the seismic hazard. This means that the incorporated magnitude distributions are calculated for the occurrence of one earthquake with a magnitude greater than Mw=5.0. Hence the calculated earthquake risks must be multiplied by the rate of occurrence of earthquakes greater than Mw=5.0 for each of the seismic sources. The consequences are implemented in terms of costs, conditional on the damage, the story area and the number of fatalities. Direct consequences as well as indirect consequences, i.e. the number of fatalities, are considered when implementing the table for the Cost node. Damage can influence the cost directly (for the damage state "Yellow-Limited use" repair costs and for the damage state "Red-Unsafe" rebuilding costs are considered) or indirectly via the number of fatalities. Details of the calculation of the consequences are given in Section 4.4. For the present example the future costs for each of the 50 considered years have to be discounted to their present value at the time when the decision has to be made; the discount factor given in Equation 2.2 is applied when calculating the costs. In the literature, mainly two reconstruction strategies are considered (Rosenblueth and Mendoza, 1971): i) surrendering the structure after the first failure and ii) reconstructing the structure after failure. In this example a combination of these is considered: collapsed structures are surrendered and damaged structures are renewed.
[Figure 5.5 diagram: the BPN of Figure 5.4 supplied with soil information (probability of liquefaction), seismicity data (map of the seismic sources S1-S25) and structure data.]

Figure 5.5: Bayesian probabilistic network for earthquake risk management.


Using the BPN given in Figure 5.4 according to the scheme given in Figure 5.5, the expected value of the earthquake risk is calculated for both decision alternatives in the Retrofit node for each of the considered buildings. These expected values are summed over all nine seismic sources. The optimal decision is based on a comparison of the expected total costs of the decision alternatives for the reference period of 50 years. Formally, the expected value of the costs for the two decision alternatives is calculated as:

E[u \mid a = a_1] = \sum_{i=1}^{9} \sum_{j=1}^{3} P(DS_{ij,a_1}) \, C_{j,a_1} \, e^{-0.02t} \, \nu_i    (5.1)

E[u \mid a = a_2] = \sum_{i=1}^{9} \sum_{j=1}^{3} P(DS_{ij,a_2}) \, C_{j,a_2} \, e^{-0.02t} \, \nu_i    (5.2)

a^{*} = \arg\min_{a \in \{a_1, a_2\}} E[u \mid a]    (5.3)

Here, a_1 is the decision alternative "No Action", a_2 is the decision alternative "Retrofitting", a^{*} is the optimal decision, E[u | a = a_1] is the associated expected cost for a_1, E[u | a = a_2] is the associated expected cost for a_2, DS_{ij} is the j-th damage state of the building due to seismic source i, C is the associated cost for the damage state, e^{-0.02t} is the discounting factor, and \nu_i is the rate of occurrence of earthquakes with magnitude greater than Mw=5.0 for seismic source i. The summation over j covers the three damage states "Green-No damage", "Yellow-Repair needed" and "Red-Collapsed". The summation over i covers the nine seismic sources considered. The calculation of the marginal probabilities of the damage states in Equations 5.1 and 5.2 is carried out using the BPNs for each building and seismic source with the structure- and site-specific information. The evaluation of the BPNs was carried out in the GIS environment using the commercial software Hugin (2008). The expected costs are thus calculated for each building and an optimal action is proposed. For all 534 5-story buildings constructed before 1980 the optimal decision with a time-independent seismic hazard is calculated. In Figure 5.6 the optimal action for each building is illustrated.
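The following sketch reproduces the logic of Equations 5.1 to 5.3 for a single building: the damage-state probabilities per seismic source, as obtained from the evaluated BPN, are weighted by the costs and the annual occurrence rates ν_i, discounted at 2%, and the alternative with the smaller expected cost is chosen. The accumulation of the discounted annual risk over the 50-year reference period is one possible reading of the discounting described in the text, and all numbers are illustrative only.

import math

DISCOUNT_RATE = 0.02
REFERENCE_YEARS = 50

def expected_cost(p_damage_by_source, costs, nu):
    # p_damage_by_source[i][j]: P(damage state j | source i) from the evaluated BPN
    # costs[j]: consequence of damage state j in USD; nu[i]: annual rate of Mw > 5.0 events on source i
    annual = sum(nu[i] * sum(p * c for p, c in zip(p_i, costs))
                 for i, p_i in enumerate(p_damage_by_source))
    # accumulate the discounted annual risk over the reference period
    return sum(annual * math.exp(-DISCOUNT_RATE * t) for t in range(REFERENCE_YEARS))

def optimal_action(expected_costs):
    # Equation 5.3: the decision alternative with the smallest expected cost
    return min(expected_costs, key=expected_costs.get)

# Illustrative numbers for one building and two seismic sources (Green/Yellow/Red probabilities)
nu = [0.12, 0.08]
data = {"No Action":    ([[0.90, 0.07, 0.03], [0.95, 0.04, 0.01]], [0.0, 13125.0, 131250.0]),
        "Retrofitting": ([[0.94, 0.05, 0.01], [0.97, 0.025, 0.005]], [5000.0, 18125.0, 136250.0])}
print(optimal_action({a: expected_cost(p, c, nu) for a, (p, c) in data.items()}))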
Case 2: Time-dependent seismic hazard (characteristic earthquakes)

For 14 of the 534 5-story buildings constructed before 1980 the optimal decision is also calculated with a time-dependent seismic hazard. The calculation scheme when using a time-dependent seismic hazard is in principle the same. However, the distribution of magnitudes differs from the time-independent case: it is no longer assumed to be constant for each year. In Section 4.1 a magnitude frequency distribution for time-dependent characteristic earthquakes is considered. Given the time-dependent seismic hazard model, the distribution of magnitudes is calculated for each of the 50 years of the reference period as described in Section 4.1. The aforementioned calculation scheme needs to be extended by one additional loop over the 50 years, each with the corresponding distribution of magnitude. Formally, the expected value of the costs for the two decision alternatives is calculated as:


E[u \mid a = a_1] = \sum_{i=1}^{9} \sum_{j=1}^{3} \sum_{t=0}^{50} P(DS_{ijt,a_1}) \, C_{j,a_1} \, e^{-0.02t} \, \nu_i    (5.4)

E[u \mid a = a_2] = \sum_{i=1}^{9} \sum_{j=1}^{3} \sum_{t=0}^{50} P(DS_{ijt,a_2}) \, C_{j,a_2} \, e^{-0.02t} \, \nu_i    (5.5)

a^{*} = \arg\min_{a \in \{a_1, a_2\}} E[u \mid a]    (5.6)

Here, a_1 is the decision alternative "No Action", a_2 is the decision alternative "Retrofitting", a^{*} is the optimal decision, E[u | a = a_1] is the associated expected cost for a_1, E[u | a = a_2] is the associated expected cost for a_2, DS_{ijt} is the j-th damage state of the building due to seismic source i in year t, C is the associated cost for the damage state, e^{-0.02t} is the discounting factor, and \nu_i is the rate of occurrence of earthquakes with magnitude greater than Mw=5.0 for seismic source i. The summation over j covers the three damage states "Green-No damage", "Yellow-Repair needed" and "Red-Collapsed", the summation over i the nine seismic sources considered, and the summation over t the 50 years of the reference period. In Figure 5.7 the optimal action for each building using a time-dependent seismic hazard is illustrated. Table 5.2 gives the number of buildings for which the optimal action is "No Action" or "Retrofitting". For comparison, the corresponding values when using a time-dependent seismic hazard model are also given for the subset of 14 buildings. The time-dependent seismic hazard model leads to a substantial increase in seismic activity, which is why for all 14 buildings of the subset the optimal action is "Retrofitting".

Table 5.2: Buildings with optimal decision "No action" and "Retrofitting".

          Time-independent seismic hazard      Time-dependent seismic hazard
          No Action      Retrofitting          No Action      Retrofitting
All       201            333                   -              -
Subset    3              11                    0              14

Calculation scheme for Example 1

The optimal decision regarding retrofitting for each of the buildings in the city is calculated using the scheme given in Figure 5.8. The BPN in Figure 5.5 is constructed in HUGIN for each seismic source. Only the nodes and arrows need to be specified at this step. The number of discrete states of the nodes and the probability tables are introduced in MATLAB. The main file BPN_PSHA_Adapazari_RM.m calculates the probability distributions related to the seismic hazard as described in Section 4.1.5. The file Fragility_Typ5_O_Res_RM_1.m calculates the discrete probabilities for the states of the node Damage given the states of the node Epsilon Damage and the states of the node SD specified in BPN_PSHA_Adapazari_RM.m. Upon completion of this step one single BPN is constructed and quantified for the structure class considered. The nodes Liquefaction and Cost are site and building specific, respectively; they are quantified in GIS.


For each of the 42 combinations of magnitude (6 states) and PGA (7 states) the probability of liquefaction at each grid point was calculated using the scheme provided in Section 4.2. These probability of liquefaction values were assigned to the buildings given the location and the magnitude-PGA pair. The probabilities were imported into GIS as 42 additional columns in the attribute table of the corresponding shape file of the building class. In GIS, the total story area of each building is calculated and added as an additional column to the attribute table of the shape file. Based on the total story area, an average number of columns (required for estimating the retrofitting costs) and the number of people at risk (required for estimating the fatality costs) are calculated and appended as columns to the attribute table of the shape file. Using the Visual Basic macro files OpenShape.bas, Module1.bas and Module1-NonPoisson.bas, the BPN for the structure class is called and for each building the site and building specific nodes, i.e. the nodes Liquefaction and Cost, are quantified. In this way building and site specific BPNs are generated. These BPNs are evaluated in GIS using Visual Basic as a client and the inference engine of HUGIN as a server. The building and site specific BPNs are evaluated for each decision alternative and each seismic source. Totaling the costs for each decision alternative over all seismic sources yields the total expected cost due to the seismic hazard. The minimum of the total expected costs indicates the optimal decision for each structure (Figure 5.6).

The scheme described above is applicable when the magnitude-recurrence relationship is assumed to be time-independent. Considering a time-dependent magnitude-recurrence relationship requires the following modifications: using the file BPN_PSHA_Adapazari_RM_NonPoisson.m, one BPN is quantified for each seismic source (here, 9 seismic sources) and each year of the time frame considered (here, 50 years). The Liquefaction and Cost nodes of each of these 450 BPNs are quantified for each of the buildings in GIS. These BPNs are again evaluated in GIS using Visual Basic as a client and the inference engine of HUGIN as a server, for each decision alternative and each seismic source. Totaling the costs for each decision alternative over all seismic sources yields the total expected cost due to the seismic hazard, and the minimum of the total expected costs indicates the optimal decision for each structure (Figure 5.7). For comparison the results are given only for a subset of 14 buildings.
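The per-building evaluation loop described above can be summarised as in the sketch below. This is a structural outline only; the actual implementation uses Visual Basic as a client of the HUGIN inference engine, and the callable evaluate_bpn as well as the attribute names stand in for that machinery.

def portfolio_decisions(buildings, seismic_sources, alternatives, evaluate_bpn):
    # Condition the building-specific nodes, evaluate the BPN for every seismic source and
    # decision alternative, total the expected costs and take the minimum per building.
    decisions = {}
    for b in buildings:                      # records of the shape-file attribute table
        totals = {}
        for a in alternatives:               # e.g. "No Action", "Retrofitting"
            totals[a] = sum(
                evaluate_bpn(source=s, alternative=a,
                             p_liquefaction=b["p_liquefaction"],  # per (Mw, PGA) pair, from GIS
                             story_area=b["story_area"],
                             n_columns=b["n_columns"],
                             occupants=b["occupants"])
                for s in seismic_sources)
        decisions[b["id"]] = min(totals, key=totals.get)
    return decisions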


Figure 5.6: Optimal decision for each building (time-independent -Poisson- seismic hazard model).


Figure 5.7: Optimal decision for each building (time-dependent seismic hazard model).


[Figure 5.8 flowchart: construction of the BPN in Figure 5.4 for the structure class Typ5-NR-O-Res; Matlab environment: nodes related to seismic hazard incorporated as described in the calculation scheme in Section 4.1 (BPN_PSHA_Adapazari_RM_Poisson.m, BPN_PSHA_Adapazari_RM.m) and, given the spectral displacement state, calculation of the discrete probabilities for each damage state (Fragility_Typ5_O_Res_RM_1.m); GIS environment: for each (Mw, R) pair the probability of liquefaction at the location of each structure is incorporated, and for the given story area of each building the number of people at risk, the number of fatalities and the value at risk are calculated; for each decision alternative the structure- and site-specific BPN is evaluated and the optimal actions are identified (Module1.bas, Module1-NonPoisson.bas).]

Figure 5.8: Calculation scheme for Figure 5.6 and 5.7.


5.2 Example 2: Assessment of seismic risk


Earthquake risk assessment for each building in the city is performed by using the BPN given in Figure 5.5 with the site and structure specific information available in the GIS platform. Having quantified the probability tables and the consequences, the Bayesian probabilistic network is evaluated for each of the buildings in the city (Bayraktarli and Faber, 2009). For the analysis, the commercial software package Hugin (2008) is used; however, freeware is now also available for this type of analysis. With the information on the story area and the number of stories of each building in the city, the earthquake risk is calculated as the expected total annual cost.

[Figure 5.9 diagram: object-oriented BPN combining the seismic hazard, soil failure, structural damage and consequence sub-networks (nodes Magnitude, Distance, Epsilon PGA, Epsilon SD, PGA, SD, Period, Epsilon Damage, Liquefaction, Structure Class, Occupancy Class, Construction Year, Damage and Cost), supplied with soil information (probability of liquefaction), seismicity data and structure data.]

Figure 5.9: BPN for calculating the earthquake risk for each building.


The total expected cost for the portfolio is calculated by totaling the expected costs of the buildings. This approach has a couple of shortcomings. First of all, simply aggregating the individual expected costs gives the total expected loss for the portfolio but not the distribution of the losses. Furthermore, as the same structural model is applied for each building, the same modeling uncertainties should be applied for each building. The fact that a common event is affecting the region is also disregarded.

A structured way of modeling complex systems with Bayesian probabilistic networks is to define the different underlying models as object classes. An object oriented Bayesian probabilistic network is generated using the subnetworks introduced in Chapter 4 (Figure 5.9). Careful examination of the main BPNs of Example 1 reveals that most of the nodes affect only the building they belong to. There are, however, also nodes which are common to all buildings and nodes which are common to a subset of buildings. The nodes are therefore modeled within hierarchical levels. For the present case the nodes affecting all buildings comprise the highest modeling level. Given a seismic source, the distributions of magnitude and distance should be conditioned on the same state: when performing a probabilistic analysis over different magnitude and distance combinations, it cannot be assumed that one building experiences a Mw=6 earthquake while another building experiences a Mw=7 earthquake. This is assured by modeling one single node for magnitude and distance.

The second modeling level comprises the uncertainty node for the fragility curves. In the present example two structure classes are considered: 5-story buildings constructed before 1980 and 5-story buildings constructed after 1980. Two sets of fragility curves are modeled in Section 4.3 for the two structure classes. As all the buildings in a structure class are assumed to respond as the generic building of that class, the uncertainty nodes should be in the same state: when performing a probabilistic analysis of the structural response, it cannot be assumed that for one building the structural response is given by the fragility curve with an uncertainty of ε = +2σ while for another building it is given by the curve with ε = -2σ. Consistency is assured by modeling a common uncertainty node for each considered structure class. The third and last modeling level comprises all other nodes, which affect only the building they belong to; these nodes are independent of the corresponding nodes of the other buildings.

This distinction of the nodes into hierarchical levels does not need to be considered in Example 1, since the optimal decision there is based on the expected value of the costs. When modeling the dependency in a serial system, the expected values of the parameters are invariant, while the variances change. For residential buildings, it can be assumed that the collapse of any residential building has a negligible influence on the cost of the collapse of another residential building. In reality this may not be true, especially when a very high number of residential buildings collapse and the economy is affected, e.g. the unit prices establish a dependency. In contrast, when considering lifelines, e.g. hospitals, the collapse of one hospital has an influence on the cost of the collapse of another hospital. This may be modeled as a parallel system.
When modeling the dependency in a parallel system, both the expected values and the variances of the parameters are affected (Schubert, 2009). Figure 5.9 illustrates the application of the BPN with the site and structure specific information for each building.
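The statement that modeling the common-cause dependency leaves the expected portfolio loss essentially unchanged while altering its distribution can be illustrated with a small Monte Carlo sketch: the same marginal collapse probability is applied to every building, once with an independent draw per building and once with one common draw of the event severity for the whole portfolio. All numbers are illustrative assumptions.

import random
random.seed(1)

N_BUILDINGS, LOSS_PER_COLLAPSE = 500, 131250.0
P_SEVERE, P_COLLAPSE_SEVERE, P_COLLAPSE_MILD = 0.1, 0.4, 0.02
P_COLLAPSE = P_SEVERE * P_COLLAPSE_SEVERE + (1 - P_SEVERE) * P_COLLAPSE_MILD  # same marginal probability

def portfolio_loss(common_cause):
    if common_cause:  # one common draw of the event severity for all buildings
        p = P_COLLAPSE_SEVERE if random.random() < P_SEVERE else P_COLLAPSE_MILD
        return sum(LOSS_PER_COLLAPSE for _ in range(N_BUILDINGS) if random.random() < p)
    # independent aggregation: every building sees its own marginal collapse probability
    return sum(LOSS_PER_COLLAPSE for _ in range(N_BUILDINGS) if random.random() < P_COLLAPSE)

for common in (True, False):
    losses = [portfolio_loss(common) for _ in range(2000)]
    mean = sum(losses) / len(losses)
    std = (sum((x - mean) ** 2 for x in losses) / len(losses)) ** 0.5
    print("common cause" if common else "independent", round(mean), round(std))  # similar means, very different spread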



[Figure 5.10 diagram: common nodes Magnitude, Distance and Epsilon Damage shared by the building objects of the classes Typ5 O (Building 1 to Building 534) and Typ5 N (up to Building 712); the Cost nodes of the individual buildings feed the portfolio node Cost S2.]

Figure 5.10: Earthquake risk for the portfolio of 5-story residential buildings from one seismic source.

To overcome the aforementioned shortcomings, the Bayesian probabilistic network in Figure 5.9 is modeled as an object class and applied to each building. In addition to the three input nodes Liquefaction, Occupancy Class and Structure Class, which are always conditioned, three common nodes are modeled: the common hazard event with the magnitude and distance of the earthquake, and the modeling uncertainty of the fragility curves for the damage assessment (node Epsilon Damage). A node Cost portfolio is modeled, conditioned on the Cost nodes of the individual buildings. The application of the object oriented Bayesian probabilistic network is illustrated in Figure 5.10.

The results for the two cases, aggregating the risk with and without considering common cause effects, are given in Figure 5.11 for the seismic source S2. When the individual risks are aggregated considering common cause effects, the loss exceedance curve is relatively smooth and centered. When the individual risks are aggregated without considering common cause effects, the loss exceedance curve underestimates the probability of exceedance of higher total portfolio losses. This is in line with the expectation that, given an event, there will always be a certain "leveling out" between the losses generated by the individual objects within an area.

For the three common nodes considered (i.e. magnitude, distance and fragility curve uncertainty) the composition of the loss exceedance curve is illustrated in Figure 5.12. The magnitude node was discretized into six states, the distance node into five and the fragility curve uncertainty into five states; the total number of combinations is hence 150. For all of the 150 combinations the loss distribution is calculated. These 150 loss distributions are multiplied by the joint occurrence probability of the magnitude-distance-fragility curve uncertainty triple and totaled according to the total probability theorem.
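The total-probability aggregation described above can be sketched as follows: the loss distribution conditional on each combination of the common nodes is weighted by the joint probability of that combination and summed. Loss distributions are represented here as dictionaries over discrete cost states; the names and numbers are illustrative, and the sketch assumes the three common nodes are marginally independent, which may differ from the joint distribution used in the dissertation.

from itertools import product

def aggregate_loss_distribution(p_magnitude, p_distance, p_epsilon, conditional_loss):
    # Unconditional loss distribution via the total probability theorem;
    # conditional_loss(m, r, e) returns {cost_state: probability} for one combination.
    total = {}
    for (m, pm), (r, pr), (e, pe) in product(p_magnitude.items(),
                                             p_distance.items(),
                                             p_epsilon.items()):
        weight = pm * pr * pe
        for cost_state, p in conditional_loss(m, r, e).items():
            total[cost_state] = total.get(cost_state, 0.0) + weight * p
    return total

p_m = {"Mw6": 0.7, "Mw7": 0.3}
p_r = {"R20": 0.5, "R40": 0.5}
p_e = {"e-1": 0.25, "e0": 0.5, "e+1": 0.25}
print(aggregate_loss_distribution(p_m, p_r, p_e,
      lambda m, r, e: {"low": 0.8, "high": 0.2} if m == "Mw6" else {"low": 0.3, "high": 0.7}))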


The corresponding loss exceedance curve is illustrated in Figure 5.11. The waves in the loss distribution and in the loss exceedance curve result mainly from the small number of states in the magnitude and distance nodes. Figure 5.13 illustrates loss exceedance curves for the 5-story residential buildings in the city due to the seismic source S2 for different modeling choices. For the sake of completeness, the influence of considering common cause effects in the portfolio loss estimation is illustrated in a). The effect of considering a time-dependent earthquake recurrence model instead of a Poisson recurrence model is illustrated in b). The details of the seismic hazard model considered are given in Section 4.1. There is only a minor influence on the loss exceedance curve when considering the time-dependent recurrence model, since the risks are calculated per annum and the probability distribution of the magnitude barely differs for the first years, here the year 2009. In c) the loss exceedance curves for the 5-story residential buildings in the city due to the seismic source S2 are given for the total losses and the property losses. Figure 5.13 also illustrates the effect of using different discretisation schemes when calculating and implementing the probability tables, here for the SD node. First, the SD node is discretized using only 7 states. The bounds of these 7 discrete states are adapted considering the relative frequencies of each quadruple (magnitude, distance, epsilon SD and period T). Alternatively, other discretisation schemes with equal spacing of the state bounds of the SD node are evaluated; here, three cases with the SD node discretized into 10, 20 and 50 states are considered. The results clearly show that a very fine, equally spaced discretisation (50 states) converges to the adapted discretisation scheme with 7 states. This means that in constructing a BPN, either a fine discretisation or an adaptive discretisation scheme needs to be chosen. Since a very fine discretisation quickly increases the computational effort involved in evaluating a BPN, adaptive discretisation schemes are often unavoidable. This situation may however change as computational efficiency increases. In Figure 5.14 the aggregation scheme for all seismic sources affecting the considered region is illustrated. Figure 5.15 and Figure 5.16 illustrate the loss exceedance curves for the 5-story residential building portfolio due to each seismic source and the aggregation over all seismic sources, for the portfolio total losses and the portfolio property losses respectively.

Figure 5.11: Loss exceedance curves for 5-story residential buildings.


Figure 5.12: Illustration of the composition of the loss exceedance curve considering common cause effects.

Figure 5.13: Parametric study of the loss exceedance curves for 5-story residential buildings.
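The difference between the adapted and the equally spaced discretisation of the SD node compared in Figure 5.13 can be sketched as follows: the adapted scheme places the state bounds according to the relative frequency of the simulated spectral displacement values (i.e. at quantiles), whereas the equally spaced scheme simply divides the value range into bins of constant width. The sketch below is illustrative only; the sample of SD values is a hypothetical placeholder.

```matlab
% Illustrative sketch: adapted (quantile-based) versus equally spaced
% discretisation of a continuous node such as SD. The sample is hypothetical.
sdSample = lognrnd(3.0, 0.6, 1e5, 1);   % hypothetical spectral displacements [mm]

% adapted scheme: 7 states, bounds follow the relative frequencies (quantiles)
nAdapted = 7;
boundsAdapted = quantile(sdSample, (1:nAdapted-1)/nAdapted);

% equally spaced scheme: e.g. 10, 20 or 50 states over the same value range
nEqual = 10;
boundsEqual = linspace(min(sdSample), max(sdSample), nEqual+1);
boundsEqual = boundsEqual(2:end-1);      % interior bounds only

% discrete probability tables implied by the two schemes
pAdapted = histcounts(sdSample, [-inf boundsAdapted inf]) / numel(sdSample);
pEqual   = histcounts(sdSample, [-inf boundsEqual   inf]) / numel(sdSample);
```

With few equally spaced states most of the probability mass falls into a small number of bins, whereas the quantile-based bounds spread the mass evenly; this is why the coarse adapted scheme performs comparably to a much finer equally spaced one.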

Figure 5.17 and Figure 5.18 illustrate the loss exceedance curves for the 5-story hospital building portfolio due to each seismic source and the aggregation over all seismic sources, for the portfolio total losses and the portfolio property losses respectively. The influence of the common cause effects is less pronounced than for the residential buildings, as there are only five hospitals, which does not lead to a pronounced leveling out. Figure 5.17 and Figure 5.18 also illustrate the influence of a non-proportional increase of the indirect consequences for lifelines such as hospitals. The total loss of each hospital is calculated for two damage states, i.e. no damage or collapse. In contrast to the losses for residential buildings, the collapse of one hospital increases the cost of the collapse of the remaining hospitals. Here a very simple assumption is applied: the portfolio losses of the five hospitals are multiplied by 1, 1.5, 2, 2.5 and 3 if in total 1, 2, 3, 4 and 5 hospitals collapse, respectively.
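One possible reading of this multiplier rule is sketched below; it is only an interpretation of the assumption stated above, and the individual hospital losses used here are hypothetical placeholders.

```matlab
% Illustrative sketch: non-proportional indirect consequences for lifelines.
% The portfolio loss is amplified depending on how many hospitals collapse.
hospitalLoss = [2.1e6 1.8e6 2.5e6 1.9e6 2.2e6];   % hypothetical collapse losses [USD]
collapsed    = logical([1 0 1 1 0]);              % hypothetical damage states
multiplier   = [1 1.5 2 2.5 3];                   % factor per number of collapses

nCollapsed = sum(collapsed);
if nCollapsed > 0
    portfolioLoss = multiplier(nCollapsed) * sum(hospitalLoss(collapsed));
else
    portfolioLoss = 0;
end
```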

Figure 5.14: Earthquake risk for the portfolio of 5-story buildings (all seismic sources).



Figure 5.15: Total loss exceedance curves with and without considering common cause effects for the portfolio of 5-story residential buildings.


Figure 5.16: Property loss exceedance curves with and without considering common cause effects for the portfolio of 5-story residential buildings.



Figure 5.17: Total loss exceedance curves with and without considering common cause effects for the portfolio of 5-story hospital buildings.



Figure 5.18: Property loss exceedance curves with and without considering common cause effects for the portfolio of 5-story hospital buildings.



Calculation scheme for Example 2

The calculation scheme for the loss exceedance curves given in Figure 5.11 is shown in Figure 5.19. The BPN in Figure 5.9 is constructed in HUGIN for seismic source S2. Only the nodes and arrows need to be specified at this step. The number of discrete states of the nodes and the probability tables are introduced in MATLAB. The main file BPN_PSHA_Adapazari_RA_Res.m calculates the probability distributions related to seismic hazard as described in Section 4.1.5. The file Fragility_Typ5_O_Res_RA_1.m calculates the discrete probabilities for the states of the node Damage given the states of the node Epsilon Damage and the states of the node SD specified in BPN_PSHA_Adapazari_RA_Res.m. Upon completion of this step one single BPN is constructed and quantified for the structure class considered. The nodes Liquefaction and Cost are site and building specific, respectively. They are quantified in GIS. For each of the 42 combinations of magnitude (6 states) and PGA (7 states) the probability of liquefaction at each grid point is calculated using the scheme provided in Section 4.2. These probabilities are assigned to the buildings given their location and the magnitude-PGA pair, and imported into GIS as 42 additional columns in the attribute table of the corresponding shape file of the building class. In GIS, the total story area of each building is calculated and added as an additional column to the attribute table of the shape file. Based on the total story area, the number of people at risk (required for estimating fatality costs) is calculated and appended as columns to the attribute table of the corresponding shape file. Using the Visual Basic macro files Module1-Ex2.bas and OpenShape.bas, the BPN for the structure class is called and for each building the site and building specific nodes, i.e. the nodes Liquefaction and Cost, are quantified. By doing this, building and site specific BPNs are generated. These BPNs are evaluated in GIS using Visual Basic as a client and the inference engine of HUGIN as a server. These BPNs are applied individually when the analyst is interested in the risk of individual buildings. When a portfolio is considered, however, those nodes need to be identified which influence the individual buildings jointly at different hierarchical levels. As illustrated in Figure 5.10, the nodes Magnitude and Distance represent one hierarchical level, and the nodes Epsilon Damage represent another hierarchical level. This large BPN is decoupled by conditioning on each of the states of the common nodes and evaluating individual BPNs. For each of the 150 combinations of the states of the common nodes (nodes Magnitude, Distance and Epsilon Damage with 6, 5 and 5 states, respectively) the distributions of the node Cost for each of the 1246 buildings (534 Typ5-O and 712 Typ5-N) are calculated using the file CostDistS2.m. For each of the 150 combinations the distribution of the node Cost S2 is calculated by sampling the state of the node Cost for each of the 1246 buildings using the file Aggregation_Poisson_S2.m. These distributions are given in Figure 5.12. The 150 distributions are multiplied by the joint occurrence probability of the corresponding Magnitude-Distance-Epsilon Damage triple and summed according to the total probability theorem, resulting in the distribution of the node Cost S2. The results are given in Figure 5.11. For comparison, the existence of common nodes is disregarded and the distribution of the node Cost for each building is evaluated for the stand-alone BPNs by the file CostDistS2_Integrated.m. The distribution of the node Cost S2 is then calculated by sampling the distributions of the nodes Cost for each of the 1246 buildings using the file Aggregation_Poisson_S2_Integrated.m.
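The Monte Carlo step performed by the aggregation files can be sketched as follows: for a given combination of the common-node states, one state of the node Cost is sampled for each building from its conditional distribution, the sampled costs are summed, and repeating this many times yields the conditional distribution of the portfolio cost. The sketch below is illustrative only; the bin values, building count and conditional tables are hypothetical placeholders and do not reproduce the actual files.

```matlab
% Illustrative sketch of the per-combination Monte Carlo aggregation.
% pCost(b,:) is the conditional distribution of the node Cost of building b
% for one fixed combination of the common nodes; costBins are the bin values.
nB = 1246; nBins = 10; nSim = 1e4;
costBins = linspace(0, 5e4, nBins);                      % hypothetical cost values [USD]
pCost = rand(nB, nBins); pCost = pCost ./ sum(pCost,2);  % hypothetical probability tables
cdfCost = cumsum(pCost, 2);                              % per-building sampling CDFs

portfolioCost = zeros(nSim,1);
for i = 1:nSim
    u = rand(nB,1);
    % sample a cost bin for every building by inverting its discrete CDF
    idx = sum(u > cdfCost, 2) + 1;
    portfolioCost(i) = sum(costBins(idx));
end
% conditional portfolio loss distribution for this common-node combination
[pPortfolio, edges] = histcounts(portfolioCost, 20, 'Normalization', 'probability');
```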


The same calculation scheme is used for each seismic source, for the different structure classes (residential/hospital), the different magnitude-recurrence relationships (Poisson/time-dependent), the different loss types (total loss/property loss) and the different numbers of discrete states (Figures 5.13 and Figures 5.15 to 5.18).
Figure 5.19: Calculation scheme for Figure 5.11.


5.3 Example 3: Update of fragility curves


One of the strengths of the proposed framework using BPNs is the ability to systematically update any parameter in the model with new information. Data in the form of reports on damage to buildings in an affected area may be available after an earthquake. A post-earthquake damage inspection team could have been instructed to apply the ATC-20 (1989) methodology and assign a color tag (Green-Safe, Yellow-Limited use or Red-Unsafe/Collapsed) to each inspected building. Based on the damage distribution of the buildings after the Kocaeli Mw7.4 earthquake of 17th August 1999, a map indicating the percentage of damaged buildings within each district of Adapazari is available (DRM, 2004). The same report also provides a map indicating the areas of the city which liquefied. The BPN applied for the Bayesian updating problem is in principle the same as for the risk management and risk assessment problems illustrated in Example 1 and Example 2. Besides the fact that all cost-related nodes have been removed, an important modification concerns the handling of the uncertainty of the fragility curves, which are to be updated using the data captured by post-earthquake damage surveys. In the foregoing examples the uncertainty in the fragility curves was modeled by a single node (the Epsilon fragility node). To enable updating in the present case, the uncertainty of the fragility curves is modeled by four nodes: the parameters λ and ζ of the lognormal distribution for each of the two damage states. The four uncertainty nodes are discretised into three states (mean+sigma, mean, mean−sigma) with the probabilities (0.159, 0.682, 0.159) respectively. The BPN is illustrated in Figure 5.20. The available map indicating the percentage of damaged buildings is used to assign at random one of the two damage states (Green-No damage and Red-Collapsed) to each of the 201 "5-story residential buildings constructed before 1980" for which damage information is available. It should be noted that damage information is available for only 201 of the 534 "5-story residential buildings constructed before 1980". It is also important to note that building-specific information on damage is not necessary, as the buildings are grouped into structure classes and the structural behavior of generic buildings is assumed to represent their behavior. Furthermore, from the map indicating liquefaction, a tag on the liquefaction state is assigned to each of the 201 buildings.
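The state probabilities (0.159, 0.682, 0.159) correspond to discretising a normally distributed parameter at the bounds mean ± sigma, as the following short check illustrates (a sketch, not taken from the thesis files).

```matlab
% Discretising a normal uncertainty into three states at mu +/- sigma:
% the resulting state probabilities are approximately (0.159, 0.682, 0.159).
pLow  = normcdf(-1);               % P(X < mu - sigma)               = 0.1587
pMid  = normcdf(1) - normcdf(-1);  % P(mu - sigma < X < mu + sigma)  = 0.6827
pHigh = 1 - normcdf(1);            % P(X > mu + sigma)               = 0.1587
fprintf('%.3f  %.3f  %.3f\n', pHigh, pMid, pLow);
```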
Figure 5.20: Main Bayesian probabilistic network constructed for Bayesian updating.


Figure 5.21: Collapsed building statistics for buildings with four and more stories in Adapazari after the Mw7.4 Kocaeli earthquake of 17th August 1999 (Bakir et al., 2005) and Bayesian probabilistic network for the update of fragility curves for the hospital buildings.

The BPN for the prior case, along with information on the liquefaction state and structural damage state of 201 buildings, is thus available. The characteristics of the earthquake are no longer uncertain.


The magnitude was Mw7.4 and the distance to the activated seismic source S3 was about 50 km. This information is used to condition the nodes Magnitude and Distance. Starting with the first of the 201 buildings, the BPN is further conditioned using the information on the liquefaction state and structural damage state of that building. The two nodes Liquefaction and Damage are instantiated correspondingly and the parameters of the fragility curves are updated using the BPN. The calculation scheme is illustrated in Figure 5.21. The probability distributions of the uncertainty nodes are updated using the software HUGIN within a GIS environment. The BPN is then modified by implementing these updated probability distributions for the four uncertainty nodes. The same procedure is applied successively for all of the 201 buildings. The probabilities of the uncertainty nodes are given in Tables 5.3 and 5.4.

Table 5.3: Probability distribution of the uncertainty nodes.

                          Prior                      Posterior
                   mean+σ   mean    mean−σ    mean+σ   mean    mean−σ
Yellow   Lambda    0.159    0.682   0.159     0.662    0.330   0.008
         Zeta      0.159    0.682   0.159     0.088    0.640   0.272
Red      Lambda    0.159    0.682   0.159     0.000    0.002   0.999
         Zeta      0.159    0.682   0.159     0.904    0.095   0.001

The posterior distribution parameters are calculated by weighted averaging:

λ_Yellow = 0.662·(3.758 + 0.034) + 0.330·(3.758) + 0.008·(3.758 − 0.034) = 3.780
ζ_Yellow = 0.088·(0.390 + 0.024) + 0.640·(0.390) + 0.272·(0.390 − 0.024) = 0.386
λ_Red = 0.000·(4.215 + 0.055) + 0.002·(4.215) + 0.999·(4.215 − 0.055) = 4.160
ζ_Red = 0.904·(0.346 + 0.040) + 0.095·(0.346) + 0.001·(0.346 − 0.040) = 0.382

The updated fragility curves for the building class are given in Figure 5.22.

Table 5.4: Parameters of the prior and posterior lognormal distribution for the fragility curves.

                           Prior                Posterior
                    Mean      Sigma        Mean
Yellow   Lambda     3.758     0.034        3.780
         Zeta       0.390     0.024        0.386
Red      Lambda     4.215     0.055        4.160
         Zeta       0.346     0.040        0.382
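The essence of this sequential updating can be sketched without the HUGIN machinery: for each observed building, the posterior probabilities of the discrete parameter states follow from Bayes' theorem, with the likelihood being the collapse (or survival) probability implied by the fragility curve at the given spectral displacement and parameter state. The sketch below is a strongly simplified stand-in, not the thesis procedure (a single damage state, a fixed spectral displacement, no liquefaction branch); the observation vector and the spectral displacement are hypothetical.

```matlab
% Illustrative sketch: sequential Bayesian updating of a fragility parameter.
% Only the collapse ("Red") fragility with a discretised lambda is considered;
% zeta is held fixed and the spectral displacement is assumed known.
lambdaStates = [4.215+0.055, 4.215, 4.215-0.055];  % mean+sigma, mean, mean-sigma
pLambda      = [0.159 0.682 0.159];                % prior state probabilities
zeta = 0.346;
sd   = 60;                                         % hypothetical Sd [mm]
collapsedObs = logical([1 0 0 1 0]);               % hypothetical observations

for k = 1:numel(collapsedObs)
    % likelihood of the observation under each lambda state
    pCol = normcdf((log(sd) - lambdaStates) / zeta);   % lognormal fragility P(collapse|Sd)
    if collapsedObs(k)
        lik = pCol;
    else
        lik = 1 - pCol;
    end
    pLambda = pLambda .* lik;          % Bayes' theorem, unnormalised
    pLambda = pLambda / sum(pLambda);  % normalise
end
lambdaPosterior = sum(pLambda .* lambdaStates);    % posterior mean of lambda
```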

Calculation scheme for Example 3

The calculation scheme for the update of the fragility curves (Figures 5.21 and 5.22) is given in Figure 5.23. The BPN in Figure 5.20 is constructed in HUGIN for seismic source S3, which is assumed to have ruptured in the Mw7.4 Kocaeli earthquake of 1999. Only the nodes and arrows need to be specified at this step.


The number of discrete states of the nodes and the probability tables are introduced in MATLAB. The main file BPN_PSHA_Adapazari_BU.m calculates the probability distributions related to seismic hazard as described in Section 4.1.5. The file Fragility_Typ5_O_Res_RA_2.m calculates the discrete probabilities for the states of the node Damage given the states of the nodes Lambda Yellow, Zeta Yellow, Lambda Red and Zeta Red, and the states of the node SD specified in BPN_PSHA_Adapazari_BU.m. Upon completion of this step one single BPN is constructed and quantified for the structure class considered. The node Liquefaction is site specific and is quantified in GIS. For each of the 42 combinations of magnitude (6 states) and PGA (7 states) the probability of liquefaction at each grid point is calculated using the scheme provided in Section 4.2. These liquefaction probabilities are assigned to the buildings given their location and the magnitude-PGA pair, and imported into GIS as 42 additional columns in the attribute table of the corresponding shape file of the building class. Using the Visual Basic macro file Module1-Ex3.bas the BPN for the structure class is called and for each building the site specific node Liquefaction is quantified. By doing this, site specific BPNs are generated. These BPNs are evaluated in GIS using Visual Basic as a client and the inference engine of HUGIN as a server. The available maps indicating the damage to buildings and the liquefied areas within the region during the Mw7.4 Kocaeli earthquake of 1999 are used to assign tags (1 or 0) indicating damaged/non-damaged and liquefied/non-liquefied to each of the 201 buildings. This 201x2 matrix is read from the attribute table of the corresponding shape file in GIS. The BPN given in Figure 5.20 is evaluated using MATLAB as a client and the inference engine of HUGIN as a server in the file UpdateBPN.m. Using the tags of the first building the BPN is conditioned and the nodes Lambda Yellow, Zeta Yellow, Lambda Red and Zeta Red of the fragility curves are updated. Applying this successively for each of the 201 tag pairs, the parameters of the fragility curves are updated.

Figure 5.22: Fragility curves updated with data from the Mw7.4 Kocaeli earthquake 1999.


Figure 5.23: Calculation scheme for Figures 5.21 and 5.22.


5.4 Example 4: Index of robustness


This example illustrates the use of the index of robustness as an indicator for the comparison of risk reduction measures. The output of Example 2 is used for illustration. As a regional center, damage to the buildings in Adapazari affects the city as well as the region. Robustness of a system is defined through the ratio of the direct risks to the total risks (Faber, 2008). It is thus a measure indicating the relative importance of a system with regard to the hierarchically higher level system. The derivation of the index of robustness for different hierarchical levels is illustrated in this example. More formally, robustness is quantified by means of an index of robustness IR, expressed as the ratio between the direct risks and the total risks:

IR = RD / (RD + RID)    (5.7)

where RD and RID represent the direct and indirect risks respectively. The index of robustness is illustrated at different system levels for the 5-story residential buildings in Adapazari. The direct and indirect risks due to earthquakes for the 5-story residential buildings are calculated in Example 2 and given in Figure 5.15 and Figure 5.16. The index of robustness is estimated for three different system characterizations, see Figure 5.24.
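A minimal numerical sketch of Equation 5.7 for the three system levels is given below; the direct and indirect risk values are those quoted in the following paragraphs.

```matlab
% Index of robustness IR = RD / (RD + RID) for the three system levels:
% individual building, city portfolio of 5-story residential buildings, region.
RD  = [15800, 1148600, 10.0e9];   % direct risks   [USD]
RID = [46000, 1239500,  6.8e9];   % indirect risks [USD]
IR  = RD ./ (RD + RID);           % indices of robustness for the three levels
```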
Figure 5.24: System characterizations for seismic risk calculations for buildings of different scales.


For the individual building in Figure 5.24 the indirect and direct losses are USD 46 000 and USD 15 800 respectively. It should be noted that when calculating the direct losses for the system "individual building", repair costs are considered as direct costs, while rebuilding costs and loss of lives are considered as indirect losses. Using Equation 5.7, the index of robustness is calculated as:

IR1 = RD / (RD + RID) = 15 800 / (15 800 + 46 000) = 0.255

The situation is different when the portfolio of 5-story residential buildings is considered as the system (Figure 5.24). Here, repair and rebuilding costs are considered as direct costs and loss of lives as indirect costs. The indirect and direct losses are estimated as USD 1 239 500 and USD 1 148 600 for the portfolio of 5-story residential buildings (Figure 5.15 and Figure 5.16). Using Equation 5.7, the index of robustness is calculated as:

IR2 = RD / (RD + RID) = 1 148 600 / (1 148 600 + 1 239 500) = 0.481

Defining the system as the whole region in which the city of Adapazari is embedded results in a different composition of direct and indirect losses, and hence in a different index of robustness. When considering System 3 in Figure 5.24, i.e. the Sakarya region, the estimates of the macroeconomic costs of the Kocaeli Mw7.4 earthquake of August 17, 1999 are assumed to apply. Bibbee et al. (2000) give estimates for the whole region affected by the Kocaeli Mw7.4 earthquake. Using the estimates of the Turkish Industrialists' and Businessmen's Association (TÜSIAD), i.e. an indirect loss estimate of USD 6.8 billion and a direct loss estimate of USD 10.0 billion, the index of robustness is calculated as:

IR3 = RD / (RD + RID) = 10 000 000 000 / (10 000 000 000 + 6 800 000 000) = 0.595

This example illustrates that the relative importance of the propagation of losses from an adverse event depends on the system representation level, i.e. on the decision-making level. The index of robustness provides an alternative way of presenting seismic risk by explicitly indicating the fraction of the direct effects in the total effects. For different risk reduction measures, the index of robustness may lead to different optimal decisions, depending on the system characterization level. For each decision alternative, the index of robustness can be calculated for the given system characterization level. An increase of the index of robustness through retrofitting may be worthwhile at one decision level, but not at another. A thorough examination of this new concept of the index of robustness by the research community is, however, still necessary.


6 Conclusions
Summary

In this dissertation, a framework for risk assessment has been proposed for use in earthquake-related problems in cities. The framework aims to represent the problem complex of earthquake risk by modeling the prevailing parameters, to explicitly model the dependencies among the parameters in order to consider system effects, and to provide a basis for systematic updating. It uses a modular structure in modeling seismic hazard, soil response, structural response, and damage and loss assessment. The framework is implemented using Bayesian probabilistic networks (BPN). The applicability and the pros and cons of BPNs are discussed for earthquake risk management, for portfolio loss distribution and for the systematic updating of model constituents within a Bayesian perspective. In the proposed risk assessment framework, three levels are distinguished: exposure, vulnerability and robustness. The proposed framework allows for the utilization of any type of risk indicator with regard to exposure, vulnerability and robustness of the considered system. Risk indicators are any observable or measurable characteristics of the system or its constituents containing information about risk. Existing earthquake loss estimation methodologies were reviewed and six main characteristics ensuring a consistent treatment of complex problems subject to uncertainties were identified: integrality/generality, modularity, inference, dependencies, updateability and multi-detailing. Although the first two, and partly the third, of these characteristics are covered by some of the existing methodologies, none of the existing methodologies addresses the latter three. An integral approach to risk assessment ensures that significant risk contributors originating from interactions between the different agents are accounted for. To some degree all of the existing loss estimation methodologies can be regarded as integral approaches. On the other hand, generality, i.e. context independence, is especially important when the loss estimation methodology is to be applied to other regions as well as to other hazard types. The proposed framework is applied using indicators representing exposure, vulnerability and robustness. Consistent modeling at all levels is ensured by modeling with BPNs. The indicators may be identified and quantified for specific or general situations. Earthquake loss estimation requires an interdisciplinary approach. Research in the individual disciplines such as seismology, soil dynamics or earthquake engineering provides scientific and technological improvements, and existing loss estimation methodologies may become obsolete if these improvements are not considered. A modular structure enables the implementation of new models in any of the disciplines without resetting the overall methodology. Just like the other existing earthquake loss estimation methodologies, the proposed framework has a modular structure. It comprises modules for seismic hazard, for soil response, for structural response, and for damage and losses.

111

The interfaces of these modules are defined and the calculations are executed from end to end. Modularity enables the easy adaptation of alternative methods and models within the integral model. In the existing methodologies, earthquake risk is evaluated in the forward direction only: the analysis of seismic hazard, soil response, structural response, damage and loss is performed end to end, leading to a quantification of the risk or to a damage scenario. Using the proposed methodology it is also possible to perform diagnostic analysis. Typical queries are of the kind "Which magnitude of earthquake would lead to complete unavailability of important structures such as hospitals?" None of the existing loss estimation methodologies represents the prevailing parameters explicitly together with their uncertainties. Explicit consideration of the uncertainty of the parameters enables inference in both directions. It is shown that BPNs allow inference based on observed evidence. The lack of explicit consideration of the dependencies of the prevailing parameters is another shortcoming of existing methodologies. Disregarding dependencies leads to a suppression of important system effects when loss estimation is applied to portfolios of buildings. Statistical dependency may be appropriately represented through correlation. Functional dependency or common cause dependency is appropriately represented through hierarchical probabilistic models. Explicit modeling of dependencies is one of the strengths of BPNs, especially when dependencies at different modeling levels are considered, as in hierarchical models. Modeling the dependencies at different hierarchical levels makes it possible to calculate risks for portfolios. The continuous adaptation of engineering models is a major challenge. In the Bayesian understanding the constructed models represent the present state of knowledge, and with every piece of incoming information the underlying models can be updated. The existing loss estimation methodologies to some extent consider the knowledge gained from new earthquakes by remodeling, but none of them provides a framework for systematic updating. The knowledge and information basis is not very broad when assessing earthquake risks: damaging earthquakes are not very frequent, data on the built environment is usually very scarce, and research on the occurrence of earthquakes and their effects on soils and structures is ongoing. Hence, the processing of information ranging from scientifically verified knowledge over statistically representative data to experience-based expert opinion is necessary. Updating is facilitated especially when the problem complex is modeled explicitly through observable characteristic descriptors referred to as indicators. It is shown that the Bayesian perspective as proposed in this dissertation provides a sound and thorough means for this. Considering risks due to earthquakes, the decision makers are to some degree all individuals within the earthquake-prone region. The formulation of a specific decision problem depends, however, on the decision-making level. It is illustrated that in principle the framework can be applied at different decision-making levels. This is possible as the dependencies at different hierarchical levels are modeled explicitly. None of the existing loss estimation methodologies is capable of application for different decision makers with different levels of detailing.
In this dissertation the construction and application of BPNs for "seismic hazard", "soil behavior", "structural response" and "consequence assessment" are also discussed. The application of the proposed framework is illustrated using four examples of earthquake risk problems for a class of structures located in Adapazari, Turkey.


The city center of Adapazari was affected by the Kocaeli Mw7.4 earthquake of August 17, 1999. Adapazari is chosen as a test area as it has suffered damage due both to ground shaking and to soil failures such as tilting, settlement and lateral displacement. In the first example, a risk management problem is considered. Two decision alternatives, namely strengthening the reinforced concrete moment resisting frames by column jacketing or taking no action, are considered. For a chosen structure class, namely five-story reinforced concrete moment resisting frames, the optimal decision for each of the buildings within the structure class is identified. Here, both time-independent and time-dependent seismic hazard models are considered. The time-dependent seismic hazard model leads to a substantial increase in seismic activity, which results in the optimal action being "retrofitting" for the buildings. In the second example, earthquake risk assessment for each building in the city is performed using the proposed methodology. The example illustrates that the framework facilitates the assessment of groups of structures and of the portfolio loss exceedance probability function. The third example illustrates one of the strengths of the proposed methodology, namely the straightforward updating of any parameter in the model. Data in the form of reports on the damage to buildings in an affected area may be available after an earthquake. A post-earthquake damage inspection team could have assigned a color tag (Green-Safe, Yellow-Limited use or Red-Unsafe/Collapsed) to each inspected building on the basis of the building's safety. Based on the damage distribution of the buildings after the Kocaeli Mw7.4 earthquake of 17th August 1999, a map indicating the percentage of damaged buildings within each district of Adapazari and a map showing the areas of the city which liquefied are used, and the parameters of the fragility curves are updated using the BPN. The fourth example illustrates the use of the index of robustness as an indicator. The example illustrates that the relative importance of the propagation of losses from an adverse event depends on the system representation level, i.e. on the decision-making level. The index of robustness provides an alternative way of presenting seismic risk by explicitly indicating the fraction of the direct consequences in the total consequences.
Originality

The dissertation has two main objectives: the construction and application of BPNs within the individual disciplines, and the systematic application of BPNs to cities. The first objective is met by "translating" state-of-the-art models for probabilistic seismic hazard assessment, for the evaluation of seismically induced soil liquefaction potential and for structural assessment into BPN models. In detail: For the seismic hazard model an alternative calculation and representation scheme for standard Probabilistic Seismic Hazard Analysis (PSHA) using Bayesian probabilistic networks (BPN) is presented. The BPN can easily be extended to compute joint probability distributions for multiple ground motion parameters, a feature not easily implemented in standard PSHA. Backward calculation, as implemented through deaggregation in standard PSHA, is also easily performed using BPNs. The incorporation of model choice uncertainties and of time-dependent seismic hazard into the BPN model for seismic hazard is also discussed.


The soil liquefaction potential evaluation is generally based on deterministic empirical correlations using Standard Penetration Test blow counts. The required soil parameters for these empirical correlations are modeled taking their spatial variability into account. For different earthquake intensity parameters (moment magnitude and peak ground acceleration) and for each point of the test area a probability of liquefaction is calculated. This information is represented in a BPN, which is linked to the seismic hazard BPN and the structural damage BPN. BPN models for structural damage are developed based on state-of-the-art seismic fragility assessment procedures. To illustrate the application of the methodology to risk management problems, seismic fragility curves are also developed for a retrofitting scheme using jacketing of the columns. The fragility curves developed are compared with the corresponding HAZUS fragility curves as well as with post-earthquake survey data from the recent earthquake in the region. The second objective of the dissertation has been addressed through a systematic definition of a city and the development of a risk assessment framework. The discussions are based on the application of the proposed methodology to a specific city. The new developments are: A risk assessment framework for generic application to earthquake risks of cities is established. This includes a system theoretic definition of cities and a hierarchical modeling of risks. The framework can be applied at different decision-making levels; this is possible as the dependencies at different hierarchical levels are modeled explicitly. Methods and rules for the consistent representation of the effect of dependencies in the estimation of losses are developed. It is shown that the inclusion of such effects may have a very significant impact on portfolio loss estimates. A framework for the systematic updating of the models considering the knowledge and data gained through new earthquakes is proposed.
Limitations

Two main issues were identified as potential shortcomings of using the proposed framework for earthquake risk problems. The first is the discretisation of the parameters within the model. The effect of different discretisation schemes on the portfolio loss exceedance curve was evaluated. The results clearly show that an automated, equally spaced discretisation converges to a supervised, adapted discretisation scheme with an increasing number of bins. This means that in constructing a BPN, either a fine discretisation or a supervised, adaptive discretisation scheme has to be chosen. Since a very fine discretisation increases the computational effort involved in evaluating a BPN, adaptive discretisation schemes may often be unavoidable. The second shortcoming is related to the computational efficiency when considering a city or a region with thousands of buildings. It is not primarily the number of buildings, but the complex dependency structure among the variables in the BPN models, and especially the different hierarchical levels in the model, which leads to computational difficulties. The number of buildings has no effect when considering management problems, as illustrated in the first example.


The optimal decision is based on the expected value of the risk, and the expected value of the risk is calculated by "decoupling" all the hierarchical dependencies within the model. However, when the distribution of the risk is of interest, as illustrated in the second example, these dependencies need to be considered. This results in very large BPNs for risk assessment problems. It is illustrated that this kind of problem can be solved by decoupling the BPNs at the hierarchical levels. By doing so, the distribution of the risk can be calculated; however, other important features of BPNs such as backward inference, updating or diagnostics can then not be applied.
Recommendations

When applying BPNs to large scale problems with thousands of elements at risk which have dependencies at different hierarchical levels, the existing software tools reach their limits. In the present dissertation, the very large integral BPN models were decoupled, evaluated and finally recoupled. By doing so, the useful features of BPNs such as backward inference, updating and diagnostics are not fully applicable. More efficient programming may be targeted to enable the modeling of larger BPNs. The soil BPN model in the present dissertation was constructed so that the input parameters for the hazard (PGA, Mw) and the main output in the form of liquefaction triggering are explicitly modeled as nodes. The underlying empirical models for the prediction of liquefaction triggering may instead be considered by modeling the soil parameters explicitly within the BPN model. The spatially explicit application of these BPN models may then be used to update the underlying empirical models with data from earthquake events. The updating of seismic fragility curves with damage data from past earthquakes using BPNs was illustrated. Besides the observed damage, measurements from seismic stations may also be used when updating the structural damage models as well as the ground motion prediction equations in the seismic hazard models. The BPNs in the present dissertation are based mainly on existing engineering models. The governing parameters of these models are explicitly modeled in the BPNs taking the causal relations into account. Another approach would be to set up the parameters and construct BPN models based solely on data, without reflecting existing knowledge through causal relations. The arrows in the BPNs may then be defined based on the data using so-called structural learning methods. In this way it is possible to reveal to the analyst dependencies among parameters which were not considered before. An application to ground motion prediction equations or to soil liquefaction models would probably be feasible. Sustainable and consistent societal decision making requires a framework for risk management which, at a fundamental level, allows for the comparison of risks from different natural hazards, for instance the comparison of risks due to earthquakes with risks due to flooding or due to droughts. The main characteristics of the proposed methodology, such as its generality and modularity, make it a strong candidate for the modeling of risk due to multiple hazards.


References
Abrahamson, N., Birkhaeuser, P., Koller, M., Mayer-Rosa, D., Smit, P., and Sprecher, C. (2002). Pegasos - A comprehensive probabilistic seismic hazard assessment for nuclear power plants in Switzerland. In Proceedings 12th European Conference on Earthquake Engineering, London, UK. Abrahamson, N. A. and Silva, W. J. (1997). Empirical response spectral attenuation relations for shallow crustal earthquakes. Seismological Research Letters, 68/1.:94126. AGORA (2009). Alliance for Global Open Risk Analysis. www.risk-agora.org. Akin, H. and Siemens, H. (1988). Praktische Geostatistik. Springer-Verlag, Berlin. Alag, S. and Agogino, A. M. (1996). Inference using message propagation and topology transformation in vector Gaussian continuous networks. In Horvitz, E. and Jensen, F. V., editors, Proceedings 12th Conference on Uncertainty in Articial Intelligence. Morgan Kaufmann Publishers. Algermissen, S., Rinehart, W., Dewey, J., Steinbrugge, K., Degenkolb, H., Cluff, L., McClure, F., and Gordon, R. (1972). A study of earthquake losses in the San Francisco Bay Area: data and analysis. Technical report, National Oceanographic and Athmospheric Administration of the Departement of Commerce for the Ofce of emergency Preparedness. Altan, M., Toz, G., Kulur, S., Seker, D., Volz, S., Fritsch, D., and Sester, M. (2001). Photogrammetry and GIS for quick assessment, documentation and analysis of earthquakes. ISPRS Journal of Photogrammetry and Remote Sensing, 55:359372. Ambraseys, N. N. and Zatopek, A. (1969). The Mudurnu Valley, West Anatolia, Turkey, earthquake of 22 July 1967. Bulletin of Seismological Society of America, 59:52189. Anagnos, T. and Kiremidjian, A. S. (1988). A review of earthquake occurrence models for seismic hazard analysis. Probabilistic Engineering Mechanics, 3(1):311. AS/NZS (1999). Risk Management, Australian Standards. Atakan, K., Ojeda, A., Meghraoui, M., Barka, A. A., Erdik, M., and Bodare, A. (2002). Seismic hazard in Istanbul following the 17 August 1999 Izmit and 12 November 1999 Dzce earthquakes. Bulletin of the Seismological Society of America, 92(1):466482. ATC-13 (1985). Earthquake damage evaluation data for California. Technical report, Applied Technology Council, USA.

117

References ATC-20 (1989). Procedures for postearthquake safety evaluation of buildings. Technical report, Applied Technology Council, USA. ATC-40 (1996). Seismic evaluation and retrot of concrete buildings. Technical report, Applied Technology Council, USA. Atkinson, G. (2004). An overview of developments in seismic hazard analysis. In Proceedings 13th World Conference on Earthquake Engineering, Vancouver, B.C., Canada. Baker, J. W. (2007). Correlation of ground motion intensity parameters used for predicting structural and geotechnical response. In Proceedings 10th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP), Tokyo, Japan. Baker, J. W. and Cornell, C. A. (2006). Correlation of response spectral values for multicomponent ground motions. Bulletin of the Seismological Society of America, 96:215227. Baker, J. W. and Faber, M. H. (2008). Liquefaction risk assessment using geostatistics to account for soil spatial variability. Journal of Geotechnical and Geoenvironmental Engineering, 134(1):1423. Bakir, B. S., Yilmaz, M. T., Yakut, A., and Gulkan, P. (2005). Re-examination of damage distribution in Adapazari: Geotechnical considerations. Engineering Structures, 27(7):1002 1013. Bayraktarli, Y. Y., Baker, J. W., and Faber, M. H. (2009). Uncertainty treatment in earthquake modeling using bayesian networks. Georisk, accepted for publication. Bayraktarli, Y. Y. and Faber, M. H. (2007). Value of information analysis in earthquake risk management. In Proceedings 10th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP), Tokyo, Japan. Bayraktarli, Y. Y. and Faber, M. H. (2009). Bayesian probabilistic network approach for managing earthquake risks of cities. Georisk, accepted for publication. Bayraktarli, Y. Y., Ulfkjaer, J.-P., Yazgan, U., and Faber, M. H. (2005). On the application of Bayesian probabilistic networks for earthquake risk management. In Proceedings 9th International Conference on Structural Safety and Reliability (ICOSSAR), Rome, Italy. Bayraktarli, Y. Y., Yazgan, U., Dazio, A., and Faber, M. H. (2006). Capabilities of the Bayesian probabilistic networks approach for earthquake risk management. In Proceedings 1st European Conference on Earthquake Engineering and Seismology, Geneva, Switzerland. Bazzurro, P. and Cornell, C. A. (1994). Seismic hazard analysis of nonlinear structures I: Methodology. Journal of Structural Engineering, 120:33203344. Benjamin, J. R. and Cornell, C. A. (1970). Probability, statistics and decisions in civil engineering. Mc Graw - Hill Book Company.

118

References Benson, C. and Clay, E. (2004). Understanding the economic and nancial impacts of natural disasters. Technical report, The World Bank. Bertalanffy, L. v. (1968). General Systems Theory. Foundations, Development, Applications. Braziller, New York. Bibbee, A., Gnenc, R., Jacobs, S., Konvitz, J., and Price, R. (2000). Economic effects of the 1999 Turkish earthquakes: An interim report. Technical report, OECD Economics Department Working Papers, No. 247, OECD Publishing. Bird, J. F., Bommer, J. J., Bray, J. D., Sancio, R., and Spence, R. J. (2004). Comparing loss estimation with observed damage in a zone of ground failure: A study of the 1999 Kocaeli earthquake in Turkey. Bulletin of Earthquake Engineering, 2:329360. Blaeij, A., Florax, G. M., Rietveld, P., and Verhoef, E. (2003). The value of statistical life in road safety: a meta analysis. Accident Analysis and Prevention, 35(6):973986. Boore, D. M., Joyner, W. B., and Fumal, T. E. (1997). Equations for estimating horizontal response spectra and peak acceleration from Western North American earthq.: A summary of recent work. Seismological Research Letters, 68/1. Bray, J. D. and Stewart, J. P. (2000). Damage patterns and foundation performance in adapazari. In Youd, T. L., Bardet, J. P., and Bray, J. D., editors, Kocaeli, Turkey Earthquake of August 17, 1999 Reconnaissance Report. Earthquake Spectra, Supplement A to Vol. 16, 163-189. Brookshire, D., Chang, S., Cochrane, H., Olson, R., Rose, A., and Steenson, J. (1997). Direct and indirect economic losses from earthquake damage. Earthquake Spectra, 13(4):683701. Cetin, K., Seed, R., Kiureghian, A. D., Tokimatsu, K., Harder, L., Kayen, R., and Moss, R. (2004). Standard penetration test-based probabilistic and deterministic assessment of seismic soil liquefaction potential. Journal of Geotechnical and Geoenvironmental Engineering, 130(12):13141340. Churchman, C., Ackoff, R., and Arnoff, E. (1957). Introduction to Operations Research. Wiley and Sons, New York, USA. Ciriacy-Wantrup, S. V. (1947). Capital returns from soil-conservation practices. Journal of Farm Economics, 29(4):11811196. Coburn, A. and Spence, R. (2002). Earthquake protection. John Wiley and Sons, Chichester, England, second edition. Coppersmith, K. J. and Youngs, R. R. (1986). Capturing uncertainty in probabilistic seismic hazard assessments with intraplate tectonic environments. In Proceedings 3rd U.S. National Conference on Earthquake Engineering, volume 1, Charleston, South Carolina, USA. Cornell, C. (1968). Engineering seismic risk analysis. Bulletin of the Seismological Society of America, 58:15831606.

119

References Cornell, C. A. (2002). Probabilistic basis for 2000 SAC Federal Emergency Management Agency steel moment frame guidelines. Journal of Structural Engineering, 128:526533. Cornell, C. A. and Krawinkler, H. (2000). Progress and challenges in seismic performance assessment, PEER Center News, 3(2). Corotis, R. B. (2005). Public versus private discounting for life-cycle cost. In Proceedings 9th International Conference on Structural Safety and Reliability (ICOSSAR), Rome, Italy. Crowley, H., Pinho, R., and Bommer, J. (2004). A probabilistic displacement-based vulnerability assessment procedure for earthquake loss estimation. Bulletin of Earthquake Engineering, 2(2):173219. Dazio, A. (2000). Entwurf und Bemessung von Tragwandgebuden unter Erdbebeneinwirkung. PhD thesis, ETH Zurich. Deutsch, C. V. and Journel, A. G. (1997). GSLIB geostatistical software library and users guide. Oxford University Press, New York, USA. DRM (2004). Seismic microzonation for municipalities - pilot studies Adapazari, Glck, Ihsaniye and Degirmendere. Research report, General Directorate of Disaster Affairs, Ankara, Turkey. Erdik, M., Demircioglu, M., Sesetyan, K., Durukal, E., and Siyahi, B. (2004). Earthquake hazard in Marmara region, Turkey. Soil Dynamics and Earthquake Engineering, 24:605631. Faber, M. H. (2003). Uncertainty modeling and probabilities in engineering decision analysis. In Proceedings 22nd Offshore Mechanics and Arctic Engineering Conference, Cancun, Mexico. Faber, M. H. (2008). Risk Assessment in Engineering - Principles, System representation and Risk Criteria. Joint Commitee on Structural Safety, www.jcss.ethz.ch. Faber, M. H., Bayraktarli, Y. Y., and Nishijima, K. (2007). Recent developments in the management of risks due to large scale natural hazards. In Proceedings 16th Mexican National Conference on Earthquake Engineering, Ixtapa, Guerrero, Mexico. Faber, M. H., Kroon, I., Kragh, E., Bayly, D., and Decosemaeker, D. (2002). Risk assessment of decommisioning options using bayesian networks. Journal of Offshore Mechanics and Arctic Engineering, 124(4):231238. Faber, M. H. and Maes, M. (2003). Modeling of risk perception in engineering decision analysis. In Proceedings 11th IFIP WG7.5 Working Conference on Reliability and Optimization of Structural Systems, Banff, Canada. Faizian, M. and Faber, M. H. (2004). Consequence assessment in earthquake risk management using damage indicators. In Proceedings 1st International Forum on Engineering Decision Making (IFED), Stoos, Switzerland. Freeman, J. (1932). Earthquake Damage and Earthquake Insurance. McGraw-Hill, New York.


Friis-Hansen, A. (2000). Bayesian networks as a decision support tool in marine applications. PhD thesis, Technical University of Denmark.
Fuchs, H. (1972). Systemtheorie. In Bleicher, K., editor, Organisation als System, pages 51–55. Betriebswissenschaftlicher Verlag Dr. Th. Gabler, Wiesbaden, Deutschland.
Gasparini, D. A. and Vanmarcke, E. H. (1976). Simulated earthquake motions compatible with prescribed response spectra. Technical Report R76-4, MIT Civil Engineering, Cambridge, Massachusetts.
Geipel, R. (1990). Long-Term Consequences of Disasters - Reconstruction of Friuli, Italy, in Its International Context, 1976-88. Springer Verlag, New York, USA.
GeNIe-Smile (2008). Graphical network interface, Decision Systems Laboratory, Pittsburgh, US.
Geoengineer (2006). http://www.ce.washington.edu/liquefaction/html/content.html.
GHI (2004). Geohazards International, RADIUS introduction, www.geohaz.org/radius.html.
Gülkan, P., Bakir, O., Cetin, O., and Yazgan, U. (2003). Chapter 3: Site-specific geotechnical classification and building damage interpretation in Adapazari. Structural damage report, General Directorate of Disaster Affairs, Ankara, Turkey.
Green, P. (2005). Disaster by design. British Journal of Criminology, 45:528–546.
Grêt-Regamey, A. and Straub, D. (2006). Spatially explicit avalanche risk assessment linking Bayesian networks to a GIS. Natural Hazards and Earth System Sciences, 6(6):911–926.
Gutenberg, B. and Richter, C. F. (1944). Frequency of earthquakes in California. Bulletin of the Seismological Society of America, 34:185–188.
Hanemann, W. M. (1994). Valuing the environment through contingent valuation. Journal of Economic Perspectives, 8(4):19–43.
Has-Insaat (2007). Cost estimates for rebuilding, retrofit and repair in downtown Adapazari, Turkey. Personal communication.
HAZUS (2001). Federal Emergency Management Agency, HAZUS99 Service Release 2, advanced engineering building module, technical and user's manual. Technical report.
Hicks, J. R. and Allen, R. G. D. (1934a). A reconsideration of the theory of value, part I. Economica, New Series 1:52–76.
Hicks, J. R. and Allen, R. G. D. (1934b). A reconsideration of the theory of value, part II. Economica, New Series 2:196–219.
Hofstetter, P. and Hammitt, J. K. (2002). Selecting human health metrics for environmental decision-support tools. Risk Analysis, 22(5):965–983.


Holzer, T., Bennett, M. J., Ponti, D. J., and Tinsley, J. C. (1999). Liquefaction and soil failure during the 1994 Northridge earthquake. Journal of Geotechnical and Geoenvironmental Engineering, 125(6):438–452.
Hugin (2008). Hugin researcher, software, www.hugin.com.
Huo, J.-R. and Hwang, H. (1996). Fragility of Memphis buildings. In Proceedings 11th World Conference on Earthquake Engineering, Acapulco, Mexico.
Isaaks, E. H. and Srivastava, R. M. (1989). Applied Geostatistics. Oxford University Press, New York, USA.
Ishizuka, M., Fu, K. S., and Yao, J. T. P. (1981). SPERIL I - computer based structural damage assessment system. Technical report, School of Civil Engineering, Purdue University, West Lafayette, Indiana.
JCSS (2001). Probabilistic Model Code, www.jcss.ethz.ch.
Jensen, F. V. (2001). Bayesian Networks and Decision Graphs. Springer Verlag, New York.
Jensen, F. V., Olesen, K., and Andersen, S. (1990). An algebra of Bayesian belief universes for knowledge-based systems. Networks, 20:637–659.
Kahneman, D. and Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263–292.
Kahneman, D. and Knetsch, J. L. (1992). Contingent valuation and the value of public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management, 22(1):90–94.
Karsan, I. D. and Jirsa, J. O. (1969). Behavior of concrete under compressive loadings. Journal of Structural Division ASCE, 95(ST12).
Kübler, O. (2006). Applied Decision-Making in Civil Engineering. PhD thesis, ETH Zurich.
Khater, M., Scawthorn, C., and Johnson, J. (2003). Loss estimation. In Chen, W.-F. and Scawthorn, C., editors, Earthquake Engineering Handbook. CRC Press, Boca Raton, Florida, USA.
King, S., Kiremidjian, A., Başöz, N., Law, K., Doroudian, M., Olson, R., Eidinger, J., Goettel, K., and Horner, G. (1997). Methodologies for evaluating the socio-economic consequences of large earthquakes. Earthquake Spectra, 13(4):565–584.
Kircher, C., Nassar, A., Kustu, O., and Holmes, W. (1997a). Development of building damage functions for earthquake loss estimation. Earthquake Spectra, 13(4):663–682.
Kircher, C., Reitherman, R., Whitman, R., and Arnold, C. (1997b). Estimation of earthquake losses to buildings. Earthquake Spectra, 13(4):703–720.
Klir, G., editor (1972). Trends in General Systems Theory. Wiley Interscience, New York.


Knox, P. and Marston, S. (2006). Human Geography: Places and Regions in Global Context. Prentice-Hall, 4th edition.
Komazawa, M., Morikawa, H., Nakamura, K., Akamatsu, J., Nishimura, K., Sawada, S., Erken, A., and Onalp, A. (2002). Bedrock structure in Adapazari, Turkey - a possible cause of severe damage by the 1999 Kocaeli earthquake. Soil Dynamics and Earthquake Engineering, 22:829–836.
Kramer, S. L. (1996). Geotechnical Earthquake Engineering. Prentice-Hall.
Kramer, S. L., Mayfield, R. T., and Anderson, D. G. (2006). Performance-based liquefaction evaluation: Implications for codes and standards. In Proceedings 8th US National Conference on Earthquake Engineering, San Francisco.
Krige, D. G. (1951). A statistical approach to some mine valuations and allied problems at the Witwatersrand. PhD thesis, University of Witwatersrand.
Kulkarni, R. B., Youngs, R. R., and Coppersmith, K. J. (1984). Assessment of confidence intervals for results of seismic hazard analysis. In Proceedings of the 8th World Conference on Earthquake Engineering, San Francisco, USA.
Lauritzen, S. and Spiegelhalter, D. (1988). Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, 50(Series B):157–224.
Lauritzen, S. L. (1992). Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87:1098–1108.
LESSLOSS (2005). Risk mitigation for earthquakes and landslides. Technical report, Università degli Studi di Pavia.
Lestuzzi, P. (2000). Dynamisches plastisches Verhalten von Stahlbetontragwänden unter Erdbebeneinwirkung. PhD thesis, ETH Zurich.
Lin, Y. (1999). General Systems Theory - A Mathematical Approach. Kluwer Academics Publishers, New York.
Luce, R. and Raiffa, H. (1957). Games and Decisions. Wiley and Sons, New York.
Marschak, J. and Radner, R. (1972). Economic Theory of Teams. Yale University Press, New Haven, Connecticut, USA.
Matheron, G. (1963). Principles of geostatistics. Economic Geology, 58:1246–1266.
Matthews, M. V., Ellsworth, W. L., and Reasenberg, P. A. (2002). A Brownian model for recurrent earthquakes. Bulletin of the Seismological Society of America, 92:2233–2250.
McGuire, R. (1995). Probabilistic seismic hazard analysis and design earthquakes: Closing the loop. Bulletin of the Seismological Society of America, 85(5):1275–1284.


McGuire, R. K. and Barnhard, T. P. (1981). Effect of temporal variations in seismicity in seismic hazard. Bulletin of the Seismological Society of America, 71:321–334.
Mesarovic, M. and Takahara, Y. (1975). General Systems Theory - Mathematical Foundations. Academic Press, New York.
Miller, T. R. (1990). The plausible range for value of life - red herrings among the mackerel. Journal of Forensic Economics, 3(3):17–39.
Miyasato, G. H., Dong, W., Levitt, R. E., Boissonnade, A. C., and Shah, H. C. (1986). Seismic risk analysis system. In Kostem, C. N. and Maher, M. L., editors, Expert Systems in Civil Engineering. ASCE, New York.
Mollamahmutoglu, M., Kayabali, K., Beyaz, T., and Kolay, E. (2003). Liquefaction-related building damage in Adapazari during the Turkey earthquake of August 17, 1999. Engineering Geology, 67(3-4):297–307.
Mosalam, K., Ayala, G., White, R., and Roth, C. (1997). Seismic fragility of LRC frames with and without masonry infill walls. Journal of Earthquake Engineering, 1(4):693–720.
Nathwani, J., Lind, N., and Pandey, M. (1997). Affordable Safety by Choice: The Life Quality Method. University of Waterloo, Waterloo.
Netica (2008). Norsys Software Corp., www.norsys.com.
Neumann, J. V. and Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton University Press, Princeton, USA.
Nishijima, K., Straub, D., and Faber, M. H. (2007). Inter-generational distribution of the life-cycle cost of an engineering facility. Journal of Reliability of Structures and Materials, 3(1):33–46.
NOAA (1972). A study of earthquake losses in the San Francisco Bay Area. Technical report, National Oceanic and Atmospheric Administration of the Department of Commerce for the Office of Emergency Preparedness.
NRC (1988). Probabilistic seismic hazard analysis, Panel on Seismic Hazard Analysis. National Academy Press, Washington, D.C.
NRC (1997). Panel on seismic hazard evaluation, review of recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and use of experts. Technical report, National Research Council.
NRC (2004). The economic consequences of a catastrophic earthquake. National Academies Press.
OpenSees (2008). Open system for earthquake engineering simulation, opensees.berkeley.edu/.


Oreskes, N., Shrader-Frechette, K., and Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263:641–646.
Pagnoni, T., Tazir, Z. H., and Gavarini, C. (1989). Amadeus: A KBS for the assessment of earthquake damaged buildings. In Proceedings IABSE Colloquium on Expert Systems in Civil Engineering, Zurich, Switzerland. International Association for Bridge and Structural Engineering.
Park, R., Kent, D. C., and Sampson, R. A. (1972). Reinforced concrete members with cyclic loading. Journal of Structural Division ASCE, 98(ST7).
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann Publishers, San Mateo, California.
Pilkey, O. and Pilkey-Jarvis, L. (2007). Useless Arithmetic: Why environmental scientists can't predict the future. Columbia University Press.
Pinho, R., Bommer, J., and Glaister, S. (2002). A simplified approach to displacement-based earthquake loss estimation analysis. In Proceedings 12th European Conference on Earthquake Engineering, London, U.K.
Pinto, P. E., Giannini, R., and Franchin, P. (2004). Seismic Reliability Analysis of Structures. IUSS Press, Pavia, Italy.
Porter, K. (2003). Seismic vulnerability. In Chen, W.-F. and Scawthorn, C., editors, Earthquake Engineering Handbook. CRC Press, Boca Raton, Florida, USA.
Priestley, M. J. N. (1998). Brief comments on elastic flexibility of reinforced concrete frames and significance to seismic design. Bulletin of the New Zealand Society for Earthquake Engineering, 31(4):246–259.
Rackwitz, R. (2001). Reliability analysis - a review and some perspectives. Structural Safety, 23:365–395.
Rackwitz, R. (2006). The effect of discounting, different mortality reduction schemes and predictive cohort life tables on risk acceptability criteria. Reliability Engineering and Systems Safety, 91(4):469–484.
Rackwitz, R., Lentz, A., and Faber, M. H. (2005). Socio-economically sustainable civil engineering infrastructures by optimization. Structural Safety, 27:187–229.
Raiffa, H. and Schlaifer, R. (1961). Applied Statistical Decision Theory. Cambridge University Press, Cambridge, Massachusetts, USA.
Rapoport, A. (1986). General Systems Theory, Essential Concepts and Applications. Abacus Press, Cambridge, Massachusetts, USA.


Rathje, E., Idriss, I., Somerville, P., Ansal, A., Bachhuber, J., Baturay, M., Erdik, M., Frost, D., Lettis, W., Sozer, B., Stewart, J., and Ugras, T. (2000). Strong ground motions and site effects. Earthquake Spectra, 16(S1):65–96.
Reid, H. F. (1911). The elastic rebound theory of earthquakes. Bulletin of the Department of Geology, University of California, Berkeley, CA, 6:413–444.
Risk-UE (2004). An advanced approach to earthquake risk scenarios with application to different European towns, final report. Technical report, www.risk-ue.net.
Ropohl, G. (1979). Eine Systemtheorie der Technik - Zur Grundlegung der allgemeinen Technologie. Carl Hanser Verlag, München.
Rosenblueth, E. and Mendoza, E. (1971). Reliability optimization in isostatic structures. Journal of Engineering Mechanics Division ASCE, 97(EM6):1625–1642.
Salvaneschi, P., Cadei, M., and Lazzari, M. (1997). A causal modeling framework for the simulation and explanation of the behavior of structures. Artificial Intelligence in Engineering, 11:205–215.
Samuelson, P. A. (1938). A note on the pure theory of consumer's behaviour. Economica, New Series 5:61–71.
Sancio, R., Bray, J., Stewart, J., Youd, T., Durgunoglu, H., Önalp, A., Seed, R., Christensen, C., Baturay, M., and Karadayilar, T. (2002). Correlation between ground failure and soil conditions in Adapazari, Turkey. Soil Dynamics and Earthquake Engineering, 22:1093–1102.
Sancio, R. B., Bray, J. D., Durgunoglu, T., and Onalp, A. (2004). Performance of buildings over liquefiable ground in Adapazari, Turkey. In Proceedings 13th World Conference on Earthquake Engineering, Paper No. 935, Vancouver, B.C., Canada.
Savage, L. J. (1954). The Foundations of Statistics. Wiley and Sons.
Scawthorn, C. (1981). Urban Seismic Risk: Analysis and Mitigation. PhD thesis, Kyoto University.
Scawthorn, C. (1995). Insurance estimation: Performance in the Northridge earthquake. Contingencies, The Magazine of the American Institute of Actuaries, Oct/Nov:26–31.
Schubert, M. (2009). Konzepte zur informierten Entscheidungsfindung im Bauwesen. PhD thesis, ETH Zurich.
SEAOC (1995). Performance based seismic engineering of buildings - Vision 2000: Volume I. Technical report, Structural Engineers Association of California.
Seed, B. and Idriss, I. (1971). Simplified procedure for evaluating soil liquefaction potential. Journal of Soil Mechanics and Foundations Division, 97(SM9):1249–1273.


Shakhramanjyan, M., Nigmetov, G., Larionov, V., Nikolaev, A., Frolova, N., Sushchev, S., and Ugarov, A. (2001). Advanced procedures for risk assessment and management in Russia. International Journal for Risk Assessment and Management, 2(3):303–318.
Shachter, R. D. (1986). Evaluating influence diagrams. Operations Research, 34(6):871–882.
Singhal, A. and Kiremidjian, A. (1997). A method for earthquake motion-damage relationships with application to reinforced concrete frames. Technical Report 97-0008, State University of New York at Buffalo, National Center for Earthquake Engineering Research.
Skjong, R. and Ronold, K. (1998). Societal indicators and risk acceptance. In Proceedings 17th International Conference on Offshore Mechanics and Arctic Engineering, Lisbon.
Stempfle, H. (2008). Systemtheorie im Brückenbau. PhD thesis, ETH Zurich.
Stepp, J. C., Wong, I., Whitney, J., Quittmeyer, R., Abrahamson, N., and Toro, G. (2001). Yucca Mountain PSHA project members: Probabilistic seismic hazard analyses for ground motions and fault displacements at Yucca Mountain, Nevada. Earthquake Spectra, 17:113–151.
Straub, D. (2005). Natural hazards risk assessment using Bayesian networks. In Proceedings 9th International Conference on Structural Safety and Reliability (ICOSSAR), Rome, Italy.
Takahashi, Y., Kiureghian, A. D., and Ang, A. H.-S. (2004). Life-cycle cost analysis based on a renewal model of earthquake occurrences. Earthquake Engineering and Structural Dynamics, 33:859–880.
Taylor, P. J. (2004). World City Network - A global urban analysis. Routledge, London, UK.
TDY (1975). Turkish seismic code - specifications for structures to be built in disaster areas: Part III - earthquake disaster prevention.
TDY (1998). Turkish seismic code - specifications for structures to be built in disaster areas: Part III - earthquake disaster prevention.
Thaler, R. (1978). A note on the value of crime control: Evidence from the property market. Journal of Urban Economics, 5(1):137–145.
TS500-1975 (1974). Turkish standards - requirements for design and construction of reinforced concrete structures. Technical report, Turkish Standards Institute.
TS500-1984 (1983). Turkish standards - requirements for design and construction of reinforced concrete structures. Technical report, Turkish Standards Institute.
UN (1987). Resolution No. A/RES/42/169 adopted by the General Assembly.
USGS (1999). United States Geological Survey, implications for earthquake risk reduction in the United States from the Kocaeli, Turkey earthquake of August 17, 1999. Technical report, US Geological Survey Circular 1193.


Viscusi, W. K. (1993). The value of risks to life and health. Journal of Economic Literature, 31(4):1912–1946.
Whitman, R., Anagnos, T., Kircher, C., Lagorio, H., Lawson, R., and Schneider, P. (1997). Development of a national earthquake loss estimation methodology. Earthquake Spectra, 13(4):643–661.
Whitman, R., Reed, J., and Hong, S.-T. (1973). Earthquake damage probability matrices. In Proceedings 5th World Conference on Earthquake Engineering, Rome, Italy.
Williams, J. B. (1938). The Theory of Investment Value. Fraser Publishing Company.
WinBUGS (2008). The BUGS project, http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml.

Yazgan, U. (2009). The Use of Post-Earthquake Residual Displacements as a Performance Indicator in Seismic Assessment. PhD thesis, ETH Zurich.
Yazgan, U. and Dazio, A. (2006). Comparison of different finite-element modeling approaches in terms of estimating the residual displacements of RC structures. In Proceedings 8th US National Conference on Earthquake Engineering, San Francisco, USA.
Youd, T., Idriss, I., Andrus, R., Arango, I., Castro, G., Christian, J., Dobry, R., Finn, W., Harder, L., Hynes, M., Ishihara, K., Koester, J., Liao, S., Marcuson, W. I., Martin, G., Mitchell, J., Moriwaki, Y., Power, M., Robertson, P., Seed, R., and Stokoe, K. I. (2001). Liquefaction resistance of soils: Summary report from the 1996 NCEER/NSF workshops on evaluation of liquefaction resistance of soils. Journal of Geotechnical and Geoenvironmental Engineering, 127(10):817–833.
Zhang, X. J. and Yao, J. T. P. (1988). Automation of knowledge organization and acquisition. Microcomputers in Civil Engineering, 3:1–12.


A BPN Algorithms
Algorithms for evaluating BPNs are presented in this Annex. Efficient algorithms are important for the applicability of BPNs, especially for comprehensive and complex problems as dealt with in the present dissertation, since the joint probability table grows exponentially with the number of variables and the number of states of each variable. The algorithms presented here are among the most efficient methods known. Adhering closely to Jensen (2001), three methods are presented. The methods are illustrated via a numerical example with the BPN introduced in Chapter 3. The unconditional and conditional probability tables (hereafter referred to as potentials) of the variables in the BPN under consideration are given in Figure A.1.
The BPN consists of the five variables M, G, S, L and D, each with two states. The potentials given in Figure A.1 are:

P(M): P(M=0) = 0.9, P(M=7) = 0.1
P(G|M): P(G=0|M=0) = 0.9, P(G=0.5g|M=0) = 0.1; P(G=0|M=7) = 0.2, P(G=0.5g|M=7) = 0.8
P(S|M): P(S=0|M=0) = 0.9, P(S=10cm|M=0) = 0.1; P(S=0|M=7) = 0.1, P(S=10cm|M=7) = 0.9
P(L|G): P(L='yes'|G=0) = 0.1, P(L='no'|G=0) = 0.9; P(L='yes'|G=0.5g) = 0.7, P(L='no'|G=0.5g) = 0.3
P(D|S,L): P(D='no'|S=0,L='yes') = 0.2, P(D='collapse'|S=0,L='yes') = 0.8; P(D='no'|S=10cm,L='yes') = 0.1, P(D='collapse'|S=10cm,L='yes') = 0.9; P(D='no'|S=0,L='no') = 0.9, P(D='collapse'|S=0,L='no') = 0.1; P(D='no'|S=10cm,L='no') = 0.3, P(D='collapse'|S=10cm,L='no') = 0.7

Figure A.1: Considered BPN with the probability tables.

Bucket elimination
For a BPN over the universe U = {M, G, S, L, D}, the joint probability distribution P(U) is the product of all potentials specified in the BPN. According to the chain rule for BPNs the joint probability distribution is:



P(U) = P(M) P(G|M) P(S|M) P(L|G) P(D|S,L)   (A.1)

Using the notation with potentials:

P(U) = ∏i φi = φ1(M) φ2(G,M) φ3(S,M) φ4(L,G) φ5(D,S,L)   (A.2)

When any of the variables receives specific information, i.e. evidence, the BPN is used to calculate the updated probabilities. For example, when the state of the node "Liquefaction" in the BPN in Figure A.1 is no longer uncertain, i.e. it is known that liquefaction of the soil is observed, the joint probability distribution P(U, e) is calculated with the evidence e of observing liquefaction:

P(U, e) = φ1(M) φ2(G,M) φ3(S,M) φ4(L,G) φ5(D,S,L) · e   (A.3)

The marginal distribution of any variable in a BPN can be calculated by marginalizing all other variables out of the joint probability distribution function. Starting with the set of tables as given in Equations A.2 or A.3, whenever a variable has to be marginalized, all tables with that variable are taken in multiplied form. The variable to be marginalized is then integrated out. This is called eliminating the variable, and the process of repeatedly eliminating a variable from an initial set of tables is called bucket elimination. For example, the marginal distribution of the variable Damage (D) is calculated by:

P(D) = Σ_{M,G,S,L} φ1(M) φ2(G,M) φ3(S,M) φ4(L,G) φ5(D,S,L)   (A.4)

The order of marginalization is chosen as M-G-S-L. First the variable M is marginalized. The numerical evaluation is given in Figure A.2.

P(G,S,L,D) = Σ_M φ1(M) φ2(G,M) φ3(S,M) φ4(L,G) φ5(D,S,L)
           = φ4(L,G) φ5(D,S,L) Σ_M φ1(M) φ2(G,M) φ3(S,M)
           = φ4(L,G) φ5(D,S,L) φ1'(G,S)   (A.5)

Next, variable G is marginalized out. The numerical evaluation is given in Figure A.3.

P(S,L,D) = Σ_G φ4(L,G) φ5(D,S,L) φ1'(G,S)
         = φ5(D,S,L) Σ_G φ4(L,G) φ1'(G,S)
         = φ5(D,S,L) φ4'(L,S)   (A.6)


Figure A.2: Marginalizing out variable M. The resulting potential is φ1'(G,S) with φ1'(G=0,S=0) = 0.731, φ1'(G=0.5g,S=0) = 0.089, φ1'(G=0,S=10cm) = 0.099 and φ1'(G=0.5g,S=10cm) = 0.081.

Figure A.3: Marginalizing out variable G. The resulting potential is φ4'(L,S) with φ4'(L='yes',S=0) = 0.1354, φ4'(L='yes',S=10cm) = 0.0666, φ4'(L='no',S=0) = 0.6846 and φ4'(L='no',S=10cm) = 0.1134.

Finally, variables S and L are marginalized out. The numerical evaluation is given in Figure A.4.



P(L,D) = Σ_S φ5(D,S,L) φ4'(L,S) = φ5'(D,L)   (A.7)

P(D) = Σ_L φ5'(D,L)   (A.8)

Figure A.4: Marginalizing out variables S and L, yielding P(D='no') = 0.6839 and P(D='collapse') = 0.3161.

The calculation order yielding the marginal distribution of D is hence:

P(D) = Σ_L Σ_S φ5(D,S,L) Σ_G φ4(L,G) Σ_M φ1(M) φ2(G,M) φ3(S,M)   (A.9)

The steps in marginalizing down to P(D) can be illustrated as in Figure A.5. The circle nodes are buckets containing potentials. The potentials in the buckets are multiplied by the incoming potentials, a variable is marginalized out, and the result is placed in a rectangular box. The rectangular box serves as a mailbox for a neighboring bucket. In Figure A.5 the bucket elimination scheme is given for two elimination orders. It should be noted that the domains for the elimination order M-G-S-L are smaller than for the elimination order L-G-M-S. As the size of the domains to be handled is a good indicator of complexity, choosing an elimination order yielding the smallest domains to be handled is important. Marginalizing down to different variables in a BPN yields different elimination frames as illustrated in Figure A.6 for calculating P(D) and P(L). It can easily be seen that most of the elements of the frames are the same and many calculations from the calculation of P(D) as given above can be reused for calculating P(L). The so-called junction trees present a systematic way of exploiting reuse when calculating all marginals.


Figure A.5: A frame for computing P(D) with an elimination order M-G-S-L (left) and an elimination order L-G-M-S (right).

Figure A.6: A frame for computing P(L) with an elimination order M-G-S-D (left) and P(D) with an elimination order M-G-S-L (right).
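Because the example is small, the bucket elimination of Equations A.4 to A.9 can also be checked directly with a few lines of code. The following MATLAB sketch is purely illustrative and is not part of the HUGIN-based implementation listed in Annex C; the array names phi1 to phi5 are chosen here only for the example, with state 1 and state 2 of each variable corresponding to the first and second value of the respective table in Figure A.1.

% Potentials of Figure A.1; index 1 = first state, index 2 = second state
phi1 = [0.9 0.1];                       % P(M): M=0, M=7
phi2 = [0.9 0.2; 0.1 0.8];              % P(G|M): rows G=0, G=0.5g; columns M
phi3 = [0.9 0.1; 0.1 0.9];              % P(S|M): rows S=0, S=10cm; columns M
phi4 = [0.1 0.7; 0.9 0.3];              % P(L|G): rows L='yes', L='no'; columns G
phi5 = zeros(2,2,2);                    % P(D|S,L): phi5(d,s,l)
phi5(:,:,1) = [0.2 0.1; 0.8 0.9];       % L='yes'
phi5(:,:,2) = [0.9 0.3; 0.1 0.7];       % L='no'

% Eliminate M: phi1p(g,s) = sum_m phi1(m)*phi2(g,m)*phi3(s,m), cf. Figure A.2
phi1p = zeros(2,2);
for g = 1:2
  for s = 1:2
    for m = 1:2
      phi1p(g,s) = phi1p(g,s) + phi1(m)*phi2(g,m)*phi3(s,m);
    end
  end
end

% Eliminate G: phi4p(l,s) = sum_g phi4(l,g)*phi1p(g,s), cf. Figure A.3
phi4p = phi4*phi1p;

% Eliminate S and L, cf. Figure A.4
PD = zeros(2,1);
for d = 1:2
  for s = 1:2
    for l = 1:2
      PD(d) = PD(d) + phi5(d,s,l)*phi4p(l,s);
    end
  end
end
disp(PD')                               % [0.6839 0.3161], the marginal of D

Running the sketch reproduces the intermediate potentials φ1'(G,S) and φ4'(L,S) of Figures A.2 and A.3 as well as the marginal distribution of D given in Figure A.4.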

Junction tree
Evaluating a BPN, especially when multiple pieces of evidence are inserted, quickly becomes inefficient when using straightforward methods such as bucket elimination. These are practical only if the BPN is small and each node represents only a few states. To increase efficiency in evaluating BPNs several algorithms have been developed (Shachter, 1986). Research has paid most attention to algorithms based on the transformation of the BPN into join trees or junction trees. The junction tree algorithms are introduced using a graph-theoretic representation of the BPNs.


A domain graph is an undirected graph with the variables of the domain set as nodes and with links between pairs of variables that are members of the same domain. For the BPN considered in this Annex, the domain graph is given in Figure A.7. In addition to the undirected links for each of the arrows, a new link between the nodes S and L is introduced. This link is called a moral link as it connects the parents of a common child node. In eliminating a variable, the potentials with that variable in their domain are multiplied. The domain of this product consists of that variable and its neighbors. When the variable is eliminated, the resulting potential has all neighbors of the eliminated variable in its domain. The graph-theoretical meaning of this is that all the neighbors of the eliminated variable are linked in pairs. For example, when eliminating the variable G in the domain graph in Figure A.7, a new link is introduced. These so-called fill-ins are not favorable as they require dealing with new potentials. That is why an elimination sequence generating no fill-ins is called a perfect elimination sequence.
Figure A.7: BPN and domain graph.

An undirected graph with a perfect elimination sequence for all nodes is called a triangulated graph. The domain graph in Figure A.7 is not a triangulated graph; as mentioned above, the elimination of the variable G requires a new link. Introducing a link between the nodes G and S would result in a triangulated graph (Figure A.8).
Figure A.8: Domain graph and triangulated graph.


The set of domains produced during an elimination is called a domain set, where potentials that are subsets of other potentials are removed. All perfect elimination sequences produce the same domain set; this is referred to as the set of cliques of the domain graph. This algorithm was originally developed by Lauritzen and Spiegelhalter (1988) and adapted by Jensen et al. (1990), see also Friis-Hansen (2000). In summary, the set of cliques is established using the following procedure:
Moralization: Parent nodes with a common child node are connected.
Deletion: All arrows on the links are removed.
Triangulation: First, a variable whose neighbors are mutually connected is eliminated. The eliminated variable and its neighbors form a clique. If there are no other variables with mutually connected neighbors, a fill-in link is added to the graph to obtain full connectivity. Then the next variable is eliminated analogously. If at any point a clique is formed which is a subset of an existing clique, it is not considered. Eliminating all variables yields the set of cliques.

The cliques are organized in a tree, satisfying the following condition: the cliques on the path between two cliques must contain the intersection set of the variables in the two cliques. Trees having this property are called join trees. Triangulated graphs can always be organized into a join tree (Jensen, 2001). The process of constructing a join tree is illustrated in the domain graph in Figure A.8 for the perfect elimination sequence M-G-S-L. Starting with the first node to be eliminated, M, the first clique {M, G, S} is found and denoted as V1. From this clique all nodes having neighbors only within the clique are eliminated; for this step this is node M only. After eliminating M, the remaining node set {G, S} is denoted as separator S1 (Figure A.9). The clique set and separator set are given an index according to the number of nodes eliminated; the other clique and separator sets are counted on this index.
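The moralization and triangulation steps can be sketched compactly with adjacency matrices. The following MATLAB fragment is an illustration only; the node ordering 1 = M, 2 = G, 3 = S, 4 = L, 5 = D and all variable names are chosen here for the example and do not appear in the thesis software. It moralizes the BPN of Figure A.1 and eliminates the nodes in the order M-G-S-L-D, collecting the cliques.

% Directed links of the BPN: A(i,j) = 1 if there is an arrow from node i to node j
A = zeros(5);                      % node order: 1=M, 2=G, 3=S, 4=L, 5=D
A(1,2) = 1; A(1,3) = 1;            % M -> G, M -> S
A(2,4) = 1;                        % G -> L
A(3,5) = 1; A(4,5) = 1;            % S -> D, L -> D

% Moralization: marry the parents of a common child, then drop the directions
U = (A + A') > 0;                  % undirected links
for child = 1:5
  par = find(A(:,child));
  U(par,par) = true;               % connect all parents of this child
end
U = U & ~eye(5);                   % no self-loops

% Triangulation by elimination; fill-in links are added where necessary
order = [1 2 3 4 5];               % elimination sequence M-G-S-L-D
G = U; remaining = true(1,5); cliques = {};
for v = order
  nb = find(G(v,:) & remaining);   % neighbours of v still in the graph
  G(nb,nb) = true;                 % fill-ins among the neighbours
  G = G & ~eye(5);
  cliques{end+1} = sort([v, nb]);  % the eliminated node and its neighbours
  remaining(v) = false;
end

For this BPN the first elimination step introduces the fill-in link between G and S (cf. Figure A.8); after discarding cliques that are subsets of others, the cliques {M, G, S}, {G, S, L} and {S, L, D} remain, in agreement with Figures A.9 to A.11.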
Figure A.9: Clique set and separator while eliminating node M.

Next, node G is eliminated. Again only one variable can be eliminated, as only node G has all its neighbors in the clique set. The clique set {G, S, L} is denoted as V2 and the remaining node set {S, L} is denoted as separator S2 (Figure A.10).


Figure A.10: Clique set and separator while eliminating node G.

Finally, nodes S and L can be eliminated, yielding the clique {S, L, D} denoted as V5 (Figure A.11).
Figure A.11: Clique set while eliminating nodes S and L.

Having determined the clique set, the join tree can be constructed satisfying the condition that on a path between two clique sets, the intersection set of variables is contained in the separator set. Figure A.12 illustrates the join tree for the BPN under consideration.
Figure A.12: Join tree for the BPN in Figure A.7.

A junction tree for a set of potentials of a triangulated graph is constructed with the following additional structure:
- Each potential is attached to a clique.
- Each link has the appropriate separator attached.
- Each separator contains mailboxes for each direction.
Figure A.13: Junction tree after a full propagation.

To calculate P(D), a clique containing D is made a temporary root and messages in the direction of that clique are sent from the other leaf cliques. The message ψ1 = Σ_M φ1 φ2 φ3 is placed in the appropriate S1 mailbox. Next, V2 assembles the incoming message and the potentials it holds into the set Ψ2 = {ψ1, φ4}. The variable G is eliminated from Ψ2, and the result, ψ5 = Σ_G φ4 Σ_M φ1 φ2 φ3, is placed in the appropriate mailbox (see Figure A.13). V3 collects the incoming message, multiplies it by φ5 and marginalizes S and L to obtain P(D):

P(D) = Σ_L Σ_S φ5(D,S,L) Σ_G φ4(L,G) Σ_M φ1(M) φ2(G,M) φ3(S,M)   (A.10)

This process is called collect evidence to V3. As expected, it yields the same equation as derived in the bucket elimination algorithm (see Equation A.9). To calculate the marginal for other variables, the messages are collected into a clique containing that variable. The junction tree can be prepared to calculate all marginals by a process referred to as distribute evidence. First, V3 sends the message ψ5* = Σ_D φ5 to S2. This message is assembled in V2, L is eliminated and the message ψ1* = Σ_L φ4 Σ_D φ5 is sent to S1. A full propagation is performed, in which evidence is collected and distributed in a junction tree. To calculate the marginal of a variable, the incoming messages are collected to a clique containing the variable. The incoming message is multiplied by the potential in the clique of the variable being considered. Eliminating all other variables from this product yields the marginal.
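The handling of evidence described above (cf. Equation A.3) can be illustrated by extending the bucket elimination sketch given earlier. The following lines are again only an illustration: they reuse the arrays phi1 to phi5 defined there rather than an actual junction tree data structure. The evidence "liquefaction observed" is entered by multiplying an evidence vector into the potential holding L, and the updated distribution of D is obtained by normalization.

% Evidence: liquefaction is observed, i.e. L = 'yes'
eL = [1; 0];                              % evidence vector on L ('yes', 'no')
phi4e = phi4 .* repmat(eL, 1, 2);         % multiply the evidence into phi4(l,g)

% Repeat the elimination M-G-S-L with the modified potential
phi1p = zeros(2,2);
for g = 1:2
  for s = 1:2
    for m = 1:2
      phi1p(g,s) = phi1p(g,s) + phi1(m)*phi2(g,m)*phi3(s,m);
    end
  end
end
phi4p = phi4e*phi1p;
PDe = zeros(2,1);
for d = 1:2
  for s = 1:2
    for l = 1:2
      PDe(d) = PDe(d) + phi5(d,s,l)*phi4p(l,s);
    end
  end
end
P_L_yes   = sum(PDe);                     % probability of the evidence, P(L='yes') = 0.202
P_D_given = PDe/sum(PDe);                 % P(D | L='yes') = [0.167; 0.833]

For the tables of Figure A.1, observing liquefaction thus raises the probability of collapse from 0.3161 to about 0.83 in this small example.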

Stochastic simulation
Even though the junction tree algorithm increases the efficiency of evaluating BPNs, it can happen that for some problems the space requirements cannot be met by the available hardware. In that case approximate methods such as stochastic simulation can be used.


The probabilistic structure in the form of the conditional probability tables of the BPN is exploited to draw a random configuration of the variables in the BPN a sufficient number of times. Again, the BPN given in Figure A.1 is used for illustration of the algorithm. First, the state of the node M is sampled. Another random number is drawn and the state of node G is assigned according to the conditional probability table of the variable G given M. This procedure is repeated to obtain the states of S, L and D, and a configuration is determined. Repeating this procedure 100000 times and sorting yields Table A.1.

Table A.1: Counts of the sampled configurations of the BPN variables out of 100000 samples (rows: states of M and G; columns: states of S, L and D).
MG \ SLD   111     112     121     122     211     212     221     222
11         1505    5921    58790   6492    76      701     2124    5240
12         1116    4609    2214    247     53      584     83      198
21         5       18      154     13      16      154     472     1117
22         129     453     217     26      492     4639    642     1500

The probability distributions of the variables are calculated by counting in the sample set. For example, the node D is in the state 'No' in the configurations 111, 121, 211 and 221 of (S, L, D), i.e. in 68088 of the cases. This yields the marginal distribution of node D:

P(D = 'No') = 68088/100000 = 0.6809
P(D = 'Collapse') = 31912/100000 = 0.3191

The marginal distributions of the other variables can be calculated analogously.
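A minimal MATLAB sketch of this sampling scheme is given below. It is illustrative only; it is not the code that generated Table A.1, and the function handle sample is introduced here just for the example. Each node is sampled conditional on the states already drawn for its parents, and P(D) is estimated from the counts.

nSim = 100000;                      % number of sampled configurations
countD = [0 0];                     % counts of D = 'no' and D = 'collapse'

pM = [0.9 0.1];                     % P(M)
pG = [0.9 0.1; 0.2 0.8];            % P(G|M), one row per state of M
pS = [0.9 0.1; 0.1 0.9];            % P(S|M), one row per state of M
pL = [0.1 0.9; 0.7 0.3];            % P(L|G), one row per state of G
pD = [0.2 0.8;                      % P(D|S,L): row 1: S=0,    L='yes'
      0.9 0.1;                      %           row 2: S=0,    L='no'
      0.1 0.9;                      %           row 3: S=10cm, L='yes'
      0.3 0.7];                     %           row 4: S=10cm, L='no' (row index 2*(s-1)+l)

sample = @(p) 1 + (rand > p(1));    % draw state 1 or 2 from a binary distribution

for k = 1:nSim
  m = sample(pM);
  g = sample(pG(m,:));
  s = sample(pS(m,:));
  l = sample(pL(g,:));
  d = sample(pD(2*(s-1)+l,:));
  countD(d) = countD(d) + 1;
end
P_D = countD/nSim                   % approx. [0.68 0.32]; exact values 0.6839 and 0.3161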


B Soil Parameters

Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4510586 533557 1 1.725 N.A. ML 13 4511110 533760 0.4 1.725 41 ML 17 4518615 536041 0.25 3.725 N.A. ML 11 4518347 536011 0.5 5.725 N.A. CH 15 4517897 536071 0.3 4.225 30 CH 7 4517700 535828 0.5 4.525 N.A. CH 6 4517525 536048 0.25 5.225 N.A. N.A. 11 4517298 536405 0.5 13.225 N.A. N.A. 7 4516769 536228 1.7 5.375 N.A. ML 17 4509494 530734 1.42 2.225 N.A. N.A. 4 4508982 530549 1.1 3.725 N.A. N.A. 7 4507235 530548 4.5 6.225 N.A. N.A. 26 4506209 530302 4.4 4.725 N.A. N.A. 20 4509672 529883 0.4 6.725 N.A. N.A. 3 4508972 529783 1.9 7.725 N.A. N.A. 7 4507679 529543 3.8 6.225 N.A. N.A. 44 4507192 529685 7.4 7.725 N.A. CH 28 4506205 529882 4.1 15.225 N.A. N.A. 16 4509880 528896 0.16 10.725 N.A. N.A. 5 4508998 528847 1.4 9.225 N.A. N.A. 3 4507602 529129 N.A. 1.725 N.A. N.A. 21 4507200 529083 N.A. 7.725 N.A. N.A. 30 4506606 529571 5.2 1.725 N.A. N.A. 19 4509050 528349 0.4 12.725 N.A. N.A. 4 4507534 528354 N.A. 1.725 N.A. N.A. 29 4507188 528172 N.A. 2.225 N.A. N.A. 18 4510816 530058 0.55 7.725 N.A. N.A. 7 4510221 529937 0.6 2.225 N.A. N.A. 4 4509426 528836 0.1 10.725 N.A. N.A. 4 4508605 528761 3.5 1.725 N.A. CL-ML 27 4508528 528092 15.4 1.725 N.A. N.A. 23 4506454 528838 N.A. 2.225 N.A. N.A. 23



Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4508215 530236 N.A. 2.225 N.A. N.A. 18 4514164 533247 0.3 5.225 N.A. N.A. 10 4513852 532902 2.2 2.725 N.A. N.A. 16 4514545 532248 1.2 1.725 N.A. N.A. 8 4514714 532749 2.1 3.225 N.A. N.A. 4 4515180 533164 1.4 6.725 N.A. N.A. 7 4515574 533262 N.A. 6.725 N.A. N.A. 6 4515180 533802 1 3.225 N.A. N.A. 6 4515111 534130 0.5 1.725 N.A. N.A. 4 4515320 534503 N.A. 1.725 N.A. N.A. 16 4515383 534108 1.25 1.725 N.A. N.A. 6 4515631 533840 0.7 3.225 N.A. N.A. 5 4515648 534227 0.6 3.225 N.A. N.A. 4 4516010 534163 0.22 4.725 N.A. N.A. 6 4516540 534160 N.A. 3.225 N.A. N.A. 6 4516980 534260 N.A. 1.725 N.A. N.A. 9 4516860 534830 0.6 1.725 N.A. N.A. 4 4515900 535220 0.75 6.225 N.A. N.A. 8 4516817 535562 0.7 3.225 N.A. N.A. 6 4517910 534440 0.8 6.725 N.A. N.A. 12 4517870 533910 0.6 3.225 N.A. N.A. 11 4517860 535250 0.95 2.725 N.A. N.A. 6 4518800 534910 1.1 3.225 N.A. N.A. 7 4517590 533420 0.55 3.225 N.A. N.A. 12 4517620 532720 0.7 6.225 N.A. N.A. 5 4516980 532820 0.8 4.725 N.A. N.A. 5 4516990 533350 0.6 3.725 N.A. N.A. 7 4516310 533050 0.7 3.225 N.A. N.A. 19 4516540 533480 0.58 2.225 N.A. N.A. 5 4516330 533440 0.65 3.725 N.A. N.A. 4 4517180 532110 0.75 4.725 N.A. N.A. 6 4515959 533500 0.68 3.225 N.A. N.A. 6 4515127 534636 0.5 3.225 N.A. N.A. 8 4516510 533850 1.1 3.225 N.A. N.A. 9 4516331 534493 0.8 3.225 N.A. N.A. 6 4514854 535263 0.7 4.725 N.A. N.A. 11 4517410 533840 0.8 6.225 N.A. N.A. 19 4519040 534253 0.9 15.225 N.A. N.A. 17 4517290 531640 1.5 1.725 N.A. N.A. 13 4516220 532150 1 4.225 N.A. N.A. 13


Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4517990 533200 1.2 2.225 N.A. N.A. 10 4518510 533650 1 2.225 N.A. N.A. 44 4516760 534980 1.12 1.975 N.A. N.A. 7 4515020 534860 2.5 1.725 N.A. N.A. 14 4518170 534310 0.75 1.725 N.A. N.A. 9 4516040 533870 1.35 3.225 N.A. N.A. 11 4515920 533700 0.9 3.225 N.A. N.A. 6 4513820 532380 1.2 2.225 N.A. N.A. 4 4514220 532510 1.25 5.125 N.A. N.A. 3 4513970 531980 1.3 2.125 N.A. N.A. 3 4513490 531780 1.7 3.725 N.A. N.A. 24 4514250 532110 2 2.225 N.A. N.A. 6 4515260 532610 2.4 2.225 N.A. N.A. 4 4516000 532590 3.7 5.725 N.A. N.A. 8 4516760 531770 N.A. 1.725 N.A. N.A. 6 4517300 531500 3.1 4.725 N.A. N.A. 9 4516190 532640 1.7 1.725 N.A. N.A. 7 4516460 534320 1.1 10.225 N.A. N.A. 2 4515600 534280 0.9 3.725 N.A. N.A. 12 4517740 533570 1.95 9.725 N.A. N.A. 11 4514690 533360 2.1 8.725 N.A. N.A. 6 4516230 533480 2.05 1.725 N.A. N.A. 2 4514790 532710 1.9 5.225 N.A. N.A. 8 4512230 533090 0.9 5.225 N.A. N.A. 11 4512430 533160 1 4.725 N.A. N.A. 6 4512510 533260 0.9 1.725 N.A. MH 9 4512210 533480 0.95 2.225 N.A. N.A. 7 4512640 533320 0.85 3.725 N.A. N.A. 7 4512320 533630 0.8 1.725 N.A. N.A. 9 4512360 533590 0.85 2.225 N.A. N.A. 6 4511970 533740 0.85 12.225 N.A. N.A. 4 4512100 534100 0.3 1.725 N.A. N.A. 9 4512490 534010 0.9 3.725 N.A. N.A. 7 4512330 533840 0.8 1.725 N.A. N.A. 9 4512870 533990 0.9 3.225 N.A. N.A. 6 4512700 534280 N.A. 3.225 N.A. ML 5 4512340 534570 N.A. 3.725 N.A. N.A. 6 4512940 534670 N.A. 1.725 N.A. N.A. 4 4513310 534670 N.A. 1.725 N.A. N.A. 4 4512450 532860 N.A. 1.725 N.A. N.A. 28



Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4512970 535520 N.A. 3.225 N.A. N.A. 5 4512890 535890 N.A. 7.725 N.A. MH 9 4513150 535790 N.A. 2.225 N.A. N.A. 10 4513510 534760 N.A. 2.225 N.A. ML 3 4513390 535760 2.1 3.225 N.A. N.A. 2 4513030 535030 N.A. 1.725 N.A. N.A. 5 4513850 535490 N.A. 3.225 N.A. N.A. 7 4512580 534960 N.A. 1.725 N.A. N.A. 8 4513490 535110 N.A. 2.175 N.A. N.A. 7 4513590 534470 N.A. 2.225 N.A. N.A. 7 4513720 535190 N.A. 1.725 N.A. N.A. 11 4513830 534510 N.A. 3.725 N.A. N.A. 8 4514160 534690 N.A. 1.725 N.A. N.A. 3 4514060 535140 N.A. 1.725 N.A. CL 5 4513840 534790 N.A. 2.025 N.A. N.A. 4 4515120 535280 N.A. 1.725 N.A. N.A. 2 4513860 534940 N.A. 2.225 N.A. N.A. 6 4515060 535040 N.A. 2.225 N.A. N.A. 6 4514360 534500 N.A. 4.725 N.A. N.A. 6 4514460 534700 1.2 3.225 N.A. N.A. 5 4514060 534870 N.A. 2.225 N.A. N.A. 3 4514170 534360 N.A. 10.725 N.A. N.A. 8 4514480 534400 0.75 2.225 N.A. N.A. 3 4514740 534680 N.A. 2.225 N.A. N.A. 2 4514730 534350 N.A. 1.725 N.A. N.A. 6 4514440 534120 N.A. 2.225 N.A. N.A. 4 4513900 534140 N.A. 1.725 N.A. N.A. 7 4513880 533910 N.A. 1.725 N.A. N.A. 4 4514660 533940 N.A. 3.225 N.A. N.A. 4 4514720 533500 N.A. 3.225 N.A. N.A. 6 4513400 533370 N.A. 4.725 N.A. N.A. 47 4513620 533830 N.A. 3.225 N.A. N.A. 5 4513620 533570 N.A. 2.225 N.A. N.A. 4 4513840 533440 1.1 1.725 N.A. CH 4 4513850 533690 0.6 2.225 N.A. N.A. 5 4513800 533280 N.A. 5.225 N.A. N.A. 7 4514020 533510 0.6 1.725 N.A. N.A. 4 4514240 533610 0.9 2.225 N.A. N.A. 4 4514250 533840 0.5 1.725 N.A. CL-ML 3 4514470 533710 0.65 2.225 N.A. N.A. 5


Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4514230 534610 0.8 3.225 N.A. N.A. 5 4512060 534980 2 3.725 N.A. MH 6 4512420 532680 N.A. 1.725 N.A. N.A. 11 4512170 532710 N.A. 2.225 N.A. N.A. 11 4514890 534230 1 2.225 N.A. N.A. 4 4512680 533460 N.A. 1.725 N.A. N.A. 40 4510847 530771 1.2 3.225 N.A. N.A. 8 4510295 530515 1.2 5.225 N.A. N.A. 2 4508300 530584 0.6 5.225 N.A. N.A. 7 4507747 530569 1.35 2.225 N.A. N.A. 15 4506414 530583 6.4 2.225 N.A. CL 23 4508396 529774 5.3 2.225 N.A. N.A. 2 4506692 529776 4.8 6.225 N.A. N.A. 26 4506550 528158 N.A. 9.225 N.A. N.A. 28 4507617 529963 8.4 1.725 N.A. SC 24 4509134 530507 1.1 1.725 N.A. N.A. 9 4509413 530483 0.9 1.725 N.A. N.A. 7 4508726 530068 1 2.225 N.A. N.A. 5 4515083 532465 1.2 1.725 N.A. CH 4 4516105 533764 1.1 3.225 N.A. CL 7 4516260 533319 0.92 2.225 23 ML 2 4516303 532993 2.2 1.725 N.A. N.A. 3 4514730 533496 1.1 1.725 N.A. N.A. 4 4513596 533261 2.1 8.525 N.A. N.A. 14 4513513 533737 1.2 1.725 N.A. N.A. 2 4516179 534731 0.1 3.225 N.A. CL 3 4511749 534967 1.7 1.725 N.A. N.A. 7 4511748 534666 1.3 1.725 N.A. N.A. 5 4511872 534928 0.7 3.225 N.A. N.A. 3 4512113 534173 1.1 3.725 N.A. N.A. 5 4513324 534448 N.A. 1.725 N.A. N.A. 4 4511590 534067 0.5 1.725 N.A. N.A. 9 4513694 534836 3.3 3.225 N.A. ML 7 4512960 533784 2.2 9.225 N.A. N.A. 30 4513985 533349 1.2 1.725 N.A. N.A. 8 4510460 532816 1 1.725 N.A. SC 13 4510560 534750 2.2 1.725 48 ML 20 4511130 534850 1 1.725 63 CL 14 4510172 533281 1.7 1.725 39 CL 12 4510810 533510 1.4 1.725 N.A. SM 7



Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4511230 533420 1.4 1.725 N.A. ML 14 4510076 532647 1 1.725 N.A. ML 7 4510890 532760 1 1.725 N.A. ML 12 4510411 533901 N.A. 1.725 N.A. MH 11 4511210 533980 N.A. 1.725 N.A. ML 16 4510501 533419 N.A. 1.725 N.A. N.A. 14 4514676 533800 1.2 1.725 N.A. CH 7 4516652 533622 1.8 1.725 N.A. N.A. 10 4516705 533107 1.3 4.725 N.A. CL-ML 3 4516898 534387 0.9 3.225 N.A. CL 4 4513742 535458 2.6 3.225 N.A. CL 6 4515103 532498 1.8 1.725 N.A. N.A. 5 4514572 532838 1.1 4.225 N.A. N.A. 5 4514657 532077 1.4 1.725 N.A. N.A. 5 4514682 532635 1.9 5.225 N.A. CH 8 4514249 532738 0.41 5.225 N.A. N.A. 4 4515889 532330 2.6 3.225 N.A. CL 3 4513587 531740 1.7 3.725 N.A. N.A. 24 4513904 531738 2 2.225 N.A. N.A. 6 4516704 533921 2 2.225 N.A. N.A. 12 4509071 534840 1.5 1.725 N.A. SM 11 4508640 534449 1.3 1.725 7 ML 11 4508277 533858 2.2 1.725 8 SM 11 4508543 533388 1.6 1.725 10 ML 12 4508892 532955 1.8 1.725 7 ML 16 4508339 532129 1.2 1.725 16 ML 7 4508106 531594 1.8 6.225 16 CL 10 4507660 531489 2 14.725 16 ML 18 4507170 531430 2.2 9.725 18 ML 10 4507380 531689 2.1 1.725 12 ML 10 4516743 533935 2 1.725 N.A. N.A. 5 4516332 533982 1.7 1.725 N.A. CL 4 4516612 534342 2.8 3.225 15 ML 10 4517155 533067 1.5 3.225 N.A. CL 8 4515661 534286 1.8 1.725 11 ML 2 4516047 533200 1.11 8.15 40 CL 3 4516118 533651 2.4 9.6 19 ML 4 4515957 532953 0.37 3.05 69 CL 3 4515841 532505 2.6 3.55 N.A. CL 3 4516300 534230 0.74 1.6 15 ML 2


Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4516367 534373 0.62 1.5 17 ML 2 4516219 533488 0.92 2.2 23 ML 2 4516197 533291 N.A. 4.35 22 CL 2 4515721 534498 0.5 1.5 19 ML 3 4515664 534209 1.16 1.15 32 CL 2 4516879 534078 1.65 2.85 13 ML 4 4515919 534284 0.64 3.45 15 ML 2 4515037 534780 N.A. 4.95 23 ML 3 4516312 533341 0.9 6.1 18 CL-ML 3 4516299 533355 0.7 3.725 N.A. CL-CH 2 4516312 533331 0.87 0.9 N.A. ll 3 4516307 533355 0.82 5.2 42 CL-MH 3 4516934 533804 3.3 1.8 28 ML 2 4516900 533803 1.68 3.4 4 ML 2 4516841 533166 1.42 1.8 N.A. ML-CL 2 4516865 533170 1.45 1.4 N.A. CL-ML 2 4516862 533153 1.3 4.85 N.A. CL-ML 3 4516857 533152 0.44 2.2 46 CL 2 4516854 533168 N.A. 3 38 CL 1 4516858 533168 0.96 2.05 20 ML 1 4516850 533167 N.A. 2.05 18 ML 3 4515112 534418 1.68 1.05 30 CL 1 4515121 534405 N.A. 2.5 N.A. ML 3 4515121 534425 2.28 2.95 16 ML 4 4516132 534262 0.7 3.8 N.A. MH-CH 3 4516130 534262 0.44 1.8 N.A. SP-SM 6 4516130 534264 0.35 4.5 51 CH 3 4515410 534437 1.64 3.6 27 ML-CL 3 4515760 534548 0.67 1.55 26 CL-ML 3 4515780 534527 0.45 5.15 N.A. ML 6 4516444 535187 1.72 4.15 50 CH-MH 3 4516020 533156 0.71 2.75 22 ML 4 4515803 534714 0.41 1.35 25 ML-CL 2 4515775 534700 0.69 1.75 N.A. ML 3 4515790 534706 0.7 2.8 30 ML 6 4515797 534718 0.4 1.4 36 ML 3 4516137 534118 0.8 1.3 40 CL 3 4516317 534242 0.68 3.2 41 ML-CL 2 4516986 533823 2.3 4.725 N.A. CL 4 4517606 533441 2.7 1.725 N.A. N.A. 3



Table B.1: Data for the most susceptible soil layer to liquefaction in 312 borings in Adapazari. North UTM East UTM GW [m] Depth [m] FC [%] USCS N30 4518048 533270 2.8 1.725 N.A. N.A. 22 4517011 532109 0.65 1.725 N.A. N.A. 5 4517114 532675 0.9 3.225 N.A. CH 3 4516743 532443 2.8 1.725 N.A. N.A. 3 4517258 532841 1.65 4.725 N.A. CH 5 4516081 532671 1.7 1.725 N.A. N.A. 7 4517198 531661 3.1 3.225 N.A. N.A. 9 4515582 533257 2.4 4.725 N.A. CL 8 4515909 533039 1.9 6.225 N.A. CL 7 4515587 533840 1.7 1.725 N.A. CH 4 4513949 532538 2.8 3.225 N.A. N.A. 11 4517830 533760 N.A. 1.725 N.A. N.A. 4 4517551 535694 1.4 1.725 N.A. N.A. 6 4518577 534205 3.5 15.225 N.A. N.A. 14 4517882 534397 2.3 1.725 N.A. N.A. 7 4514166 534765 1 1.725 N.A. N.A. 8 4514577 534692 1.1 1.725 N.A. N.A. 2 4515784 534254 0.7 3.225 N.A. N.A. 4 4514916 535331 1 1.725 N.A. N.A. 9 4517612 534620 1 2.225 N.A. N.A. 44 4518243 534987 2.1 1.725 N.A. CL 2 4516776 534861 0.9 1.725 N.A. N.A. 4 4517015 535543 1 3.225 N.A. CL 5 4516822 535448 0.5 1.725 N.A. N.A. 6 4515965 534786 0.7 1.725 N.A. ML 8 4516008 535203 1 1.725 N.A. ML 4 4515905 535233 N.A. 3.225 N.A. N.A. 5 4515365 534148 1.87 1.725 52 CH 6 4515169 533935 1.6 1.725 N.A. CL 3 4515191 533503 0.4 3.225 N.A. CH 12 4515117 533114 2.2 3.225 N.A. CL 13 4515222 534601 0.9 3.225 N.A. CL 4 4515800 535014 0.4 3.225 N.A. CL 3 4515034 535010 1.6 1.725 N.A. ML 6 4515765 534620 0.89 1.725 N.A. ML 3 4512450 535212 3 1.725 N.A. ML 10 4513304 535081 1.15 1.725 N.A. N.A. 8 4512641 535583 4.1 1.725 N.A. ML 14 4513151 535445 2.8 3.225 N.A. CL 7


C Software Codes

C.1 Seismic hazard analysis tools


MATLAB files for evaluating the BPN in subsection 4.1.1 - A Bayesian probabilistic network for a generic seismic source
function BPN_PSHA

10

15

20

% by Yahya Y. Bayraktarli, 30/08/2009
% ETH Zürich
% bayraktarli@hotmail.com
%
% Bayraktarli, Y.Y., Baker, J.W., Faber, M.H., 2009. Uncertainty treatment
% in earthquake modeling using Bayesian networks, Georisk, accepted for
% publication.
%
% This script reads a Bayesian probabilistic network for probabilistic
% seismic hazard analysis, calculates the probability distribution of the
% nodes in the BPN and compiles the BPN with the inference engine of HUGIN.
% Output is specified in form of marginal and conditional probability
% distributions.
%
% For controlling HUGIN from MATLAB the ActiveX server is loaded and
% then the available functions in this library are used to alter objects
% created from this library. The HUGIN ActiveX Server is loaded with the
% following command, creating a HUGIN API object named bpn:
bpn=actxserver('HAPI.Globals');

%An object domain is created which holds the network:


25

domain = i n v o k e ( bpn , LoadDomainFromNet , C : \ D i s s e r t a t i o n \ C h a p t e r 4 1\BPN \ . . . . . . BPN_PSHA . n e t , 0 , 0 ) ;

%The nodes to be manipulated are defined:


ndEQ_M= i n v o k e ( domain , GetNodeByName , EQ_Magnitude ) ; ndEQ_R= i n v o k e ( domain , GetNodeByName , E Q _ D i s t a n c e ) ; ndEps_SD= i n v o k e ( domain , GetNodeByName , E p s i l o n _ S D ) ; ndSD= i n v o k e ( domain , GetNodeByName , SD ) ;

30

%The number of discrete states of the nodes Magnitude, Distance, %Eps_SD and SD is set:
35

nM= 1 0 ; nR = 1 0 ; nEps_SD = 5 0 ;

147

C Software Codes
nSD = 1 0 ; s e t ( ndEQ_M , N u m b e r O f S t a t e s ,nM ) ; s e t ( ndEQ_R , N u m b e r O f S t a t e s , nR ) ; s e t ( ndEps_SD , N u m b e r O f S t a t e s , nEps_SD ) ; s e t ( ndSD , N u m b e r O f S t a t e s , nSD ) ;

40

45

% Marginal distribution of node Distance (Figure 4.1b) is calculated and % plotted:


[ R , P_R ] = Line_EQ_R_1 ( 5 0 0 , nR , 50 ,75 , 15 , 30); M=M_EQ; figure bar ( P_R )

50

% Marginal distribution of node Magnitude (Figure 4.1d) is calculated and % plotted:


[ Nu_Mmin , M_EQ, P_M] =EQ_M_1 ( 4 , 7 . 3 , 4 . 4 , 1 , nM ) ; figure bar (P_M)

55

% Marginal distribution of node e_SD (Figure 4.1f) is calculated and % plotted


60

[ Eps_SD , dSD , e ] = EPS_SD_1 ( nEps_SD ) ; figure bar ( Eps_SD )

%The discrete probabilities are set for node Magnitude in the BPN:
65

f o r i = 1 : (nM) s e t ( ndEQ_M . T a b l e , D a t a , ( i 1) ,P_M( i ) ) ; s e t ( ndEQ_M , S t a t e L a b e l , ( i 1) , [ M= num2str (M( i ) ) ] ) ; end

%The discrete probabilities are set for node Distance in the BPN:
70

f o r i = 1 : ( nR ) s e t ( ndEQ_R . T a b l e , D a t a , ( i 1) , P_R ( i ) ) ; s e t ( ndEQ_R , S t a t e L a b e l , ( i 1) , [ R= num2str (R( i ) ) ] ) ; end

75

%The discrete probabilities are set for node Eps_SD in the BPN:
f o r i = 1 : ( nEps_SD ) s e t ( ndEps_SD . T a b l e , D a t a , ( i 1) , Eps_SD ( i ) ) ; s e t ( ndEps_SD , S t a t e L a b e l , ( i 1) , [ Eps_SD= num2str ( dSD ( i ) ) ] ) ; end

80

%The conditional probability table of the node SD is initialized with %zeros:


f o r i = 1 : (nMnRnSDnEps_SD ) s e t ( ndSD . T a b l e , D a t a , ( i 1 ) , 0 ) ; end

85

90

%for all combinations of the states in the node Magnitude and Distance %the spectral displacements are calculated with the Boore Joyner and Fumal %attenuation model

148

[SDBOORE] = SD_1 ( nEps_SD ,M, R ) ;

%The limits for the discretisation of the node SD are set:


CoeffSD = [ 0 0 . 0 0 1 0 . 0 0 2 0 . 0 0 3 0 . 0 0 4 0 . 0 0 5 0 . 0 0 6 0 . 0 1 0 . 0 4 0 . 0 7 max (SDBOORE ) ] ;
95

%The labels of the states are set for node SD in the BPN:
f o r i = 1 : ( nSD ) s e t ( ndSD , S t a t e L a b e l , ( i 1) , [ SD= num2str ( CoeffSD ( i + 1 ) ) ] ) ; end
100

%The discrete probabilities are set for node SD in the BPN:


K= 0 ; f o r i = 1 : l e n g t h (SDBOORE) f o r j = 1 : nSD i f SDBOORE( i ) <= CoeffSD ( j + 1 ) & SDBOORE( i ) > CoeffSD ( j ) s e t ( ndSD . T a b l e , D a t a , ( j +K 1 ) , 1 ) ; K=K+nSD ; end end end

105

110

% The loaded BPN is compiled with the HUGIN inference engine:


i n v o k e ( domain , Compile ) ;
115

% The BPN with the manipulated states and probabilities can be saved % using the command:
i n v o k e ( domain , SaveAsNet , C : \ D i s s e r t a t i o n \ C h a p t e r 4 1\BPN \ . . . BPN_PSHA_Output . n e t ) ;

120

% Marginal distribution of node SD (Figure 4.2b) is calculated and % plotted


f o r i = 1 : nSD

125

% The marginal probability of any state of any node can be read out % from a compiled BPN. Note that the first state is labeled as 0, the % second state as 1, etc.
P_SD ( i ) = g e t ( ndSD , B e l i e f , ( i 1 ) ) ; end figure bar ( P_SD )

130

% Conditional probability distribution of node SD given M=5.5 and R=60km % (Figure 4.2c) is calculated and plotted % The states of the nodes with evidences are selected
135

i n v o k e ( ndEQ_M , S e l e c t S t a t e , 4 ) ; i n v o k e ( ndEQ_R , S e l e c t S t a t e , 5 ) ;

% With new evidences set, the BPN has to be propagate the findigs
i n v o k e ( domain , P r o p a g a t e , h E q u i l i b r i u m S u m , hModeNormal ) ;
140

f o r i = 1 : nSD P_SD ( i ) = g e t ( ndSD , B e l i e f , ( i 1 ) ) ; end

149

C Software Codes
figure bar ( P_SD ) f u n c t i o n BPN_PSHA_Deaggregation

145

10

15

20

% by Yahya Y. Bayraktarli, 30/08/2009 % ETH Zrich % bayraktarli@hotmail.com % % Bayraktarli, Y.Y, Baker, J.W., Faber, M.H., 2009. Uncertainty treatment % in earthquake modeling using Bayesian networks, Georisk, accepted for % publication. % % This script reads a Bayesian probabilistic network for probabilistic % seismic hazard analysis, calculates the probability distribution of the % nodes in the BPN and compiles the BPN with the inference engine of HUGIN. % Output is specified in form of deaggregation. % % %For controling HUGIN from MATLAB the ActiveX server is loaded and %then the available functions in this library are used to alter objects %created from this library. The HUGIN ActiveX Server is loaded with the %following command and create a HUGIN API object named bpn:
bpn= a c t x s e r v e r ( HAPI . G l o b a l s ) ;

%An object domain is created which holds the network:


25

domain = i n v o k e ( bpn , LoadDomainFromNet , . . . C : \ D i s s e r t a t i o n \ C h a p t e r 4 1\BPN \ BPN_PSHA . n e t , 0 , 0 ) ;

%The nodes to be manipulated are defined:


ndEQ_M= i n v o k e ( domain , GetNodeByName , EQ_Magnitude ) ; ndEQ_R= i n v o k e ( domain , GetNodeByName , E Q _ D i s t a n c e ) ; ndEps_SD= i n v o k e ( domain , GetNodeByName , E p s i l o n _ S D ) ; ndSD= i n v o k e ( domain , GetNodeByName , SD ) ;

30

%The number of discrete states of the nodes Magnitude, Distance, %Eps_SD and SD is set:
35

40

nM= 1 0 ; nR = 1 0 ; nEps_SD = 5 0 ; nSD = 1 0 ; s e t ( ndEQ_M , N u m b e r O f S t a t e s ,nM ) ; s e t ( ndEQ_R , N u m b e r O f S t a t e s , nR ) ; s e t ( ndEps_SD , N u m b e r O f S t a t e s , nEps_SD ) ; s e t ( ndSD , N u m b e r O f S t a t e s , nSD ) ;

%The probability distribution of node Eps_SD is calculated:


45

[ Eps_SD , dSD , e ] = EPS_SD_1 ( nEps_SD ) ;

%The probability distribution of node Magnitude is calculated:


[ Nu_Mmin , M_EQ, P_M] =EQ_M_1 ( 4 , 7 . 3 , 4 . 4 , 1 , nM ) ;
50

%The probability distribution of node Distance is calculated:

150

[ R , P_R ] = Line_EQ_R_1 ( 5 0 0 , nR , 50 ,75 , 15 , 30); M=M_EQ;

%The discrete probabilities are set for node Magnitude in the BPN:
55

f o r i = 1 : (nM) s e t ( ndEQ_M . T a b l e , D a t a , ( i 1) ,P_M( i ) ) ; s e t ( ndEQ_M , S t a t e L a b e l , ( i 1) , [ M= num2str (M( i ) ) ] ) ; end

60

%The discrete probabilities are set for node Distance in the BPN:
f o r i = 1 : ( nR ) s e t ( ndEQ_R . T a b l e , D a t a , ( i 1) , P_R ( i ) ) ; s e t ( ndEQ_R , S t a t e L a b e l , ( i 1) , [ R= num2str (R( i ) ) ] ) ; end

65

%The discrete probabilities are set for node Eps_SD in the BPN:
f o r i = 1 : ( nEps_SD ) s e t ( ndEps_SD . T a b l e , D a t a , ( i 1) , Eps_SD ( i ) ) ; s e t ( ndEps_SD , S t a t e L a b e l , ( i 1) , [ Eps_SD= num2str ( dSD ( i ) ) ] ) ; end

70

%The conditional probability table of the node SD is initialized with %zeros:


75

f o r i = 1 : (nMnRnSDnEps_SD ) s e t ( ndSD . T a b l e , D a t a , ( i 1 ) , 0 ) ; end

80

%for all combinations of the states in the node Magnitude and Distance %the spectral displacements are calculated with the Boore Joyner and Fumal %attenuation model
[SDBOORE] = SD_1 ( nEps_SD ,M, R ) ;

%The limits for the discretisation of the node SD are set:


85

CoeffSD = [ 0 0 . 0 0 1 0 . 0 0 2 0 . 0 0 3 0 . 0 0 4 0 . 0 0 5 0 . 0 0 6 0 . 0 1 0 . 0 4 0 . 0 7 max (SDBOORE ) ] ;

%The labels of the states are set for node SD in the BPN:
f o r i = 1 : ( nSD ) s e t ( ndSD , S t a t e L a b e l , ( i 1) , [ SD= num2str ( CoeffSD ( i + 1 ) ) ] ) ; end

90

%The discrete probabilities are set for node SD in the BPN:


K= 0 ; f o r i = 1 : l e n g t h (SDBOORE) f o r j = 1 : nSD i f SDBOORE( i ) <= CoeffSD ( j + 1) & SDBOORE( i ) > CoeffSD ( j ) s e t ( ndSD . T a b l e , D a t a , ( j +K 1 ) , 1 ) ; K=K+nSD ; end end end

95

100

% The loaded BPN is compiled with the HUGIN inference engine:

151

C Software Codes
i n v o k e ( domain , Compile ) ;
105

% The BPN with the manipulated states and probabilities can be saved using % the command:
i n v o k e ( domain , SaveAsNet , . . . C : \ D i s s e r t a t i o n \ C h a p t e r 4 1\BPN \ BPN_PSHA_Output . n e t ) ;
110

% The states of the nodes with evidences are selected


i n v o k e ( ndSD , S e l e c t S t a t e , 4 ) ;
115

% With new evidences set, the BPN has to be propagate the findigs
i n v o k e ( domain , P r o p a g a t e , h E q u i l i b r i u m S u m , hModeNormal ) ;

% Deaggregation by magnitude and distance for 4mm<=SD<5mm (Figure 4.3) is % calculated:


120

125

130

f o r j = 1 : nR P_R2 ( j ) = g e t ( ndEQ_R , B e l i e f , ( j 1 ) ) ; end f o r i = 1 :nM f o r j = 1 : nR i n v o k e ( ndEQ_R , S e l e c t S t a t e , ( j 1 ) ) ; i n v o k e ( domain , P r o p a g a t e , h E q u i l i b r i u m S u m , hModeNormal ) ; P_M_R ( i , j ) = P_R2 ( j ) ( g e t ( ndEQ_M , B e l i e f , ( i 1 ) ) ) ; end end figure b a r 3 ( P_M_R ) f u n c t i o n [ Eps_SD , dSD , e ] = EPS_SD_1 ( nEps_SD )

%Calculates the probabilities of the standard normal distribution for the
%bins given the number of discretisation intervals nEps_SD
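The bin probabilities computed by EPS_SD_1 follow directly from the standard normal distribution function $\Phi$: with bin edges $e_1<\dots<e_{n-1}$ equally spaced between $-3$ and $3$,
\[
P(\varepsilon_1)=\Phi(e_1),\qquad P(\varepsilon_i)=\Phi(e_i)-\Phi(e_{i-1}),\qquad P(\varepsilon_n)=1-\Phi(e_{n-1}),
\]
and the bin midpoints (edge $\mp 0.5$ for the two tail bins) are stored in dSD as representative values.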
5

emin =3; emax = 3 ; f o r i = 1 : ( nEps_SD 1) e ( i ) = emin + ( i 1)( emaxemin ) / ( nEps_SD 2 ) ; end f o r i = 1 : nEps_SD i f i ==1 Eps_SD ( i ) = normcdf ( e ( i ) , 0 , 1 ) ; e l s e i f i ==( l e n g t h ( e ) + 1 ) Eps_SD ( i ) = normcdf ( e ( 1 ) , 0 , 1 ) ; else Eps_SD ( i ) = normcdf ( e ( i ) , 0 , 1 ) normcdf ( e ( i 1 ) , 0 , 1 ) ; end end Eps_SD=Eps_SD ; f o r i = 1 : nEps_SD i f i ==1 dSD ( i ) = e ( i ) 0 . 5 ;

10

15

20


25

e l s e i f i ==( l e n g t h ( e ) + 1 ) dSD ( i ) = e ( i 1 ) + 0 . 5 ; else dSD ( i ) = ( e ( i 1)+ e ( i ) ) / 2 ; end end dSD=dSD ; f u n c t i o n [ Nu_Mmin , M_EQ, P_M , M l i m i t s ] =EQ_M_1 ( Mmin , Mmax , a , b , nM)

30

%Assuming that earthquakes of magnitude less than Mmin do not contribute to
%damage, the mean rate of exceedance for Mmin is Nu_Mmin. P_M is the
%probability of having a magnitude in the magnitude range characterised by
%M_EQ. The formula for the Gutenberg-Richter law with upper (Mmax) and
%lower (Mmin) bounds is applied.
%a and b are the parameters of the Gutenberg-Richter recurrence relation.
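In compact form, the bounded Gutenberg-Richter relation implemented below gives the rate of exceedance of the minimum magnitude as $\nu_{M_{min}} = 10^{\,a - b\,M_{min}}$ and the probability of the $i$-th magnitude bin $[m_i, m_{i+1}]$ as
\[
P\big(M \in [m_i, m_{i+1}]\big) = \frac{e^{-\beta(m_i - M_{min})} - e^{-\beta(m_{i+1} - M_{min})}}{1 - e^{-\beta(M_{max} - M_{min})}}, \qquad \beta = 2.303\,b .
\]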
Nu_Mmin = 1 0 ^ ( a+b ( Mmin ) ) ;

10

f o r i = 1 : (nM+ 1 ) M l i m i t s ( i ) =Mmin+ (MmaxMmin ) / nM ( i 1 ) ; end


15

20

f o r i = 1 :nM M_EQ( i ) = ( M l i m i t s ( i ) + M l i m i t s ( i + 1 ) ) / 2 ; end M_EQ=M_EQ ; f o r i = 1 :nM P_M( i ) = ( exp ( 2.303 b ( M l i m i t s ( i )Mmin)) exp ( 2.303 b ( M l i m i t s ( i +1)Mmin ) ) ) . . . /(1 exp ( 2.303 b (MmaxMmin ) ) ) ; end P_M=P_M ; f u n c t i o n [ R , P_R , R l i m i t s ] = Line_EQ_R_1 ( nr , nR , X1 , Y1 , X2 , Y2 )

%Coordinates of the site: (0,0). Coordinates of the two ends of the line
%source are (X1,Y1) and (X2,Y2).
%nr is the discretisation of the line source, nR is the discretisation of
%the bins for the EQ_Distance node
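The distance distribution is obtained by splitting the line source into nr equal segments, computing the distance from the site at the origin to each segment midpoint,
\[
r_i = \sqrt{\bar{x}_i^{\,2} + \bar{y}_i^{\,2}}, \qquad \bar{x}_i = \tfrac{x_i + x_{i+1}}{2},\quad \bar{y}_i = \tfrac{y_i + y_{i+1}}{2},
\]
and estimating $P(R \in \mathrm{bin}_j)$ as the fraction of the nr midpoint distances falling into bin $j$ (the hist call below).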

10

for i = 1 : ( nr +1) x ( i ) =X1+ (X2X1 ) ( i 1 ) / n r ; y ( i ) =Y1+ (Y2Y1 ) ( i 1 ) / n r ; end for i =1: nr r ( i )= s q r t ( ( ( x ( i )+ x ( i + 1 ) ) / 2 ) ^ 2 + ( ( y ( i )+ y ( i + 1 ) ) / 2 ) ^ 2 ) ; end f o r i = 1 : ( nR + 1 ) R l i m i t s ( i ) = min ( r ) + ( max ( r )min ( r ) ) / nR ( i 1 ) ; end f o r i = 1 : nR R_EQ ( i ) = ( R l i m i t s ( i ) + R l i m i t s ( i + 1 ) ) / 2 ;

15

20

end H_R= h i s t ( r , R_EQ ) ; P_R=H_R / n r ; P_R=P_R ; R=R_EQ ; f u n c t i o n [ SD] = SD_1 ( nEps_SD ,M, R)

25

%for all combinations of the states in the nodes Magnitude and Distance
%the spectral displacements are calculated with the Boore, Joyner and Fumal
%attenuation model
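Each spectral displacement sample is obtained from the median spectral acceleration and logarithmic standard deviation returned by the attenuation model, shifted by the discretised residual $\varepsilon$ and converted from acceleration (in g) to displacement (in m):
\[
SD = g\,\frac{T^2}{4\pi^2}\,\exp\!\big(\ln \mu_{SA} + \sigma_{\ln SA}\,\varepsilon\big), \qquad g = 9.806\ \mathrm{m/s^2},\ T = 0.49\ \mathrm{s}.
\]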
5

T=0.49; [ Eps_SD , dSD , e ] = EPS_SD_1 ( nEps_SD ) ; N1 = 1 ; f o r j = 1 : nEps_SD f o r m= 1 : l e n g t h (M) f o r n =1: l e n g t h (R) [MU_SA SIGMA_SA] = b j f _ a t t e n _ 1 (M(m) , R( n ) , T , 1 , 3 1 0 , 1 ) ; SD ( N1 ) = 9 . 8 0 6 ( TT / 4 / p i / p i ) exp ( l o g (MU_SA) +SIGMA_SAdSD ( j ) ) ; i f SD ( N1) <=0 SD ( N1 ) = 0 ; end N1=N1 + 1 ; end end end f u n c t i o n [ sa , s i g m a ] = b j f _ a t t e n _ 1 (M, R , T , F a u l t _ T y p e , Vs , a r b )

10

15

20

10

15

% by Jack Baker, 2/1/05
% Stanford University
% bakerjw@stanford.edu
%
% Boore Joyner and Fumal attenuation model (1997 Seismological Research
% Letters, Vol 68, No 1, p154). This script includes standard deviations
% for either arbitrary or average components of ground motion (See Baker
% and Cornell, 2005, "What spectral acceleration are you using,"
% Earthquake Spectra, submitted).

20

25

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % INPUT % % M = Moment Magnitude % R = boore joyner distance % T = period: 0.1 to 2s (0.001s is a placeholder for the PGA) % Fault_Type = 1 for strike-slip fault % = 2 for reverse-slip fault % = 0 for non-specified mechanism % Vs = shear wave velocity averaged over top 30 m


30

35

% (use 310 for soil, 620 for rock) % arb = 1 for arbitrary component sigma % = 0 for average component sigma % % OUTPUT % % sa = median spectral acceleration prediction % sigma = logarithmic standard deviation of spectral acceleration % prediction FOR AN ARBITRARY OR AVERAGE COMPONENT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
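The median prediction evaluated further below follows the Boore, Joyner and Fumal (1997) functional form,
\[
\ln Y = b_1 + b_2(M-6) + b_3(M-6)^2 + b_5 \ln r + b_V \ln\!\frac{V_s}{V_A}, \qquad r=\sqrt{R_{jb}^2 + h^2},
\]
with $b_1$ selected according to the fault mechanism and the logarithmic standard deviation taken either as the arbitrary-component value or composed from the period-dependent components, as coded at the end of the function.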

% coefficients
40

45

50

55

60

65

70

75

period = [0.001 0.1 0.11 0.12 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.2 0.22 . . . 0.24 0.26 0.28 0.3 0.32 0.34 0.36 0.38 0.4 0.42 0.44 0.46 0.48 0.5 . . . 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 1.1 1.2 1.3 1.4 1.5 1.6 . . . 1.7 1.8 1.9 2 ] ; B1ss = [ 0.313 1 . 0 0 6 1 . 0 7 2 1 . 1 0 9 1 . 1 2 8 1 . 1 3 5 1 . 1 2 8 1 . 1 1 2 1 . 0 9 1 . 0 6 3 1 . 0 3 2 . . . 0.999 0.925 0.847 0.764 0.681 0.598 0.518 0.439 0.361 0.286 0.212 . . . 0 . 1 4 0 . 0 7 3 0 . 0 0 5 0.058 0.122 0.268 0.401 0.523 0.634 0.737 . . . 0.829 0.915 0.993 1.066 1.133 1.249 1.345 1.428 1.495 . . . 1.552 1.598 1.634 1.663 1.685 1.699 ] ; B1rv = [ 0.117 1 . 0 8 7 1 . 1 6 4 1 . 2 1 5 1 . 2 4 6 1 . 2 6 1 1 . 2 6 4 1 . 2 5 7 1 . 2 4 2 1 . 2 2 2 1 . 1 9 8 . . . 1.17 1.104 1.033 0.958 0.881 0.803 0.725 0.648 0.57 0.495 0.423 . . . 0 . 3 5 2 0 . 2 8 2 0 . 2 1 7 0 . 1 5 1 0 . 0 8 7 0.063 0.203 0.331 0.452 0.562 . . . 0.666 0.761 0.848 0.932 1.009 1.145 1.265 1.37 1.46 1.538 . . . 1.608 1.668 1.718 1.763 1.801 ] ; B 1 a l l = [ 0.242 1 . 0 5 9 1 . 1 3 1 . 1 7 4 1 . 2 1 . 2 0 8 1 . 2 0 4 1 . 1 9 2 1 . 1 7 3 1 . 1 5 1 1 . 1 2 2 . . . 1.089 1.019 0.941 0.861 0.78 0.7 0.619 0.54 0.462 0.385 0.311 0.239 . . . 0 . 1 6 9 0 . 1 0 2 0 . 0 3 6 0.025 0.176 0.314 0.44 0.555 0.661 0.76 . . . 0.851 0.933 1.01 1.08 1.208 1.315 1.407 1.483 1.55 1.605 . . . 1.652 1.689 1.72 1.743 ] ; B2 = [ 0 . 5 2 7 0 . 7 5 3 0 . 7 3 2 0 . 7 2 1 0 . 7 1 1 0 . 7 0 7 0 . 7 0 2 0 . 7 0 2 0 . 7 0 2 0 . 7 0 5 0 . 7 0 9 . . . 0.711 0.721 0.732 0.744 0.758 0.769 0.783 0.794 0.806 0.82 0.831 . . . 0.84 0.852 0.863 0.873 0.884 0.907 0.928 0.946 0.962 0.979 0.992 . . . 1.006 1.018 1.027 1.036 1.052 1.064 1.073 1.08 1.085 1.087 1.089 . . . 1.087 1.087 1.085 ] ; B3 = [ 0 0.226 0.23 0.233 0.233 0.23 0.228 0.226 0.221 0.216 0.212 . . . 0.207 0.198 0.189 0.18 0.168 0.161 0.152 0.143 0.136 0.127 . . . 0.12 0.113 0.108 0.101 0.097 0.09 0.078 0.069 0.06 0.053 . . . 0.046 0.041 0.037 0.035 0.032 0.032 0.03 0.032 0.035 0.039 . . . 0.044 0.051 0.058 0.067 0.074 0.085 ] ; B5 = [ 0.778 0.934 0.937 0.939 0.939 0.938 0.937 0.935 0.933 0.93 . . . 0.927 0.924 0.918 0.912 0.906 0.899 0.893 0.888 0.882 . . . 0.877 0.872 0.867 0.862 0.858 0.854 0.85 0.846 0.837 0.83 . . . 0.823 0.818 0.813 0.809 0.805 0.802 0.8 0.798 0.795 0.794 . . . 0.793 0.794 0.796 0.798 0.801 0.804 0.808 0.812 ] ; Bv = [ 0.371 0.212 0.211 0.215 0.221 0.228 0.238 0.248 0.258 0.27 . . . 0.281 0.292 0.315 0.338 0.36 0.381 0.401 0.42 0.438 0.456 . . . 0.472 0.487 0.502 0.516 0.529 0.541 0.553 0.579 0.602 . . . 0.622 0.639 0.653 0.666 0.676 0.685 0.692 0.698 0.706 0.71 . . . 0.711 0.709 0.704 0.697 0.689 0.679 0.667 0.655 ] ; Va = [ 1396 1112 1291 1452 1596 1718 1820 1910 1977 2037 2080 2118 2158 . . .

80

85

90

95

100

105

110

2178 2173 2158 2133 2104 2070 2032 1995 1954 1919 1884 1849 1816 . . . 1782 1710 1644 1592 1545 1507 1476 1452 1432 1416 1406 1396 1400 . . . 1416 1442 1479 1524 1581 1644 1714 1795 ] ; h = [ 5.57 6.27 6.65 6.91 7.08 7.18 7.23 7.24 7.21 7.16 7.1 7.02 6.83 6.62 . . . 6.39 6.17 5.94 5.72 5.5 5.3 5.1 4.91 4.74 4.57 4.41 4.26 4.13 3.82 . . . 3.57 3.36 3.2 3.07 2.98 2.92 2.89 2.88 2.9 2.99 3.14 3.36 3.62 3.92 . . . 4.26 4.62 5.01 5.42 5.85 ] ; sigma1 = [ 0.431 0.44 0.437 0.437 0.435 0.435 0.435 0.435 0.435 0.435 0.435 . . . 0.435 0.437 0.437 0.437 0.44 0.44 0.442 0.444 0.444 0.447 0.447 0.449 . . . 0.449 0.451 0.451 0.454 0.456 0.458 0.461 0.463 0.465 0.467 0.467 . . . 0.47 0.472 0.474 0.477 0.479 0.481 0.484 0.486 0.488 0.49 0.493 . . . 0.493 0.495 ] ; sigmac = [ 0 . 1 6 0 0.134 0.141 0.148 0.153 0.158 0.163 0.166 0.169 0.173 0.176 . . . 0.177 0.182 0.185 0.189 0.192 0.195 0.197 0.199 0.200 0.202 0.204 . . . 0.205 0.206 0.209 0.210 0.211 0.214 0.216 0.218 0.220 0.221 0.223 . . . 0.226 0.228 0.230 0.230 0.233 0.236 0.239 0.241 0.244 0.246 0.249 . . . 0.251 0.254 0 . 2 5 6 ] ; sigmar = [ 0.460 0.460 0.459 0.461 0.461 0.463 0.465 0.466 0.467 0.468 . . . 0.469 0.470 0.473 0.475 0.476 0.480 0.481 0.484 0.487 0.487 0.491 . . . 0.491 0.494 0.494 0.497 0.497 0.501 0.504 0.506 0.510 0.513 0.515 . . . 0.518 0.519 0.522 0.525 0.527 0.531 0.534 0.537 0.541 0.544 0.546 . . . 0.550 0.553 0.555 0 . 5 5 7 ] ; sigmae = [ 0.184 0 0 0 0 0 0 0 0 0.002 0.005 0.009 0.016 0.025 0.032 0.039 . . . 0.048 0.055 0.064 0.071 0.078 0.085 0.092 0.099 0.104 0.111 0.115 . . . 0.129 0.143 0.154 0.166 0.175 0.184 0.191 0.2 0.207 0.214 0.226 . . . 0.235 0.244 0.251 0.256 0.262 0.267 0.269 0.274 0.276 ] ; sigmalny = [ 0.495 0.460 0.459 0.461 0.461 0.463 0.465 0.466 0.467 0.468 . . . 0.469 0.470 0.474 0.475 0.477 0.482 0.484 0.487 0.491 0.492 0.497 . . . 0.499 0.502 0.504 0.508 0.510 0.514 0.520 0.526 0.533 0.539 0.544 . . . 0.549 0.553 0.559 0.564 0.569 0.577 0.583 0.590 0.596 0.601 0.606 . . . 0.611 0.615 0.619 0 . 6 2 2 ] ;

% interpolate between periods if necessary


i f ( l e n g t h ( f i n d ( p e r i o d == T ) ) == 0 ) i n d e x _ l o w = sum ( p e r i o d <T ) ; T_low = p e r i o d ( i n d e x _ l o w ) ; T_hi = p e r i o d ( index_low + 1 ) ; [ sa_low , s i g m a _ l o w ] = b j f _ a t t e n (M, R , T_low , F a u l t _ T y p e , Vs , a r b ) ; [ s a _ h i , s i g m a _ h i ] = b j f _ a t t e n (M, R , T_hi , F a u l t _ T y p e , Vs , a r b ) ;
120

115

125

x = [ l o g ( T_low ) l o g ( T _ h i ) ] ; Y_sa = [ l o g ( s a _ l o w ) l o g ( s a _ h i ) ] ; Y_sigma = [ s i g m a _ l o w s i g m a _ h i ] ; s a = exp ( i n t e r p 1 ( x , Y_sa , l o g ( T ) ) ) ; s i g m a = i n t e r p 1 ( x , Y_sigma , l o g ( T ) ) ; else i = f i n d ( p e r i o d == T ) ;

130

% compute median and sigma


r = s q r t ( R^2 + h ( i ) ^ 2 ) ;


135

i f ( F a u l t _ T y p e == 1 ) b1 = B1ss ( i ) ; e l s e i f ( F a u l t _ T y p e == 2 ) b1 = B1rv ( i ) ; else b1 = B 1 a l l ( i ) ; end l n y = b1+ B2 ( i ) (M6)+ B3 ( i ) (M6)^2+ B5 ( i ) l o g ( r ) + Bv ( i ) l o g ( Vs / Va ( i ) ) ; s a = exp ( l n y ) ; i f ( a r b ) % arbitrary component sigma sigma = sigmalny ( i ) ; else % average component sigma sigma = s q r t ( sigma1 ( i )^2 + sigmae ( i ) ^ 2 ) ; end end

140

145

Matlab files for evaluating the BPNs in Subsection 4.1.5 - PSHA using BPN for Adapazari, Turkey
function [P_PGA, P_SD] = BPN_PSHA_Adapazari

% by Yahya Y. Bayraktarli, 30/08/2009
% ETH Zürich
% bayraktarli@hotmail.com
%
% Bayraktarli, Y.Y., Baker, J.W., Faber, M.H., 2009. Uncertainty treatment
% in earthquake modeling using Bayesian networks, Georisk, accepted for
% publication.
%
% This script reads a Bayesian probabilistic network for probabilistic
% seismic hazard analysis, calculates the probability distribution of the
% nodes in the BPN and compiles the BPN with the inference engine of HUGIN.
% Output is specified in form of marginal probability distributions of the
% nodes PGA and SD (see Figures 4.14 and 4.15). For each seismic source Z
% and each year Q a set of BPNs is calculated:
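A minimal usage sketch for the completed function (hypothetical; it assumes the HUGIN ActiveX server is registered and the BPN_PSHA_Adapazari.net file referenced further below is available) is

[P_PGA, P_SD] = BPN_PSHA_Adapazari;
bar(P_SD.SourceS1(50).P_SD);   % marginal distribution of SD, source 1, year 50

The nested loop over seismic sources Z and years Q then starts as follows.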

20

for Z=1:1 f o r Q= 1 : 5 0 %This loop and the relevant parts of the code are skipped

25

% % % % %

%for the "poisson" case For controling HUGIN from MATLAB the ActiveX then the available functions in this library objects created from this library. The HUGIN loaded with the following command and create named bpn:

server is loaded and are used to alter ActiveX Server is a HUGIN API object

bpn= a c t x s e r v e r ( HAPI . G l o b a l s ) ;

% An object domain is created which holds the network:


30

domain = i n v o k e ( bpn , LoadDomainFromNet , . . .

C : \ D i s s e r t a t i o n \ C h a p t e r 4 1\BPN \ BPN_PSHA_Adapazari . n e t , 0 , 0 ) ;

% The nodes to be manipulated are defined:


35

ndEQ_M= i n v o k e ( domain , GetNodeByName , EQ_Magnitude ) ; ndEQ_R= i n v o k e ( domain , GetNodeByName , E Q _ D i s t a n c e ) ; ndEps_PGA= i n v o k e ( domain , GetNodeByName , Epsilon_PGA ) ; ndEps_SD= i n v o k e ( domain , GetNodeByName , E p s i l o n _ S D ) ; ndPGA= i n v o k e ( domain , GetNodeByName , PGA ) ; ndSD= i n v o k e ( domain , GetNodeByName , SD ) ;

40

%The number of discrete states of the nodes Magnitude, Distance, %Eps_PGA, Eps_SD, PGA and SD is set:
nM= 6 ; nR = 5 ; nEps_PGA = 1 0 ; nEps_SD = 1 0 ; nPGA= 7 ; nSD = 7 ; s e t ( ndEQ_M , N u m b e r O f S t a t e s ,nM ) ; s e t ( ndEQ_R , N u m b e r O f S t a t e s , nR ) ; s e t ( ndEps_SD , N u m b e r O f S t a t e s , nEps_SD ) ; s e t ( ndEps_PGA , N u m b e r O f S t a t e s , nEps_PGA ) ; s e t ( ndSD , N u m b e r O f S t a t e s , nSD ) ; s e t ( ndPGA , N u m b e r O f S t a t e s ,nPGA ) ;

45

50

55

%The probability distribution of node Eps_PGA is calculated:


[ Eps_PGA , dPGA] = EPS_PGA_5 ( nEps_PGA ) ;

%The probability distribution of node Eps_SD is calculated:


60

[ Eps_SD , dSD ] = EPS_SD_5 ( nEps_SD , nEps_PGA , 0 . 6 4 ) ;

%The probability distribution of node Distance is calculated:


[ R , P_R , R l i m i t s ] = Line_EQ_R_5 ( 5 0 0 , nR , 6 7 , 0 , 4 8 , 9 ) ;
65

%The probability distribution of node Magnitude is calculated:


[ Nu_Mmin , M_EQ, P_M , M l i m i t s ] = EQ_M_NonPoisson_5 ( 5 , 0 . 9 8 , 1 . 1 2 , nM, Q, Z ) ;

%The probability distribution of node Magnitude is calculated %for the "poisson" case %[Nu_Mmin,M_EQ,P_M,Mlimits]=EQ_M_5(5,1.15,1.12,nM);
70

M=M_EQ;

%The discrete probabilities are set for node Magnitude in the %BPN:
75

f o r i = 1 : (nM) s e t ( ndEQ_M . T a b l e , D a t a , ( i 1) ,P_M( i ) ) ; s e t ( ndEQ_M , S t a t e L a b e l , ( i 1) , [ M= num2str (M( i ) ) ] ) ; end

%The discrete probabilities are set for node Distance in the BPN:
80

f o r i = 1 : ( nR ) s e t ( ndEQ_R . T a b l e , D a t a , ( i 1) , P_R ( i ) ) ; s e t ( ndEQ_R , S t a t e L a b e l , ( i 1) , [ R= num2str (R( i ) ) ] ) ; end


85

%The discrete probabilities are set for node Eps_PGA in the BPN:
f o r i = 1 : ( nEps_PGA ) s e t ( ndEps_PGA . T a b l e , D a t a , ( i 1) , Eps_PGA ( i ) ) ; s e t ( ndEps_PGA , S t a t e L a b e l , ( i 1) , [ Eps_PGA= num2str ( dPGA ( i ) ) ] ) ; end

90

%The discrete probabilities are set for node Eps_SD in the BPN:
f o r i = 1 : ( nEps_SDnEps_PGA ) s e t ( ndEps_SD . T a b l e , D a t a , ( i 1) , Eps_SD ( i ) ) ; end
95

%The conditional probability table of the node PGA is initialized % with zeros:
f o r i = 1 : (nMnRnEps_PGAnPGA ) s e t ( ndPGA . T a b l e , D a t a , ( i 1 ) , 0 ) ; end

100

%The conditional probability table of the node SD is initialized %with zeros:


105

f o r i = 1 : (nMnRnSDnEps_SD ) s e t ( ndSD . T a b l e , D a t a , ( i 1 ) , 0 ) ; end

110

%for all combinations of the states in the node Magnitude and %Distance the peak ground accelerations and spectral %displacements are calculated with the Boore Joyner and Fumal %attenuation model
[PGABOORE] = PGA_5 ( nEps_PGA ,M, R ) ; [SDBOORE] = SD_5 ( nEps_SD , nEps_PGA ,M, R ) ;

115

%The limits for the discretisation of the nodes SD and PGA are %set:
CoeffPGA = [ 0 0 . 2 0 . 4 0 . 6 0 . 8 1 . 0 1 . 2 max (PGABOORE ) ] ; CoeffSD = [ 0 0 . 0 0 5 0 . 0 2 0 . 0 5 0 . 1 0 . 3 0 . 5 max (SDBOORE ) ] ;

120

%The labels of the states are set for node Eps_SD, PGA and SD %in the BPN:
f o r i = 1 : ( nEps_SD ) s e t ( ndEps_SD , S t a t e L a b e l , ( i 1) , [ Eps_SD= num2str ( dSD ( i ) ) ] ) ; end f o r i = 1 : ( nPGA ) s e t ( ndPGA , S t a t e L a b e l , ( i 1) , . . . [ PGA= num2str ( ( CoeffPGA ( i ) + CoeffPGA ( i + 1 ) ) / 2 ) ] ) ; end f o r i = 1 : ( nSD ) s e t ( ndSD , S t a t e L a b e l , ( i 1) , . . . [ SD= num2str ( ( CoeffSD ( i ) + CoeffSD ( i + 1 ) ) / 2 ) ] ) ; end

125

130

%The discrete probabilities are set for node PGA in the BPN:
135

N= 0 ; f o r i = 1 : l e n g t h (PGABOORE)

f o r j = 1 :nPGA i f PGABOORE( i ) <= CoeffPGA ( j + 1) & PGABOORE( i ) > CoeffPGA ( j ) s e t ( ndPGA . T a b l e , D a t a , ( j +N 1 ) , 1 ) ; N=N+nPGA ; end end end
145

140

%The discrete probabilities are set for node SD in the BPN:


K= 0 ; f o r i = 1 : l e n g t h (SDBOORE) f o r j = 1 : nSD i f SDBOORE( i ) <= CoeffSD ( j + 1) & SDBOORE( i ) > CoeffSD ( j ) s e t ( ndSD . T a b l e , D a t a , ( j +K 1 ) , 1 ) ; K=K+nSD ; end end end

150

155

% The loaded BPN is compiled with the HUGIN inference engine:


i n v o k e ( domain , Compile ) ;

160

% The BPN with the manipulated states and probabilities can be % saved using the commands:
F i l e n a m e = [ C : \ D i s s e r t a t i o n \ C h a p t e r 4 1\BPN \ . . . BPN_PSHA_Adapazari_Output_T num2str ( Z ) _ num2str (Q) . n e t ] ; i n v o k e ( domain , SaveAsNet , F i l e n a m e ) ;

165

%For each seismic Source the marginal probabilities of each state %and each year are calculated
f o r j = 1 :nPGA P_PGA . S o u r c e S 1 (Q ) . P_PGA ( j ) = g e t ( ndPGA , B e l i e f , ( j 1 ) ) ; end f o r j = 1 : nSD P_SD . S o u r c e S 1 (Q ) . P_SD ( j ) = g e t ( ndSD , B e l i e f , ( j 1 ) ) ; end end end f u n c t i o n [ Eps_PGA , dPGA] = EPS_PGA_5 ( nEps_PGA )

170

%Calculates the probabilities of the standard normal distribution for the
%bins given the number of discretisation intervals
5

emin =3; emax = 3 ; f o r i = 1 : ( nEps_PGA 1) e ( i ) = emin + ( i 1)( emaxemin ) / ( nEps_PGA 2 ) ; end for i =1:( length ( e )+1) i f i ==1 Eps_PGA ( i ) = normcdf ( e ( i ) , 0 , 1 ) ; e l s e i f i ==( l e n g t h ( e ) + 1 )

10


15

Eps_PGA ( i ) = normcdf ( e ( 1 ) , 0 , 1 ) ; else Eps_PGA ( i ) = normcdf ( e ( i ) , 0 , 1 ) normcdf ( e ( i 1 ) , 0 , 1 ) ; end end Eps_PGA=Eps_PGA ; for i =1:( length ( e )+1) i f i ==1 dPGA ( i ) = e ( i ) 0 . 5 ; e l s e i f i ==( l e n g t h ( e ) + 1 ) dPGA ( i ) = e ( i 1 ) + 0 . 5 ; else dPGA ( i ) = ( e ( i 1)+ e ( i ) ) / 2 ; end end dPGA=dPGA ; f u n c t i o n [ Eps_SD , dSD ] = EPS_SD_5 ( nEps_SD , nEps_PGA , T )

20

25

30

%Calculates the probabilities of the standard normal distribution for the
%bins given the numbers of discretisation intervals nEps_SD and nEps_PGA
5

%Setting the array for the discretisation of PGA


emin =3; emax = 3 ; f o r i = 1 : ( nEps_PGA 1) e ( i ) = emin + ( i 1)( emaxemin ) / ( nEps_PGA 2 ) ; end for i =1:( length ( e )+1) i f i ==1 Eps_PGA ( i ) = normcdf ( e ( i ) , 0 , 1 ) ; e l s e i f i ==( l e n g t h ( e ) + 1 ) Eps_PGA ( i ) = normcdf ( e ( 1 ) , 0 , 1 ) ; else Eps_PGA ( i ) = normcdf ( e ( i ) , 0 , 1 ) normcdf ( e ( i 1 ) , 0 , 1 ) ; end end Eps_PGA=Eps_PGA ; for i =1:( length ( e )+1) i f i ==1 dPGA ( i ) = e ( i ) 0 . 5 ; e l s e i f i ==( l e n g t h ( e ) + 1 ) dPGA ( i ) = e ( i 1 ) + 0 . 5 ; else dPGA ( i ) = ( e ( i 1)+ e ( i ) ) / 2 ; end end dPGA=dPGA ;

10

15

20

25

30

%Setting the array for the discretisation of SD


35

hmin = 4.5;

hmax = 4 . 5 ; f o r i = 1 : ( nEps_SD 1) h ( i ) = hmin + ( i 1)( hmaxhmin ) / ( nEps_SD 2 ) ; end
40

%Correlation PGA-SD depending on the period


T=[0.64]; [RHO] = SD_PGA_cor ( T ) ;
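Given the period-dependent correlation $\rho$ between the PGA and SD residuals, the SD residual is discretised conditionally on the PGA residual using the standard bivariate normal result
\[
\varepsilon_{SD}\mid \varepsilon_{PGA} \sim N\!\big(\rho\,\varepsilon_{PGA},\ 1-\rho^2\big), \qquad \sigma_{\varepsilon_{SD}\mid\varepsilon_{PGA}} = \sqrt{1-\rho^2},
\]
which is what the MU1 and SIGMA1 assignments below implement.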
45

50

f o r i = 1 : ( nEps_PGA ) MU1( i ) =dPGA ( i ) RHO ( 1 ) ; SIGMA1 ( i )=(1 RHO( 1 ) RHO ( 1 ) ) ^ 0 . 5 ; end MU= [MU1 ] ; MU=MU( : ) ; SIGMA= [SIGMA1 ] ; SIGMA=SIGMA ( : ) ; f o r j = 1 : ( l e n g t h (MU) ) for i =1:( length ( h )+1) i f i ==1 Eps_SD ( i , j ) = normcdf ( h ( i ) ,MU( j ) , SIGMA( j ) ) ; e l s e i f i ==( l e n g t h ( h ) + 1 ) Eps_SD ( i , j )=1 normcdf ( h ( i 1) ,MU( j ) , SIGMA( j ) ) ; else Eps_SD ( i , j ) = normcdf ( h ( i ) ,MU( j ) , SIGMA( j ) ) . . . normcdf ( h ( i 1) ,MU( j ) , SIGMA( j ) ) ; end end end Eps_SD=Eps_SD ( : ) ;

55

60

65

%Setting the array for the discretisation


70

75

for i =1:( length ( h )+1) i f i ==1 dSD ( i ) = h ( i ) 0 . 5 ; e l s e i f i ==( l e n g t h ( h ) + 1 ) dSD ( i ) = h ( i 1 ) + 0 . 5 ; else dSD ( i ) = ( h ( i 1)+h ( i ) ) / 2 ; end end dSD=dSD ; f u n c t i o n [ Nu_Mmin , M_EQ, P_M , M l i m i t s ] =EQ_M_5 ( Mmin , a , b , nM)

%Assuming that earthquakes of magnitude less than Mmin do not contribute to %damage the mean rate of exceedance for Mmin is Nu_Mmin. P_M is the %probability of having a magnitude in the magnitude range characterised by %M_EQ. The formula for Gutenberg and Richter law with upper (Mmax) and %lower (Mmin) bounds is applied. %a and b are the parameters of the Gutenberg richter recurrence relation.
Nu_Mmin = 1 0 ^ ( ab ( Mmin ) ) ;


10

Mlimits =[4.75 5.25 5.75 6.25 6.75 7.25 7 . 7 5 ] ; M_EQ= [ 5 5 . 5 6 6 . 5 7 7 . 5 ] ; M_EQ=M_EQ ;


15

f o r i = 1 : (nM+ 1 ) LambdaM ( i ) = 1 0 ^ ( ab ( M l i m i t s ( i ) ) ) ; end L a m b d a T o t a l =LambdaM(1) LambdaM (nM+ 1 ) ;

20

f o r i = 1 : (nM) P_M( i ) = ( LambdaM ( i )LambdaM ( i + 1 ) ) / L a m b d a T o t a l ; end P_M=P_M ; f u n c t i o n [ Nu_Mmin , M_EQ, P_M , M l i m i t s ] = EQ_M_NonPoisson_5 ( Mmin , a , b , nM, Q, Z )

%Assuming that earthquakes of magnitude less than Mmin do not contribute to %damage the mean rate of exceedance for Mmin is Nu_Mmin. P_M is the %probability of having a magnitude in the magnitude range characterised by %M_EQ. The formula for Gutenberg and Richter law with upper (Mmax) and %lower (Mmin) bounds is applied. %a and b are the parameters of the Gutenberg richter recurrence relation.
Nu_Mmin = 1 0 ^ ( ab ( Mmin ) ) ; Mlimits =[4.75 5.25 5.75 6.25 6.75 7.25 7 . 7 5 ] ; M_EQ= [ 5 5 . 5 6 6 . 5 7 7 . 5 ] ; M_EQ=M_EQ ;

10

15

%Line source with 100 year characteristic EQ return period and COV=0.5. %No EQ for 30 years, T=10,30,50 years mean rate of %exceedance
LambdaNonPoisson = [ . . . 9 . 7 7 0 0 0 E12 9 . 7 7 0 0 0 E12 1 . 0 6 4 3 3 E10 1 . 0 6 4 3 3 E10 7 . 6 8 1 0 7 E10 7 . 6 8 1 0 7 E10 4 . 0 4 1 8 2 E09 4 . 0 4 1 8 2 E09 1 . 6 6 0 5 1 E08 1 . 6 6 0 5 1 E08 5 . 5 9 9 5 3 E08 5 . 5 9 9 5 3 E08 1 . 6 0 9 0 9 E07 1 . 6 0 9 0 9 E07 4 . 0 5 4 6 3 E07 4 . 0 5 4 6 3 E07 9 . 1 6 0 2 1 E07 9 . 1 6 0 2 1 E07 1 . 8 8 8 1 8 E06 1 . 8 8 8 1 8 E06 3 . 6 0 1 0 6 E06 3 . 6 0 1 0 6 E06 6 . 4 2 6 4 6 E06 6 . 4 2 6 4 6 E06 1 . 0 8 3 1 1 E05 1 . 0 8 3 1 1 E05 1 . 7 3 7 1 2 E05 1 . 7 3 7 1 2 E05 2 . 6 6 8 0 6 E05 2 . 6 6 8 0 6 E05 3 . 9 4 5 2 0 E05 3 . 9 4 5 2 0 E05 5 . 6 4 1 5 1 E05 5 . 6 4 1 5 1 E05 7 . 8 3 1 2 0 E05 7 . 8 3 1 2 0 E05 0.000105872 0.000105872 0.000139790 0.000139790 1 . 2 2 2 1 8 E05 1 . 5 4 3 7 2 E05 1 . 9 2 7 0 3 E05 2 . 3 7 9 2 8 E05 2 . 9 0 7 7 3 E05 3 . 5 1 9 7 2 E05 4 . 2 2 2 5 2 E05 5 . 0 2 3 3 1 E05 5 . 9 2 9 0 7 E05 6 . 9 4 6 5 2 E05 8 . 0 8 2 0 9 E05 9 . 3 4 1 8 2 E05 0.000107313 0.000122557 0.000139197 0.000157272 0.000176820 0.000197869 0.000220442 0.000244558

20

25

30

35

0.000180700 0.000229165 0.000285658 0.000350550 0.000424109 0.000506497 0.000597765 0.000697864 0.000806644 0.000923867 0.001049210 0.001182280 0.001322619 0.001469719 0.001623027 0.001781957 0.001945898 0.002114225 0.002286303 0.002461495 0.002639171 0.002818710 0.002999504 0.003180968 0.003362536 0.003543667 0.003723848 0.003902595 0.004079453 0.004253997 0.000180700 0.000229165 0.000285658 0.000350550 0.000424109 0.000506497 0.000597765 0.000697864 0.000806644 0.000923867 0.001049210 0.001182280 0.001322619 0.001469719 0.001623027 0.001781957 0.001945898 0.002114225 0.002286303 0.002461495 0.002639171 0.002818710 0.002999504 0.003180968 0.003362536 0.003543667 0.003723848 0.003902595 0.004079453 0.004253997 0.000270229 0.000297459 0.000326249 0.000356594 0.000388482 0.000421897 0.000456817 0.000493217 0.000531066 0.000570329 0.000610967 0.000652938 0.000696196 0.000740694 0.000786379 0.000833198 0.000881096 0.000930015 0.000979896 0.001030679 0.001082303 0.001134706 0.001187826 0.001241600 0.001295966 0.001350861 0.001406223 0.001461991 0.001518102 0.001574498

40

45

50

55

60

65

];

70

f o r i = 1 : (nM) LambdaM ( i ) = 1 0 ^ ( ab ( M l i m i t s ( i ) ) ) ; end

% Equation 4.9 in the PhD dissertation:
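As implemented here, the expression referred to as Equation 4.9 combines the exponentially distributed lower-magnitude bins with the renewal-model (non-Poissonian) rate of the largest, characteristic bin:
\[
\nu_{tot} = \big[\nu(m_1) - \nu(m_{nM})\big] + \nu_{NP}(Q,Z), \qquad
P(M\in \mathrm{bin}_i) = \frac{\nu(m_i)-\nu(m_{i+1})}{\nu_{tot}}\ (i<nM), \qquad
P(M\in \mathrm{bin}_{nM}) = \frac{\nu_{NP}(Q,Z)}{\nu_{tot}},
\]
where $\nu(m)=10^{\,a-bm}$ and $\nu_{NP}(Q,Z)$ is the tabulated non-Poissonian rate for year Q and source Z.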


L a m b d a T o t a l =LambdaM(1) LambdaM (nM) + LambdaNonPoisson (Q, Z ) ;
75

f o r i = 1 : (nM1) P_M( i ) = ( LambdaM ( i )LambdaM ( i + 1 ) ) / L a m b d a T o t a l ; end


80

P_M(nM) = LambdaNonPoisson (Q, Z ) / L a m b d a T o t a l ; P_M=P_M ; f u n c t i o n [ R , P_R_Line , R l i m i t s ] = Line_EQ_R_5 ( nr , nR , X1 , Y1 , X2 , Y2 )

%Coordinates of the site (0,0), Coordinates of the two ends of the line %source are (X1,Y1) and (X2,Y2). %nr is the discretisation of the line source, nR is the discretisation of %the bins for the EQ_Distance node

for i = 1 : ( nr +1)


10

x ( i ) =X1+ (X2X1 ) ( i 1 ) / n r ; y ( i ) =Y1+ (Y2Y1 ) ( i 1 ) / n r ; end for i =1: nr r ( i )= s q r t ( ( ( x ( i )+ x ( i + 1 ) ) / 2 ) ^ 2 + ( ( y ( i )+ y ( i + 1 ) ) / 2 ) ^ 2 ) ; end R l i m i t s = [ 0 20 40 60 80 1 0 0 ] ; R_EQ= [ 1 0 30 50 70 9 0 ] ; H_R= h i s t ( r , R_EQ ) ; P_R_Line =H_R / n r ; P_R_Line =P_R_Line ; R=R_EQ ; f u n c t i o n [PGABOORE] = PGA_5 ( nEps_PGA ,M, R)

15

20

%for all combinations of the states in the node Magnitude and Distance %the peak ground accelerations are calculated with the Boore Joyner and %Fumal attenuation model
5

T1 = 0 . 0 0 1 ; [ Eps_PGA , dPGA] = EPS_PGA_5 ( nEps_PGA ) ; N1 = 1 ; f o r j = 1 : nEps_PGA f o r m= 1 : l e n g t h (M) f o r n =1: l e n g t h (R) [MU_SA SIGMA_SA] = b j f _ a t t e n _ 5 (M(m) , R( n ) , T1 , 1 , 6 2 0 , 1 ) ; PGABOORE( N1) = exp ( l o g (MU_SA) +SIGMA_SAdPGA ( j ) ) ; i f PGABOORE( N1) <=0 PGABOORE( N1 ) = 0 ; end N1=N1 + 1 ; end end end f u n c t i o n [SDBOORE] = SD_5 ( nEps_SD , nEps_PGA ,M, R)

10

15

20

%for all combinations of the states in the node Magnitude and Distance %the spectral displacements are calculated with the Boore Joyner and Fumal %attenuation model
5

T=0.64; [ Eps_SD , dSD ] = EPS_SD_5 ( nEps_SD , nEps_PGA ) ; N1 = 1 ; f o r j = 1 : nEps_SD f o r m= 1 : l e n g t h (M) f o r n =1: l e n g t h (R) [MU_SA SIGMA_SA] = b j f _ a t t e n _ 5 (M(m) , R( n ) , T , 1 , 3 1 0 , 1 ) ; SDBOORE( N1 ) = 9 . 8 0 6 ( TT / 4 / p i / p i ) exp ( l o g (MU_SA) +SIGMA_SAdSD ( j ) ) ; i f SDBOORE( N1) <=0 SDBOORE( N1 ) = 0 ; end N1=N1 + 1 ;

10

15

end end
20

end f u n c t i o n [ r h o ] = SD_PGA_cor ( T )

% computes correlation between PGA and Sa(T) for a given T


5

% if T is a vector, the function will return a vector rho of the same size
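The fitted correlation implemented below is piecewise in the logarithm of the period,
\[
\rho(T) =
\begin{cases}
0.500 - 0.127\,\ln T, & 0.05 \le T < 0.11\\
0.968 + 0.085\,\ln T, & 0.11 \le T < 0.25\\
0.568 - 0.204\,\ln T, & 0.25 \le T \le 5,
\end{cases}
\]
with NaN returned outside the fitted period range.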
rho = zeros(size(T));
for i = 1:length(T)
    if T(i) < 0.05
        rho(i) = nan;   % outside of fitted range
        fprintf('Invalid period')
    elseif T(i) > 5
        rho(i) = nan;   % outside of fitted range
        fprintf('Invalid period')
    elseif T(i) < 0.11
        rho(i) = 0.500 - 0.127*log(T(i));
    elseif T(i) < 0.25
        rho(i) = 0.968 + 0.085*log(T(i));
    else
        rho(i) = 0.568 - 0.204*log(T(i));
    end
end

function [sa, sigma] = bjf_atten_5(M, R, T, Fault_Type, Vs, arb)

10

15

20

10

15

% % % % % % % % % % % % %

by Jack Baker, 2/1/05 Stanford University bakerjw@stanford.edu Boore Joyner and Fumal attenuation model (1997 Seismological Research Letters, Vol 68, No 1, p154). This script includes standard deviations for either arbitrary or average components of ground motion (See Baker and Cornell, 2005, "What spectral acceleration are you using," Earthquake Spectra, submitted).

20

25

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % INPUT % % M = Moment Magnitude % R = boore joyner distance % T = period: 0.1 to 2s (0.001s is a placeholder for the PGA) % Fault_Type = 1 for strike-slip fault % = 2 for reverse-slip fault % = 0 for non-specified mechanism


30

35

% Vs = shear wave velocity averaged over top 30 m % (use 310 for soil, 620 for rock) % arb = 1 for arbitrary component sigma % = 0 for average component sigma % % OUTPUT % % sa = median spectral acceleration prediction % sigma = logarithmic standard deviation of spectral acceleration % prediction FOR AN ARBITRARY OR AVERAGE COMPONENT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% coefficients
40

45

50

55

60

65

70

75

period = [0.001 0.1 0.11 0.12 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.2 0.22 . . . 0.24 0.26 0.28 0.3 0.32 0.34 0.36 0.38 0.4 0.42 0.44 0.46 0.48 0.5 . . . 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 1.1 1.2 1.3 1.4 1.5 1.6 . . . 1.7 1.8 1.9 2 ] ; B1ss = [ 0.313 1 . 0 0 6 1 . 0 7 2 1 . 1 0 9 1 . 1 2 8 1 . 1 3 5 1 . 1 2 8 1 . 1 1 2 1 . 0 9 1 . 0 6 3 1 . 0 3 2 . . . 0.999 0.925 0.847 0.764 0.681 0.598 0.518 0.439 0.361 0.286 0.212 . . . 0 . 1 4 0 . 0 7 3 0 . 0 0 5 0.058 0.122 0.268 0.401 0.523 0.634 0.737 . . . 0.829 0.915 0.993 1.066 1.133 1.249 1.345 1.428 1.495 . . . 1.552 1.598 1.634 1.663 1.685 1.699 ] ; B1rv = [ 0.117 1 . 0 8 7 1 . 1 6 4 1 . 2 1 5 1 . 2 4 6 1 . 2 6 1 1 . 2 6 4 1 . 2 5 7 1 . 2 4 2 1 . 2 2 2 1 . 1 9 8 . . . 1.17 1.104 1.033 0.958 0.881 0.803 0.725 0.648 0.57 0.495 0.423 . . . 0 . 3 5 2 0 . 2 8 2 0 . 2 1 7 0 . 1 5 1 0 . 0 8 7 0.063 0.203 0.331 0.452 0.562 . . . 0.666 0.761 0.848 0.932 1.009 1.145 1.265 1.37 1.46 1.538 . . . 1.608 1.668 1.718 1.763 1.801 ] ; B 1 a l l = [ 0.242 1 . 0 5 9 1 . 1 3 1 . 1 7 4 1 . 2 1 . 2 0 8 1 . 2 0 4 1 . 1 9 2 1 . 1 7 3 1 . 1 5 1 1 . 1 2 2 . . . 1.089 1.019 0.941 0.861 0.78 0.7 0.619 0.54 0.462 0.385 0.311 0.239 . . . 0 . 1 6 9 0 . 1 0 2 0 . 0 3 6 0.025 0.176 0.314 0.44 0.555 0.661 0.76 . . . 0.851 0.933 1.01 1.08 1.208 1.315 1.407 1.483 1.55 1.605 . . . 1.652 1.689 1.72 1.743 ] ; B2 = [ 0 . 5 2 7 0 . 7 5 3 0 . 7 3 2 0 . 7 2 1 0 . 7 1 1 0 . 7 0 7 0 . 7 0 2 0 . 7 0 2 0 . 7 0 2 0 . 7 0 5 0 . 7 0 9 . . . 0.711 0.721 0.732 0.744 0.758 0.769 0.783 0.794 0.806 0.82 0.831 . . . 0.84 0.852 0.863 0.873 0.884 0.907 0.928 0.946 0.962 0.979 0.992 . . . 1.006 1.018 1.027 1.036 1.052 1.064 1.073 1.08 1.085 1.087 1.089 . . . 1.087 1.087 1.085 ] ; B3 = [ 0 0.226 0.23 0.233 0.233 0.23 0.228 0.226 0.221 0.216 0.212 . . . 0.207 0.198 0.189 0.18 0.168 0.161 0.152 0.143 0.136 0.127 . . . 0.12 0.113 0.108 0.101 0.097 0.09 0.078 0.069 0.06 0.053 . . . 0.046 0.041 0.037 0.035 0.032 0.032 0.03 0.032 0.035 0.039 . . . 0.044 0.051 0.058 0.067 0.074 0.085 ] ; B5 = [ 0.778 0.934 0.937 0.939 0.939 0.938 0.937 0.935 0.933 0.93 . . . 0.927 0.924 0.918 0.912 0.906 0.899 0.893 0.888 0.882 . . . 0.877 0.872 0.867 0.862 0.858 0.854 0.85 0.846 0.837 0.83 . . . 0.823 0.818 0.813 0.809 0.805 0.802 0.8 0.798 0.795 0.794 . . . 0.793 0.794 0.796 0.798 0.801 0.804 0.808 0.812 ] ; Bv = [ 0.371 0.212 0.211 0.215 0.221 0.228 0.238 0.248 0.258 0.27 . . . 0.281 0.292 0.315 0.338 0.36 0.381 0.401 0.42 0.438 0.456 . . . 0.472 0.487 0.502 0.516 0.529 0.541 0.553 0.579 0.602 . . . 0.622 0.639 0.653 0.666 0.676 0.685 0.692 0.698 0.706 0.71 . . . 0.711 0.709 0.704 0.697 0.689 0.679 0.667 0.655 ] ;

Va = [ 1396 1112 1291 1452 1596 1718 1820 1910 1977 2037 2080 2118 2158 . . . 2178 2173 2158 2133 2104 2070 2032 1995 1954 1919 1884 1849 1816 . . . 1782 1710 1644 1592 1545 1507 1476 1452 1432 1416 1406 1396 1400 . . . 1416 1442 1479 1524 1581 1644 1714 1795 ] ; h = [ 5.57 6.27 6.65 6.91 7.08 7.18 7.23 7.24 7.21 7.16 7.1 7.02 6.83 6.62 . . . 6.39 6.17 5.94 5.72 5.5 5.3 5.1 4.91 4.74 4.57 4.41 4.26 4.13 3.82 . . . 3.57 3.36 3.2 3.07 2.98 2.92 2.89 2.88 2.9 2.99 3.14 3.36 3.62 3.92 . . . 4.26 4.62 5.01 5.42 5.85 ] ; sigma1 = [ 0.431 0.44 0.437 0.437 0.435 0.435 0.435 0.435 0.435 0.435 0.435 . . . 0.435 0.437 0.437 0.437 0.44 0.44 0.442 0.444 0.444 0.447 0.447 0.449 . . . 0.449 0.451 0.451 0.454 0.456 0.458 0.461 0.463 0.465 0.467 0.467 . . . 0.47 0.472 0.474 0.477 0.479 0.481 0.484 0.486 0.488 0.49 0.493 . . . 0.493 0.495 ] ; sigmac = [ 0 . 1 6 0 0.134 0.141 0.148 0.153 0.158 0.163 0.166 0.169 0.173 0.176 . . . 0.177 0.182 0.185 0.189 0.192 0.195 0.197 0.199 0.200 0.202 0.204 . . . 0.205 0.206 0.209 0.210 0.211 0.214 0.216 0.218 0.220 0.221 0.223 . . . 0.226 0.228 0.230 0.230 0.233 0.236 0.239 0.241 0.244 0.246 0.249 . . . 0.251 0.254 0 . 2 5 6 ] ; sigmar = [ 0.460 0.460 0.459 0.461 0.461 0.463 0.465 0.466 0.467 0.468 . . . 0.469 0.470 0.473 0.475 0.476 0.480 0.481 0.484 0.487 0.487 0.491 . . . 0.491 0.494 0.494 0.497 0.497 0.501 0.504 0.506 0.510 0.513 0.515 . . . 0.518 0.519 0.522 0.525 0.527 0.531 0.534 0.537 0.541 0.544 0.546 . . . 0.550 0.553 0.555 0 . 5 5 7 ] ; sigmae = [ 0.184 0 0 0 0 0 0 0 0 0.002 0.005 0.009 0.016 0.025 0.032 0.039 . . . 0.048 0.055 0.064 0.071 0.078 0.085 0.092 0.099 0.104 0.111 0.115 . . . 0.129 0.143 0.154 0.166 0.175 0.184 0.191 0.2 0.207 0.214 0.226 . . . 0.235 0.244 0.251 0.256 0.262 0.267 0.269 0.274 0.276 ] ; sigmalny = [ 0.495 0.460 0.459 0.461 0.461 0.463 0.465 0.466 0.467 0.468 . . . 0.469 0.470 0.474 0.475 0.477 0.482 0.484 0.487 0.491 0.492 0.497 . . . 0.499 0.502 0.504 0.508 0.510 0.514 0.520 0.526 0.533 0.539 0.544 . . . 0.549 0.553 0.559 0.564 0.569 0.577 0.583 0.590 0.596 0.601 0.606 . . . 0.611 0.615 0.619 0 . 6 2 2 ] ;

80

85

90

95

100

105

110

% interpolate between periods if necessary


i f ( l e n g t h ( f i n d ( p e r i o d == T ) ) == 0 ) i n d e x _ l o w = sum ( p e r i o d <T ) ; T_low = p e r i o d ( i n d e x _ l o w ) ; T_hi = p e r i o d ( index_low + 1 ) ; [ sa_low , s i g m a _ l o w ] = b j f _ a t t e n (M, R , T_low , F a u l t _ T y p e , Vs , a r b ) ; [ s a _ h i , s i g m a _ h i ] = b j f _ a t t e n (M, R , T_hi , F a u l t _ T y p e , Vs , a r b ) ;
120

115

125

x = [ l o g ( T_low ) l o g ( T _ h i ) ] ; Y_sa = [ l o g ( s a _ l o w ) l o g ( s a _ h i ) ] ; Y_sigma = [ s i g m a _ l o w s i g m a _ h i ] ; s a = exp ( i n t e r p 1 ( x , Y_sa , l o g ( T ) ) ) ; s i g m a = i n t e r p 1 ( x , Y_sigma , l o g ( T ) ) ; else i = f i n d ( p e r i o d == T ) ;

130

% compute median and sigma


r = s q r t ( R^2 + h ( i ) ^ 2 ) ;


135

i f ( F a u l t _ T y p e == 1 ) b1 = B1ss ( i ) ; e l s e i f ( F a u l t _ T y p e == 2 ) b1 = B1rv ( i ) ; else b1 = B 1 a l l ( i ) ; end l n y = b1+ B2 ( i ) (M6)+ B3 ( i ) (M6)^2+ B5 ( i ) l o g ( r ) + Bv ( i ) l o g ( Vs / Va ( i ) ) ; s a = exp ( l n y ) ; i f ( a r b ) % arbitrary component sigma sigma = sigmalny ( i ) ; else % average component sigma sigma = s q r t ( sigma1 ( i )^2 + sigmae ( i ) ^ 2 ) ; end end

140

145


C.2 Soil analysis tools


GSLIB files for simulating random fields
P a r a m e t e r s f o r SGSIM ( Depth . p a r ) START OF PARAMETERS : Depth . d a t 1 2 0 3 0 0 1.0 1 . 0 e21 1 sgsim . t r n 0 histsmth . out 1 2 0.0 16.0 1 0.0 1 16.0 1 s g s i m D e p t h . dbg sgsimDepth . out 100 100 50 100 150 50 100 1 0.5 1.0 69069 0 8 12 1 1 3 0 10000.0 10000.0 50.0 0.0 0.0 0.0 0 . . / data / ydata . dat 4 1 0.1 3 0.9 0.0 0.0 0.0 5000.0 5000.0 10.0

10

15

20

25

30

35

%file with data %columns for X,Y,Z,vr,wt,sec.var. %trimming limits %transform the data (0=no, 1=yes) %file for output trans table %consider ref. dist (0=no, 1=yes) %file with ref. dist distribution %columns for vr and wt %zmin,zmax(tail extrapolation) %lower tail option, parameter %upper tail option, parameter %debugging level: 0,1,2,3 %file for debugging output %file for simulation output %number of realizations to generate %nx,xmn,xsiz %ny,ymn,ysiz %nz,zmn,zsiz %random number seed %min and max original data for sim %number of simulated nodes to use %assign data to nodes (0=no, 1=yes) %multiple grid search (0=no, 1=yes),num %maximum data per octant (0=not used) %maximum search radii (hmax,hmin,vert) %angles for search ellipsoid %ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC %file with LVM, EXDR, or COLC variable %column for secondary variable %nst, nugget effect %it,cc,ang1,ang2,ang3 %a_hmax, a_hmin, a_vert

P a r a m e t e r s f o r SGSIM ( USCS . p a r ) START OF PARAMETERS : USCS . d a t 1 2 0 3 0 0 1.0 1 . 0 e21 1 sgsim . t r n 0 histsmth . out 1 2

10

%file with data %columns for X,Y,Z,vr,wt,sec.var. %trimming limits %transform the data (0=no, 1=yes) %file for output trans table %consider ref. dist (0=no, 1=yes) %file with ref. dist distribution %columns for vr and wt


15

20

25

30

35

0.0 25.0 1 0.0 1 25.0 1 sgsimUSCS . dbg sgsimUSCS . o u t 100 100 50 100 150 50 100 1 0.5 1.0 69069 0 8 12 1 1 3 0 10000.0 10000.0 50.0 0.0 0.0 0.0 0 . . / data / ydata . dat 4 1 0.1 3 0.9 0.0 0.0 0.0 5000.0 5000.0 10.0

%zmin,zmax(tail extrapolation) %lower tail option, parameter %upper tail option, parameter %debugging level: 0,1,2,3 %file for debugging output %file for simulation output %number of realizations to generate %nx,xmn,xsiz %ny,ymn,ysiz %nz,zmn,zsiz %random number seed %min and max original data for sim %number of simulated nodes to use %assign data to nodes (0=no, 1=yes) %multiple grid search (0=no, 1=yes),num %maximum data per octant (0=not used) %maximum search radii (hmax,hmin,vert) %angles for search ellipsoid %ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC %file with LVM, EXDR, or COLC variable %column for secondary variable %nst, nugget effect %it,cc,ang1,ang2,ang3 %a_hmax, a_hmin, a_vert

P a r a m e t e r s f o r SGSIM (GW. p a r ) START OF PARAMETERS : GW. d a t 1 2 0 3 0 0 1.0 1 . 0 e21 1 sgsim . t r n 0 histsmth . out 1 2 0.0 16.0 1 0.0 1 16.0 1 sgsimGW . dbg sgsimGW . o u t 100 100 50 100 150 50 100 1 0.5 1.0 69069 0 8 12 1 1 3 0

10

15

20

25

%file with data %columns for X,Y,Z,vr,wt,sec.var. %trimming limits %transform the data (0=no, 1=yes) %file for output trans table %consider ref. dist (0=no, 1=yes) %file with ref. dist distribution %columns for vr and wt %zmin,zmax(tail extrapolation) %lower tail option, parameter %upper tail option, parameter %debugging level: 0,1,2,3 %file for debugging output %file for simulation output %number of realizations to generate %nx,xmn,xsiz %ny,ymn,ysiz %nz,zmn,zsiz %random number seed %min and max original data for sim %number of simulated nodes to use %assign data to nodes (0=no, 1=yes) %multiple grid search (0=no, 1=yes),num %maximum data per octant (0=not used)

10000.0 10000.0 50.0 0.0 0.0 0.0 0 . . / data / ydata . dat 4 1 0.1 3 0.9 0.0 0.0 0.0 5000.0 5000.0 10.0

30

35

%maximum search radii (hmax,hmin,vert) %angles for search ellipsoid %ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC %file with LVM, EXDR, or COLC variable %column for secondary variable %nst, nugget effect %it,cc,ang1,ang2,ang3 %a_hmax, a_hmin, a_vert

P a r a m e t e r s f o r SGSIM ( FC . p a r ) START OF PARAMETERS : FC . d a t 1 2 0 3 0 0 1.0 1 . 0 e21 1 sgsim . t r n 0 histsmth . out 1 2 0.0 70.0 1 0.0 1 70.0 1 sgsimFC . dbg sgsimFC . o u t 100 100 50 100 150 50 100 1 0.5 1.0 69069 0 8 12 1 1 3 0 10000.0 10000.0 50.0 0.0 0.0 0.0 0 . . / data / ydata . dat 4 1 0.1 3 0.9 0.0 0.0 0.0 5000.0 5000.0 10.0

10

15

20

25

30

35

%file with data %columns for X,Y,Z,vr,wt,sec.var. %trimming limits %transform the data (0=no, 1=yes) %file for output trans table %consider ref. dist (0=no, 1=yes) %file with ref. dist distribution %columns for vr and wt %zmin,zmax(tail extrapolation) %lower tail option, parameter %upper tail option, parameter %debugging level: 0,1,2,3 %file for debugging output %file for simulation output %number of realizations to generate %nx,xmn,xsiz %ny,ymn,ysiz %nz,zmn,zsiz %random number seed %min and max original data for sim %number of simulated nodes to use %assign data to nodes (0=no, 1=yes) %multiple grid search (0=no, 1=yes),num %maximum data per octant (0=not used) %maximum search radii (hmax,hmin,vert) %angles for search ellipsoid %ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC %file with LVM, EXDR, or COLC variable %column for secondary variable %nst, nugget effect %it,cc,ang1,ang2,ang3 %a_hmax, a_hmin, a_vert

P a r a m e t e r s f o r SGSIM (Nm. p a r ) START OF PARAMETERS : N30 . d a t 1 2 0 3 0 0 1.0 1 . 0 e21

%file with data %columns for X,Y,Z,vr,wt,sec.var. %trimming limits


10

15

20

25

30

35

1 sgsim . t r n 0 histsmth . out 1 2 0.0 50.0 1 0.0 1 50.0 1 sgsimNeu . dbg sgsimNeu . o u t 100 100 50 100 150 50 100 1 0.5 1.0 69069 0 8 12 1 1 3 0 10000.0 10000.0 50.0 0.0 0.0 0.0 0 . . / data / ydata . dat 4 1 0.15 2 0.85 0.0 0.0 0.0 5000.0 5000.0 10.0

%transform the data (0=no, 1=yes) %file for output trans table %consider ref. dist (0=no, 1=yes) %file with ref. dist distribution %columns for vr and wt %zmin,zmax(tail extrapolation) %lower tail option, parameter %upper tail option, parameter %debugging level: 0,1,2,3 %file for debugging output %file for simulation output %number of realizations to generate %nx,xmn,xsiz %ny,ymn,ysiz %nz,zmn,zsiz %random number seed %min and max original data for sim %number of simulated nodes to use %assign data to nodes (0=no, 1=yes) %multiple grid search (0=no, 1=yes),num %maximum data per octant (0=not used) %maximum search radii (hmax,hmin,vert) %angles for search ellipsoid %ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC %file with LVM, EXDR, or COLC variable %column for secondary variable %nst, nugget effect %it,cc,ang1,ang2,ang3 %a_hmax, a_hmin, a_vert

Matlab files for generating random fields for use in Matlab
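The five functions listed below each read one SGSIM output file and store the realisations as a Matlab structure. A hypothetical driver script (assuming the sgsim runs defined by the parameter files of the previous subsection have produced the corresponding .out files in ../GSLIB/) simply calls them in sequence:

Depth_field;   % reads the SGSIM output for Depth, saves Depth.mat
USCS_field;    % reads the SGSIM output for USCS,  saves USCS.mat
GW_field;      % reads the SGSIM output for GW,    saves GW.mat
FC_field;      % reads the SGSIM output for FC,    saves FC.mat
Nm_field;      % reads the SGSIM output for Nm,    saves Nm.mat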


f u n c t i o n [ Depth_data ]= D e p t h _ f i e l d filenameOUT = [ . . / GSLIB / s g s i m D e p t h . o u t ] ; filenamePAR = [ . . / GSLIB / s g s i m D e p t h . p a r ] ;

%determine number of realisations and size of the fields


5

10

f i l e P A R = f o p e n ( filenamePAR , r ) ; for j =1:18 ans = f g e t l ( f i l e P A R ) ; end N= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; nx= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; ny= f s c a n f ( f i l e P A R , %f , 1 ) ; f c l o s e ( filePAR ) ;

15

%construct field for USCS


f i l e O U T = f o p e n ( filenameOUT , r ) ; for i =1:3 ans = f g e t l ( f i l e O U T ) ; end

20

f o r i = 1 :N [A, np ] = f s c a n f ( fileOUT , %f , [ nx , ny ] ) ; D e p t h _ d a t a ( i ) . Depth =A ; end f c l o s e ( fileOUT ) ; s a v e Depth D e p t h _ d a t a f u n c t i o n [ USCS_data ] = U S C S _ f i e l d filenameOUT = [ . . / GSLIB / sgsimUSCS . o u t ] ; filenamePAR = [ . . / GSLIB / sgsimUSCS . p a r ] ;

25

%determine number of realisations and size of the fields


5

10

f i l e P A R = f o p e n ( filenamePAR , r ) ; for j =1:18 ans = f g e t l ( f i l e P A R ) ; end N= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; nx= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; ny= f s c a n f ( f i l e P A R , %f , 1 ) ; f c l o s e ( filePAR ) ;

15

%construct field for USCS


f i l e O U T = f o p e n ( filenameOUT , r ) ; for i =1:3 ans = f g e t l ( f i l e O U T ) ; end f o r i = 1 :N [A, np ] = f s c a n f ( fileOUT , %f , [ nx , ny ] ) ; B=round (A ) ; C=B+ (B= = 0 ) ; USCS_data ( i ) . USCS=C ; end f c l o s e ( fileOUT ) ; s a v e USCS USCS_data f u n c t i o n [ GW_data ] = GW_field filenameOUT = [ . . / GSLIB / sgsimGW . o u t ] ; filenamePAR = [ . . / GSLIB / sgsimGW . p a r ] ;

20

25

%determine number of realisations and size of the fields


5

10

f i l e P A R = f o p e n ( filenamePAR , r ) ; for j =1:18 ans = f g e t l ( f i l e P A R ) ; end N= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; nx= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; ny= f s c a n f ( f i l e P A R , %f , 1 ) ; f c l o s e ( filePAR ) ;

15

%construct field for GW


f i l e O U T = f o p e n ( filenameOUT , r ) ;


20

25

for i =1:3 ans = f g e t l ( f i l e O U T ) ; end f o r i = 1 :N [A, np ] = f s c a n f ( fileOUT , %f , [ nx , ny ] ) ; GW_data ( i ) .GW=A ; end f c l o s e ( fileOUT ) ; s a v e GW GW_data f u n c t i o n [ FC_data ]= F C _ f i e l d filenameOUT = [ . . / GSLIB / sgsimFC . o u t ] ; filenamePAR = [ . . / GSLIB / sgsimFC . p a r ] ;

%determine number of realisations and size of the fields


5

10

f i l e P A R = f o p e n ( filenamePAR , r ) ; for j =1:18 ans = f g e t l ( f i l e P A R ) ; end N= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; nx= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; ny= f s c a n f ( f i l e P A R , %f , 1 ) ; f c l o s e ( filePAR ) ;

15

%construct field for FC


f i l e O U T = f o p e n ( filenameOUT , r ) ; for i =1:3 ans = f g e t l ( f i l e O U T ) ; end f o r i = 1 :N [A, np ] = f s c a n f ( fileOUT , %f , [ nx , ny ] ) ; F C _ d a t a ( i ) . FC=A ; end f c l o s e ( fileOUT ) ; s a v e FC F C _ d a t a f u n c t i o n [ Nm_data ] = N m _ f i e l d filenameOUT = [ . . / GSLIB / sgsimNm . o u t ] ; filenamePAR = [ . . / GSLIB / sgsimNm . p a r ] ;

20

25

%determine number of realisations and size of the fields


5

10

f i l e P A R = f o p e n ( filenamePAR , r ) ; for j =1:18 ans = f g e t l ( f i l e P A R ) ; end N= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; nx= f s c a n f ( f i l e P A R , %f , 1 ) ; ans = f g e t l ( f i l e P A R ) ; ny= f s c a n f ( f i l e P A R , %f , 1 ) ; f c l o s e ( filePAR ) ;

15

%construct field for Nm

f i l e O U T = f o p e n ( filenameOUT , r ) ; for i =1:3 ans = f g e t l ( f i l e O U T ) ; end f o r i = 1 :N [A, np ] = f s c a n f ( fileOUT , %f , [ nx , ny ] ) ; Nm_data ( i ) . Nm=A ; end f c l o s e ( fileOUT ) ; s a v e Nm Nm_data

20

25

Matlab files for calculating random fields for use in liquefaction assessment
f u n c t i o n [ N1_60_data ]= N1_60_field l o a d Nm load Sigmaeff
5

10

15

20

CR= 1 ; CS = 1 ; CE= 1 ; CB= 1 ; N= 1 0 0 ; f o r i = 1 :N for j =1:150 for k =1:100 CN_data ( 1 , i ) . CN( j , k ) = ( 1 0 0 / . . . ( Sigmaeff_data (1 , i ) . Sigmaeff ( j , k ) ) ) ^ 0 . 5 ; i f CN_data ( 1 , i ) . CN( j , k ) > 1 . 7 CN_data ( 1 , i ) . CN( j , k ) = 1 . 7 end end end end f o r i = 1 :N for j =1:150 for k =1:100 N 1 _ 6 0 _ d a t a ( 1 , i ) . N1_60 ( j , k ) = Nm_data ( 1 , i ) . Nm( j , k ) . . . CN_data ( 1 , i ) . CN( j , k ) CECRCSCB ; end end end s a v e N1_60 N 1 _ 6 0 _ d a t a f u n c t i o n [ CSR_data ] = C S R _ f i e l d

25

30

% PGA   in units of g - scalar input
% M     Magnitude - scalar input
% d     depth - matrix with the same size as the area of interest.
%       Assumed to be less than 12m
% Vs    matrix with the same size as the area of interest
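The cyclic stress ratio field evaluated in this function follows the simplified expression
\[
CSR = 0.65\,\frac{a_{max}}{g}\,\frac{\sigma_v}{\sigma_v'}\,r_d,
\]
with $a_{max}/g$ the PGA in units of g, $\sigma_v$ and $\sigma_v'$ the total and effective vertical stress fields, and $r_d$ the depth reduction factor computed from the empirical expression (including its random residual) coded below.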


10

15

20

l o a d Depth load Sigmaeff l o a d Sigma l o a d Vs M= [ 0 5 5 . 5 6 6 . 5 7 7 . 5 8 ] ; PGA= [ 0 . 1 0 . 3 0 . 5 0 . 7 0 . 9 1 . 1 1 . 3 ] ; N= 1 0 0 ; n =0; f o r l = 1 : l e n g t h (M) f o r p = 1 : l e n g t h (PGA) n=n + 1 ; PGA_Area=PGA( p ) o n e s ( 1 5 0 , 1 0 0 ) ; M_Area=M( l ) o n e s ( 1 5 0 , 1 0 0 ) ; f o r i = 1 :N

%perfectly correlated variables %simulation of depth reduction factor r_d and CSR
25

30

35

d1=min ( 1 2 , D e p t h _ d a t a ( 1 , i ) . Depth ) ; s i g m a _ e p s _ r d =d1 . ^ 0 . 8 5 . 0 . 0 1 9 8 ; e p s _ r d = normrnd ( 0 , 1 ) s i g m a _ e p s _ r d ; n u m e r a t o r = 1 + ( 23.013 + 2 . 9 4 9 . PGA_Area + 0 . 9 9 9 . M_Area + . . . 0 . 0 5 2 5 . ( V s _ d a t a ( 1 , i ) . Vs ) ) . / . . . ( 1 6 . 2 5 8 + 0 . 2 0 1 exp ( 0 . 3 4 1 ( ( D e p t h _ d a t a ( 1 , i ) . Depth ) + . . . 0 . 0 7 8 5 . ( V s _ d a t a ( 1 , i ) . Vs ) + 7 . 5 8 6 ) ) ) ; d e n o m i n a t o r = 1 + ( 23.013 + 2 . 9 4 9 . PGA_Area + . . . 0 . 9 9 9 . M_Area + 0 . 0 5 2 5 . ( V s _ d a t a ( 1 , i ) . Vs ) ) . / . . . ( 1 6 . 2 5 8 + 0 . 2 0 1 exp ( 0 . 3 4 1 ( 0 . 0 7 8 5 . ( V s _ d a t a ( 1 , i ) . Vs ) + . . . 7.586))); r_d = numerator . / denominator + eps_rd ; CSR_data ( n ) . CSR_data ( 1 , i ) . CSR = 0 . 6 5 . PGA_Area . . . . ( S i g m a _ d a t a ( 1 , i ) . Sigma ) . / ( S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ) . r _ d ; CSR_data ( n ) .M=M( l ) ; CSR_data ( n ) . PGA=PGA( p ) ; end end end s a v e CSR CSR_data f u n c t i o n [ Vs_data ]= V s _ f i e l d l o a d Nm

40

%construct field for Vs


5

10

N= 1 0 0 ; f o r i = 1 :N for j =1:150 for k =1:100 V s _ d a t a ( 1 , i ) . Vs ( j , k ) = 9 0 ( Nm_data ( 1 , i ) . Nm( j , k ) ) ^ 0 . 3 0 9 ; end end end s a v e Vs V s _ d a t a


f u n c t i o n [ Sigma_data ]= S i g m a _ f i e l d l o a d Depth l o a d GW l o a d USCS


5

10

15

% % % % % % % % % % % % %

CH CH-MH CL CL-CH CL-MH CL-ML MH MH-CH ML ML-CL SC SM SP-SM

1 2 3 4 5 6 7 8 9 10 11 12 13

20

Gamma = [ 1 8 . 6 4 1 7 . 2 7 2 0 . 9 1 9 . 7 7 1 8 . 3 9 2 0 . 7 1 5 . 8 9 . . . 17.27 19.52 20.7 21.68 20.31 1 9 . 9 1 ] ; N= 1 0 0 ; f o r i = 1 :N for j =1:150 for k =1:100 i f USCS_data ( 1 , i ) . USCS ( j , k )==1 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 1 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==2 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 2 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==3 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 3 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==4 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 4 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==5 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 5 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==6 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 6 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==7 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 7 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==8 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 8 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==9 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 9 ) ( D e p t h _ d a t a ( 1 , i e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==10 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 1 0 ) ( D e p t h _ d a t a ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==11 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 1 1 ) ( D e p t h _ d a t a ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==12 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 1 2 ) ( D e p t h _ d a t a ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k ) >=13 S i g m a _ d a t a ( 1 , i ) . Sigma ( j , k ) =Gamma ( 1 3 ) ( D e p t h _ d a t a ( 1 , end end end

25

) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; ) . Depth ( j , k ) ) ; i ) . Depth ( j , k ) ) ; i ) . Depth ( j , k ) ) ; i ) . Depth ( j , k ) ) ; i ) . Depth ( j , k ) ) ;

30

35

40

45

50


end
55

s a v e Sigma S i g m a _ d a t a f u n c t i o n [ S i g m a e f f _ d a t a ]= S i g m a e f f _ f i e l d l o a d Depth l o a d GW l o a d USCS


5

10

15

% % % % % % % % % % % % %

CH CH-MH CL CL-CH CL-MH CL-ML MH MH-CH ML ML-CL SC SM SP-SM

1 2 3 4 5 6 7 8 9 10 11 12 13

20

Gamma = [ 1 8 . 6 4 1 7 . 2 7 2 0 . 9 1 9 . 7 7 1 8 . 3 9 2 0 . 7 . . . 15.89 17.27 19.52 20.7 21.68 20.31 1 9 . 9 1 ] ; N= 1 0 0 ; f o r i = 1 :N for j =1:150 for k =1:100 i f USCS_data ( 1 , i ) . USCS ( j , k )==1 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 1 ) ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==2 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 2 ) ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==3 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 3 ) ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==4 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 4 ) ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==5 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 5 ) ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==6 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 6 ) ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 ,

25

... i ) .GW( j , k ) ) 9 . 8 0 6 ; ... i ) .GW( j , k ) ) 9 . 8 0 6 ; ... i ) .GW( j , k ) ) 9 . 8 0 6 ; ... i ) .GW( j , k ) ) 9 . 8 0 6 ; ... i ) .GW( j , k ) ) 9 . 8 0 6 ; ... i ) .GW( j , k ) ) 9 . 8 0 6 ;

30

35

40

45

50

55

60

65

70

75

e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==7 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 7 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==8 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 8 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==9 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 9 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==10 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 1 0 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==11 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 1 1 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j e l s e i f USCS_data ( 1 , i ) . USCS ( j , k )==12 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 1 2 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j e l s e i f USCS_data ( 1 , i ) . USCS ( j , k ) >=13 S i g m a e f f _ d a t a ( 1 , i ) . S i g m a e f f ( j , k ) =Gamma ( 1 3 ) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k )) . . . ( D e p t h _ d a t a ( 1 , i ) . Depth ( j , k)GW_data ( 1 , i ) .GW( j end end end end save Sigmaeff Sigmaeff_data f u n c t i o n [ S o i l _ d a t a , L i q _ d a t a ]= l i q _ f i e l d l o a d Depth load Sigmaeff l o a d FC l o a d N1_60 l o a d CSR M= [ 0 5 5 . 5 6 6 . 5 7 7 . 5 8 ] ; PGA= [ 0 . 1 0 . 3 0 . 5 0 . 7 0 . 9 1 . 1 1 . 3 ] ; N= 1 0 0 ; n =0; f o r l = 1 : l e n g t h (M) f o r p = 1 : l e n g t h (PGA) n=n + 1 ; PGA_Area=PGA( p ) o n e s ( 1 5 0 , 1 0 0 ) ; M_Area=M( l ) o n e s ( 1 5 0 , 1 0 0 ) ; i f M( l )==0 Liq_data ( n ) . Prob_liq =zeros (150 ,100);

, k ))9.806;

, k ))9.806;

, k ))9.806;

, k ))9.806;

, k ))9.806;

, k))9.806;

, k))9.806;

80

10

15


20

L i q _ d a t a ( n ) .M=M( l ) ; L i q _ d a t a ( n ) . PGA=PGA( p ) ; else f o r i = 1 :N

%perfectly correlated variables


25

30

35

eps_L = normrnd ( 0 , 2 . 7 ) o n e s ( 1 5 0 , 1 0 0 ) ; g_x = ( N 1 _ 6 0 _ d a t a ( 1 , i ) . N1_60 ) . ( 1 + 0 . 0 0 4 . . . . ( F C _ d a t a ( 1 , i ) . FC ) ) 1 3 . 3 2 . . . . l o g ( CSR_data ( 1 , n ) . CSR_data ( 1 , i ) . CSR ) . . . 2 9 . 5 3 . l o g ( M_Area ) 3 . 7 . . . . log ( ( Sigmaeff_data (1 , i ) . Sigmaeff )/100)+ . . . 0 . 0 5 . ( F C _ d a t a ( 1 , i ) . FC ) + 1 6 . 8 5 + eps_L ; g _ x _ I =g_x ; g _ x _ I ( f i n d ( g_x < 0 ) ) = 1 ; g _ x _ I ( f i n d ( g_x > = 0 ) ) = 0 ; S o i l _ d a t a ( i ) . g_x=g_x ; S o i l _ d a t a ( i ) . g_x_I=g_x_I ; end g_x_I_cum = z e r o s ( 1 5 0 , 1 0 0 ) ; f o r i = 1 :N g_x_I_cum = g_x_I_cum + S o i l _ d a t a ( i ) . g _ x _ I ; S o i l _ d a t a ( i ) . g_x_I_cum = g_x_I_cum ; end L i q _ d a t a ( n ) . P r o b _ l i q = S o i l _ d a t a (N ) . g_x_I_cum . / N; L i q _ d a t a ( n ) .M=M( l ) ; L i q _ d a t a ( n ) . PGA=PGA( p ) ; end end end save Liq L i q _ d a t a

40

45


C.3 Structural damage analysis tools


OpenSees files for calculating maximum interstory drift ratios of the structure classes
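For reference, the response quantity extracted from these analyses is the maximum interstory drift ratio over all stories and time steps, here stated in its standard definition,
\[
\theta_{max} = \max_{i,\,t} \frac{\left|u_i(t) - u_{i-1}(t)\right|}{h_i},
\]
where $u_i$ is the lateral displacement of floor $i$ and $h_i$ the story height (LCol = 3 m in the listing below).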
# Typ5.NR.O.Residential.tcl
# based on the examples in
# http://opensees.berkeley.edu/OpenSees/manuals/ExamplesManual/HTML/
# 5-story RC-MRF, not retrofitted, designed before 1980, residential use
# nonlinearBeamColumn element, inelastic fiber section
#
# define UNITS
# puts "Uniaxial Inelastic Material, Fiber RC-Section, Nonlinear Model"
# puts "Uniform Earthquake Excitation"
set sec 1.;                 # define basic units -- output units
set g 9.806;                # gravitational acceleration
set Ubig 1.e10;             # a really large number
set Usmall [expr 1/$Ubig];  # a really small number
# basic units are: m, sec and kN

# SET UP
wipe;                               # clear memory of all past model definitions
model BasicBuilder -ndm 2 -ndf 3;   # define the model builder, ndm=#dimensions, ndf=#dofs
set dataDir OutputICASP;            # set up name of data directory
file mkdir $dataDir;                # create data directory
set GMdir "../GMfiles/";            # ground-motion file directory
source BuildRCrectSection.tcl;      # procedure for defining RC fiber section

# define GEOMETRY
# define structure-geometry parameters
set LCol 3.;     # column height
set LBeam 5.;    # beam length
set NStory 5;    # number of stories above ground
set NBay 4;      # number of bays

# define NODAL COORDINATES
for {set level 1} {$level <= [expr $NStory+1]} {incr level 1} {
    set Y [expr ($level-1)*$LCol];
    for {set pier 1} {$pier <= [expr $NBay+1]} {incr pier 1} {
        set X [expr ($pier-1)*$LBeam];
        set nodeID [expr $level*10+$pier]
        node $nodeID $X $Y;     # actually define node
    }
}

# BOUNDARY CONDITIONS
fixY 0.0 1 1 1;                 # pin all Y=0.0 nodes

# Define SECTIONS
# define section tags:
set ColSecTag 1
set BeamSecTag 2

# Section Properties:
set HCol 0.50;      # column width
set BCol 0.50;      # column height
set HBeam 0.50;     # beam depth
set BBeam 0.30;     # beam width

# General Material parameters
set G $Ubig;               # make stiff shear modulus
set J 1.0;                 # torsional section stiffness (G makes GJ large)
set GJ [expr $G*$J];

# confined and unconfined concrete
# nominal concrete compressive strength
set fc -20000.;            # concrete compressive strength
set Ec 14000000.;          # concrete elastic modulus
set nu 0.2;
set Gc [expr $Ec/2./[expr 1+$nu]];   # torsional stiffness modulus

# confined concrete
set Kfc 1.3;                     # ratio of confined to unconfined concrete strength
set Kres 0.2;                    # ratio of residual/ultimate to maximum stress
set fc1C [expr $Kfc*$fc];        # CONFINED concrete (Mander model), maximum stress
set eps1C [expr 2.*$fc1C/$Ec];   # strain at maximum stress
set fc2C [expr $Kres*$fc1C];     # ultimate stress
set eps2C [expr 20*$eps1C];      # strain at ultimate stress
set lambda 0.1;                  # ratio between unloading slope at $eps2 and initial slope $Ec
# unconfined concrete
set fc1U $fc;                    # UNCONFINED concrete (Todeschini parabolic model), maximum stress
set eps1U -0.003;                # strain at maximum strength of unconfined concrete
set fc2U [expr $Kres*$fc1U];     # ultimate stress
set eps2U -0.01;                 # strain at ultimate stress
# tensile-strength properties
set ftC [expr -0.14*$fc1C];      # tensile strength +tension
set ftU [expr -0.14*$fc1U];      # tensile strength +tension
set Ets [expr $ftU/0.002];       # tension-softening stiffness

# set up library of materials
if {[info exists imat] != 1} {set imat 0};   # set value only if it has not been defined previously
set IDconcCore 1
set IDconcCover 2
uniaxialMaterial Concrete02 $IDconcCore $fc1C $eps1C $fc2C $eps2C \
    $lambda $ftC $Ets;           # core concrete (confined)
uniaxialMaterial Concrete02 $IDconcCover $fc1U $eps1U $fc2U $eps2U \
    $lambda $ftU $Ets;           # cover concrete (unconfined)

# REINFORCING STEEL parameters
set Fy 420000.;      # steel yield stress
set Es 210000000.;   # modulus of steel
set Bs 0.01;         # strain-hardening ratio
set R0 18;           # control transition from elastic to plastic
set cR1 0.925;       # control transition from elastic to plastic
set cR2 0.15;        # control transition from elastic to plastic
set IDSteel 3
uniaxialMaterial Steel02 $IDSteel $Fy $Es $Bs $R0 $cR1 $cR2

# FIBER SECTION properties
# Column section geometry:
set cover 0.04;                 # rectangular RC-column cover
set numBarsTopCol 7;            # number of reinforcement bars on top layer
set numBarsBotCol 7;            # number of reinforcement bars on bottom layer
set numBarsIntCol 10;           # number of reinforcing bars on intermediate layers
set barAreaTopCol 0.000201;     # longitudinal-reinforcement bar area
set barAreaBotCol 0.000201;     # longitudinal-reinforcement bar area
set barAreaIntCol 0.000201;     # longitudinal-reinforcement bar area

set numBarsTopBeam 8;           # number of reinforcement bars on top layer
set numBarsBotBeam 3;           # number of reinforcement bars on bottom layer
set numBarsIntBeam 0;           # number of reinforcing bars on intermediate layers
set barAreaTopBeam 0.000201;    # longitudinal-reinforcement bar area
set barAreaBotBeam 0.000201;    # longitudinal-reinforcement bar area
set barAreaIntBeam 0.000201;    # longitudinal-reinforcement bar area

set nfCoreY 20;      # number of fibers in the core patch in the y-dir.
set nfCoreZ 20;      # number of fibers in the core patch in the z-dir.
set nfCoverY 20;     # number of fibers in the cover patches with long sides in the y direction
set nfCoverZ 20;     # number of fibers in the cover patches with long sides in the z direction

# rectangular section with one layer of steel evenly distributed around the
# perimeter and a confined core.
BuildRCrectSection $ColSecTag $HCol $BCol $cover $cover $IDconcCore \
    $IDconcCover $IDSteel $numBarsTopCol $barAreaTopCol $numBarsBotCol \
    $barAreaBotCol $numBarsIntCol $barAreaIntCol $nfCoreY $nfCoreZ $nfCoverY \
    $nfCoverZ
BuildRCrectSection $BeamSecTag $HBeam $BBeam $cover $cover $IDconcCore \
    $IDconcCover $IDSteel $numBarsTopBeam $barAreaTopBeam $numBarsBotBeam \
    $barAreaBotBeam $numBarsIntBeam $barAreaIntBeam $nfCoreY $nfCoreZ \
    $nfCoverY $nfCoverZ

# define ELEMENTS
# set up geometric transformations of element
# separate columns and beams, in case of P-Delta analysis for columns

set IDColTransf 1;            # all columns
set IDBeamTransf 2;           # all beams
set ColTransfType Linear;     # Linear, PDelta, Corotational
geomTransf $ColTransfType $IDColTransf;   # columns can have P-Delta effects
geomTransf Linear $IDBeamTransf

# Define Beam-Column Elements
set np 5;     # number of Gauss integration points for nonlinear curvature
              # distribution -- np=2 for linear distribution ok
# columns
set N0col 100;     # column element numbers
set level 0
for {set level 1} {$level <= $NStory} {incr level 1} {
    for {set pier 1} {$pier <= [expr $NBay+1]} {incr pier 1} {
        set elemID [expr $N0col + $level*10 + $pier]
        set nodeI [expr $level*10 + $pier]
        set nodeJ [expr ($level+1)*10 + $pier]
        element nonlinearBeamColumn $elemID $nodeI $nodeJ $np \
            $ColSecTag $IDColTransf;      # columns
    }
}
# beams
set N0beam 200;    # beam element numbers
set M0 0
for {set level 2} {$level <= [expr $NStory+1]} {incr level 1} {
    for {set bay 1} {$bay <= $NBay} {incr bay 1} {
        set elemID [expr $N0beam + $level*10 + $bay]
        set nodeI [expr $M0 + $level*10 + $bay]
        set nodeJ [expr $M0 + $level*10 + $bay + 1]
        element nonlinearBeamColumn $elemID $nodeI $nodeJ $np \
            $BeamSecTag $IDBeamTransf;    # beams
    }
}

# Define GRAVITY LOADS, weight and masses
# calculate dead load of frame; assume this to be an internal frame
# (do LL in a similar manner)
# calculate distributed weight along the beam length
set GammaConcrete 25.;     # reinforced-concrete floor slabs
set Tslab 0.15;            # 15 cm slab
set Lslab [expr 2*$LBeam/2];   # assume slab extends a distance of $LBeam*1/2 in/out of plane
set Qslab [expr $GammaConcrete*$Tslab*$Lslab];
set QdlCol [expr $GammaConcrete*$HCol*$BCol];    # self weight of column, weight per length
set QBeam [expr $GammaConcrete*$HBeam*$BBeam];   # self weight of beam, weight per length
set QdlBeam [expr $Qslab + $QBeam];              # dead load distributed along beam
set WeightCol [expr $QdlCol*$LCol];              # total column weight
set WeightBeam [expr $QdlBeam*$LBeam];           # total beam weight
# assign masses to the nodes that the columns are connected to
# each connection takes the mass of 1/2 of each element framing into it
set iFloorWeight ""
set WeightTotal 0.0
for {set level 2} {$level <= [expr $NStory+1]} {incr level 1} {;
    set FloorWeight 0.0
    if {$level == [expr $NStory+1]} {
        set ColWeightFact 1;       # one column in top story
    } else {
        set ColWeightFact 2;       # two columns elsewhere
    }
    for {set pier 1} {$pier <= [expr $NBay+1]} {incr pier 1} {;
        if {$pier == 1 || $pier == [expr $NBay+1]} {
            set BeamWeightFact 1;  # one beam at exterior nodes
        } else {;
            set BeamWeightFact 2;  # two beams elsewhere
        }
        set WeightNode [expr $ColWeightFact*$WeightCol/2 + \
            $BeamWeightFact*$WeightBeam/2]
        set MassNode [expr $WeightNode/$g];
        set nodeID [expr $level*10+$pier]
        mass $nodeID $MassNode 0.0 0.0 0.0 0.0 0.0;   # define mass
        set FloorWeight [expr $FloorWeight+$WeightNode];
    }
    lappend iFloorWeight $FloorWeight
    set WeightTotal [expr $WeightTotal+$FloorWeight]
}
set MassTotal [expr $WeightTotal/$g];     # total mass

# Define RECORDERS
# recorder Node -file $dataDir/DFree21.out -node 21 -dof 1 disp;            # displacements of nodes
# recorder Drift -file $dataDir/Dr1.out -iNode 11 -jNode 21 -dof 1 -perpDirn 2;   # lateral drift
# recorder Drift -file $dataDir/Dr2.out -iNode 21 -jNode 31 -dof 1 -perpDirn 2;   # lateral drift
# recorder Drift -file $dataDir/Dr3.out -iNode 31 -jNode 41 -dof 1 -perpDirn 2;   # lateral drift
# recorder Drift -file $dataDir/Dr4.out -iNode 41 -jNode 51 -dof 1 -perpDirn 2;   # lateral drift
# recorder Drift -file $dataDir/Dr5.out -iNode 51 -jNode 61 -dof 1 -perpDirn 2;   # lateral drift

## Define DISPLAY
# DisplayModel2D NodeNumbers

# define GRAVITY LOADS
# define gravity load applied to beams and columns --
# eleLoad applies loads in local coordinate axis
pattern Plain 101 Linear {
    for {set level 1} {$level <= $NStory} {incr level 1} {
        for {set pier 1} {$pier <= [expr $NBay+1]} {incr pier 1} {
            set elemID [expr $N0col + $level*10 + $pier]
            eleLoad -ele $elemID -type -beamUniform 0 -$QdlCol;   # COLUMNS
        }
    }
    for {set level 2} {$level <= [expr $NStory+1]} {incr level 1} {
        for {set bay 1} {$bay <= $NBay} {incr bay 1} {
            set elemID [expr $N0beam + $level*10 + $bay]
            eleLoad -ele $elemID -type -beamUniform -$QdlBeam;    # BEAMS
        }
    }

}
# Gravity-analysis parameters -- load-controlled static analysis
set Tol 1.0e-8;                          # convergence tolerance for test
variable constraintsTypeGravity Plain;   # default;
if {[info exists RigidDiaphragm] == 1} {
    if {$RigidDiaphragm=="ON"} {
        variable constraintsTypeGravity Lagrange;   # large model: try Transformation
    };   # if rigid diaphragm is on
};   # if rigid diaphragm exists
constraints $constraintsTypeGravity;     # how it handles boundary conditions
numberer RCM;              # renumber dofs to minimize band width (optimization), if you want to
system BandGeneral;        # how to store and solve the system of equations in the analysis
                           # (large model: try UmfPack)
test NormDispIncr $Tol 6;  # determine if convergence has been achieved at the end of an iteration step
algorithm Newton;          # use Newton's solution algorithm: updates tangent stiffness at every iteration
set NstepGravity 10;       # apply gravity in 10 steps
set DGravity [expr 1./$NstepGravity];    # first load increment;
integrator LoadControl $DGravity;        # determine the next time step for an analysis
analysis Static;           # define type of analysis: static or transient
analyze $NstepGravity;     # apply gravity

# maintain constant gravity loads and reset time to zero
loadConst -time 0.0
puts "Model Built"

# Uniform EQ ground motion
# execute this file after you have built the model, and after applied gravity
# Uniform Earthquake ground motion (uniform acc. input at all support nodes)
# set fileId [open "Output-Typ5NR-NRes.dat" w]
# for {set i 1} {$i < 321} {incr i 1} {;   # loop for ground-motion set
set GMdirection 1;     # ground-motion direction
set GMfile $i;         # ground-motion filenames
set GMfact 1.0;        # ground-motion scaling factor

# display deformed shape:
set ViewScale 5;       # amplify display of deformed shape
# DisplayModel2D DeformedShape $ViewScale;   # display deformed shape; the scaling factor
                                             # needs to be adjusted for each model
# recorder plot $dataDir/DFree.out Displ 10 700 400 400 -columns 1 2;
                       # a window to plot the nodal displacements versus time

# set up GM analysis parameters
set DtAnalysis [expr 0.01*$sec];         # time step Dt for lateral analysis
set TmaxAnalysis [expr 20.48*$sec];      # maximum duration of GM analysis -- should be 50*$sec

# set up analysis parameters
variable constraintsTypeDynamic Transformation;
constraints $constraintsTypeDynamic;
variable numbererTypeDynamic RCM
numberer $numbererTypeDynamic
variable systemTypeDynamic BandGeneral;  # try UmfPack for large problems
system $systemTypeDynamic
variable TolDynamic 1.e-8;               # convergence test: tolerance
variable maxNumIterDynamic 10;           # convergence test: maximum number of iterations that will be
                                         # performed before "failure to converge" is returned
variable printFlagDynamic 0;             # convergence test: flag used to print information on
                                         # convergence (optional) -- 1: print info on each step
variable testTypeDynamic EnergyIncr;     # convergence-test type
test $testTypeDynamic $TolDynamic $maxNumIterDynamic $printFlagDynamic;
# for improved convergence procedure:
variable maxNumIterConvergeDynamic 2000;
variable printFlagConvergeDynamic 0;
variable algorithmTypeDynamic ModifiedNewton
algorithm $algorithmTypeDynamic;
variable NewmarkGamma 0.5;               # Newmark-integrator gamma parameter
variable NewmarkBeta 0.25;               # Newmark-integrator beta parameter
variable integratorTypeDynamic Newmark;
integrator $integratorTypeDynamic $NewmarkGamma $NewmarkBeta
variable analysisTypeDynamic Transient
analysis $analysisTypeDynamic

# define & apply damping
# RAYLEIGH damping parameters, where to put M/K-prop damping, switches
# (http://opensees.berkeley.edu/OpenSees/manuals/usermanual/1099.htm)
# D=$alphaM*M + $betaKcurr*Kcurrent + $betaKcomm*KlastCommit + $betaKinit*Kinitial
set xDamp 0.02;          # damping ratio
set MpropSwitch 1.0;
set KcurrSwitch 0.0;
set KcommSwitch 1.0;
set KinitSwitch 0.0;
set nEigenI 1;           # mode 1
set nEigenJ 3;           # mode 3
set lambdaN [eigen [expr $nEigenJ]];             # eigenvalue analysis for nEigenJ modes
set lambdaI [lindex $lambdaN [expr $nEigenI-1]]; # eigenvalue mode i
set lambdaJ [lindex $lambdaN [expr $nEigenJ-1]]; # eigenvalue mode j
set omegaI [expr pow($lambdaI,0.5)];
set omegaJ [expr pow($lambdaJ,0.5)];
# M-prop. damping; D = alphaM*M

set alphaM [expr $MpropSwitch*$xDamp*(2*$omegaI*$omegaJ)/($omegaI+$omegaJ)];
# current K;   +betaKcurr*Kcurrent
set betaKcurr [expr $KcurrSwitch*2.*$xDamp/($omegaI+$omegaJ)];
# last committed K;   +betaKcomm*KlastCommit
set betaKcomm [expr $KcommSwitch*2.*$xDamp/($omegaI+$omegaJ)];
# initial K;   +betaKinit*Kinit
set betaKinit [expr $KinitSwitch*2.*$xDamp/($omegaI+$omegaJ)];
# RAYLEIGH damping
rayleigh $alphaM $betaKcurr $betaKinit $betaKcomm;

# perform Dynamic Ground-Motion Analysis
# the following commands are unique to the Uniform Earthquake excitation
set IDloadTag [expr (400+$i)];       # for uniformSupport excitation
# Uniform EXCITATION: acceleration input
set GMfatt [expr $GMfact];           # data in input file is in g units
# time-series information
set AccelSeries "Series -dt 0.01 -filePath $GMdir/$GMfile.g3 -factor $GMfatt";
# create Uniform excitation
pattern UniformExcitation $IDloadTag $GMdirection -accel $AccelSeries;
set Nsteps [expr int($TmaxAnalysis/$DtAnalysis)];
# actually perform analysis; returns ok=0 if analysis was successful
# set ok [analyze $Nsteps $DtAnalysis];

set drift_max 0
set drift_j 0
for {set j 0} {$j < 2048} {incr j 1} {;   # loop calculating MIDR in each time step
    set ok [analyze 1 0.01]
    set d_1 [nodeDisp 21 1]
    set d_2 [nodeDisp 31 1]
    set d_3 [nodeDisp 41 1]
    set d_4 [nodeDisp 51 1]
    set d_5 [nodeDisp 61 1]
    #
    set dri_1 [expr ($d_1/3.0)]
    set dri_2 [expr (($d_2-$d_1)/3.0)]
    set dri_3 [expr (($d_3-$d_2)/3.0)]
    set dri_4 [expr (($d_4-$d_3)/3.0)]
    set dri_5 [expr (($d_5-$d_4)/3.0)]
    #
    set drift_1 [expr abs($dri_1)]
    set drift_2 [expr abs($dri_2)]
    set drift_3 [expr abs($dri_3)]
    set drift_4 [expr abs($dri_4)]
    set drift_5 [expr abs($dri_5)]
    #
    set list [list $drift_1 $drift_2 $drift_3 $drift_4 $drift_5]
    foreach element [lrange $list 1 end] {
        if {$element > $drift_j} {set drift_j $element}
    }
    if {($drift_j >= $drift_max)} {set drift_max $drift_j}
    if {$ok != 0} {;   # analysis was not successful
        # change some analysis parameters to achieve convergence
        # performance is slower inside this loop
        # Time-controlled analysis
        set ok 0;
        set controlTime [getTime];
        while {$controlTime < $TmaxAnalysis && $ok == 0} {
            set controlTime [getTime]
            set ok [analyze 1 $DtAnalysis]
            if {$ok != 0} {
                puts "Trying Newton with Initial Tangent .."
                test NormDispIncr $Tol 1000 0
                algorithm Newton -initial
                set ok [analyze 1 $DtAnalysis]
                test $testTypeDynamic $TolDynamic \
                    $maxNumIterDynamic 0
                algorithm $algorithmTypeDynamic
            }
            if {$ok != 0} {
                puts "Trying Broyden .."
                algorithm Broyden 8
                set ok [analyze 1 $DtAnalysis]
                algorithm $algorithmTypeDynamic
            }
            if {$ok != 0} {
                puts "Trying NewtonWithLineSearch .."
                algorithm NewtonLineSearch .8
                set ok [analyze 1 $DtAnalysis]
                algorithm $algorithmTypeDynamic
            }
        }
    };   # end if ok !0
};   # loop for calculating the MIDR for each time step

# set fileId [open "Output.txt" w]
puts $fileId "$GMfile [expr $drift_max*100] [expr abs($d_5)] $drift_5"
puts "Beben $i"
wipeAnalysis
#};   # loop for ground-motion set
# close $fileId
puts "Ground Motion Done. End Time: [getTime]"

# BuildRCrectSection.tcl
# based on the examples in
# http://opensees.berkeley.edu/OpenSees/manuals/ExamplesManual/HTML/
proc BuildRCrectSection {id HSec BSec coverH coverB coreID coverID steelID \
        numBarsTop barAreaTop numBarsBot barAreaBot numBarsIntTot barAreaInt nfCoreY \
        nfCoreZ nfCoverY nfCoverZ} {
    ################################################
    # BuildRCrectSection $id $HSec $BSec $coverH $coverB $coreID
    #   $coverID $steelID $numBarsTop $barAreaTop $numBarsBot $barAreaBot
    #   $numBarsIntTot $barAreaInt $nfCoreY $nfCoreZ $nfCoverY $nfCoverZ
    ################################################
    # Build fiber rectangular RC section, 1 steel layer top, 1 bot,
    # 1 skin, confined core.
    # Define a procedure which generates a rectangular RC section
    # with one layer of steel at top & bottom, skin reinforcement and a
    # confined core.
    #   by: Silvia Mazzoni, 2006
    #   adapted from Michael H. Scott, 2003
    # Formal arguments
    #   id            - tag for the section that is generated by this procedure
    #   HSec          - depth of section, along local y axis
    #   BSec          - width of section, along local z axis
    #   cH            - distance from section boundary to neutral axis of reinf.
    #   cB            - distance from section boundary to side of reinforcement
    #   coreID        - material tag for the core patch
    #   coverID       - material tag for the cover patches
    #   steelID       - material tag for the reinforcing steel
    #   numBarsTop    - number of reinforcing bars in the top layer
    #   numBarsBot    - number of reinforcing bars in the bottom layer
    #   numBarsIntTot - TOTAL number of reinforcing bars on the intermediate layers,
    #                   symmetric about z axis, 2 bars per layer -- needs to be an even integer
    #   barAreaTop    - cross-sectional area of each reinforcing bar in top layer
    #   barAreaBot    - cross-sectional area of each reinforcing bar in bottom layer
    #   barAreaInt    - cross-sectional area of each reinforcing bar in intermediate layer
    #   nfCoreY       - number of fibers in the core patch in the y-dir
    #   nfCoreZ       - number of fibers in the core patch in the z-dir
    #   nfCoverY      - number of fibers in the cover patches with long sides in the y direction
    #   nfCoverZ      - number of fibers in the cover patches with long sides in the z direction
    #
    # (The ASCII sketches of the section in the local y-z axes, showing the
    #  cover, core and reinforcement layout, are omitted here.)
    #
    # Notes
    #   The core concrete ends at the NA of the reinforcement
    #   The center of the section is at (0,0) in the local axis system

    set coverY [expr $HSec/2.0];        # distance from the section z-axis to the edge of the cover concrete
    set coverZ [expr $BSec/2.0];        # distance from the section y-axis to the edge of the cover concrete
    set coreY [expr $coverY-$coverH];   # distance from the section z-axis to the edge of the core concrete /
                                        # inner edge of cover concrete
    set coreZ [expr $coverZ-$coverB];   # distance from the section y-axis to the edge of the core concrete /
                                        # inner edge of cover concrete
    set numBarsInt [expr $numBarsIntTot/2];   # number of intermediate bars per side

    # Define the fiber section
    section fiberSec $id {
        # Define the core patch
        patch quadr $coreID $nfCoreZ $nfCoreY -$coreY $coreZ -$coreY -$coreZ $coreY -$coreZ $coreY $coreZ
        # Define the four cover patches
        patch quadr $coverID 2 $nfCoverY -$coverY $coverZ -$coreY $coreZ $coreY $coreZ $coverY $coverZ
        patch quadr $coverID 2 $nfCoverY -$coreY -$coreZ -$coverY -$coverZ $coverY -$coverZ $coreY -$coreZ
        patch quadr $coverID $nfCoverZ 2 -$coverY $coverZ -$coverY -$coverZ -$coreY -$coreZ -$coreY $coreZ
        patch quadr $coverID $nfCoverZ 2 $coreY $coreZ $coreY -$coreZ $coverY -$coverZ $coverY $coverZ
        # define reinforcing layers
        layer straight $steelID $numBarsInt $barAreaInt -$coreY $coreZ \
            $coreY $coreZ;    # intermediate skin reinf. +z
        layer straight $steelID $numBarsInt $barAreaInt -$coreY -$coreZ \
            $coreY -$coreZ;   # intermediate skin reinf. -z
        layer straight $steelID $numBarsTop $barAreaTop $coreY $coreZ \
            $coreY -$coreZ;   # top layer reinforcement
        layer straight $steelID $numBarsBot $barAreaBot -$coreY $coreZ \
            -$coreY -$coreZ;  # bottom layer reinforcement
    };   # end of fiber section definition
};   # end of procedure
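The listing above is written to be executed once per ground-motion record; the outer loop over the record index i and the output-file handle are commented out, so they have to be supplied by a small driver. A minimal sketch of how the script could be driven over a record set from MATLAB is given below; the OpenSees executable name, the wrapper file run_one.tcl, the number of records and the output-file handling (which assumes the fileId line in the listing is re-enabled) are assumptions for illustration only and do not reproduce the original batch setup.

% Minimal driver sketch (assumptions: OpenSees is on the system path and the
% Typ5 script writes one output line per record as in the listing above).
for i = 1:320
    fid = fopen('run_one.tcl','w');              % hypothetical wrapper file
    fprintf(fid,'set i %d\n', i);                % record index used as $GMfile
    fprintf(fid,'source Typ5.NR.O.Residential.tcl\n');
    fclose(fid);
    status = system('OpenSees run_one.tcl');     % run one nonlinear time-history analysis
    if status ~= 0
        warning('OpenSees run %d did not finish cleanly.', i);
    end
end
midr = load('Output-Typ5NR-NRes.dat');   % columns: record, MIDR in percent, |roof disp.|, top-story drift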


C.4 Risk analysis tools


Matlab files for generating the BPNs for Example 1
function BPN_PSHA_Adapazari_RM_Poisson(T,X1,Y1,X2,Y2,a,b)
% by Yahya Y. Bayraktarli, 30/08/2009
% ETH Zürich, bayraktarli@hotmail.com
%
% Bayraktarli, Y.Y., Baker, J.W., Faber, M.H., 2009. Uncertainty
% treatment in earthquake modeling using Bayesian networks, Georisk,
% accepted for publication.
%
% This script reads a Bayesian probabilistic network for probabilistic
% seismic hazard analysis, calculates the probability distribution of
% the nodes in the BPN and compiles the BPN with the inference engine
% of HUGIN.

% For each seismic source, Z a set of BPNs are calculated:
for Z=1:1

% For controlling HUGIN from MATLAB the ActiveX server is loaded and
% then the available functions in this library are used to alter
% objects created from this library. The HUGIN ActiveX Server is
% loaded with the following command, creating a HUGIN API object
% named bpn:
bpn=actxserver('HAPI.Globals');

% An object domain is created which holds the network:
domain=invoke(bpn,'LoadDomainFromNet',['C:\Dissertation\' ...
    'Chapter 4-1\BPN\EX1_Typ5_O_Res_single_eps.net'],0,0);

% The nodes to be manipulated are defined:
ndEQ_M=invoke(domain,'GetNodeByName','EQ_Magnitude');
ndEQ_R=invoke(domain,'GetNodeByName','EQ_Distance');
ndEps_PGA=invoke(domain,'GetNodeByName','Epsilon_PGA');
ndEps_SD=invoke(domain,'GetNodeByName','Epsilon_SD');
ndPGA=invoke(domain,'GetNodeByName','PGA');
ndSD=invoke(domain,'GetNodeByName','SD');
ndDamage=invoke(domain,'GetNodeByName','Damage');

% The number of discrete states of the nodes Magnitude, Distance,
% Eps_PGA, Eps_SD, PGA and SD is set:
nM=6; nR=5; nEps_PGA=10; nEps_SD=10; nPGA=7; nSD=7; nDamage=3;
set(ndEQ_M,'NumberOfStates',nM);
set(ndEQ_R,'NumberOfStates',nR);
set(ndEps_SD,'NumberOfStates',nEps_SD);
set(ndEps_PGA,'NumberOfStates',nEps_PGA);
set(ndSD,'NumberOfStates',nSD);
set(ndPGA,'NumberOfStates',nPGA);
set(ndDamage,'NumberOfStates',nDamage);

% The probability distribution of node Eps_PGA is calculated:
[Eps_PGA,dPGA]=EPS_PGA(nEps_PGA);
% The probability distribution of node Eps_SD is calculated:
[Eps_SD,dSD]=EPS_SD(nEps_SD,nEps_PGA,T);
% The probability distribution of node Distance is calculated:
[R,P_R,Rlimits]=Line_EQ_R(500,nR,X1,Y1,X2,Y2);
% The probability distribution of node Magnitude is calculated:
[Nu_Mmin,M_EQ,P_M,Mlimits]=EQ_M_NonPoisson(5,a,b,nM,Q,Z);
M=M_EQ;

% The discrete probabilities are set for node Magnitude in the BPN:
for i=1:(nM), set(ndEQ_M.Table,'Data',(i-1),P_M(i)); set(ndEQ_M,'StateLabel',(i-1),['M=' num2str(M(i))]); end
% The discrete probabilities are set for node Distance in the BPN:
for i=1:(nR), set(ndEQ_R.Table,'Data',(i-1),P_R(i)); set(ndEQ_R,'StateLabel',(i-1),['R=' num2str(R(i))]); end
% The discrete probabilities are set for node Eps_PGA in the BPN:
for i=1:(nEps_PGA), set(ndEps_PGA.Table,'Data',(i-1),Eps_PGA(i)); set(ndEps_PGA,'StateLabel',(i-1),['Eps_PGA=' num2str(dPGA(i))]); end
% The discrete probabilities are set for node Eps_SD in the BPN:
for i=1:(nEps_SD*nEps_PGA), set(ndEps_SD.Table,'Data',(i-1),Eps_SD(i)); end
% The conditional probability table of the node PGA is initialized
% with zeros:
for i=1:(nM*nR*nEps_PGA*nPGA), set(ndPGA.Table,'Data',(i-1),0); end
% The conditional probability table of the node SD is initialized
% with zeros:
for i=1:(nM*nR*nSD*nEps_SD), set(ndSD.Table,'Data',(i-1),0); end

% For all combinations of the states in the nodes Magnitude and
% Distance the peak ground accelerations and spectral displacements
% are calculated with the Boore, Joyner and Fumal attenuation model:
[PGABOORE]=PGA(nEps_PGA,M,R);
[SDBOORE]=SD(nEps_SD,nEps_PGA,M,R);

% The limits for the discretisation of the nodes SD and PGA are set:
CoeffPGA=[0 0.2 0.4 0.6 0.8 1.0 1.2 max(PGABOORE)];
CoeffSD=[0 0.005 0.02 0.05 0.1 0.3 0.5 max(SDBOORE)];
for i=1:(nSD), S(i)=(CoeffSD(i)+CoeffSD(i+1))/2; SDmm(i)=S(i)*1000; end

% The labels of the states are set for nodes Eps_SD, PGA and SD in
% the BPN:
for i=1:(nEps_SD), set(ndEps_SD,'StateLabel',(i-1),['Eps_SD=' num2str(dSD(i))]); end
for i=1:(nPGA), set(ndPGA,'StateLabel',(i-1),['PGA=' num2str((CoeffPGA(i)+CoeffPGA(i+1))/2)]); end
for i=1:(nSD), set(ndSD,'StateLabel',(i-1),['SD=' num2str((CoeffSD(i)+CoeffSD(i+1))/2)]); end

% The discrete probabilities are set for node PGA in the BPN:
N=0;
for i=1:length(PGABOORE)
    for j=1:nPGA
        if PGABOORE(i)<=CoeffPGA(j+1) & PGABOORE(i)>CoeffPGA(j)
            set(ndPGA.Table,'Data',(j+N-1),1);
            N=N+nPGA;
        end
    end
end
% The discrete probabilities are set for node SD in the BPN:
K=0;
for i=1:length(SDBOORE)
    for j=1:nSD
        if SDBOORE(i)<=CoeffSD(j+1) & SDBOORE(i)>CoeffSD(j)
            set(ndSD.Table,'Data',(j+K-1),1);
            K=K+nSD;
        end
    end
end

% The discrete probabilities calculated given the spectral
% displacement values are set for node Damage:
[P_Damage]=Fragility_Typ5_O_Res_RM_1(SDmm);
for i=1:(nSD*5*2*3), set(ndDamage.Table,'Data',(i-1),P_Damage(i)); end

% The BPNs are set for each source:
Filename=['C:\Dissertation\Chapter 4-1\BPN\EX1_Typ5_O_Res_S' ...
    num2str(Z) '_Poisson_1.net'];
invoke(domain,'SaveAsNet',Filename);
end
end

function BPN_PSHA_Adapazari_RM(T,X1,Y1,X2,Y2,a,b,SS)

% by Yahya Y. Bayraktarli, 30/08/2009
% ETH Zürich, bayraktarli@hotmail.com
%
% Bayraktarli, Y.Y., Baker, J.W., Faber, M.H., 2009. Uncertainty
% treatment in earthquake modeling using Bayesian networks, Georisk,
% accepted for publication.
%
% This script reads a Bayesian probabilistic network for probabilistic
% seismic hazard analysis, calculates the probability distribution of
% the nodes in the BPN and compiles the BPN with the inference engine
% of HUGIN.

% For each seismic source, Z and each year, Q a set of BPNs are
% calculated:
for Z=SS:SS
for Q=1:50

% For controlling HUGIN from MATLAB the ActiveX server is loaded and
% then the available functions in this library are used to alter
% objects created from this library. The HUGIN ActiveX Server is
% loaded with the following command, creating a HUGIN API object
% named bpn:
bpn=actxserver('HAPI.Globals');

% An object domain is created which holds the network:
domain=invoke(bpn,'LoadDomainFromNet',['C:\Dissertation\' ...
    'Chapter 4-1\BPN\EX1_Typ5_O_Res_single_eps.net'],0,0);

% The nodes to be manipulated are defined:
ndEQ_M=invoke(domain,'GetNodeByName','EQ_Magnitude');
ndEQ_R=invoke(domain,'GetNodeByName','EQ_Distance');
ndEps_PGA=invoke(domain,'GetNodeByName','Epsilon_PGA');
ndEps_SD=invoke(domain,'GetNodeByName','Epsilon_SD');
ndPGA=invoke(domain,'GetNodeByName','PGA');
ndSD=invoke(domain,'GetNodeByName','SD');
ndDamage=invoke(domain,'GetNodeByName','Damage');

% The number of discrete states of the nodes Magnitude, Distance,
% Eps_PGA, Eps_SD, PGA and SD is set:
nM=6; nR=5; nEps_PGA=10; nEps_SD=10; nPGA=7; nSD=7; nDamage=3;
set(ndEQ_M,'NumberOfStates',nM);
set(ndEQ_R,'NumberOfStates',nR);
set(ndEps_SD,'NumberOfStates',nEps_SD);
set(ndEps_PGA,'NumberOfStates',nEps_PGA);
set(ndSD,'NumberOfStates',nSD);
set(ndPGA,'NumberOfStates',nPGA);
set(ndDamage,'NumberOfStates',nDamage);

% The probability distribution of node Eps_PGA is calculated:
[Eps_PGA,dPGA]=EPS_PGA(nEps_PGA);
% The probability distribution of node Eps_SD is calculated:
[Eps_SD,dSD]=EPS_SD(nEps_SD,nEps_PGA,T);
% The probability distribution of node Distance is calculated:
[R,P_R,Rlimits]=Line_EQ_R(500,nR,X1,Y1,X2,Y2);
% The probability distribution of node Magnitude is calculated:
[Nu_Mmin,M_EQ,P_M,Mlimits]=EQ_M_NonPoisson(5,a,b,nM,Q,Z);
M=M_EQ;

% The discrete probabilities are set for node Magnitude in the BPN:
for i=1:(nM), set(ndEQ_M.Table,'Data',(i-1),P_M(i)); set(ndEQ_M,'StateLabel',(i-1),['M=' num2str(M(i))]); end
% The discrete probabilities are set for node Distance in the BPN:
for i=1:(nR), set(ndEQ_R.Table,'Data',(i-1),P_R(i)); set(ndEQ_R,'StateLabel',(i-1),['R=' num2str(R(i))]); end
% The discrete probabilities are set for node Eps_PGA in the BPN:
for i=1:(nEps_PGA), set(ndEps_PGA.Table,'Data',(i-1),Eps_PGA(i)); set(ndEps_PGA,'StateLabel',(i-1),['Eps_PGA=' num2str(dPGA(i))]); end
% The discrete probabilities are set for node Eps_SD in the BPN:
for i=1:(nEps_SD*nEps_PGA), set(ndEps_SD.Table,'Data',(i-1),Eps_SD(i)); end
% The conditional probability table of the node PGA is initialized
% with zeros:
for i=1:(nM*nR*nEps_PGA*nPGA), set(ndPGA.Table,'Data',(i-1),0); end
% The conditional probability table of the node SD is initialized
% with zeros:
for i=1:(nM*nR*nSD*nEps_SD), set(ndSD.Table,'Data',(i-1),0); end

% For all combinations of the states in the nodes Magnitude and
% Distance the peak ground accelerations and spectral displacements
% are calculated with the Boore, Joyner and Fumal attenuation model:
[PGABOORE]=PGA(nEps_PGA,M,R);
[SDBOORE]=SD(nEps_SD,nEps_PGA,M,R);

% The limits for the discretisation of the nodes SD and PGA are set:
CoeffPGA=[0 0.2 0.4 0.6 0.8 1.0 1.2 max(PGABOORE)];
CoeffSD=[0 0.005 0.02 0.05 0.1 0.3 0.5 max(SDBOORE)];
for i=1:(nSD), S(i)=(CoeffSD(i)+CoeffSD(i+1))/2; SDmm(i)=S(i)*1000; end

% The labels of the states are set for nodes Eps_SD, PGA and SD in
% the BPN:
for i=1:(nEps_SD), set(ndEps_SD,'StateLabel',(i-1),['Eps_SD=' num2str(dSD(i))]); end
for i=1:(nPGA), set(ndPGA,'StateLabel',(i-1),['PGA=' num2str((CoeffPGA(i)+CoeffPGA(i+1))/2)]); end
for i=1:(nSD), set(ndSD,'StateLabel',(i-1),['SD=' num2str((CoeffSD(i)+CoeffSD(i+1))/2)]); end

% The discrete probabilities are set for node PGA in the BPN:
N=0;
for i=1:length(PGABOORE)
    for j=1:nPGA
        if PGABOORE(i)<=CoeffPGA(j+1) & PGABOORE(i)>CoeffPGA(j)
            set(ndPGA.Table,'Data',(j+N-1),1);
            N=N+nPGA;
        end
    end
end
% The discrete probabilities are set for node SD in the BPN:
K=0;
for i=1:length(SDBOORE)
    for j=1:nSD
        if SDBOORE(i)<=CoeffSD(j+1) & SDBOORE(i)>CoeffSD(j)
            set(ndSD.Table,'Data',(j+K-1),1);
            K=K+nSD;
        end
    end
end

% The discrete probabilities calculated given the spectral
% displacement values are set for node Damage:
[P_Damage]=Fragility_Typ5_O_Res_RM_1(SDmm);
for i=1:(nSD*5*2*3), set(ndDamage.Table,'Data',(i-1),P_Damage(i)); end

% The BPNs are set for each source and year:
Filename=['C:\Dissertation\Chapter 4-1\BPN\EX1_Typ5_O_Res_S' ...
    num2str(Z) '_Y' num2str(Q) '.net'];
invoke(domain,'SaveAsNet',Filename);
end
end

function [P_Damage]=Fragility_Typ5_O_Res_RM_1(SD)
% For the three damage states the parameters of the lognormal
% distribution are given in Table X (unretrofitted case).
Lambda_Yellow=[3.690 3.724 3.758 3.792 3.829];
Zeta_Yellow=[0.341 0.366 0.390 0.414 0.439];
Lambda_Red=[4.106 4.160 4.215 4.270 4.324];
Zeta_Red=[0.266 0.306 0.346 0.386 0.426];

% For each state of the node SD the probabilities of being in one of
% the three damage states are calculated.
N=1;
for i=1:5
    for k=1:length(SD)
        p1(N)=1-logncdf(SD(k),Lambda_Yellow(i),Zeta_Yellow(i));
        p2(N)=logncdf(SD(k),Lambda_Yellow(i),Zeta_Yellow(i))- ...
            logncdf(SD(k),Lambda_Red(i),Zeta_Red(i));
        p3(N)=logncdf(SD(k),Lambda_Red(i),Zeta_Red(i));
        N=N+1;
    end
end

% The probabilities of being in one of the three damage states form
% the conditional probability table of the node Damage.
DamageNode1=[p1;p2;p3];
P_Damage1=DamageNode1.*(DamageNode1>0);
P_Damage1=P_Damage1(:);

% For the three damage states the parameters of the lognormal
% distribution are given in Table X (retrofitted case).
Lambda_Yellow=[3.777 3.811 3.845 3.879 3.913];
Zeta_Yellow=[0.250 0.274 0.299 0.323 0.347];
Lambda_Red=[4.219 4.273 4.328 4.383 4.437];
Zeta_Red=[0.185 0.225 0.265 0.304 0.344];

% For each state of the node SD the probabilities of being in one of
% the three damage states are calculated.
N=1;
for i=1:5
    for k=1:length(SD)
        p1(N)=1-logncdf(SD(k),Lambda_Yellow(i),Zeta_Yellow(i));
        p2(N)=logncdf(SD(k),Lambda_Yellow(i),Zeta_Yellow(i))- ...
            logncdf(SD(k),Lambda_Red(i),Zeta_Red(i));
        p3(N)=logncdf(SD(k),Lambda_Red(i),Zeta_Red(i));
        N=N+1;
    end
end

% The probabilities of being in one of the three damage states form
% the conditional probability table of the node Damage.
DamageNode2=[p1;p2;p3];
P_Damage2=DamageNode2.*(DamageNode2>0);
P_Damage2=P_Damage2(:);
P_Damage=[P_Damage1;P_Damage2];
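The two generator functions and the fragility routine above are used together when building the networks. The following sketch illustrates one such call and a simple plausibility check of the resulting conditional probability table; the fault coordinates, recurrence parameters and spectral-displacement values are placeholders chosen for illustration only and are not the values used in the examples.

% Illustrative call (all numerical arguments are placeholders; the call also
% requires HUGIN and the template network to be installed):
% BPN_PSHA_Adapazari_RM_Poisson(0.6, 0, 0, 50, 10, 4.0, 1.0);

% Plausibility check of the fragility routine (assumed SD state mid-points in mm):
SDmm=[2.5 12.5 35 75 200 400 600];
P=Fragility_Typ5_O_Res_RM_1(SDmm);
P=reshape(P,3,[]);        % rows: no damage / yellow tag / red tag
disp(sum(P,1));           % each column should sum to approximately one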

Visual Basic files for evaluating the BPNs in Example 1 within the GIS environment
OpenShape.bas
' Creates a shapefile, developed by Adrienne Grêt-Regamey
Attribute VB_Name = "OpenShape"
Option Explicit

Public Function OpenShapefile(spath As String, sFileName As String) _
        As IFeatureClass
    Dim MxDoc As IMxDocument
    Set MxDoc = ThisDocument
    Dim pMap As IMap
    Set pMap = MxDoc.FocusMap

    ' Get access to FeatureClass
    Dim pWSF As IWorkspaceFactory
    Set pWSF = New ShapefileWorkspaceFactory
    Dim pWorkspace As IWorkspace
    Set pWorkspace = pWSF.OpenFromFile(spath, 0)
    Dim pfWorkspace As IFeatureWorkspace
    Set pfWorkspace = pWorkspace

    Set OpenShapefile = pfWorkspace.OpenFeatureClass(sFileName)

    Dim pFLayer As IFeatureLayer
    Set pFLayer = New FeatureLayer
    Set pFLayer.FeatureClass = OpenShapefile

    Dim pDataset As IDataset
    Set pDataset = OpenShapefile
    pFLayer.Name = sFileName

    Dim pMxDoc As IMxDocument
    Set pMxDoc = ThisDocument
    pMxDoc.AddLayer pFLayer
    pMxDoc.ActiveView.PartialRefresh esriViewGeography, pFLayer, Nothing
End Function

Module1.bas

Attribute VB_Name = "Module1"
Option Explicit

Private Sub BN_Hugin()
  ' _______________________________________
  ' A collection to hold the found parse errors
  Dim parseErrors As Collection
  ' _______________________________________
  ' Get Map
  Dim pMxDoc As IMxDocument
  Dim pMap As IMap
  Set pMxDoc = ThisDocument
  Set pMap = pMxDoc.FocusMap
  ' _______________________________________
  ' Get the shapefile with all the data
  Dim pFeatureClass As IFeatureClass
  Set pFeatureClass = OpenShapefile("Z:<Folder>", "Typ5_O_Res")
  Dim pLayer As IFeatureLayer
  Set pLayer = pMap.Layer(0)
  Dim pFeatureClassSel As IFeatureClass
  Set pFeatureClassSel = pLayer.FeatureClass
  ' _______________________________________
  ' Create cursor to loop through "building_type"
  Dim IndexFID As Integer
  IndexFID = pFeatureClassSel.FindField("FID")
  Dim IndexOccupancy As String
  IndexOccupancy = pFeatureClassSel.FindField("Occupancy")
  Dim IndexStoryArea As Double
  IndexStoryArea = pFeatureClassSel.FindField("FloorArea")
  Dim IndexEU1 As Double
  IndexEU1 = pFeatureClassSel.FindField("EU1")
  Dim IndexEU2 As Double
  IndexEU2 = pFeatureClassSel.FindField("EU2")
  Dim IndexOpt As String
  IndexOpt = pFeatureClassSel.FindField("OptAction")
  Dim IndexLiq21 As Double
  IndexLiq21 = pFeatureClassSel.FindField("Avg_Liq21")
  ' _______________________________________
  ' Define cursor for selected features
  Dim pCursor As IFeatureCursor
  Set pCursor = pFeatureClassSel.Update(Nothing, False)
  ' _______________________________________
  ' Cursor through the selected rows
  Dim pRowSel As IFeature
  Set pRowSel = pCursor.NextFeature
  Dim pOID As Double
  Dim pQuery As IQueryFilter
  ' _______________________________________
  ' LOOP THROUGH EACH BUILDING OF THE LAYER
  Dim M1 As Double
  Dim M2 As Double
  Dim MTotal1 As Double
  Dim MTotal2 As Double
  Dim Nu(1 To 9) As Double
  ' The rate of exceeding the minimum magnitude of 5
  ' for each of the 9 seismic sources:
  Nu(1) = 0.0000239883
  Nu(2) = 0.000047863
  Nu(3) = 0.0000891251
  Nu(4) = 0.0000776247
  Nu(5) = 0.0000512861
  Nu(6) = 0.000020893
  Nu(7) = 0.0000354813
  Nu(8) = 0.003019952
  Nu(9) = 0.004073803
  Dim Source As Integer

  Do While Not pRowSel Is Nothing
    M1 = 0
    M2 = 0
    For Source = 2 To 2
      ' _______________________________________
      ' Import BPN from Hugin
      Dim d As HAPI.Domain
      Dim BN As String
      Dim Netze(1 To 3)
      Netze(1) = "EX1_Typ5_O_Res_S"
      Netze(2) = CStr(Source)
      Netze(3) = "_Poisson.net"
      BN = Join(Netze, "")
      Set d = HAPI.LoadDomainFromNet(BN, parseErrors, 10)
      ' Get the node with label "..." and name "..." from the domain
      Dim NodeLiq As HAPI.Node
      Set NodeLiq = d.GetNodeByName("Liquefaction")
      Dim NodeLiqTable As HAPI.Table
      Set NodeLiqTable = NodeLiq.Table
      Dim NodeCost As HAPI.Node
      Set NodeCost = d.GetNodeByName("Cost")
      Dim NodeCostTable As HAPI.Table
      Set NodeCostTable = NodeCost.Table
      Dim decisionRetrofit As Node
      Set decisionRetrofit = d.GetNodeByName("Retrofit")
      ' Initialize
      Dim lauf As Integer
      Dim M As Integer
      M = IndexLiq21
      For lauf = 0 To 83 Step 2
        NodeLiqTable.Data(lauf) = 1 - pRowSel.Value(M)
        NodeLiqTable.Data(lauf + 1) = pRowSel.Value(M)
        M = M + 1
      Next lauf
      Dim Fatality As Double
      Dim Rebuilding As Double
      Dim Repair As Double
      Dim Retrofit As Double
      ' 70% = occupancy at time of EQ (M2), 80% occupants trapped (M3),
      ' 20% occupants died immediately (M4), 80% occupants dead after 10 days (M5),
      ' LSCS = 250000 USD (GDPpc = 10000 USD)
      Fatality = pRowSel.Value(IndexOccupancy) _
          * (0.7 * 0.8 * 0.2 + 0.7 * 0.8 * (1 - 0.2) * 0.8) * 250000
      ' Unit rebuilding cost = 300 USD/m2, importance factor for hospital = 10,
      ' non-structural elements 50% of building value
      Rebuilding = 5 * pRowSel.Value(IndexStoryArea) * 300 * 5
      ' Unit rebuilding cost = 300 USD/m2, 20% cost of rebuilding for repair,
      ' non-structural elements 50% of building value, importance factor for hospitals = 10
      Repair = 5 * pRowSel.Value(IndexStoryArea) * 300 * 0.25
      ' Unit retrofit cost = 250 USD/column, average span length is 4 m
      Retrofit = (((Sqr(5 * pRowSel.Value(IndexStoryArea))) / 5) ^ 2) * 50
      ' Discount rate = 2%
      NodeCostTable.Data(0) = (0 + 0 + 0 + 0) / 0.02
      NodeCostTable.Data(1) = (0 + Repair + 0 + 0) / 0.02
      NodeCostTable.Data(2) = (0 + 0 + Rebuilding + Fatality) / 0.02
      NodeCostTable.Data(3) = (Retrofit + 0 + 0 + 0) / 0.02
      NodeCostTable.Data(4) = (Retrofit + Repair + 0 + 0) / 0.02
      NodeCostTable.Data(5) = (Retrofit + 0 + Rebuilding + Fatality) / 0.02
      d.Compile
      M1 = M1 + decisionRetrofit.ExpectedUtility(0) * Nu(Source)
      M2 = M2 + decisionRetrofit.ExpectedUtility(1) * Nu(Source)
      Dim FID, Netz(1 To 3), BPN
      FID = pRowSel.Value(IndexFID)
      Netz(1) = CStr(FID)
      Netz(2) = "MB_"
      Netz(3) = BN   ' "_EX1_Typ5_O_Res_S1_Poisson.net"
      BPN = Join(Netz, "")
      d.SaveAsNet (BPN)
    Next Source
    MTotal1 = M1
    MTotal2 = M2
    pRowSel.Value(IndexEU1) = MTotal1
    pRowSel.Value(IndexEU2) = MTotal2
    If (MTotal1 <= MTotal2) Then
      pRowSel.Value(IndexOpt) = "No"
    Else
      pRowSel.Value(IndexOpt) = "Yes"
    End If
    ' _______________________________________
    ' Update cursor of selected DHM features
    pCursor.UpdateFeature pRowSel
    Set pRowSel = pCursor.NextFeature
  Loop
  ' _______________________________________
  ' END LOOP THROUGH EACH BUILDING OF THE LAYER
End Sub
Module1-NonPoisson.bas

Attribute VB_Name = "Module1"
Option Explicit

Private Sub BN_Hugin()
  ' _______________________________________
  ' A collection to hold the found parse errors
  Dim parseErrors As Collection
  ' _______________________________________
  ' Get Map
  Dim pMxDoc As IMxDocument
  Dim pMap As IMap
  Set pMxDoc = ThisDocument
  Set pMap = pMxDoc.FocusMap
  ' _______________________________________
  ' Get the shapefile with all the data
  Dim pFeatureClass As IFeatureClass
  Set pFeatureClass = OpenShapefile("Z:<Folder>", "Typ5_O_Res_NonPoissonCopy")
  Dim pLayer As IFeatureLayer
  Set pLayer = pMap.Layer(0)
  Dim pFeatureClassSel As IFeatureClass
  Set pFeatureClassSel = pLayer.FeatureClass
  ' _______________________________________
  ' Create cursor to loop through "building_type"
  Dim IndexFID As Integer
  IndexFID = pFeatureClassSel.FindField("FID")
  Dim IndexOccupancy As String
  IndexOccupancy = pFeatureClassSel.FindField("Occupancy")
  Dim IndexStoryArea As Double
  IndexStoryArea = pFeatureClassSel.FindField("FloorArea")
  Dim IndexEU1 As Double
  IndexEU1 = pFeatureClassSel.FindField("EU1")
  Dim IndexEU2 As Double
  IndexEU2 = pFeatureClassSel.FindField("EU2")
  Dim IndexOpt As String
  IndexOpt = pFeatureClassSel.FindField("OptAction")
  Dim IndexLiq21 As Double
  IndexLiq21 = pFeatureClassSel.FindField("Avg_Liq21")
  ' _______________________________________
  ' Define cursor for selected features
  Dim pCursor As IFeatureCursor
  Set pCursor = pFeatureClassSel.Update(Nothing, False)
  ' _______________________________________
  ' Cursor through the selected rows
  Dim pRowSel As IFeature
  Set pRowSel = pCursor.NextFeature
  Dim pOID As Double
  Dim pQuery As IQueryFilter
  ' _______________________________________
  ' LOOP THROUGH EACH BUILDING OF THE LAYER
  Dim M1 As Double
  Dim M2 As Double
  Dim MTotal1 As Double
  Dim MTotal2 As Double
  Dim Nu(1 To 9) As Double
  Nu(1) = 0.0000239883
  Nu(2) = 0.000047863
  Nu(3) = 0.0000891251
  Nu(4) = 0.0000776247
  Nu(5) = 0.0000512861
  Nu(6) = 0.000020893
  Nu(7) = 0.0000354813
  Nu(8) = 0.003019952
  Nu(9) = 0.004073803
  Dim Source As Integer
  Dim Year As Integer

  Do While Not pRowSel Is Nothing
    M1 = 0
    M2 = 0
    ' For each seismic source and each year the expected costs are aggregated
    For Source = 1 To 9
      For Year = 1 To 50
        ' _______________________________________
        ' Import BPN from Hugin
        Dim d As HAPI.Domain
        Dim BN As String
        Dim Netze(1 To 5)
        Netze(1) = "EX1_Typ5_O_Res_S"
        Netze(2) = CStr(Source)
        Netze(3) = "_Y"
        Netze(4) = CStr(Year)
        Netze(5) = ".net"
        BN = Join(Netze, "")
        Set d = HAPI.LoadDomainFromNet(BN, parseErrors, 10)
        ' Get the node with label "..." and name "..." from the domain
        Dim NodeLiq As HAPI.Node
        Set NodeLiq = d.GetNodeByName("Liquefaction")
        Dim NodeLiqTable As HAPI.Table
        Set NodeLiqTable = NodeLiq.Table
        Dim NodeCost As HAPI.Node
        Set NodeCost = d.GetNodeByName("Cost")
        Dim NodeCostTable As HAPI.Table
        Set NodeCostTable = NodeCost.Table
        Dim decisionRetrofit As Node
        Set decisionRetrofit = d.GetNodeByName("Retrofit")
        ' Initialize
        Dim lauf As Integer
        Dim M As Integer
        M = IndexLiq21
        For lauf = 0 To 83 Step 2
          NodeLiqTable.Data(lauf) = 1 - pRowSel.Value(M)
          NodeLiqTable.Data(lauf + 1) = pRowSel.Value(M)
          M = M + 1
        Next lauf
        Dim Fatality As Double
        Dim Rebuilding As Double
        Dim Repair As Double
        Dim Retrofit As Double
        ' 70% = occupancy at time of EQ (M2), 80% occupants trapped (M3),
        ' 20% occupants died immediately (M4), 80% occupants dead after 10 days (M5),
        ' LSCS = 250000 USD (GDPpc = 10000 USD)
        Fatality = pRowSel.Value(IndexOccupancy) _
            * (0.7 * 0.8 * 0.2 + 0.7 * 0.8 * (1 - 0.2) * 0.8) * 250000
        ' Unit rebuilding cost = 300 USD/m2, importance factor for hospital = 10,
        ' non-structural elements 50% of building value
        Rebuilding = 5 * pRowSel.Value(IndexStoryArea) * 300 * 5
        ' Unit rebuilding cost = 300 USD/m2, 20% cost of rebuilding for repair,
        ' non-structural elements 50% of building value, importance factor for
        ' hospitals = 10, discount rate = 2%
        Repair = 5 * pRowSel.Value(IndexStoryArea) * 300 * 0.25
        ' Unit retrofit cost = 250 USD/column, average span length is 4 m
        Retrofit = (((Sqr(5 * pRowSel.Value(IndexStoryArea))) / 5) ^ 2) * 50
        NodeCostTable.Data(0) = (0 + 0 + 0 + 0) * Exp(-0.02 * Year)
        NodeCostTable.Data(1) = (0 + Repair + 0 + 0) * Exp(-0.02 * Year)
        NodeCostTable.Data(2) = (0 + 0 + Rebuilding + Fatality) * Exp(-0.02 * Year)
        NodeCostTable.Data(3) = (Retrofit + 0 + 0 + 0) * Exp(-0.02 * Year)
        NodeCostTable.Data(4) = (Retrofit + Repair + 0 + 0) * Exp(-0.02 * Year)
        NodeCostTable.Data(5) = (Retrofit + 0 + Rebuilding + Fatality) * Exp(-0.02 * Year)
        d.Compile
        M1 = M1 + decisionRetrofit.ExpectedUtility(0) * Nu(Source)
        M2 = M2 + decisionRetrofit.ExpectedUtility(1) * Nu(Source)
      Next Year
    Next Source
    MTotal1 = M1
    MTotal2 = M2
    pRowSel.Value(IndexEU1) = MTotal1
    pRowSel.Value(IndexEU2) = MTotal2
    If (MTotal1 <= MTotal2) Then
      pRowSel.Value(IndexOpt) = "No"
    Else
      pRowSel.Value(IndexOpt) = "Yes"
    End If
    ' _______________________________________
    ' Update cursor of selected DHM features
    pCursor.UpdateFeature pRowSel
    Set pRowSel = pCursor.NextFeature
  Loop
  ' _______________________________________
  ' END LOOP THROUGH EACH BUILDING OF THE LAYER
End Sub

165

170

175

180

209

C Software Codes
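As a reading aid, the constants in the module above can be combined into a single expression (this merely restates the code; the symbol N_occ, the occupancy value read from the GIS attribute table, is introduced here only for illustration). With 70 % occupancy at the time of the earthquake, 80 % of the occupants trapped, 20 % of the trapped dying immediately and 80 % of the remaining trapped dying within 10 days, and a life saving cost of 250 000 USD per person, the expected fatality cost per building is

Fatality = N_occ (0.7 * 0.8 * 0.2 + 0.7 * 0.8 * 0.8 * 0.8) * 250000 = N_occ * 117600 USD.

Each cost entry of the Cost node is further multiplied by the continuous discounting factor exp(-0.02 * Year), corresponding to the 2 % discount rate stated in the comments.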
Matlab files for generating the BPNs for Example 2
function BPN_PSHA_Adapazari_RA_Poisson_Res(T, X1, Y1, X2, Y2, a, b)

% by Yahya Y. Bayraktarli, 30/08/2009
% ETH Zürich
% bayraktarli@hotmail.com
%
% Bayraktarli, Y.Y., Baker, J.W., Faber, M.H., 2009. Uncertainty treatment
% in earthquake modeling using Bayesian networks, Georisk, accepted for
% publication.
%
% This script reads a Bayesian probabilistic network for probabilistic
% seismic hazard analysis, calculates the probability distribution of the
% nodes in the BPN and compiles the BPN with the inference engine of HUGIN.

% For each seismic source, Z a set of BPNs is calculated,
% e.g. for seismic source S2:
for Z = 2:2

% For controlling HUGIN from MATLAB the ActiveX server is loaded and
% then the available functions in this library are used to alter
% objects created from this library. The HUGIN ActiveX server is
% loaded with the following command, which creates a HUGIN API object
% named bpn:
bpn = actxserver('HAPI.Globals');

% An object domain is created which holds the network:
domain = invoke(bpn, 'LoadDomainFromNet', ['C:\Dissertation\' ...
    'Chapter 4-1\BPN\EX2_Typ5_O_Res_single_eps.net'], 0, 0);

% The nodes to be manipulated are defined:
ndEQ_M = invoke(domain, 'GetNodeByName', 'EQ_Magnitude');
ndEQ_R = invoke(domain, 'GetNodeByName', 'EQ_Distance');
ndEps_PGA = invoke(domain, 'GetNodeByName', 'Epsilon_PGA');
ndEps_SD = invoke(domain, 'GetNodeByName', 'Epsilon_SD');
ndPGA = invoke(domain, 'GetNodeByName', 'PGA');
ndSD = invoke(domain, 'GetNodeByName', 'SD');
ndDamage = invoke(domain, 'GetNodeByName', 'Damage');

% The number of discrete states of the nodes Magnitude, Distance,
% Eps_PGA, Eps_SD, PGA and SD is set:
nM = 6; nR = 5; nEps_PGA = 10; nEps_SD = 10; nPGA = 7; nSD = 7; nDamage = 3;
set(ndEQ_M, 'NumberOfStates', nM);
set(ndEQ_R, 'NumberOfStates', nR);
set(ndEps_SD, 'NumberOfStates', nEps_SD);
set(ndEps_PGA, 'NumberOfStates', nEps_PGA);
set(ndSD, 'NumberOfStates', nSD);
set(ndPGA, 'NumberOfStates', nPGA);
set(ndDamage, 'NumberOfStates', nDamage);

% The probability distribution of node Eps_PGA is calculated:
[Eps_PGA, dPGA] = EPS_PGA(nEps_PGA);

% The probability distribution of node Eps_SD is calculated:
[Eps_SD, dSD] = EPS_SD(nEps_SD, nEps_PGA, T);

% The probability distribution of node Distance is calculated:
[R, P_R, Rlimits] = Line_EQ_R(500, nR, X1, Y1, X2, Y2);

% The probability distribution of node Magnitude is calculated:
[Nu_Mmin, M_EQ, P_M, Mlimits] = EQ_M_NonPoisson(5, a, b, nM, Q, Z);
M = M_EQ;

% The discrete probabilities are set for node Magnitude in the BPN:
for i = 1:(nM)
    set(ndEQ_M.Table, 'Data', (i-1), P_M(i));
    set(ndEQ_M, 'StateLabel', (i-1), ['M=' num2str(M(i))]);
end

% The discrete probabilities are set for node Distance in the BPN:
for i = 1:(nR)
    set(ndEQ_R.Table, 'Data', (i-1), P_R(i));
    set(ndEQ_R, 'StateLabel', (i-1), ['R=' num2str(R(i))]);
end

% The discrete probabilities are set for node Eps_PGA in the BPN:
for i = 1:(nEps_PGA)
    set(ndEps_PGA.Table, 'Data', (i-1), Eps_PGA(i));
    set(ndEps_PGA, 'StateLabel', (i-1), ['Eps_PGA=' num2str(dPGA(i))]);
end

% The discrete probabilities are set for node Eps_SD in the BPN:
for i = 1:(nEps_SD*nEps_PGA)
    set(ndEps_SD.Table, 'Data', (i-1), Eps_SD(i));
end

% The conditional probability table of the node PGA is initialized
% with zeros:
for i = 1:(nM*nR*nEps_PGA*nPGA)
    set(ndPGA.Table, 'Data', (i-1), 0);
end

% The conditional probability table of the node SD is initialized
% with zeros:
for i = 1:(nM*nR*nSD*nEps_SD)
    set(ndSD.Table, 'Data', (i-1), 0);
end

% For all combinations of the states in the nodes Magnitude and
% Distance the peak ground accelerations and spectral displacements
% are calculated with the Boore, Joyner and Fumal attenuation model:
[PGABOORE] = PGA(nEps_PGA, M, R);
[SDBOORE] = SD(nEps_SD, nEps_PGA, M, R);

% The limits for the discretisation of the nodes SD and PGA are set:
CoeffPGA = [0 0.2 0.4 0.6 0.8 1.0 1.2 max(PGABOORE)];
CoeffSD = [0 0.005 0.02 0.05 0.1 0.3 0.5 max(SDBOORE)];
for i = 1:(nSD)
    S(i) = (CoeffSD(i) + CoeffSD(i+1))/2;
    SDmm(i) = S(i)*1000;
end

% The labels of the states are set for the nodes Eps_SD, PGA and SD
% in the BPN:
for i = 1:(nEps_SD)
    set(ndEps_SD, 'StateLabel', (i-1), ['Eps_SD=' num2str(dSD(i))]);
end
for i = 1:(nPGA)
    set(ndPGA, 'StateLabel', (i-1), ...
        ['PGA=' num2str((CoeffPGA(i) + CoeffPGA(i+1))/2)]);
end
for i = 1:(nSD)
    set(ndSD, 'StateLabel', (i-1), ...
        ['SD=' num2str((CoeffSD(i) + CoeffSD(i+1))/2)]);
end

% The discrete probabilities are set for node PGA in the BPN:
N = 0;
for i = 1:length(PGABOORE)
    for j = 1:nPGA
        if PGABOORE(i) <= CoeffPGA(j+1) & PGABOORE(i) > CoeffPGA(j)
            set(ndPGA.Table, 'Data', (j+N-1), 1);
            N = N + nPGA;
        end
    end
end

% The discrete probabilities are set for node SD in the BPN:
K = 0;
for i = 1:length(SDBOORE)
    for j = 1:nSD
        if SDBOORE(i) <= CoeffSD(j+1) & SDBOORE(i) > CoeffSD(j)
            set(ndSD.Table, 'Data', (j+K-1), 1);
            K = K + nSD;
        end
    end
end

% The discrete probabilities given the spectral displacement values are
% calculated and set for node Damage:
[P_Damage] = Fragility_Typ5_O_Res_RA_1(SDmm);
for i = 1:(nSD*5*3)
    set(ndDamage.Table, 'Data', (i-1), P_Damage(i));
end

% The BPNs are saved for each source:
Filename = ['C:\Dissertation\Chapter 4-1\BPN\EX2_Typ5_O_Res_S' ...
    num2str(Z) '_Poisson_1.net'];
invoke(domain, 'SaveAsNet', Filename);

end
end

function [P_Damage] = Fragility_Typ5_O_Res_RA_1(SD)

% For the three damage states the parameters of the lognormal
% distribution are given in Table X.
Lambda_Yellow = [3.690 3.724 3.758 3.792 3.829];
Zeta_Yellow = [0.341 0.366 0.390 0.414 0.439];
Lambda_Red = [4.106 4.160 4.215 4.270 4.324];
Zeta_Red = [0.266 0.306 0.346 0.386 0.426];

% For each state of the node SD the probabilities of being in one of the
% three damage states are calculated.
N = 1;
for i = 1:5
    for k = 1:length(SD)
        p1(N) = 1 - logncdf(SD(k), Lambda_Yellow(i), Zeta_Yellow(i));
        p2(N) = logncdf(SD(k), Lambda_Yellow(i), Zeta_Yellow(i)) - ...
            logncdf(SD(k), Lambda_Red(i), Zeta_Red(i));
        p3(N) = logncdf(SD(k), Lambda_Red(i), Zeta_Red(i));
        N = N + 1;
    end
end

% The probabilities of being in one of the three damage states form the
% conditional probability table of the node Damage:
DamageNode1 = [p1; p2; p3];
P_Damage1 = DamageNode1.*(DamageNode1 > 0);
P_Damage = P_Damage1(:);
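For orientation, the logncdf calls in Fragility_Typ5_O_Res_RA_1 correspond to lognormal fragility curves. Writing \lambda_Y, \zeta_Y and \lambda_R, \zeta_R for the parameters of the yellow and red damage state curves and \Phi for the standard normal distribution function, the three entries per spectral displacement value SD (in mm) are

P(\text{no damage} \mid SD) = 1 - \Phi\left(\frac{\ln SD - \lambda_Y}{\zeta_Y}\right),
P(\text{yellow} \mid SD) = \Phi\left(\frac{\ln SD - \lambda_Y}{\zeta_Y}\right) - \Phi\left(\frac{\ln SD - \lambda_R}{\zeta_R}\right),
P(\text{red} \mid SD) = \Phi\left(\frac{\ln SD - \lambda_R}{\zeta_R}\right),

with one pair of (\lambda, \zeta) values per state of the fragility model uncertainty, i.e. per column of the parameter vectors. This is merely a restatement of the code above, not additional model content.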
function [P_C] = CostDistS2
glob = actxserver('HAPI.Globals');
N = 1;
for i = 1:6 %over all states of the magnitude node
    for j = 1:5 %over all states of the distance node
        for k = 1:5 %over all states of the epsilon node
            for m = 1:534
                filename = ['Z:\Dissertation analysis\' ...
                    'Seismic hazard analysis\PhD\GIS-BPN Output\' ...
                    'RA_Typ5_O_Res_Poisson_Property\' ...
                    num2str(m-1) '_aEX2_Typ5_O_Res_S1_Poisson.net'];
                domain = invoke(glob, 'LoadDomainFromNet', filename, 0, 0);
                ndEQ_M = invoke(domain, 'GetNodeByName', 'EQ_Magnitude');
                ndEQ_R = invoke(domain, 'GetNodeByName', 'EQ_Distance');
                ndEps_Frag = invoke(domain, 'GetNodeByName', 'Eps_Frag');
                set(ndEQ_R.Table, 'Data', 0, 0.000001);
                set(ndEQ_R.Table, 'Data', 1, 0.000001);
                set(ndEQ_R.Table, 'Data', 4, 0.000001);
                ndCost = invoke(domain, 'GetNodeByName', 'Cost');
                invoke(domain, 'Compile');
                invoke(ndEQ_M, 'SelectState', (i-1));
                invoke(ndEQ_R, 'SelectState', (j-1));
                invoke(ndEps_Frag, 'SelectState', (k-1));
                invoke(domain, 'Propagate', 'hEquilibriumSum', 'hModeNormal');
                for l = 1:10
                    P_Cost1(m, l) = get(ndCost, 'Belief', (l-1));
                end
            end
            for m = 1:712
                filename = ['Z:\Dissertation analysis\' ...
                    'Seismic hazard analysis\PhD\GIS-BPN Output\' ...
                    'RA_Typ5_N_Res_Poisson_Property\' ...
                    num2str(m-1) '_aEX2_Typ5_N_Res_S1_Y1.net'];
                domain = invoke(glob, 'LoadDomainFromNet', filename, 0, 0);
                ndEQ_M = invoke(domain, 'GetNodeByName', 'EQ_Magnitude');
                ndEQ_R = invoke(domain, 'GetNodeByName', 'EQ_Distance');
                ndEps_Frag = invoke(domain, 'GetNodeByName', 'Eps_Frag');
                set(ndEQ_R.Table, 'Data', 0, 0.000001);
                set(ndEQ_R.Table, 'Data', 1, 0.000001);
                set(ndEQ_R.Table, 'Data', 4, 0.000001);
                ndCost = invoke(domain, 'GetNodeByName', 'Cost');
                invoke(domain, 'Compile');
                invoke(ndEQ_M, 'SelectState', (i-1));
                invoke(ndEQ_R, 'SelectState', (j-1));
                invoke(ndEps_Frag, 'SelectState', (k-1));
                invoke(domain, 'Propagate', 'hEquilibriumSum', 'hModeNormal');
                for l = 1:10
                    P_Cost2(m, l) = get(ndCost, 'Belief', (l-1));
                end
            end
            P_Cost = [P_Cost1; P_Cost2];
            P_C(N).P_C = P_Cost;
            N = N + 1;
            clear d
        end
    end
end
save 1P_S1M6R5 P_C

function [P_C] = CostDistS2_Integrated
glob = actxserver('HAPI.Globals');
N = 1;
for m = 1:534
    filename = ['Z:\Dissertation analysis\Seismic hazard analysis\PhD\' ...
        'GIS-BPN Output\RA_Typ5_O_Res_Poisson\' ...
        num2str(m-1) '_EX2_Typ5_O_Res_S1_Poisson.net'];
    domain = invoke(glob, 'LoadDomainFromNet', filename, 0, 0);
    ndCost = invoke(domain, 'GetNodeByName', 'Cost');
    invoke(domain, 'Compile');
    for l = 1:10
        P_Cost1(m, l) = get(ndCost, 'Belief', (l-1));
    end
end
for m = 1:712
    filename = ['Z:\Dissertation analysis\Seismic hazard analysis\PhD\' ...
        'GIS-BPN Output\RA_Typ5_N_Res_Poisson\' ...
        num2str(m-1) '_EX2_Typ5_N_Res_S1_Poisson.net'];
    domain = invoke(glob, 'LoadDomainFromNet', filename, 0, 0);
    ndCost = invoke(domain, 'GetNodeByName', 'Cost');
    invoke(domain, 'Compile');
    for l = 1:10
        P_Cost2(m, l) = get(ndCost, 'Belief', (l-1));
    end
end
P_Cost = [P_Cost1; P_Cost2];
P_C(N).P_C = P_Cost;
save 1_S1MR P_C
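A reading aid for the two functions above (an interpretation of the code, not additional content): in CostDistS2 the magnitude, distance and fragility-epsilon nodes are instantiated as evidence before propagation, so for every building b a conditional cost distribution P(C_b = c_l | m_i, r_j, eps_k) is exported, and the dependence between buildings through these common variables is preserved for the subsequent aggregation. In CostDistS2_Integrated the marginal distribution is read directly from each compiled network, i.e.

P(C_b = c_l) = \sum_{i,j,k} P(C_b = c_l \mid m_i, r_j, \varepsilon_k) \, P(m_i) P(r_j) P(\varepsilon_k),

which corresponds to aggregating the building losses without the common-event dependence.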

function [EX, M, MVERT, EXVERT, Risk, CX_MEAN] = Aggregation_Poisson_S2

% For each combination of the 150 states (6 magnitude states * 5 distance
% states * 5 epsilon states) the discrete probabilities for the states of
% the node Cost were stored in PC_Poisson_S2
load PC_Poisson_S2

% Total number of buildings
N = 1246;

% States of the node Cost
c = [0;99;168;266;442;1196;2519;3993;6287;14454;14454];

% The temporary number of states for the node Cost S2 is set to 10000
ndis = 10000;

% The state boundaries CXLIM and the representative value for each state
% CX_MEAN are set
Cmax = max(c)*N;
Cmin = 0;
for i = 1:ndis
    CXLIM(i) = i*(Cmax-Cmin)/ndis;
end
CXLIM = [0 CXLIM];
for i = 1:ndis
    CX_MEAN(i) = (CXLIM(i) + CXLIM(i+1))/2;
end
CX_MEAN = CX_MEAN';

% 50000 samples of random numbers are drawn for each of the 1246 buildings
% and depending on the probability of the node Cost, the corresponding
% cost terms are added
for m = 1:150
    display(m)
    A = PC_Poisson_S2(1, m).PC_Poisson_S2;
    A = A';
    p = cumsum(A);
    p_temp = zeros(1, N);
    p = [p_temp; p];
    K = 50000;
    CT = 0;
    for j = 1:K
        r = rand(1, N);
        for i = 1:N
            CT = CT + c(sum((r(i) > p(:, i))));
        end
        CT_T(j) = CT;
        CT = 0;
    end
    M = hist(CT_T, CX_MEAN);
    M = M';
    M = M/K;
    MVERT(m).m = M;
end

% The marginal distribution of each of the 150 combinations is calculated
T = 1;
P_M = [0.724894 0.199652 0.054984 0.015145 0.004171 0.001149];
P_R = [0.000001 0.000001 0.626 0.374 0.000001];
P_Eps = [0.0228 0.1359 0.6826 0.1359 0.0228];
for i = 1:6
    for j = 1:5
        for k = 1:5
            P_Marginal(T) = P_M(i)*P_R(j)*P_Eps(k);
            T = T + 1;
        end
    end
end
Risk = zeros(ndis, 1);

% The distribution of risk is calculated
for T = 1:150
    Risk = Risk + MVERT(1, T).m*P_Marginal(T);
end
save Risk_S2 Risk

function [EX, M, MVERT, CX_MEAN] = Aggregation_Poisson_S2_Integrated

% For each combination of the 150 states (6 magnitude states * 5 distance
% states * 5 epsilon states) the discrete probabilities for the states of
% the node Cost were stored in PC_Poisson_S2
load 1_S1MR

% Total number of buildings
N = 1246;

% States of the node Cost
c = [0;99;168;266;442;1196;2519;3993;6287;14454;14454];

% The temporary number of states for the node Cost S2 is set to 10000
ndis = 10000;

% The state boundaries CXLIM and the representative value for each state
% CX_MEAN are set
Cmax = max(c)*N;
Cmin = 0;
for i = 1:ndis
    CXLIM(i) = i*(Cmax-Cmin)/ndis;
end
CXLIM = [0 CXLIM];
for i = 1:ndis
    CX_MEAN(i) = (CXLIM(i) + CXLIM(i+1))/2;
end
CX_MEAN = CX_MEAN';

% 50000 samples of random numbers are drawn for each of the 1246 buildings
% and depending on the probability of the node Cost, the corresponding
% cost terms are added
A = P_C.P_C;
A = A';
p = cumsum(A);
p_temp = zeros(1, N);
p = [p_temp; p];
K = 50000;
CT = 0;
for j = 1:K
    r = rand(1, N);
    for i = 1:N
        CT = CT + c(sum((r(i) > p(:, i))));
    end
    CT_T(j) = CT;
    CT = 0;
end

% The distribution of risk is calculated
M = hist(CT_T, CX_MEAN);
M = M';
M = M/K;
Risk = M;
save Risk_S2_Integrated Risk
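The sampling loops above implement a simple Monte Carlo aggregation of the portfolio loss (this restates the code; the symbols are introduced here only for illustration). In each of the K = 50 000 simulations the cost state of every building is drawn by inverse-transform sampling from its discrete cost distribution (the cumsum/rand comparison), the representative cost values c are summed to the portfolio loss, and the histogram of the K sums, normalised by K, gives the loss distribution for one combination of magnitude, distance and epsilon states. The total risk curve in Aggregation_Poisson_S2 is then the mixture

f(C) \approx \sum_{i,j,k} f(C \mid m_i, r_j, \varepsilon_k) \, P(m_i) P(r_j) P(\varepsilon_k),

which is exactly the weighting with P_Marginal in the last loop.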
Visual Basic files for evaluating the BPNs in Example 2 within the GIS environment
OpenShape.bas (creates a shapefile, developed by Adrienne Grêt-Regamey)

Attribute VB_Name = "OpenShape"
Option Explicit

Public Function OpenShapefile(spath As String, sFileName As String) _
    As IFeatureClass
Dim MxDoc As IMxDocument
Set MxDoc = ThisDocument
Dim pMap As IMap
Set pMap = MxDoc.FocusMap

'Get access to FeatureClass
Dim pWSF As IWorkspaceFactory
Set pWSF = New ShapefileWorkspaceFactory
Dim pWorkspace As IWorkspace
Set pWorkspace = pWSF.OpenFromFile(spath, 0)
Dim pfWorkspace As IFeatureWorkspace
Set pfWorkspace = pWorkspace

Set OpenShapefile = pfWorkspace.OpenFeatureClass(sFileName)

Dim pFLayer As IFeatureLayer
Set pFLayer = New FeatureLayer
Set pFLayer.FeatureClass = OpenShapefile

Dim pDataset As IDataset
Set pDataset = OpenShapefile
pFLayer.Name = sFileName

Dim pMxDoc As IMxDocument
Set pMxDoc = ThisDocument
pMxDoc.AddLayer pFLayer
pMxDoc.ActiveView.PartialRefresh esriViewGeography, pFLayer, Nothing

End Function

Module1Ex2.bas

Attribute VB_Name = "Module1"
Option Explicit

Private Sub BN_Hugin()
'_______________________________________
'A collection to hold the found parse errors
Dim parseErrors As Collection
'_______________________________________
'Get Map
Dim pMxDoc As IMxDocument
Dim pMap As IMap
Set pMxDoc = ThisDocument
Set pMap = pMxDoc.FocusMap
'_______________________________________
'Get the shapefile with all the data
Dim pFeatureClass As IFeatureClass
Set pFeatureClass = OpenShapefile("Z:<Folder>", "Typ5_O_Res")
Dim pLayer As IFeatureLayer
Set pLayer = pMap.Layer(0)
Dim pFeatureClassSel As IFeatureClass
Set pFeatureClassSel = pLayer.FeatureClass
'_______________________________________
'Create cursor to loop through "building_type"
Dim IndexFID As Integer
IndexFID = pFeatureClassSel.FindField("FID")
Dim IndexOccupancy As String
IndexOccupancy = pFeatureClassSel.FindField("Occupancy")
Dim IndexStoryArea As Double
IndexStoryArea = pFeatureClassSel.FindField("FloorArea")
Dim IndexLiq21 As Double
IndexLiq21 = pFeatureClassSel.FindField("Avg_Liq21")
Dim IndexCostTotal1 As Double
IndexCostTotal1 = pFeatureClassSel.FindField("CostTotal1")
Dim IndexCostTotal2 As Double
IndexCostTotal2 = pFeatureClassSel.FindField("CostTotal2")
Dim IndexCostTotal3 As Double
IndexCostTotal3 = pFeatureClassSel.FindField("CostTotal3")
'_______________________________________
'Define cursor for selected features
Dim pCursor As IFeatureCursor
Set pCursor = pFeatureClassSel.Update(Nothing, False)
'_______________________________________
'Cursor through the selected rows
Dim pRowSel As IFeature
Set pRowSel = pCursor.NextFeature
Dim pOID As Double
Dim pQuery As IQueryFilter

Dim Nu(1 To 9) As Double
Nu(1) = 0.0000239883
Nu(2) = 0.000047863
Nu(3) = 0.0000891251
Nu(4) = 0.0000776247
Nu(5) = 0.0000512861
Nu(6) = 0.000020893
Nu(7) = 0.0000354813
Nu(8) = 0.003019952
Nu(9) = 0.004073803
'_______________________________________
'LOOP THROUGH EACH BUILDING OF THE LAYER
Do While Not pRowSel Is Nothing
    Dim Source As Integer
    For Source = 1 To 9
        '_______________________________________
        'Import BPN from Hugin
        Dim d As HAPI.Domain
        Dim BN As String
        Dim Netze(1 To 3)
        Netze(1) = "EX2_Typ5_O_Res_S"
        Netze(2) = CStr(Source)
        Netze(3) = "_Poisson.net"
        BN = Join(Netze, "")
        Set d = HAPI.LoadDomainFromNet(BN, parseErrors, 10)
        'Get the node with label "..." and name "..." from the domain
        Dim NodeLiq As HAPI.Node
        Set NodeLiq = d.GetNodeByName("Liquefaction")
        Dim NodeLiqTable As HAPI.Table
        Set NodeLiqTable = NodeLiq.Table
        Dim NodeCost As HAPI.Node
        Set NodeCost = d.GetNodeByName("Cost")
        Dim NodeCostTable As HAPI.Table
        Set NodeCostTable = NodeCost.Table
        'Initialize
        Dim lauf As Integer
        Dim M As Integer
        M = IndexLiq21
        For lauf = 0 To 83 Step 2
            NodeLiqTable.Data(lauf) = 1 - pRowSel.Value(M)
            NodeLiqTable.Data(lauf + 1) = pRowSel.Value(M)
            M = M + 1
        Next lauf
        Dim Fatality As Double
        Dim Rebuilding As Double
        Dim Repair As Double
        Dim CostTotal1 As Double
        Dim CostTotal2 As Double
        Dim CostTotal3 As Double
        'Initialize
        Dim init1 As Single
        For init1 = 0 To 29
            NodeCostTable.Data(init1) = 0
        Next init1
        '70% = Occupancy at time of EQ (M2)
        '80% Occupants trapped (M3)
        '20% trapped occupants died immediately (M4)
        '80% Occupants dead after 10 days (M5)
        'LSCS = 250000 USD (GDPpc = 10000 USD)
        Fatality = pRowSel.Value(IndexOccupancy) _
            * (0.7 * 0.8 * 0.2 + 0.7 * 0.8 * (1 - 0.2) * 0.8) * 250000
        'Unit rebuilding cost = 200 USD/m2, importance factor for hospital = 10
        '2% discounting
        Rebuilding = 5 * pRowSel.Value(IndexStoryArea) * 300 * 5
        'Unit rebuilding cost = 200 USD/m2, 15% cost of rebuilding for repair,
        'importance factor for hospitals = 10, discount rate = 2%
        Repair = 5 * pRowSel.Value(IndexStoryArea) * 300 * 0.25
        CostTotal1 = 0 * Nu(Source)
        CostTotal2 = Repair * Nu(Source)
        CostTotal3 = (Rebuilding) * Nu(Source)
        If (CostTotal1 <= 62 And CostTotal1 >= 0) Then
            NodeCostTable.Data(0) = 1
        ElseIf (CostTotal1 < 134 And CostTotal1 > 62) Then
            NodeCostTable.Data(1) = 1
        ElseIf (CostTotal1 < 212 And CostTotal1 >= 134) Then
            NodeCostTable.Data(2) = 1
        ElseIf (CostTotal1 < 329 And CostTotal1 >= 212) Then
            NodeCostTable.Data(3) = 1
        ElseIf (CostTotal1 < 557 And CostTotal1 >= 329) Then
            NodeCostTable.Data(4) = 1
        ElseIf (CostTotal1 < 1863 And CostTotal1 >= 557) Then
            NodeCostTable.Data(5) = 1
        ElseIf (CostTotal1 < 3073 And CostTotal1 >= 1863) Then
            NodeCostTable.Data(6) = 1
        ElseIf (CostTotal1 < 4930 And CostTotal1 >= 3073) Then
            NodeCostTable.Data(7) = 1
        ElseIf (CostTotal1 < 8131 And CostTotal1 >= 4930) Then
            NodeCostTable.Data(8) = 1
        ElseIf (CostTotal1 < 150000 And CostTotal1 >= 8131) Then
            NodeCostTable.Data(9) = 1
        End If
        If (CostTotal2 <= 62 And CostTotal2 >= 0) Then
            NodeCostTable.Data(10) = 1
        ElseIf (CostTotal2 < 134 And CostTotal2 > 62) Then
            NodeCostTable.Data(11) = 1
        ElseIf (CostTotal2 < 212 And CostTotal2 >= 134) Then
            NodeCostTable.Data(12) = 1
        ElseIf (CostTotal2 < 329 And CostTotal2 >= 212) Then
            NodeCostTable.Data(13) = 1
        ElseIf (CostTotal2 < 557 And CostTotal2 >= 329) Then
            NodeCostTable.Data(14) = 1
        ElseIf (CostTotal2 < 1863 And CostTotal2 >= 557) Then
            NodeCostTable.Data(15) = 1
        ElseIf (CostTotal2 < 3073 And CostTotal2 >= 1863) Then
            NodeCostTable.Data(16) = 1
        ElseIf (CostTotal2 < 4930 And CostTotal2 >= 3073) Then
            NodeCostTable.Data(17) = 1
        ElseIf (CostTotal2 < 8131 And CostTotal2 >= 4930) Then
            NodeCostTable.Data(18) = 1
        ElseIf (CostTotal2 < 150000 And CostTotal2 >= 8131) Then
            NodeCostTable.Data(19) = 1
        End If
        If (CostTotal3 <= 62 And CostTotal3 >= 0) Then
            NodeCostTable.Data(20) = 1
        ElseIf (CostTotal3 < 134 And CostTotal3 > 62) Then
            NodeCostTable.Data(21) = 1
        ElseIf (CostTotal3 < 212 And CostTotal3 >= 134) Then
            NodeCostTable.Data(22) = 1
        ElseIf (CostTotal3 < 329 And CostTotal3 >= 212) Then
            NodeCostTable.Data(23) = 1
        ElseIf (CostTotal3 < 557 And CostTotal3 >= 329) Then
            NodeCostTable.Data(24) = 1
        ElseIf (CostTotal3 < 1863 And CostTotal3 >= 557) Then
            NodeCostTable.Data(25) = 1
        ElseIf (CostTotal3 < 3073 And CostTotal3 >= 1863) Then
            NodeCostTable.Data(26) = 1
        ElseIf (CostTotal3 < 4930 And CostTotal3 >= 3073) Then
            NodeCostTable.Data(27) = 1
        ElseIf (CostTotal3 < 8131 And CostTotal3 >= 4930) Then
            NodeCostTable.Data(28) = 1
        ElseIf (CostTotal3 < 150000 And CostTotal3 >= 8131) Then
            NodeCostTable.Data(29) = 1
        End If
        Dim FID, Netz(1 To 3), BPN
        FID = pRowSel.Value(IndexFID)
        Netz(1) = CStr(FID)
        Netz(2) = "_a"
        Netz(3) = BN
        BPN = Join(Netz, "")
        d.SaveAsNet (BPN)
    Next Source
    pRowSel.Value(IndexCostTotal1) = CostTotal1
    pRowSel.Value(IndexCostTotal2) = CostTotal2
    pRowSel.Value(IndexCostTotal3) = CostTotal3
    '_______________________________________
    'Update cursor of selected DHM features
    pCursor.UpdateFeature pRowSel
    Set pRowSel = pCursor.NextFeature
Loop
'_______________________________________
'END LOOP THROUGH EACH BUILDING OF THE LAYER
End Sub
'_______________________________________
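The If/ElseIf cascades in Module1Ex2.bas map the aggregated expected cost of a building onto one of the ten discrete states of the Cost node. As an illustration only, a minimal MATLAB sketch of the same binning is given below; it uses the interval boundaries from the cascade, the variable names are chosen here for illustration and do not appear in the thesis code, and values falling exactly on a boundary may be assigned slightly differently than in the VBA version.

% Bin edges of the Cost node as used in the VBA cascade above (USD)
edges = [0 62 134 212 329 557 1863 3073 4930 8131 150000];

% costTotal is the aggregated expected cost of one building; the state
% index is the bin whose lower edge is not exceeded and whose upper edge is
costTotal = 480;                                      % example value
state = find(costTotal >= edges(1:end-1) & ...
             costTotal <  edges(2:end), 1);           % here: state = 5

% Setting the corresponding entry of the HUGIN cost table to one would
% replace the ten-branch If/ElseIf construct.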

Matlab files for generating the BPNs for Example 3
function BPN_PSHA_Adapazari_BU(T, X1, Y1, X2, Y2, a, b, SS)

% by Yahya Y. Bayraktarli, 30/08/2009
% ETH Zürich
% bayraktarli@hotmail.com
%
% Bayraktarli, Y.Y., Baker, J.W., Faber, M.H., 2009. Uncertainty treatment
% in earthquake modeling using Bayesian networks, Georisk, accepted for
% publication.
%
% This script reads a Bayesian probabilistic network for probabilistic
% seismic hazard analysis, calculates the probability distribution of the
% nodes in the BPN and compiles the BPN with the inference engine of HUGIN.

% For each seismic source, Z and each year, Q a set of BPNs is calculated:
for Z = SS:SS
for Q = 1:50

% For controlling HUGIN from MATLAB the ActiveX server is loaded and
% then the available functions in this library are used to alter
% objects created from this library. The HUGIN ActiveX server is
% loaded with the following command, which creates a HUGIN API object
% named bpn:
bpn = actxserver('HAPI.Globals');

% An object domain is created which holds the network:
domain = invoke(bpn, 'LoadDomainFromNet', ['C:\Dissertation\' ...
    'Chapter 4-1\BPN\EX3_Typ5_O_Res_single_eps.net'], 0, 0);

% The nodes to be manipulated are defined:
ndEQ_M = invoke(domain, 'GetNodeByName', 'EQ_Magnitude');
ndEQ_R = invoke(domain, 'GetNodeByName', 'EQ_Distance');
ndEps_PGA = invoke(domain, 'GetNodeByName', 'Epsilon_PGA');
ndEps_SD = invoke(domain, 'GetNodeByName', 'Epsilon_SD');
ndPGA = invoke(domain, 'GetNodeByName', 'PGA');
ndSD = invoke(domain, 'GetNodeByName', 'SD');
ndDamage = invoke(domain, 'GetNodeByName', 'Damage');

% The number of discrete states of the nodes Magnitude, Distance,
% Eps_PGA, Eps_SD, PGA and SD is set:
nM = 6; nR = 5; nEps_PGA = 10; nEps_SD = 10; nPGA = 7; nSD = 7; nDamage = 3;
set(ndEQ_M, 'NumberOfStates', nM);
set(ndEQ_R, 'NumberOfStates', nR);
set(ndEps_SD, 'NumberOfStates', nEps_SD);
set(ndEps_PGA, 'NumberOfStates', nEps_PGA);
set(ndSD, 'NumberOfStates', nSD);
set(ndPGA, 'NumberOfStates', nPGA);
set(ndDamage, 'NumberOfStates', nDamage);

% The probability distribution of node Eps_PGA is calculated:
[Eps_PGA, dPGA] = EPS_PGA(nEps_PGA);

% The probability distribution of node Eps_SD is calculated:
[Eps_SD, dSD] = EPS_SD(nEps_SD, nEps_PGA, T);

% The probability distribution of node Distance is calculated:
[R, P_R, Rlimits] = Line_EQ_R(500, nR, X1, Y1, X2, Y2);

% The probability distribution of node Magnitude is calculated:
[Nu_Mmin, M_EQ, P_M, Mlimits] = EQ_M_NonPoisson(5, a, b, nM, Q, Z);
M = M_EQ;

% The discrete probabilities are set for node Magnitude in the BPN:
for i = 1:(nM)
    set(ndEQ_M.Table, 'Data', (i-1), P_M(i));
    set(ndEQ_M, 'StateLabel', (i-1), ['M=' num2str(M(i))]);
end

% The discrete probabilities are set for node Distance in the BPN:
for i = 1:(nR)
    set(ndEQ_R.Table, 'Data', (i-1), P_R(i));
    set(ndEQ_R, 'StateLabel', (i-1), ['R=' num2str(R(i))]);
end

% The discrete probabilities are set for node Eps_PGA in the BPN:
for i = 1:(nEps_PGA)
    set(ndEps_PGA.Table, 'Data', (i-1), Eps_PGA(i));
    set(ndEps_PGA, 'StateLabel', (i-1), ['Eps_PGA=' num2str(dPGA(i))]);
end

% The discrete probabilities are set for node Eps_SD in the BPN:
for i = 1:(nEps_SD*nEps_PGA)
    set(ndEps_SD.Table, 'Data', (i-1), Eps_SD(i));
end

% The conditional probability table of the node PGA is initialized
% with zeros:
for i = 1:(nM*nR*nEps_PGA*nPGA)
    set(ndPGA.Table, 'Data', (i-1), 0);
end

% The conditional probability table of the node SD is initialized
% with zeros:
for i = 1:(nM*nR*nSD*nEps_SD)
    set(ndSD.Table, 'Data', (i-1), 0);
end

% For all combinations of the states in the nodes Magnitude and
% Distance the peak ground accelerations and spectral displacements
% are calculated with the Boore, Joyner and Fumal attenuation model:
[PGABOORE] = PGA(nEps_PGA, M, R);
[SDBOORE] = SD(nEps_SD, nEps_PGA, M, R);

% The limits for the discretisation of the nodes SD and PGA are set:
CoeffPGA = [0 0.2 0.4 0.6 0.8 1.0 1.2 max(PGABOORE)];
CoeffSD = [0 0.005 0.02 0.05 0.1 0.3 0.5 max(SDBOORE)];
for i = 1:(nSD)
    S(i) = (CoeffSD(i) + CoeffSD(i+1))/2;
    SDmm(i) = S(i)*1000;
end

% The labels of the states are set for the nodes Eps_SD, PGA and SD
% in the BPN:
for i = 1:(nEps_SD)
    set(ndEps_SD, 'StateLabel', (i-1), ['Eps_SD=' num2str(dSD(i))]);
end
for i = 1:(nPGA)
    set(ndPGA, 'StateLabel', (i-1), ['PGA=' num2str((CoeffPGA(i) + ...
        CoeffPGA(i+1))/2)]);
end
for i = 1:(nSD)
    set(ndSD, 'StateLabel', (i-1), ['SD=' num2str((CoeffSD(i) + ...
        CoeffSD(i+1))/2)]);
end

% The discrete probabilities are set for node PGA in the BPN:
N = 0;
for i = 1:length(PGABOORE)
    for j = 1:nPGA
        if PGABOORE(i) <= CoeffPGA(j+1) & PGABOORE(i) > CoeffPGA(j)
            set(ndPGA.Table, 'Data', (j+N-1), 1);
            N = N + nPGA;
        end
    end
end

% The discrete probabilities are set for node SD in the BPN:
K = 0;
for i = 1:length(SDBOORE)
    for j = 1:nSD
        if SDBOORE(i) <= CoeffSD(j+1) & SDBOORE(i) > CoeffSD(j)
            set(ndSD.Table, 'Data', (j+K-1), 1);
            K = K + nSD;
        end
    end
end

% The conditional probability table of the node Damage is set:
[P_Damage] = Fragility_Typ5_O_Res_RA_2(SDmm);
for i = 1:(nSD*5*2)
    set(ndDamage.Table, 'Data', (i-1), P_Damage(i));
end

% The BPNs are saved for each source and year:
Filename = ['C:\Dissertation\Chapter 4-1\BPN\EX3_Typ5_O_Res_S' ...
    num2str(Z) '_Y' num2str(Q) '.net'];
invoke(domain, 'SaveAsNet', Filename);
end
end

function [P_Damage] = Fragility_Typ5_O_Res_RA_2(SD)

% For the two damage states the parameters of the lognormal
% distribution are given in Table X.
Lambda_Red = [4.160 4.215 4.270];
Zeta_Red = [0.306 0.346 0.386];

% For each state of the node SD the probabilities of being in one of the
% two damage states are calculated.
N = 1;
for i = 1:5
    for k = 1:length(SD)
        p1(N) = 1 - logncdf(SD(k), Lambda_Red(i), Zeta_Red(i));
        p2(N) = logncdf(SD(k), Lambda_Red(i), Zeta_Red(i));
        N = N + 1;
    end
end

% The probabilities of being in one of the two damage states form the
% conditional probability table of the node Damage:
DamageNode1 = [p1; p2];
P_Damage1 = DamageNode1.*(DamageNode1 > 0);
P_Damage = P_Damage1(:);
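In Example 3 only the red (severe) damage state is distinguished from the remaining states, so the conditional probability table of the Damage node reduces to (again simply a restatement of the logncdf calls above)

P(\text{red} \mid SD) = \Phi\left(\frac{\ln SD - \lambda_R}{\zeta_R}\right), \qquad
P(\text{not red} \mid SD) = 1 - \Phi\left(\frac{\ln SD - \lambda_R}{\zeta_R}\right),

with one pair (\lambda_R, \zeta_R) per state of the fragility model uncertainty.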

Visual Basic files for evaluating the BPNs in Example 3 within the GIS environment
OpenShape.bas (creates a shapefile, developed by Adrienne Grêt-Regamey)

Attribute VB_Name = "OpenShape"
Option Explicit

Public Function OpenShapefile(spath As String, sFileName As String) _
    As IFeatureClass
Dim MxDoc As IMxDocument
Set MxDoc = ThisDocument
Dim pMap As IMap
Set pMap = MxDoc.FocusMap

'Get access to FeatureClass
Dim pWSF As IWorkspaceFactory
Set pWSF = New ShapefileWorkspaceFactory
Dim pWorkspace As IWorkspace
Set pWorkspace = pWSF.OpenFromFile(spath, 0)
Dim pfWorkspace As IFeatureWorkspace
Set pfWorkspace = pWorkspace

Set OpenShapefile = pfWorkspace.OpenFeatureClass(sFileName)

Dim pFLayer As IFeatureLayer
Set pFLayer = New FeatureLayer
Set pFLayer.FeatureClass = OpenShapefile

Dim pDataset As IDataset
Set pDataset = OpenShapefile
pFLayer.Name = sFileName

Dim pMxDoc As IMxDocument
Set pMxDoc = ThisDocument
pMxDoc.AddLayer pFLayer
pMxDoc.ActiveView.PartialRefresh esriViewGeography, pFLayer, Nothing

End Function

Module1BUTyp5_O_Res.bas

Attribute VB_Name = "Module1"
Option Explicit

Private Sub BN_Hugin()
'_______________________________________
'A collection to hold the found parse errors
Dim parseErrors As Collection
'_______________________________________
'Get Map
Dim pMxDoc As IMxDocument
Dim pMap As IMap
Set pMxDoc = ThisDocument
Set pMap = pMxDoc.FocusMap
'_______________________________________
'Get the shapefile with all the data
Dim pFeatureClass As IFeatureClass
Set pFeatureClass = OpenShapefile("Z:<Folder>", "Typ5_O_Res")
Dim pLayer As IFeatureLayer
Set pLayer = pMap.Layer(0)
Dim pFeatureClassSel As IFeatureClass
Set pFeatureClassSel = pLayer.FeatureClass
'_______________________________________
'Create cursor to loop through "building_type"
Dim IndexFID As Integer
IndexFID = pFeatureClassSel.FindField("FID")
Dim IndexOccupancy As String
IndexOccupancy = pFeatureClassSel.FindField("Occupancy")
Dim IndexStoryArea As Double
IndexStoryArea = pFeatureClassSel.FindField("FloorArea")
Dim IndexLiq21 As Double
IndexLiq21 = pFeatureClassSel.FindField("Avg_Liq21")
'_______________________________________
'Define cursor for selected features
Dim pCursor As IFeatureCursor
Set pCursor = pFeatureClassSel.Update(Nothing, False)
'_______________________________________
'Cursor through the selected rows
Dim pRowSel As IFeature
Set pRowSel = pCursor.NextFeature
Dim pOID As Double
Dim pQuery As IQueryFilter
'_______________________________________
'LOOP THROUGH EACH BUILDING OF THE LAYER
Do While Not pRowSel Is Nothing
    Dim Source As Integer
    For Source = 1 To 9
        '_______________________________________
        'Import BPN from Hugin
        Dim d As HAPI.Domain
        Dim BN As String
        Dim Netze(1 To 3)
        Netze(1) = "EX3_Typ5_O_Res_S"
        Netze(2) = CStr(Source)
        Netze(3) = "_Poisson.net"
        BN = Join(Netze, "")
        Set d = HAPI.LoadDomainFromNet(BN, parseErrors, 10)
        'Get the node with label "..." and name "..." from the domain
        Dim NodeLiq As HAPI.Node
        Set NodeLiq = d.GetNodeByName("Liquefaction")
        Dim NodeLiqTable As HAPI.Table
        Set NodeLiqTable = NodeLiq.Table
        'Initialize
        Dim lauf As Integer
        Dim M As Integer
        M = IndexLiq21
        For lauf = 0 To 83 Step 2
            NodeLiqTable.Data(lauf) = 1 - pRowSel.Value(M)
            NodeLiqTable.Data(lauf + 1) = pRowSel.Value(M)
            M = M + 1
        Next lauf
        Dim FID, Netz(1 To 3), BPN
        FID = pRowSel.Value(IndexFID)
        Netz(1) = CStr(FID)
        Netz(2) = "_"
        Netz(3) = BN
        BPN = Join(Netz, "")
        d.SaveAsNet (BPN)
    Next Source
    '_______________________________________
    'Update cursor of selected DHM features
    pCursor.UpdateFeature pRowSel
    Set pRowSel = pCursor.NextFeature
Loop
'_______________________________________
'END LOOP THROUGH EACH BUILDING OF THE LAYER
End Sub
'_______________________________________

function UpdateBPN

% From the GIS map the ID of the buildings with observed damage state
% along with the damage state is loaded
load BU_BuildingID_Damage
bpn = actxserver('HAPI.Globals');

% Initially the nodes for modeling uncertainty are normal distributed
P_Lambda_Yellow = [0.159 0.682 0.159];
P_Zeta_Yellow = [0.159 0.682 0.159];
P_Lambda_Red = [0.159 0.682 0.159];
P_Zeta_Red = [0.159 0.682 0.159];

for i = 1:203
    m = f(i, 1); %ID of the building with observed damage state
    n = f(i, 2); %damage state of the building, 0 = No damage, 1 = Damage
    if n == 1
        n = n + 1; %state number for red is 2 and not 1, hence the addition of 1
    end
    Filename = ['Z:\Dissertation analysis\Seismic hazard analysis\PhD\' ...
        'GIS-BPN Output\BU_O_Res\' num2str(m) '_BU_O_Res_S3.net'];
    domain = invoke(bpn, 'LoadDomainFromNet', Filename, 0, 0);

    ndLambda_Yellow = invoke(domain, 'GetNodeByName', 'Lambda_Yellow');
    ndZeta_Yellow = invoke(domain, 'GetNodeByName', 'Zeta_Yellow');
    ndLambda_Red = invoke(domain, 'GetNodeByName', 'Lambda_Red');
    ndZeta_Red = invoke(domain, 'GetNodeByName', 'Zeta_Red');
    ndDamage = invoke(domain, 'GetNodeByName', 'Damage');

    % Initially the nodes for modeling uncertainty are normal distributed;
    % recursively, given the damage state of the i-th building, they are
    % updated
    for i = 1:3
        set(ndLambda_Yellow.Table, 'Data', (i-1), P_Lambda_Yellow(i));
        set(ndZeta_Yellow.Table, 'Data', (i-1), P_Zeta_Yellow(i));
        set(ndLambda_Red.Table, 'Data', (i-1), P_Lambda_Red(i));
        set(ndZeta_Red.Table, 'Data', (i-1), P_Zeta_Red(i));
    end
    invoke(domain, 'Compile');
    invoke(ndDamage, 'SelectState', n);
    invoke(domain, 'Propagate', 'hEquilibriumSum', 'hModeNormal');
    for l = 1:3
        P_Lambda_Yellow(l) = get(ndLambda_Yellow, 'Belief', (l-1));
        P_Zeta_Yellow(l) = get(ndZeta_Yellow, 'Belief', (l-1));
        P_Lambda_Red(l) = get(ndLambda_Red, 'Belief', (l-1));
        P_Zeta_Red(l) = get(ndZeta_Red, 'Belief', (l-1));
    end
end
filename = ['Z:\Dissertation analysis\Seismic hazard analysis\PhD\' ...
    'GIS-BPN Output\BU_O_Res\' num2str(m) '_BU_O_Res_S3_FINAL.net'];
invoke(domain, 'SaveAsNet', filename);
P_Lambda_Yellow
P_Zeta_Yellow
P_Lambda_Red
P_Zeta_Red
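The recursion in UpdateBPN can be summarised in formula form (this restates the loop above; theta denotes the discretised fragility parameters Lambda_Yellow, Zeta_Yellow, Lambda_Red and Zeta_Red, and D_i the observed damage state of the i-th inspected building):

P(\theta \mid D_1, \ldots, D_i) \propto P(D_i \mid \theta) \, P(\theta \mid D_1, \ldots, D_{i-1}),

i.e. the posterior beliefs obtained after propagating the evidence for one building are entered as the prior distributions of the parameter nodes for the next building, and after all 203 observations the updated network is saved.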
Curriculum Vitae

Yahya Y. Bayraktarli
Born 3 February 1972 in Mannheim (D)
Citizen of Germany

1995          Bachelor of Science in Civil Engineering,
              Bogazici University, Istanbul, Turkey
1995 - 1999   Diploma in Structural Engineering,
              University of Karlsruhe, Karlsruhe, Germany
1999 - 2000   Student exchange, INSA Lyon, Lyon, France
2000          Diploma Thesis, University of Karlsruhe, Karlsruhe, Germany
2000 - 2002   Research and teaching assistant,
              Collaborative Research Center 461,
              University of Karlsruhe, Karlsruhe, Germany
2002 - 2008   Research and teaching assistant, ETH Zurich, Zurich, Switzerland
2008          Technical scientific staff,
              Risk analyses of the existing nuclear power plant Mühleberg,
              Bernische Kraftwerke AG, Bern, Switzerland
