Applied Mathematics in Engineering and Reliability contains papers presented at the International
Conference on Applied Mathematics in Engineering and Reliability (ICAMER 2016, Ho Chi Minh City,
Viet Nam, 4-6 May 2016). The book covers a wide range of topics within mathematics applied in
reliability, risk and engineering.
Applied Mathematics in Engineering and Reliability will be of interest to academics and professionals
working in a wide range of industrial, governmental and academic sectors, including Electrical and
Electronic Engineering, Safety Engineering, Information Technology and Telecommunications, Civil
Engineering, Energy Production, Infrastructures, Insurance and Finance, Manufacturing, Mechanical
Engineering, Natural Hazards, Nuclear Engineering, Transportation, and Policy Making.
APPLIED MATHEMATICS IN ENGINEERING AND RELIABILITY
Editors
Radim Briš & Václav Snášel
Faculty of Electrical Engineering and Computer Science
VŠB-Technical University of Ostrava, Czech Republic
Phan Dao
European Cooperation Center, Ton Duc Thang University, Vietnam
All rights reserved. No part of this publication or the information contained herein may be reproduced,
stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical,
by photocopying, recording or otherwise, without prior written permission from the publisher.
Although all care is taken to ensure the integrity and quality of this publication and the information
herein, no responsibility is assumed by the publishers or the authors for any damage to property or
persons as a result of operation or use of this publication and/or the information contained herein.
Table of contents
Preface ix
Organization xi
Message from Professor Vinh Danh Le xiii
Introduction xv
Inverse problems
On a multi-dimensional initial inverse heat problem with a time-dependent coefficient 255
C.D. Khanh & N.H. Tuan
Preface
Organization
HONORARY CHAIRS
CONFERENCE CHAIRMAN
CONFERENCE CO-CHAIRMEN
ORGANIZING INSTITUTIONS
SPONSORED BY
Welcome to the 1st International Conference on Applied Mathematics in Engineering and Reliability
(ICAMER 2016), held at Ton Duc Thang University, Vietnam. This Conference aims to offer a forum
for scientists, researchers, and managers from universities and companies to share their research findings
and experiences in the field. In recognition of its special meaning and broad influence, we consider the
organization of this Conference one of the strategic activities in our three-decade development into an
applied research university.
Ton Duc Thang University (TDTU) has always described itself as a young, inspiring and dynamically
growing higher education institution in vibrant Ho Chi Minh City.
TDTU is steadily growing to meet the expanding demand for higher education as well as high-quality
human resources in Vietnam. With fifteen faculties and around 25,000 students, the University is now
ranked among the largest and fastest growing universities in Vietnam in all aspects.
On behalf of TDTU, the host institution of ICAMER 2016, I would like to express my sincere
appreciation to our great partners, the European Safety and Reliability Association (ESRA) and
VŠB-Technical University of Ostrava (Czech Republic), for their great efforts in organizing this Conference. I would
also like to send my special thanks to conference committees, track chairs, reviewers, speakers and authors
around the world for their contributions to and interest in our event.
I believe that you will have an interesting and fruitful conference in Vietnam. I really look forward to
welcoming all of you at our campus and hope that this Conference will start a long-term partnership
between you and our university.
February 2016
Introduction
The Conference covers a number of topics within engineering and mathematics. The Conference is
especially focused on advanced engineering mathematics which is frequently used in reliability, risk and
safety technologies.
ABSTRACT: This paper focuses on the Bayesian analysis of the Power-Law Process. We investigate the
possibility of a natural conjugate prior. Relying on the work of Huang and Bier (1998), we introduce and
study the H-B distribution. This distribution is a natural conjugate prior, since the posterior distribution
is again an H-B distribution. We describe a strategy to elicit the prior distribution parameters. Results on
simulated and real data are presented.
That is to say, the posterior is again an H-B distribution, now with parameters n + a and b + S_n, where

S_n = Σ_{i=1}^{n} log(t_n / t_i).

Assuming a quadratic loss, the Bayes estimators are the expectations of the posterior distributions.
Therefore, by (4) and (2), we have:

β̂_Bayes = (n + a) / (b + S_n).  (6)

Therefore β̂_Bayes can be expressed as a convex combination of the MLE and the prior expectation
of β given a and b:

β̂_Bayes = (1 − ω) β̂_MLE + ω (a / b),  (9)

where ω = b / (b + S_n). This expression will be used in the next section to elicit prior parameters.
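The convex-combination form of the Bayes estimator can be checked numerically. The sketch below is illustrative only: the failure times are made up, and a, b stand for the (hypothetical) conjugate-prior parameters; it is not the paper's code.

```python
import math

def plp_beta_estimates(times, a, b):
    # ML and Bayes estimates of the PLP shape parameter (illustrative sketch).
    # times: ordered failure times t_1 < ... < t_n; a, b: hypothetical prior parameters.
    n = len(times)
    s_n = sum(math.log(times[-1] / t) for t in times)  # S_n; the last term is log(1) = 0
    beta_mle = n / s_n
    beta_bayes = (n + a) / (b + s_n)
    # convex-combination form: (1 - w) * MLE + w * (a / b), with w = b / (b + S_n)
    w = b / (b + s_n)
    assert abs(beta_bayes - ((1 - w) * beta_mle + w * (a / b))) < 1e-12
    return beta_mle, beta_bayes
```

As S_n grows with the number of observed failures, the weight on the prior mean a/b shrinks and the Bayes estimate approaches the MLE.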
R. Briš
Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science,
VŠB-Technical University of Ostrava, Ostrava, Czech Republic
T.T. Thach
Ton Duc Thang University, Faculty of Mathematics and Statistics, Ho Chi Minh City, Vietnam
ABSTRACT: Engineering systems are subject to continuous stresses and shocks which may (or may
not) cause a change in the failure pattern of the system with unknown probability. A mixture of failure
rate models can be used to represent such frequent realistic situations, and the corresponding failure
time distribution is derived. Classical and Bayesian estimation of the parameters and reliability
characteristics of this failure time distribution is the subject matter of the paper, where particular emphasis
is put on the Weibull wear-out failure model.
Let t1, t2, ..., tn be the random failure times of n items under test whose failure time distribution is as
given in equation (3). Then the likelihood function is

L(t1, t2, ..., tn | λ, p) = λ^n Π_{i=1}^{n} (p + (1 − p) t_i^k) exp{−λ Σ_{i=1}^{n} [p t_i + ((1 − p)/(k + 1)) t_i^{k+1}]}.  (6)

5.1 MLEs

5.1.1 Case 1: When p is known
To find the MLE of λ, say λ̂, we consider

log L(t1, t2, ..., tn | λ, p) = n log λ + Σ_{i=1}^{n} log(p + (1 − p) t_i^k) − λ Σ_{i=1}^{n} [p t_i + ((1 − p)/(k + 1)) t_i^{k+1}].  (7)

Now,

∂ log L(t1, t2, ..., tn | λ, p) / ∂λ = 0  (8)

gives

λ̂ = n / Σ_{i=1}^{n} [p t_i + ((1 − p)/(k + 1)) t_i^{k+1}].  (9)

By using the invariance property of MLEs,

1. The MLE for R(t), say R̂1(t), will be

R̂1(t) = exp{−λ̂ [p t + ((1 − p)/(k + 1)) t^{k+1}]}.  (10)

5.1.2 Case 2: When λ is known
To find the MLE of p, say p̂, we consider

∂ log L(t1, t2, ..., tn | λ, p) / ∂p = 0,  (13)

or

Σ_{i=1}^{n} (1 − t_i^k) / (p + (1 − p) t_i^k) − λ Σ_{i=1}^{n} (t_i − t_i^{k+1}/(k + 1)) = 0.  (14)

An estimate of p, i.e. p̂, can be obtained from equation (14) by using some suitable numerical
iteration method. By using the invariance property of MLEs,

1. The MLE for R(t), say R̂2(t), will be

R̂2(t) = exp{−λ [p̂ t + ((1 − p̂)/(k + 1)) t^{k+1}]}.  (15)

2. The MLE for h(t), say ĥ2(t), will be

ĥ2(t) = λ (p̂ + (1 − p̂) t^k).  (16)

3. The MLE for the MTTF will be

MTTF̂2 = MTTF(λ, p̂),  (17)

which can be obtained by substituting p̂ into formula (5) and integrating.

5.1.3 Case 3: When both λ and p are unknown
To find the MLEs of λ and p, we consider

∂ log L(t1, t2, ..., tn | λ, p) / ∂λ = 0  (18)

and

∂ log L(t1, t2, ..., tn | λ, p) / ∂p = 0.  (19)

Equation (18) gives

λ̂ = n / Σ_{i=1}^{n} [p t_i + ((1 − p)/(k + 1)) t_i^{k+1}],  (20)

and substituting (20) into (19) yields an equation in p alone:

Σ_{i=1}^{n} (1 − t_i^k) / (p + (1 − p) t_i^k) − n Σ_{i=1}^{n} (t_i − t_i^{k+1}/(k + 1)) / Σ_{i=1}^{n} [p t_i + ((1 − p)/(k + 1)) t_i^{k+1}] = 0.  (21)

Equation (21) may be solved for p̂ by Newton-Raphson or other suitable iterative methods, and
this value is substituted in equation (20) to obtain λ̂. By using the invariance property of MLEs,

1. The MLE for R(t), say R̂3(t), will be

R̂3(t) = exp{−λ̂ [p̂ t + ((1 − p̂)/(k + 1)) t^{k+1}]}.  (22)

2. The MLE for h(t), say ĥ3(t), will be

ĥ3(t) = λ̂ (p̂ + (1 − p̂) t^k).  (23)

3. The MLE for the MTTF will be

MTTF̂3 = MTTF(λ̂, p̂),  (24)

which can be obtained by substituting λ̂ and p̂ into formula (5) and integrating.

6 BAYESIAN ESTIMATION

6.1 Case 1: when p is known

6.1.1 Non-informative prior
We are going to use the non-informative prior

g(λ) = 1/λ.  (25)

The likelihood function in equation (6) may be rewritten as

L(t1, t2, ..., tn | λ, p) = [Π_{i=1}^{n} (p + (1 − p) t_i^k)] λ^n e^{−λ T2},

where T2 = Σ_{i=1}^{n} [p t_i + ((1 − p)/(k + 1)) t_i^{k+1}].

In view of the prior in equation (25), the posterior distribution of λ given t1, t2, ..., tn is given by

π(λ | t1, t2, ..., tn, p) = L(t1, t2, ..., tn | λ, p) g(λ) / ∫_0^∞ L(t1, t2, ..., tn | λ, p) g(λ) dλ
= T2^n λ^{n−1} e^{−λ T2} / Γ(n),  λ > 0.  (29)

Therefore, the Bayes estimate of λ, say λ*, under the squared-error loss function becomes

λ* = E(λ | t1, t2, ..., tn, p) = n / T2,  (30)

i.e. it reduces to the usual ML estimator, which is in agreement with Martz & Waller (1982).

Also, the Bayes estimate of R(t), say R1*(t), is

R1*(t) = E(R(t) | t1, t2, ..., tn, p) = ∫_0^∞ e^{−λ T3} π(λ | t1, t2, ..., tn, p) dλ = 1 / (1 + T3/T2)^n,  (31)

where T3 = p t + ((1 − p)/(k + 1)) t^{k+1}.

Similarly, the Bayes estimate of h(t), say h1*(t), is

h1*(t) = E(h(t) | t1, t2, ..., tn, p) = ∫_0^∞ λ (p + (1 − p) t^k) π(λ | t1, t2, ..., tn, p) dλ
= (n / T2) (p + (1 − p) t^k).  (32)
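For the case of known p, the closed-form MLE of λ and the invariance-based estimates of R(t) and h(t) can be sketched as follows. This is an illustrative reconstruction with made-up sample data, not the paper's code.

```python
import math

def lambda_mle(times, p, k):
    # Closed-form MLE of the scale parameter when p is known:
    # lambda_hat = n / sum(p*t_i + (1-p)*t_i^(k+1)/(k+1))
    t2 = sum(p * t + (1.0 - p) * t ** (k + 1) / (k + 1) for t in times)
    return len(times) / t2

def reliability(t, lam, p, k):
    # R(t) = exp{-lam * [p*t + (1-p)*t^(k+1)/(k+1)]}
    return math.exp(-lam * (p * t + (1.0 - p) * t ** (k + 1) / (k + 1)))

def hazard(t, lam, p, k):
    # h(t) = lam * (p + (1-p)*t^k), i.e. h(t) = -d/dt log R(t)
    return lam * (p + (1.0 - p) * t ** k)
```

A quick consistency check: the hazard equals the negative log-derivative of the reliability, which can be verified by numerical differentiation.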
6.1.2 Informative prior
In view of the gamma prior in equation (33), with parameters α and β, the posterior distribution of λ
given t1, t2, ..., tn is given by

π(λ | t1, t2, ..., tn, p) = L(t1, t2, ..., tn | λ, p) g(λ) / ∫_0^∞ L(t1, t2, ..., tn | λ, p) g(λ) dλ
= (T2 + β)^{n+α} λ^{n+α−1} e^{−λ(T2+β)} / Γ(n + α),  λ > 0.  (34)

6.2 Case 2: when λ is known

The likelihood function in equation (6) may be rewritten as

L(t1, t2, ..., tn | λ, p) = λ^n e^{−λ (p T4 + T5)} Σ_{j=0}^{n} k_j p^{n−j} (1 − p)^j,  (39)

where T4 = Σ_{i=1}^{n} (t_i − t_i^{k+1}/(k + 1)), T5 = Σ_{i=1}^{n} t_i^{k+1}/(k + 1), and the coefficients k_j
are obtained by expanding the product Π_{i=1}^{n} (p + (1 − p) t_i^k) in powers of p and (1 − p), with

k_0 = 1.  (40)

6.2.1 Non-informative prior
Expanding e^{−λ p T4} = Σ_{r=0}^{∞} (−λ T4)^r p^r / r!, the posterior distribution of p given
t1, t2, ..., tn is

π(p | t1, t2, ..., tn, λ) = L(t1, t2, ..., tn | λ, p) g(p) / ∫_0^1 L(t1, t2, ..., tn | λ, p) g(p) dp.

Therefore, the Bayes estimate of p, say p*, under the squared-error loss function becomes

p* = E(p | t1, t2, ..., tn, λ) = ∫_0^1 p π(p | t1, t2, ..., tn, λ) dp
= [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r − j + 2, j + 1) k_j]
/ [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r − j + 1, j + 1) k_j].  (46)

The Bayes estimate of R(t), say R2*(t), is

R2*(t) = E(R(t) | t1, t2, ..., tn, λ) = ∫_0^1 exp{−λ [p t + ((1 − p)/(k + 1)) t^{k+1}]} π(p | t1, t2, ..., tn, λ) dp
= e^{−λ t^{k+1}/(k+1)} [Σ_{r=0}^{∞} Σ_{m=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) ((−λ (t − t^{k+1}/(k + 1)))^m / m!) B(n + r + m − j + 1, j + 1) k_j]
/ [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r − j + 1, j + 1) k_j].  (47)

Similarly, the Bayes estimate of h(t), say h2*(t), is

h2*(t) = ∫_0^1 λ (p + (1 − p) t^k) π(p | t1, t2, ..., tn, λ) dp = λ ((1 − t^k) p* + t^k)
= λ (p* + (1 − p*) t^k),  (48)

where p* is given in equation (46).

6.2.2 Informative prior
Let the prior distribution of p be a Beta distribution with p.d.f.

g(p) = p^{a−1} (1 − p)^{b−1} / B(a, b),  a, b > 0, 0 < p < 1.  (49)

In view of the prior in equation (49), the posterior distribution of p given t1, t2, ..., tn is given by

π(p | t1, t2, ..., tn, λ) = [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) p^{n+r+a−j−1} (1 − p)^{b+j−1} k_j]
/ [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r + a − j, b + j) k_j].  (50)

Therefore, the Bayes estimate of p, say p*, under the squared-error loss function becomes

p* = E(p | t1, t2, ..., tn, λ) = ∫_0^1 p π(p | t1, t2, ..., tn, λ) dp
= [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r + a − j + 1, b + j) k_j]
/ [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r + a − j, b + j) k_j].  (51)

The Bayes estimate of R(t), say R2*(t), is

R2*(t) = e^{−λ t^{k+1}/(k+1)} [Σ_{r=0}^{∞} Σ_{m=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) ((−λ (t − t^{k+1}/(k + 1)))^m / m!) B(n + r + a + m − j, b + j) k_j]
/ [Σ_{r=0}^{∞} Σ_{j=0}^{n} ((−λ T4)^r / r!) B(n + r + a − j, b + j) k_j],  (52)

and the corresponding Bayes estimate of h(t) is

h2*(t) = λ (p* + (1 − p*) t^k),  (53)

where p* is given in equation (51).
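The series expressions for p* can be cross-checked by direct numerical integration of the posterior of p. The sketch below computes the Bayes estimate on a grid under a Beta(a, b) prior; the grid integration and all parameter values are ours and only illustrative, replacing the closed-form series.

```python
import math

def posterior_mean_p(times, lam, k, a=1.0, b=1.0, grid=2000):
    # p* = E(p | data): numerically integrate the likelihood (6) times a
    # Beta(a, b) prior over p in (0, 1) using the midpoint rule.
    # a = b = 1 reproduces the uniform (non-informative) prior.
    def log_post(p):
        s = sum(math.log(p + (1.0 - p) * t ** k) for t in times)
        t2 = sum(p * t + (1.0 - p) * t ** (k + 1) / (k + 1) for t in times)
        return (len(times) * math.log(lam) + s - lam * t2
                + (a - 1.0) * math.log(p) + (b - 1.0) * math.log(1.0 - p))
    ps = [(i + 0.5) / grid for i in range(grid)]
    ws = [math.exp(log_post(p)) for p in ps]
    z = sum(ws)
    return sum(p * w for p, w in zip(ps, ws)) / z
```

Such a grid check is a convenient way to validate truncation levels for the infinite sums over r and m.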
REFERENCES
R. Briš
Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science,
VŠB-Technical University of Ostrava, Ostrava, Czech Republic
ABSTRACT: A statistical decision problem based on the Bayes approach is presented in the paper.
Bayes inference for hypothesis testing is introduced in general, as well as its extension to acceptance
sampling and quality control. A common decision problem for a quality characteristic is described,
taking into account the cost dependent on the decision. An asymmetric loss function is proposed to
quantify cost-oriented decisions and, mainly, to set the optimal Bayes decision rule for an acceptance
sampling plan. Application of the method to a real example from industry is clearly demonstrated.
p(x) = ∫ f(x | λ) g(λ) dλ.  (19)

Of course, the relative magnitudes of the losses m1, m3, and k, as well as the scale factor p(x), are
important in determining the optimal decision, but in general the sample will be accepted when the
likelihood of x for those parameter values λ > λ0 associated with an unacceptable sample is small.

4 EXAMPLE FROM INDUSTRY

4.1 Reliability demonstration testing the failure rate of an exponential distribution

We give a good motivating example from industry (Bris 2002) for the use of the BDR.

Notation:
MLE: Maximum Likelihood Estimator
RDT: Reliability Demonstration Testing
λ0: requested (by consumer) limit of the failure rate at normal condition
λ1: failure rate tested at given (normal) temperature condition
λ2: failure rate in accelerated condition
i: index for condition given by temperature; i = 1, 2
ti: total test time in condition i during which ri failures have occurred
(ti, ri): parameters of the acceptance sample plan in condition i
α*: posterior risk, Pr{λ1 > λ0 | passing RDT}
1 − α*: consumer's posterior assurance
A: acceleration factor

The consumer asks for the following limitation on the reliability of the delivered Electronic
Components (EC): λ1 ≤ λ0. The MLE of λ1 for time-censored data is given by

λ̂1 = r1 / t1.  (20)

However, the total time t1 is usually very large even when r1 = 1 is admitted. Let us therefore perform
the RDT in an accelerated condition, where

λ̂2 = r2 / t2 ≤ A λ0.  (22)

Using the Bayes approach, we consider the failure rates and the acceleration factor to vary randomly
according to some prior distributions. To meet condition (22), possible acceptance sample plans for
the RDT are presented in Table 1, given the real data: λ0 = 8.76e-4 /year, A = 13.09.

The posterior distribution of λ2, conditioned by (t2, r2) and derived in (Bris 2000), is given as follows:

h(λ2 | t2, r2) ∝ λ2^{r2} exp(−λ2 t2) ∫_0^b [exp(−a λ2 u) / (c + u)^2] du,  (23)

where
a = 7.247e6
b = 2.1063
c = 0.001435

Having the knowledge of the posterior, we can optimize not only the variance Var{λ2 | t2, r2}
(demonstrated in Figure 2, in units [hour^-2]) and the mean E{λ2 | t2, r2}; moreover, we are able to
quantify the consumer's posterior assurance 1 − α*, given by the following relationship:

1 − α* = Pr{λ1 ≤ λ0 | passing RDT},  (24)

as demonstrated in Figure 3.

Using the criterion of small variance of λ2 at acceptable test time for the RDT, sample plan P2
seems to be optimal. The process of optimization is demonstrated in Figure 2, which corresponds
with the results in Table 1. Attaining a variance of 7.6e-13 /hour^2 (i.e. 0.5832e-4 /year^2) requires a
minimal total test time t2 = 1.53e6 test hours at maximally 2 allowed failures (r2 = 2). The calculated
posterior assurance that λ1 ≤ λ0, i.e. 1 − α*, in a test that has passed is 77% (Figure 3).

The optimal Bayes decision rule derived above is to accept the sample whenever:

(m3 + k) ∫_{Aλ0}^{∞} (λ2 − Aλ0) h(λ2 | t2, r2) dλ2 ≤ m2 ∫_0^{Aλ0} (Aλ0 − λ2) h(λ2 | t2, r2) dλ2.  (25)

Table 1. Acceptance sample plans for the RDT: total test time (10^6 test hours), posterior mean
(e-2/year) and posterior variance (e-4/year^2) for each plan.
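A decision rule of this expected-loss type can be sketched numerically for any posterior density h. Everything below is illustrative and ours: the midpoint-rule integration, the loss weighting, and the exponential posterior used in the sanity check are not the paper's setup.

```python
def accept_sample(h, lam0, m2, m3, k, hi, n=100000):
    # Accept when the expected overshoot loss of accepting, weighted by (m3 + k),
    # does not exceed the expected undershoot loss of rejecting, weighted by m2.
    # h: posterior density of the failure rate; lam0: acceptance limit;
    # hi: upper integration bound (posterior mass beyond hi assumed negligible).
    step = hi / n
    xs = [(i + 0.5) * step for i in range(n)]
    over = step * sum(max(x - lam0, 0.0) * h(x) for x in xs)   # E[(lam - lam0)+]
    under = step * sum(max(lam0 - x, 0.0) * h(x) for x in xs)  # E[(lam0 - lam)+]
    return (m3 + k) * over <= m2 * under
```

For a standard exponential posterior and lam0 = 1, both expected losses equal e^{-1}, so the decision flips exactly when the loss weights cross.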
5 CONCLUSIONS
ABSTRACT: The particle filter and the ensemble Kalman filter are two Bayesian filtering algorithms
adapted to nonlinear state-space models. The problem of nonlinear Bayesian filtering is challenging
when the state dimension is high, since approximation methods tend to degrade as the dimension
increases. We experimentally investigate the performance of the particle filter and the ensemble
Kalman filter as the state dimension increases. We run simulations with two different state dynamics:
a simple linear dynamics, and the Lorenz96 nonlinear dynamics. In our results, the approximation
error of both algorithms grows at a linear rate when the dimension increases. This linear degradation
appears to be much slower for the ensemble Kalman filter than for the particle filter.
The algorithm MSE and the model MSE are estimated over R independent runs as

algorithm MSE_n = (1/R) Σ_{r=1}^{R} |X̂_n^r − E[X_n | Y_{0:n}^r]|^2,

model MSE_n = (1/R) Σ_{r=1}^{R} |E[X_n | Y_{0:n}^r] − X_n^r|^2,

where X̂_n^r is the algorithm approximation of the posterior mean using the observations Y_{0:n}^r.

5 HIGH-DIMENSIONAL SIMULATIONS

5.1 Linear dynamics
We firstly consider a very simple state-space model where the state dynamics and the observation
model are linear with additive Gaussian white noise,

X_k = X_{k−1} + W_k,
Y_k = H X_k + V_k,

where W_k ~ N(0, Q) and V_k ~ N(0, σ^2), and where H = (1 0 ... 0) is a 1 × m matrix. The
initial state X_0 follows the distribution N(0, Q_0).

5.2 Lorenz96 nonlinear dynamics
The Lorenz96 dynamics is given by

dx_{t,j}/dt = (x_{t,j+1} − x_{t,j−2}) x_{t,j−1} − x_{t,j} + f,  (12)

for j ∈ {1, ..., m}, where x_t = (x_{t,1}, ..., x_{t,m})^T ∈ R^m and f is a scalar constant parameter.
By convention, x_{t,−1} = x_{t,m−1}, x_{t,0} = x_{t,m} and x_{t,m+1} = x_{t,1}. Equation (12) is
discretized with the (fourth order) Runge-Kutta method to get the discrete-time dynamics
x_{t+h} = F(x_t), where h is the discretization step. The state dynamics in the state-space model
is then

X_k = F(X_{k−1}) + W_k,

where W_k ~ N(0, Q) and X_0 ~ N(m_0, Q_0). The observation is the first component of the vector
X_k disrupted by an additive Gaussian noise with variance σ^2, i.e. the observation model is the same
as in Section 5.1 above.

The true state sequence is generated thanks to the discretized Lorenz96 dynamics, without dynamics
noise, with a fixed initial condition x_0, i.e. X_0^r = x_0 and X_k^r = F(X_{k−1}^r) for all
r ∈ {1, ..., R} and k ∈ {1, ..., n}. Thus, the true state sequence is deterministic here and needs to
be generated only once. Note that, unlike in the linear Gaussian model in Section 5.1, the exact
posterior mean cannot be computed here. We must therefore approximate it accurately to get a
reference value to compute the MSEs.

The model parameters are set as follows: m_0 = (0 ... 0)^T, Q_0 = 64 I_m, Q = 0.2 I_m, σ^2 = 1,
f = 8, h = 0.005. The number of time iterations is set to n = 2000.
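As an illustration of the kind of algorithm compared in these experiments, a minimal bootstrap (SIR) particle filter for a scalar version of the linear model can be sketched as follows. This is illustrative only: it is not the paper's Algorithm 1, and the noise variances q and r below are made-up toy values.

```python
import math, random

def bootstrap_pf(ys, n_particles=500, q=0.2, r=1.0):
    # Bootstrap particle filter for X_k = X_{k-1} + W_k, Y_k = X_k + V_k,
    # with W_k ~ N(0, q) and V_k ~ N(0, r): propagate, weight, resample.
    parts = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        parts = [x + random.gauss(0.0, math.sqrt(q)) for x in parts]  # propagate
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]       # weight
        z = sum(ws)
        ws = [w / z for w in ws]
        means.append(sum(w * x for w, x in zip(ws, parts)))           # posterior mean
        parts = random.choices(parts, weights=ws, k=n_particles)      # resample
    return means
```

On a simulated trajectory, the filtered mean should have a noticeably smaller MSE than the raw observations, since the steady-state filtering variance is well below the observation variance for these q and r.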
Figure 1. Total MSE vs. time for PF (solid line), EnKF (dashed line) and Kalman filter (dotted line),
for different state dimensions (linear dynamics model).

Figure 2. Model MSE and algorithm MSE for PF and EnKF when the state dimension increases
(linear dynamics model).
Figure 3. Total MSE vs. time for PF (solid line) and EnKF (dashed line), for different state
dimensions (Lorenz96 nonlinear dynamics model).

Results for the nonlinear Lorenz96 dynamics model from Section 5.2 are presented in Figures 3
and 4. Figure 3 displays the time evolution of the total MSE for PF and EnKF, when the state
dimension is m = 4, 20, and 40. Figure 4 displays the evolution of the model MSE and the algorithm
MSE, for the two filters, at the final time step, when the state dimension increases. The same
observations as for the linear model can be made here: PF is less robust to high dimension than
EnKF, although the algorithm MSE of both algorithms increases at a linear rate.

To get the results presented in Figures 3 and 4, the reference approximation of the posterior mean
(required to compute the model and algorithm MSEs) is computed thanks to a particle filter with a
large number of particles. We use the optimal particle filter (for this type of model), described in
(Le Gland, Monbet, & Tran 2011, Section 6), which differs from the SIR implementation presented
in Algorithm 1 in Section 3. When the sample size N is large, we preferably use PF because EnKF
has been proven asymptotically
V.V. Tai
Department of Mathematics, Can Tho University, Can Tho, Vietnam
C.N. Ha
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
N.T. Thao
Division of Computational Mathematics and Engineering, Institute for Computational Science,
Ton Duc Thang University, Ho Chi Minh City, Vietnam
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
ABSTRACT: This paper considers the Prior Probability (PP) in classifying two populations by the
Bayesian approach. Specifically, we establish the probability density functions for the ratio of and
the distance between two PPs that are supposed to have Beta distributions. Also, we build the
posterior distribution for the PPs given their prior Beta distributions. From the established
probability density functions, we can calculate some typical parameters such as the mean, variance
and mode. According to these parameters, we can survey and also estimate the prior probabilities of
two populations to apply to practical problems. The numerical examples on one synthetic data set,
four benchmark data sets and one real data set not only illustrate the proposed theories but also
demonstrate the applicability and feasibility of the researched problem.
1 INTRODUCTION

Classification is an important problem in multivariate statistical analysis and is applied in many
fields, such as economics, physics, sociology, etc. In the literature, many different methods have been
proposed to perform the classification problem, such as the logistic regression method, the Fisher
method, the Support Vector Machine method, the Bayesian method, etc., among which the Bayesian
approach has attracted special interest (Mardia, Kent, & Bibby 1979, Webb 2003, Ghosh, Chaudhuri,
& Sengupta 2006). In classifying by the Bayesian approach, we often study the case of two
populations, because it can be applied in many practical problems and it is also the theoretical
foundation for the case of more than two populations. We suppose to have two populations Wi,
i = 1, 2, where qi is the prior probability and fi(x) is the Probability Density Function (pdf) of the
variable X for the ith population, respectively. According to Pham-Gia (Pham-Gia, Turkkan, &
Vovan 2008), classifying a new observation x0 by the Bayesian method is performed by the rule: if
max{qi fi(x0)} = q1 f1(x0), then x0 is assigned to W1; in contrast, we assign it to W2. Pham-Gia
(Pham-Gia, Turkkan, & Vovan 2008) also identified the misclassification probability of this
approach, which is called the Bayes error and is calculated by the formula

Pe = ∫_{R^n} min{q1 f1(x), q2 f2(x)} dx = 1 − ∫_{R^n} g_max(x) dx,

in which g_max(x) = max{q1 f1(x), q2 f2(x)}. Therefore, in the Bayesian approach, classifying a new
observation and computing its error depend on two factors: the pdfs and the PPs. From the given
data, we have many methods to determine the pdfs. This problem has been studied intensively in the
theoretical aspect and has had many good applications with real data (Pham-Gia, Turkkan, & Vovan
2008, Vo Van & Pham-Gia 2010). In fact, when knowing the exact pdfs, determining suitable PPs is
a significant factor to improve the performance of Bayesian classification. Normally, depending on
known information about the researched problem or the training data, we can determine the prior
probabilities. If there is no such information, we usually choose the prior probabilities by the uniform
distribution. When basing on the training data, the prior probabilities are often estimated by two
main methods: the Laplace method, qi = (ni + 1)/(N + n), and the ratio-of-samples method,
qi = ni/N, where ni is the number of elements in Wi, n is the number of dimensions and N is the
number of all objects in the training data (James 1978, Everitt 1985). There were many
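The classification rule and the Bayes error above can be illustrated numerically for two univariate normal populations. This is a sketch of ours: the midpoint-rule integration and the example densities are not from the paper.

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def bayes_classify(x, q1, f1, f2):
    # assign x to W1 when q1*f1(x) attains the maximum of {q1*f1(x), q2*f2(x)}
    return 1 if q1 * f1(x) >= (1.0 - q1) * f2(x) else 2

def bayes_error(q1, f1, f2, lo=-10.0, hi=10.0, n=20000):
    # Pe = integral of min{q1*f1(x), q2*f2(x)} dx, midpoint rule on [lo, hi]
    h = (hi - lo) / n
    return h * sum(min(q1 * f1(lo + (i + 0.5) * h),
                       (1.0 - q1) * f2(lo + (i + 0.5) * h)) for i in range(n))
```

For equal priors and the populations N(0, 1) and N(2, 1), the Bayes error is Φ(−1) ≈ 0.1587, which the numerical integral reproduces.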
Given the variable X and two populations W1, W2 with pdfs f1(x) and f2(x), respectively, q is the
PP for W1 and 1 − q is the PP for W2. Let y = q/(1 − q) and z = |1 − 2q|. If q is a random variable,
then y and z are also random variables. In this section, we build the prior pdfs for y and z when q
has a Beta distribution. The posterior distributions of y and z are also established when we consider
the data samples. When the posterior pdfs are computed, we will have a general look at the difference
between the two PPs and can also estimate them via some representing parameters of y and z (e.g.,
mean or mode). This idea can also be carried out in a similar way when q has other distributions
on [0, 1].

2.1 Distribution of the ratio between two prior probabilities

Theorem 1. Assuming that q has the prior distribution Beta(α, β), we have the following results for
the pdf of the variable y.

a. Substituting q = y/(1 + y), with dq/dy = 1/(1 + y)^2, gives

f(y) = (1/B(α, β)) (y/(1 + y))^{α−1} (1/(1 + y))^{β−1} · 1/(1 + y)^2 = y^{α−1} / (B(α, β) (1 + y)^{α+β}).

b. We call A the event of obtaining m observations of W1 when collecting n observations:

P(A) = C(n, m) q^m (1 − q)^{n−m}.

Then we have

M(q) ∝ (1/B(α, β)) q^{α−1} (1 − q)^{β−1} · C(n, m) q^m (1 − q)^{n−m} ∝ q^{α+m−1} (1 − q)^{β+n−m−1},

i.e. the posterior distribution of q given A is Beta(α + m, β + n − m).
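The change-of-variables result for y = q/(1 − q) can be verified numerically; the sketch below implements the derived density (`math.gamma` supplies the Beta function) and is illustrative only.

```python
import math

def ratio_pdf(y, alpha, beta):
    # pdf of y = q/(1-q) for q ~ Beta(alpha, beta),
    # via q = y/(1+y) and dq/dy = 1/(1+y)^2
    b_const = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
    return y ** (alpha - 1.0) / (b_const * (1.0 + y) ** (alpha + beta))
```

Sanity checks: the density integrates to one, and for q ~ Beta(2, 3) the mean of y is α/(β − 1) = 1.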
Table 1. The studied marks of 20 students and the actual result.

Objects  Marks  Group    Objects  Marks  Group
1        0.6    W1       11       5.6    W2
2        1.0    W1       12       6.1    W2
3        1.2    W1       13       6.4    W2
4        1.6    W1       14       6.4    W2
5        2.2    W1       15       7.3    W2
6        2.4    W1       16       8.4    W2
7        2.4    W1       17       9.2    W2
8        3.9    W1       18       9.4    W2
9        4.3    W1       19       9.6    W2
10       5.5    W2       20       9.8    W2

BayesU (0.5; 0.5)      0.0353  2  0.0538
BayesB (0.421; 0.579)  0.0409  2  0.0558
BayesL (0.429; 0.571)  0.0403  2  0.0557
BayesR (0.5545; 0.454) 0.0365  1  0.0517
BayesD (0.572; 0.428)  0.0383  1  0.0503

Table 3. Summary of four benchmark data sets.

Data     No of objects  No of dimensions
Thyroid  185            5
Seed     140            7
Breast   70             9
Users    107            5
T.D. Nguyen
Vietnam Aviation Academy, Ho Chi Minh, Vietnam
T.T.D. Phan
HCMC University of Food Industry, Ho Chi Minh, Vietnam
ABSTRACT: This paper presents the combination of a chaotic signal and the Self-Organizing
Migrating Algorithm (SOMA) to estimate the unknown parameters in a chaos synchronization system
via the active-passive decomposition method. The unknown parameters were estimated by the
self-organizing migrating algorithm. Based on the results from SOMA, two Rikitake chaotic systems
were synchronized.
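A minimal all-to-one SOMA sketch for minimizing a cost function is given below. It is illustrative only: the control parameters and the sphere-function test are ours, not the synchronization setup of the paper.

```python
import random

def soma_all_to_one(cost, bounds, pop=20, prt=0.3, path=3.0, step=0.11, migrations=30):
    # SOMA (all-to-one): in each migration loop every individual travels toward
    # the current leader in discrete steps, perturbed by a random PRT vector,
    # and keeps the best position found along its path.
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(migrations):
        leader = min(xs, key=cost)
        new_pop = []
        for x in xs:
            if x is leader:
                new_pop.append(x)
                continue
            best = x
            t = step
            while t <= path:
                prt_vec = [1.0 if random.random() < prt else 0.0 for _ in range(dim)]
                cand = [xi + (li - xi) * t * pv for xi, li, pv in zip(x, leader, prt_vec)]
                cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
                if cost(cand) < cost(best):
                    best = cand
                t += step
            new_pop.append(best)
        xs = new_pop
    return min(xs, key=cost)
```

Because an individual is only ever replaced by a strictly better candidate and the leader is carried over unchanged, the best cost in the population never worsens across migrations.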
Figure 8a. Synchronization of xd and xr without using ADP.
Figure 8e. Synchronization of yd and yr without using ADP.
Figure 8i. Synchronization of zd and zr without using ADP.

Figure 8 (d, h, l) shows that the trajectories of e(t) tend to zero after t > 12, and the trajectories of
xr, yr, zr converge to xd, yd, zd completely when ADP is applied, as shown in Figure 8 (b, f, j). It is
proven that

5 CONCLUSIONS

In this paper, the ADP method is applied to synchronize two identical Rikitake chaotic systems.
Parameter estimation for the chaotic system was formulated as a multidimensional optimization
problem. The Self-Organizing Migrating Algorithm (SOMA) was used to find the unknown values
of the chaotic parameters. Based on the results from
V.V. Tai
Department of Mathematics, Can Tho University, Can Tho, Vietnam
N.T. Thao
Division of Computational Mathematics and Engineering, Institute for Computational Science,
Ton Duc Thang University, Ho Chi Minh City, Vietnam
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
C.N. Ha
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
ABSTRACT: Based on the L1-distance between the Probability Density Functions (pdfs) in a cluster
and its representing pdf, and the L1-distances between the representing pdfs of different clusters, this
article proposes two new internal validity measures for the clustering of pdfs. Then, we apply a
Genetic Algorithm coded for solving integer optimization problems to minimize these internal
validity measures so as to establish suitable clusters. Numerical examples on both synthetic and real
data show that the proposed algorithm gives better results than existing ones when tested by internal
and external validity measures.
The IntraF index can compute the compactness of clusters but cannot assess the separation between
different clusters. Therefore, we propose a new index to measure this separation. This index is called
SF and is defined as follows:

SF = IntraF / min_{i≠j} ||fv_i − fv_j||^2 = [(1/n) Σ_{i=1}^{k} Σ_{f∈C_i} ||f − fv_i||^2] / [min_{i≠j} ||fv_i − fv_j||^2],  (5)

where ||fv_i − fv_j|| is the L1-distance between the representing pdfs of cluster i and cluster j. The
SF index computes the pairwise L1-distances between the representing pdfs of all clusters; their
minimum is then taken as the separation measurement. The more separate the clusters

Firstly, we have to encode the solution of the clustering problem into a chromosome before applying
the Genetic Algorithm to optimize the internal validity measures. Each individual is represented by a
chromosome having the same length as the number of pdfs. The value l_j of each gene in the
chromosome represents the label of the cluster to which the jth pdf is assigned. For example, the
clustering result with C1 = {f1, f4}, C2 = {f2, f5, f7}, C3 = {f3, f6} is presented by the
chromosome: 1 2 3 1 2 3 2. The Genetic Algorithm for solving integer optimization problems
(Deep, Singh, Kansal, & Mohan 2009) is called MI-LXPM and is presented as follows.

Crossover
The Laplace crossover generates offspring using the Laplace-distributed random factor

β_i = a − b log(u_i) if r_i ≤ 1/2,  β_i = a + b log(u_i) if r_i > 1/2,

where a is a location parameter, b > 0 is a scaling parameter, and u_i, r_i ∈ [0, 1] are uniform
random numbers. For the CDF problem, in each individual above, n is the number of pdfs and
1 ≤ x_i ≤ k, with k the number of clusters.

Mutation
The mutation operator used in MI-LXPM is the Power mutation. By it, a solution x̄ is created in the
vicinity of a parent solution x in the following manner:

x̄ = x − s (x − x^l) if t < r,  x̄ = x + s (x^u − x) if t ≥ r,

where x^l and x^u are the lower and upper bounds of the decision variable.

Truncation procedure for integer restriction
In order to ensure that the integer restrictions are satisfied after the crossover and mutation
operations have been performed, the following truncation procedure is applied. For all i = 1, ..., n,
x_i is truncated to an integer value x̄_i by the rule: if x_i is an integer, then x̄_i = x_i; otherwise,
x̄_i is equal to [x_i] or [x_i] + 1, each with probability 0.5, where [x_i] is the integer part of x_i.

Selection
MI-LXPM uses the tournament selection presented by Goldberg and Deb (Goldberg & Deb 1991).
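The Power mutation and the truncation procedure can be sketched as follows. This is illustrative: the mutation index `p_index` and the exact forms of `s` and `t` are our assumptions, since the text only states the two-branch update.

```python
import random

def power_mutation(x, x_low, x_up, p_index=2.0):
    # Power mutation sketch: create x_bar near the parent x inside [x_low, x_up].
    # ASSUMPTIONS: s = u^p_index with u ~ U(0,1); t is the relative position of x
    # in its range, compared against a uniform random threshold r.
    s = random.random() ** p_index
    t = (x - x_low) / (x_up - x_low)
    r = random.random()
    return x - s * (x - x_low) if t < r else x + s * (x_up - x)

def truncate_integer(x):
    # Integer restriction: keep integers; otherwise round down or up with prob. 0.5
    if x == int(x):
        return int(x)
    base = int(x)  # integer part
    return base if random.random() < 0.5 else base + 1
```

Both branches of the mutation keep the offspring inside the variable bounds, so no additional repair step is needed before the truncation.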
4 NUMERICAL EXAMPLES
Figure 3. Two classes of pdfs g1 (black) and g2 (red) in cases of = 0.1 and = 0.3.
5 CONCLUSION
L. Ho-Nhat
Hochiminh University of Technology, Ho Chi Minh City, Vietnam
ABSTRACT: In this paper, the Reliability-Based Design Optimization (RBDO) problem of truss
structures with frequency constraints under uncertainties of loading and material properties is
presented. Moreover, a double-loop method with a new combination of a recently proposed improved
differential evolution algorithm and an inverse reliability analysis is used for solving this problem.
Three numerical examples, including a welded beam, a 10-bar truss and a 52-bar truss, are considered
for evaluating the efficiency and applicability of the proposed approach.
Group  Elements
1      1-4
2      5-8
3      9-16
4      17-20
5      21-28
6      29-36
7      37-44
8      45-52
Phan Dao
Ton Duc Thang University, Ho Chi Minh City, Vietnam
ABSTRACT: The paper introduces a calculation plan with better periods for flood control, with
calculation methods based on dynamic programming over an irregular mesh. The program was
applied to calculate the Hua Na Hydropower plant, with two different operating models. The
objective function for generating competitive hydroelectric power seeks the maximum revenue.
1 INTRODUCTION
2 METHODOLOGY

2.2 Dividing power output for peak hours
According to the rules, the Vietnam power market has 5 peak hours on working days. Every week has
5 working days; thus a month has about 108 peak hours. The monthly power output is divided into
two parts: one with high price and the other with mean price.

2.3 Selection of the calculation term
Following the new conditions, such as flood control, the water level in the reservoir must always be
less than 235 m from 01/07 to 30/11; the calculation of the optimal plan is therefore suggested to
start from 01/12 of one year to 30/11 of the next year. But by Article 11, from 15/10 the water level
can be raised to 240 m on condition of a good hydrological forecast. The two different operating
models are: store from 16/10, and store from 1/12, every year.

The mesh is bounded by the maximal water level (240 m); the irregular mesh has two parts: steps of
0.1 m from 230 to 240 m and steps of 0.5 m from 215 to 230 m.

2.6 Step-wise procedure of the algorithm
Depending on the natural inflow, the release capacity, and the boundary conditions of the reservoir,
the maximum value of revenue for all reservoirs (in the case of a multiple-reservoir system) at every
time step of the operating horizon is found. The maximum revenue is computed as in the Visual
Basic 2010 code below:
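Since the Visual Basic listing is not reproduced here, the backward dynamic-programming recursion over the irregular water-level mesh can be sketched in Python. Everything below, including the toy revenue and feasibility functions and all names, is illustrative and not the paper's code.

```python
def build_mesh():
    # Irregular mesh: 0.5 m steps from 215 to 229.5 m, 0.1 m steps from 230 to 240 m
    return [215.0 + 0.5 * i for i in range(30)] + [230.0 + 0.1 * i for i in range(101)]

def max_revenue(mesh, n_steps, revenue, reachable):
    # Backward recursion: best[v] = maximal future revenue when the current
    # water level is v; revenue(k, v, w) is the revenue of moving from level v
    # to level w during step k; reachable(k, v, w) encodes inflow/release limits.
    best = {round(v, 1): 0.0 for v in mesh}
    for k in range(n_steps - 1, -1, -1):
        step_best = {}
        for v in mesh:
            cands = [revenue(k, v, w) + best[round(w, 1)]
                     for w in mesh if reachable(k, v, w)]
            step_best[round(v, 1)] = max(cands) if cands else float("-inf")
        best = step_best
    return best
```

With a toy revenue proportional to the released head and a release limit of 1 m per step, the recursion simply accumulates the largest feasible drop at each step.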
4 CONCLUSIONS
P. Do
Research Centre for Automatic Control, Lorraine University, Nancy, France
4 DYNAMIC GROUPING APPROACH TAKING INTO ACCOUNT THE REPAIR TIME

The consideration of the repair time does not lead to a complete change in the above grouping
approach. Indeed, to take into account the repair time, only phase 2 and phase 3 are developed and
presented in this section. The other phases of the approach remain unchanged and can be found in
more detail in Vu et al. (2014) and Vu et al. (2015).

4.1 Individual optimization
As mentioned above, in this phase, the long-term maintenance plan is separately determined for each
component. To do this, the age-based replacement strategy (Barlow & Hunter 1960) is usually chosen
thanks to its high performance at component level. When the repair time is considered, however,
replacement decisions based on the component's age face many difficulties in maintenance modeling
and maintenance optimization due to the unpredictability of the failures. For this reason, the
calendar-based replacement strategy is used in this paper (Fig. 2).

According to the calendar-based replacement strategy, the component i is replaced at fixed time
intervals Ti and minimally repaired at its failures. The long-term expected maintenance cost rate of
the component i is calculated based on the renewal theory as follows:

CR_i(T_i) = (C_i^p + C_i^c Φ_i(T_i)) / (T_i + μ_i^p),  (4)

where Φ_i(T_i) is the mean number of failures of component i on (0, T_i]. Under the minimal repair
assumption, Φ_i(T_i) is equal to

Φ_i(T_i) = ∫_0^{T_i} r_i(t) dt.  (5)

Taking the repair durations into account, the cost rate as a function of the replacement age x_i is

CR_i(x_i) = (C_i^p + C_i^c ∫_0^{x_i} r_i(t) dt) / (x_i + μ_i^c ∫_0^{x_i} r_i(t) dt + μ_i^p).  (6)

The optimal replacement interval of the component i (denoted by T_i) is then determined by
minimizing its long-term expected maintenance cost rate:

x_i* = arg min_{x_i > 0} CR_i(x_i)  (7)

and

T_i = x_i* + μ_i^c ∫_0^{x_i*} r_i(t) dt.  (8)

The corresponding minimal maintenance cost rate is

CR_i* = (C_i^p + C_i^c ∫_0^{x_i*} r_i(t) dt) / (T_i + μ_i^p).  (9)

4.2 Grouping optimization
Individual maintenance plan. Based on the replacement intervals obtained in the previous phase, in
this phase, the individual maintenance dates of components in a specific short-term horizon are
determined.

In detail, consider a planning horizon PH between a and b. The first PM activity of the component i
in PH, denoted by t_i^1, is then determined as follows:

t_i^1 = T_i − d_i(a) + a,  (10)

where d_i(a) is the time between a and the last PM activity of the component i before a.
The other PM activities of the component i in PH can be determined as
ri (t )dt 0 ri (t )dt (5) PH can be determined as
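As a numerical illustration of the individual optimization of eqs. (6)-(8), the sketch below grid-searches the operating age x* for one component; the Weibull failure rate and all parameter values are assumptions for the example, not data from the paper.

```python
# Hedged sketch: minimize CR(x) = (Cp + Cc*M(x)) / (x + tau_c*M(x) + tau_p),
# where M(x) = (x/eta)**beta is the expected number of minimal repairs for an
# assumed Weibull failure rate. Parameters below are purely illustrative.

def individual_optimum(Cp, Cc, tau_p, tau_c, beta, eta, step=0.01, horizon=50.0):
    """Return (x*, T* = x* + tau_c*M(x*), CR(x*)) by grid search."""
    def M(x):
        return (x / eta) ** beta          # cumulative failure intensity
    def cr(x):
        return (Cp + Cc * M(x)) / (x + tau_c * M(x) + tau_p)
    xs = [step * k for k in range(1, int(horizon / step) + 1)]
    x_star = min(xs, key=cr)
    return x_star, x_star + tau_c * M(x_star), cr(x_star)
```

The replacement interval T* then feeds the grouping phase of Section 4.2 through eq. (10).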
$$\Lambda_i(b_v) - \Lambda_i(a_v) \quad \text{if } i \in A_v. \quad (18)$$
Table 3. Optimal grouping solution (last column: cost saving CS(G_k)).

Group 1: components 1, 4, 5, 6, 7, 10 | 1 | 367.08 | 2 | 326.29
Group 2: components 2, 9, 11 | 0 | 373.38 | 3 | 62.77
Group 3: components 3, 8, 12 | 0 | 458.74 | 1 | 95.98
In order to find the optimal grouping solution, a Genetic Algorithm is used with the following parameters: crossover probability = 0.8, mutation probability = 0.02, population size = 60, number of generations = 500. The program is implemented in Matlab R2015a on a DELL computer (Intel Core i7, 2.6 GHz, 16 GB RAM). The computational time is approximately 40 minutes. The optimal grouping solution obtained by the program is reported in Table 3.

The total economic profit of the optimal grouping solution is

$$CS(G^S) = \sum_{k=1}^{3} CS(G_k) = 485.03. \quad (25)$$

The total maintenance cost of the system in PH when the PM activities are grouped, denoted $TC^G$, is calculated as

$$TC^G = TC^I - CS(G^S) = 14463.65, \quad (26)$$

where $TC^I$ denotes the total cost when all PM activities are performed individually. From the obtained results, we can conclude that the maintenance grouping helps to reduce the total maintenance cost of the system in PH from 14948.68 to 14463.65. The reduction is equal to 3.24% of the total maintenance cost of the system in PH when all PM activities are individually performed.

6 THE LINK BETWEEN THE PROPOSED RESEARCH AND THE COASTAL HIGHWAY ROUTE E39 PROJECT

In this section, we shortly discuss the link between the presented research and the E39 project. Norway's coastal highway E39 is part of the European trunk road system. The route runs from Kristiansand in the south to Trondheim in central Norway, a distance of almost 1100 km. There are eight wide and deep fjords along the route. The fjord crossings will require massive investments and larger bridges than any previously installed in Norway. Figure 4 describes the submerged floating tunnel bridge concept for crossing the Sognefjord, which is both deep and wide and is considered challenging to cross. More information about the E39 project can be found at http://www.vegvesen.no/Vegprosjekter/ferjefriE39/.

The construction and operation of such a superstructure system face many technological challenges, including maintenance planning problems. This research is partially funded by the E39 project and motivated by the following maintenance planning problems. The bridge is a complex system containing a large number of interdependent components. Moreover, the bridge availability is emphasized due to the important impacts of its closures on the traffic, the environment, and people's safety. For this purpose, the developed dynamic grouping approach based on the analytical method can help to improve the bridge availability and protect the maintenance optimization from computational time problems. Taking the repair time into account in the maintenance modeling is necessary when the estimated repair time is considerable.

Despite the above efforts, many challenges related to the maintenance planning of the superstructure system still exist, such as the uncertainties of the data and their impacts on the maintenance performance, imperfect preventive maintenance, the components' maintainability, etc. These challenges will be our objectives in future research.

7 CONCLUSION

In this paper, a dynamic grouping approach is developed for the maintenance planning of the
C. Bérenguer
Université Grenoble Alpes, GIPSA-lab, Grenoble, France
CNRS, GIPSA-lab, Grenoble, France
ABSTRACT: Nowadays, health prognosis is widely recognized as a significant lever to improve the maintenance performance of modern industrial systems. Nevertheless, how to efficiently exploit prognostic information for maintenance decision-making support is still a very open and challenging question. In this paper, we attempt to contribute to the answer by developing a new parametric predictive maintenance decision framework that takes the accuracy of the health prognosis into account. The study is based on a single-unit deteriorating system subject to a stochastic degradation process and to maintenance actions such as inspection and replacement. Within the new framework, the system health prognosis accuracy is used as a condition index to decide whether or not to carry out an intervention on the system. The associated mathematical cost model is developed and optimized on the basis of semi-regenerative theory, and is compared to a more classical benchmark framework. Numerical experiments emphasize the performance of the proposed framework and confirm the interest of introducing the system health prognosis accuracy into maintenance decision-making.
$$\sigma_L = \inf\{t : X_t \ge L\}. \quad (3)$$
...ance of the considered maintenance frameworks is evaluated through the long-run expected maintenance cost rate

$$C^{\infty} = \lim_{t \to \infty} \frac{E\big[C(t)\big]}{t}, \quad (8)$$

where $C(t)$ denotes the total maintenance cost including the downtime cost up to time $t$: $C(t) = C_i N_i(t) + C_p N_p(t) + C_c N_c(t) + C_d W(t)$, where $N_i(t)$, $N_p(t)$ and $N_c(t)$ are respectively the numbers of inspections, PR and CR operations in $[0, t]$, and $W(t)$ is the system downtime in $[0, t]$. On a semi-regenerative cycle $S$, the cost rate can be expressed as

$$C^{\infty} = C_i\,\frac{E\big[N_i(S)\big]}{E[S]} + C_p\,\frac{E\big[N_p(S)\big]}{E[S]} + C_c\,\frac{E\big[N_c(S)\big]}{E[S]} + C_d\,\frac{E\big[W(S)\big]}{E[S]}. \quad (9)$$

Figure 3. Illustration of the (·) strategy.

The stationary density $\pi(\cdot)$ of the maintained system state satisfies the invariance (fixed-point) equation (11), in which the densities $f_{(\cdot)}(\cdot)$ and $f_{(\cdot),(\cdot)}(\cdot)$ are derived from (1). Given (11), we can adapt the fixed-point iteration algorithm to numerically evaluate $\pi(\cdot)$. Many numerical tests were carried out, and they have shown that the algorithm converges very quickly to the true stationary law.
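The fixed-point evaluation of a stationary law can be illustrated on a discretized state space; the sketch below iterates pi <- pi*K for an arbitrary 3-state kernel and is unrelated to the specific transition densities of the paper.

```python
# Minimal sketch of fixed-point iteration for a stationary distribution:
# discretize the state space, then iterate the invariance equation
# pi = pi K until successive iterates stop changing.

def stationary(K, tol=1e-12, max_iter=10000):
    """Fixed-point iteration pi <- pi K for a row-stochastic kernel K."""
    n = len(K)
    pi = [1.0 / n] * n                      # arbitrary starting law
    for _ in range(max_iter):
        new = [sum(pi[i] * K[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi
```

For an ergodic kernel the iteration converges geometrically, which matches the "very quick" convergence reported in the text.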
$$E\big[N_p(S)\big] = \int_0^L F_{(y),\,\cdot}\,(L - y)\;\pi(y)\,dy. \quad (13)$$

$$E\big[N_c(S)\big] = \int_0^L \bar F_{(y),\,\cdot}\,(L - y)\;\pi(y)\,dy + \int_0^L \bar F_{\cdot,\,\cdot}\,(L)\;\pi(y)\,dy + \int_0^{\cdot} \bar F_{(\cdot)}(\cdot)\;\pi(y)\,dy. \quad (14)$$

$$E\big[W(S)\big] = \int_0^L \Big( \int_0^{\cdot} \bar F_{u(\cdot)}\,(L - y)\,du \Big)\,\pi(y)\,dy + \cdots \quad (15)$$

In the above expressions, $\bar F_{(\cdot)}(\cdot) = 1 - F_{(\cdot)}(\cdot)$ and $F_{(\cdot)}(\cdot)$ is given from (2).

4.3 Maintenance optimization

Using (11), and introducing (12), (13), (14) and (15) into (9), we obtain the full mathematical cost model of the proposed predictive maintenance decision framework. The classical framework is a particular case of the new one; its cost model is also derived from (9) by taking the additional decision variable equal to 0. Given the cost model, optimizing the strategies (·) and (·, ·) amounts to finding the set of decision variables of each strategy that minimizes the associated long-run expected maintenance cost rate:

$$C^{\infty}\big(\cdot_{opt}\big) = \min_{\cdot}\big\{ C^{\infty}(\cdot) \big\}, \qquad C^{\infty}\big(\cdot_{opt}, \cdot_{opt}\big) = \min_{\cdot,\,\cdot}\big\{ C^{\infty}(\cdot, \cdot) \big\},$$

where the decision variables satisfy $0 \le \cdot \le L$ and $\cdot \ge 0$. The generalized pattern search algorithm presented in (Audet et al. 2002) can be used to find the optimal maintenance cost rate and the associated decision variables. Applying this algorithm to the system and the set of maintenance costs presented at the beginning of Section 3, we obtain the optimal quantities in Table 1. Based on the optimal values of the cost rates, the (·, ·) strategy is more profitable than the (·) strategy. However, this is just the conclusion for the present special case. More general conclusions on the performance of these strategies are given in Section 5.

5 PERFORMANCE ASSESSMENT

This section aims at a general conclusion on the effectiveness of the new predictive maintenance decision framework. To this end, we compare the performance of the (·, ·) strategy to that of the (·) strategy under various configurations of maintenance operation costs and system characteristics. The so-called relative gain in the optimal long-run expected maintenance cost rate (Huynh et al. 2015) is used for this purpose:

$$\Delta C\,(\%) = \frac{C^{\infty}\big(\cdot_{opt}\big) - C^{\infty}\big(\cdot_{opt}, \cdot_{opt}\big)}{C^{\infty}\big(\cdot_{opt}\big)} \times 100\%.$$

If $\Delta C\,(\%) > 0$, the (·, ·) strategy is more profitable than the (·) strategy; if $\Delta C\,(\%) = 0$, they have the same profit; otherwise, the (·, ·) strategy is less profitable than the (·) strategy.

At first, we are interested in the impact of the maintenance costs on the performance of the new predictive maintenance decision framework. We therefore fix the system characteristics at $L = 15$ and $\cdot = 0.2$ (i.e., $m = 1$ and $\sigma^2 = 5$); the practical constraint $C_i \le C_p < C_c$ also leads us to take $C_c = 100$, and we consider three cases:

varied inspection cost: $C_i$ varies from 1 to 49 with a step of 1, $C_p = 50$, and $C_d = 25$;
varied PR cost: $C_i = 5$, $C_p$ varies from 6 to 99 with a step of 3, and $C_d = 25$;
varied downtime cost rate: $C_i = 5$, $C_p = 50$, and $C_d$ varies from 10 to 190 with a step of 5.

For each of the above cases, we sketch the relative gain $\Delta C\,(\%)$; the results are shown in Figure 4. Not surprisingly, the (·, ·) strategy is always more profitable than the (·) strategy (i.e., $\Delta C\,(\%) > 0$). Thus, there is no risk in using the proposed maintenance decision framework (i.e., it reduces to the classical one in the worst case). This is a significant advantage of the new framework. Moreover, it is especially profitable
$$W(t, t_c) = \min_{i = 1, 2, 3} W_i, \quad (12)$$

where $H(y(t))$ is the loss function which describes the losses when the equipment state differs from the normal one, and $V_T$ is the operating expenses. We may obtain a globally optimal strategy $S(t)$ by stepwise minimization of the criterion $W_T$ based on Bellman's optimality principle. The algorithms applied are simple enough and can be implemented in recurrent form. We should build a state space to form the recurrent relations for solving the problem on the basis of the optimality principle. In other words, we should find a coordinate set which contains all information about the engineering system in a given time interval regardless of its past behavior. When $y(t)$ is described by (1), a sought set can be represented as a collection $(t, t_c, A)$, where $A$ is a vector whose members define the ranges of the coefficients $a_j$, $j = 0, \ldots, n$, in model (1), and $t, t_c \in T$, $t \le t_c$.

Let the function $W(t, t_c)$ describe the limiting losses associated with the optimal servicing of the engineering system in state $(t, t_c, A)$. Predictive maintenance consists in inspecting and adjusting $y(t)$ (it is not difficult to show that replacing units, assemblies and components of the engineering system is equivalent to the adjustment of $y(t)$).

$$W_1 = W_1(t, t_c, A), \quad (13)$$

$$W_2 = \min_{t \le t' \le t_c,\; A}\big[\, W_1(t, t', A) + \cdots + W(t', t_c) \,\big], \quad (14)$$

$$W_3 = \min_{t \le t' \le t_c,\; r \in R}\big[\, W_1(t, t', A) + \cdots + W(t', t_c) \,\big]. \quad (15)$$

The value of $i$ for which the minimum in (12) is achieved, and the values $t'$, $r$ for which the minima in (14) and (15) are achieved, are functions of $(t, t_c, A)$ and describe a sought optimal predictive maintenance strategy $S(t)$.

As a matter of fact, solving equations (12), (13), (14), (15) is a problem of dynamic programming. To form $S(t)$ we can make use of the space approximation technique. With that, for finding $A$ it is necessary to use minimax algorithms of predicting the state; in particular, it can be the algorithm from Section 4.

6 CONCLUSIONS

The problem of designing the predictive maintenance of engineering systems is solved for the case
Do T. Tran
Inria Lille-Nord Europe, Villeneuve d'Ascq, France
University of Lille, CRIStAL (CNRS UMR 9189), Lille, France
ABSTRACT: Due to market flexibility, repair and replacement costs in reality are often uncertain and
can be described by an interval that includes the most preferable values. Such a portrayal of uncertain
costs naturally calls for the use of fuzzy sets. In this paper, we therefore propose using fuzzy numbers
to characterize uncertainty in repair and replacement costs. The impact of fuzzy costs on the optimal
decision is then investigated in the context of an industrial problem: optimizing water pipe renovation
strategies. Here we examine specifically the risk of violating a budget constraint imposed on the annual
cost associated with pipe failure repairs. This risk is evaluated using the mean chance of the random fuzzy
events that represent random fuzzy costs exceeding a given budget. The benefit of taking account of cost
uncertainty is then validated through various numerical examples.
$$\tilde Z = \tilde X + \tilde Y; \qquad \tilde Z : \big[\, x^L + y^L,\; x^R + y^R \,\big]. \quad (8)$$
$$\sum_{i=1}^{n} s_i + \frac{n}{\beta} - \frac{n\,T_o\,\exp(\beta T_o)}{\exp(\beta T_o) - 1} = 0, \quad (19)$$

$$\alpha = \frac{n\,\beta}{\exp(\beta T_o) - 1}. \quad (20)$$

Similarly, the maximum likelihood estimates $\hat\lambda$ and $\hat\beta$ of $\lambda$ and $\beta$, respectively, in Eq. (4) are given by

$$\hat\beta = \frac{n}{n \ln T_o - \sum_{i=1}^{n} \ln s_i}, \quad (21)$$

$$\hat\lambda = \frac{n}{T_o^{\hat\beta}}. \quad (22)$$

Figure 2. Accumulated number of pipe repair events from January 2008 to September 2016 (DMA I: Tan My Street; DMA II: Cu xa Ngan Hang).

Table 2. Values of fuzzy costs.

$\tilde C_b$ ($): (232, 525, 734)
$\tilde C_r$ ($): DMA I (300000, 375000, 450000); DMA II (300000, 320000, 450000)

Considering Figure 2, we observe that both models (the exponential and the Weibull ROCOF) are appropriate for the repair data of both DMAs from January 2008 to September 2016. Among them, the coefficient of determination R² of the Weibull model is higher than that of the exponential model. Hereafter, the Weibull model is chosen to characterize the counting process of pipe-repair events for both DMAs.

4.2 Impact of fuzzy costs on the optimal renovation time without budget constraint

The pipe repair or replacement cost depends on the pipe material/diameter and especially on the road type, such as alley or road/route, asphalt or dirt road. In addition, the variation of the pipe-break detection time and the maintenance time can lead to different damage costs, including water loss, disruption of service, and so on. Therefore, it is difficult to expect a precise value of the renovation cost for the overall DMA, or of the cost associated with only the pipe repair activities. Classical approaches normally use the most probable value in the calculation and optimization. In this paper, we employ TFNs, $\tilde C_r = (c_{r1}, c_{r2}, c_{r3})$ and $\tilde C_b = (c_{b1}, c_{b2}, c_{b3})$, to solve the problem.

On the other hand, the risk that the annual cost associated with repair events exceeds $20,000 is recommended to be lower than 10%. In this section, we study how the fuzzy costs impact the optimal decision in the case without this budget constraint and in the case with it.

4.2.1 Optimal renovation time
The detailed values of the fuzzy costs are presented in Table 2. As it is generally recommended that the lifetime of a uPVC main pipe should not exceed 25 years, we run a grid search over the interval (0, 25) years with a step of one month to find the optimal renovation time for both DMAs. Four cases will be examined:

Case A: considering neither fuzzy costs nor the budget constraint
Case B: considering fuzzy costs but not the budget constraint
Case C: not considering fuzzy costs but taking the budget constraint into account
Case D: considering both fuzzy costs and the budget constraint

Table 3 presents the optimal renovation times corresponding to the above cases for both DMAs. In detail, if we do not take the budget constraint into account, the optimal renovation time in Case B
ABSTRACT: Credit risk refers to the risk of losses due to unexpected credit events, such as the default of a counterparty. The modelling and control of credit risk is a very important topic within banks. Very popular and frequently used tools for modelling credit risk are multi-factor Merton models. The practical implementation of these models requires time-consuming Monte Carlo (MC) simulations, which significantly limits their usability in daily credit risk calculation. In this paper we present acceleration techniques for Merton model Monte Carlo simulations, namely a parallel GPU implementation and the employment of Importance Sampling (IS). As the importance sampling distribution we choose the Gaussian mixture model, and for calculating the IS shifted probability distribution we use the Cross-Entropy (CE) method. The speed-up results are demonstrated using portfolio Value at Risk (VaR) and Expected Shortfall (ES) calculations.
...eracy of probability is caused by a relatively high difference between the IS distribution chosen from the normal distribution family and the optimal IS distribution, and it also limits the achievable variance reduction. As a correction to this problem we use the Gaussian mixture model for the IS family of distributions. This new approach limits the level of the observed degeneracy of probability as well as increases the variance reduction.

The other significant part of this paper is the implementation of the discussed models and IS procedures via CUDA on GPU devices. The GPU implementation of the model enables very fast calculation of the observed parameters (VaR or ES) with or without the use of the IS.

First we present a short recapitulation of the multi-factor Merton model and the terminology used; then we state a detailed specification of the tested model. For a deeper understanding of the Merton model see (Lütkebohmert 2008).

$$D_n = \begin{cases} 1, & \text{if the exposure } n \text{ is in default} \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$

We assume that the probabilities $PD_n = P(D_n = 1)$ are given as a portfolio parameter. The portion of the exposure $n$ which can be lost at the time of default is called the exposure at default, denoted by $EAD_n$. For simplicity we assume $EAD_n$ is constant over the whole time interval $[0, T]$ and is given as a portfolio parameter. The portion of $EAD_n$ representing the real loss in the case of default is given by a random variable, the loss given default $LGD_n \in [0, 1]$. The distribution, the expectation $ELGD_n$ and the standard deviation $VLGD_n$ of $LGD_n$ are given as portfolio parameters. The portfolio loss $L_N$ is then defined as a random variable
$$V_n(t) = S_n(t) + B_n(t), \quad 0 \le t \le T. \quad (5)$$

In the Merton model a default can occur only at the maturity $T$, which leads to two possibilities:

1. $V_n(T) > B_n(T)$: the obligor has sufficient assets to fulfil the debt, $D_n = 0$.
2. $V_n(T) \le B_n(T)$: the obligor cannot fulfil the debt and defaults, $D_n = 1$.

Let $r_n$ denote the $n$-th obligor's asset-value log-return, $r_n = \ln\big(V_n(T) / V_n(0)\big)$. The multi-factor Merton model assumptions to resolve the correlations between exposure defaults are:

1. $r_n$ depends linearly on $K$ standard normally distributed risk (systemic) factors $X = (X_1, \ldots, X_K)$.
2. $r_n$ depends linearly on the standard normally distributed idiosyncratic term $\epsilon_n$, which is independent of the systemic factors $X_k$.
3. The single idiosyncratic factors $\epsilon_n$ are uncorrelated.
4. The asset-value log-return random variable can be represented as $r_n = \rho_n Y_n + \sqrt{1 - \rho_n^2}\,\epsilon_n$, where $Y_n = \sum_{k=1}^{K} \alpha_{n,k} X_k$ represents the exposure composite factor, $\rho_n$ represents the exposure sensitivity to systemic risk, and the weights $\alpha_{n,k}$ represent the dependence on the single factors $X_k$.

$$\mathrm{VaR}_p(L_N) = \min\Big\{ L_N^{(i)} : \tfrac{1}{M}\sum_{j=1}^{M} \mathbf{1}\big(L_N^{(j)} \le L_N^{(i)}\big) \ge p \Big\} = L_N^{[\lceil Mp \rceil]}, \quad (8)$$

where $L_N^{[j]}$ is the $j$-th loss in the ascendantly sorted loss sequence $L_N^{(i)}$, and the ES as

$$\mathrm{ES}_p(L_N) = \frac{1}{M - \lceil Mp \rceil} \sum_{j = \lceil Mp \rceil + 1}^{M} L_N^{[j]}. \quad (9)$$

1.2 Tested portfolio structure specification

The most important part of the multi-factor Merton model is the structure of the portfolio (the dependence of the exposures on the risk factors). To obtain a portfolio with realistic behaviour we use a natural risk factor construction considering the region-industry (sector) links and the direct (hierarchy) links between exposures.

Hierarchy links are represented by Hierarchy Systemic Factors (HSF), which can be interpreted as direct links between the exposures (for example two subsidiary companies with a common parent company); each of these systemic factors usually has an impact only on a small fraction of the portfolio exposures. Sector links are represented by
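Assumptions 1-4 translate directly into a simulation recipe. The following one-factor sketch (a deliberate simplification: K = 1, LGD taken as 1, hypothetical parameters) generates portfolio losses from which VaR and ES could be read off as in eqs. (8)-(9).

```python
# Hedged sketch of a one-factor Merton simulation:
# r_n = rho_n * X + sqrt(1 - rho_n**2) * eps_n, default iff r_n falls at or
# below the barrier Phi^{-1}(PD_n). Not the paper's tested portfolio.
import math, random
from statistics import NormalDist

def simulate_losses(pd, rho, ead, n_sims, seed=1):
    """pd, rho, ead: per-exposure lists; returns the simulated loss sample."""
    rng = random.Random(seed)
    thresholds = [NormalDist().inv_cdf(p) for p in pd]  # default barriers
    losses = []
    for _ in range(n_sims):
        x = rng.gauss(0.0, 1.0)                 # single systemic factor
        loss = 0.0
        for rho_n, ead_n, thr in zip(rho, ead, thresholds):
            eps = rng.gauss(0.0, 1.0)           # idiosyncratic term
            r = rho_n * x + math.sqrt(1.0 - rho_n ** 2) * eps
            if r <= thr:                        # default event D_n = 1
                loss += ead_n                   # LGD = 1 for brevity
        losses.append(loss)
    return losses
```

Sorting the returned sample and indexing at the ceil(M*p) position gives the empirical VaR of eq. (8); averaging the tail beyond it gives the ES of eq. (9).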
$$H_p(x) = \begin{cases} 0, & x < \mathrm{VaR}_p(L_N) \\ \dfrac{x}{1 - p}, & x \ge \mathrm{VaR}_p(L_N) \end{cases} \quad (15)$$

$$\mathrm{ES}_p(L_N) = E_f\big(H_p(L_N)\big) = \int H_p\big(L_N(y)\big)\,f(y)\,dy, \quad (16)$$

Figure 1. Example of group correlation tree.
$$= \int H_p\big(L_N(y)\big)\,\frac{f(y)}{g(y)}\,g(y)\,dy = E_f\big(H_p(L_N)\big) = \mathrm{ES}_p(L_N). \quad (17)$$

$$\mathrm{VaR}^g_p(L_N) = \min\Big\{ L_N(Y_i) : \tfrac{1}{M}\sum_{j=1}^{M} w(Y_j)\,\mathbf{1}\big(L_N(Y_j) \le L_N(Y_i)\big) \ge p \Big\}, \quad (19)$$

where $w(Y_j)$ is the likelihood ratio weight of the sample $Y_j$.

2.1 Cross-Entropy method

We already know the principles of the IS and have the IS estimators of VaR and ES, but a new IS pdf $g(y)$ is still unknown. The most straightforward method for the estimation of $g(y)$ is to minimize the variance of the ES IS estimator:

$$g^*(y) = \arg\min_{v(y) \in \mathcal{X}} S^2_v\Big( H_p\big(L_N(Y)\big)\,\frac{f(Y)}{v(Y)} \Big), \quad (20)$$

where $S^2_v(X)$ denotes the variance according to the pdf $v(y)$ and $\mathcal{X}$ is an arbitrary system of pdfs fulfilling the

$$g^*(y) = \frac{H_p\big(L_N(y)\big)\,f(y)}{E_f\big(H_p(L_N)\big)}; \quad (21)$$

therefore we can replace the optimization problem with the following equation

$$\int H_p\big(L_N(y)\big)\,f(y)\,\nabla_\theta \ln g(y; \theta)\,dy = 0. \quad (24)$$

To solve the problem (24) we use the Monte Carlo simulation:

$$\sum_{i=1}^{M} H_p\big(L_N(Y_i)\big)\,\nabla_\theta \ln g(Y_i; \theta) = 0; \quad (25)$$

this is called the Stochastic Counterpart (SC) of the problem (24). Note that (25) is usually a system of non-linear equations, but for some pdfs it results in an explicit solution.

In this paper we focus mainly on the IS of the idiosyncratic terms of the systemic factors (HSF and SSF). Therefore, to simplify the notation of the random
... still part of $Y$, but the IS won't affect them.

Now if we consider $\mathcal{X}$ as a system of $K_S + K_H$ independent normally distributed random variables parametrized by mean and variance, we get the following solution of problem (25):

$$\hat\mu_j = \frac{\sum_{i=1}^{M} H_p\big(L_N(Y_i)\big)\,Y_i^j}{\sum_{i=1}^{M} H_p\big(L_N(Y_i)\big)}, \quad (26)$$

$$\hat\sigma_j^2 = \frac{\sum_{i=1}^{M} H_p\big(L_N(Y_i)\big)\,\big(Y_i^j - \hat\mu_j\big)^2}{\sum_{i=1}^{M} H_p\big(L_N(Y_i)\big)}, \quad (27)$$

where $\hat\mu_j$ and $\hat\sigma_j^2$ are the SC approximations of the mean and the variance of the $j$-th component of $Y$, and $Y_i^j$ is the $j$-th component of the $i$-th MC sample.

2.2 Gaussian mixture model

At the end of the previous part we presented formulas for calculating the optimal IS distribution within the family of normal distributions. This approach is commonly used for the IS in the multi-factor Merton model; see for example (Glasserman & Li 2005). However, the choice of the normal family as the IS family of distributions is not always optimal and can be improved by a more complex IS family of distributions.

The IS family of distributions examined in this paper is the family of Gaussian mixture distributions; the same approach in a different application can be found in (Kurtz & Song 2013). The Gaussian mixture random variable is defined as a weighted sum of different normal random variables. The pdf of the Gaussian mixture random variable can be expressed as

$$g(x; p, \mu, \sigma) = \sum_{i=1}^{n} p_i\, f_N(x; \mu_i, \sigma_i), \quad (28)$$

where $f_N(x; \mu_i, \sigma_i)$ is the pdf of the normal distribution with mean $\mu_i$ and variance $\sigma_i^2$, and $\|p\|_1 = \sum_{i=1}^{n} p_i = 1$. The new IS Gaussian mixture joint pdf of $Y$ will be

$$g_Y(x; p, \mu, \sigma) = \prod_{j=1}^{K_S + K_H} g\big(x_j; p_j, \mu_j, \sigma_j\big), \quad (29)$$

$$\mathcal{X} = \big\{ g_Y(x; p, \mu, \sigma) : \|p_j\|_1 = 1,\; \sigma_{j,i} > 0 \big\}. \quad (30)$$

Because the support of the pdf of the normal distribution is $\mathbb{R}$, the condition $f(x) > 0 \Rightarrow g_Y(x; p, \mu, \sigma) > 0$ is fulfilled. Since the components of $g_Y(x; p, \mu, \sigma)$ are independent, the problem (24) reduces into $K_S + K_H$ systems of non-linear equations. Therefore, together with the condition $\sum_i p_{j,i} = 1$, we receive for $j = 1, \ldots, K_S + K_H$ and $i = 1, \ldots, n$:

$$\hat\mu_{j,i} = \frac{\sum_{k=1}^{M} H_p\big(L_N(Y_k)\big)\,\kappa_{k,j,i}\,Y_k^j}{\sum_{k=1}^{M} H_p\big(L_N(Y_k)\big)\,\kappa_{k,j,i}}, \qquad
\hat\sigma_{j,i}^2 = \frac{\sum_{k=1}^{M} H_p\big(L_N(Y_k)\big)\,\kappa_{k,j,i}\,\big(Y_k^j - \hat\mu_{j,i}\big)^2}{\sum_{k=1}^{M} H_p\big(L_N(Y_k)\big)\,\kappa_{k,j,i}}, \qquad
\hat p_{j,i} = \frac{\sum_{k=1}^{M} H_p\big(L_N(Y_k)\big)\,\kappa_{k,j,i}}{\sum_{k=1}^{M} H_p\big(L_N(Y_k)\big)}, \quad (31)$$

where

$$\kappa_{k,j,i} := \frac{p_{j,i}\, f_N\big(Y_k^j; \mu_{j,i}, \sigma_{j,i}\big)}{\sum_{i'=1}^{n} p_{j,i'}\, f_N\big(Y_k^j; \mu_{j,i'}, \sigma_{j,i'}\big)}. \quad (32)$$

We obtain $K_S + K_H$ systems, each representing a problem of approximating a Gaussian mixture from a data sample. These sub-problems can be solved for example by the EM or K-means algorithm; see (Bishop 2006, Redner & Walker 1984).

But the computational effort of the system (31) will be significantly smaller if we have information about which component of $g\big(x_j; p_j, \mu_j, \sigma_j\big)$ generated $Y_k^j$. Let $z_{k,j}$ denote a Bernoulli vector of indicators, such that

$$\big(z_{k,j}\big)_i = \begin{cases} 1, & Y_k^j \text{ generated by } f_N\big(x; \mu_{j,i}, \sigma_{j,i}\big) \\ 0, & \text{otherwise.} \end{cases} \quad (33)$$
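Each per-factor sub-problem of (31)-(32) is a Gaussian-mixture fit, solvable by EM (Bishop 2006). The sketch below is a minimal unweighted one-dimensional EM with arbitrary initial parameters; the E-step responsibilities are the soft analogue of the indicators z in (33).

```python
# Minimal 1-D EM sketch for fitting a Gaussian mixture to a sample;
# an illustration of the sub-problem, not the paper's weighted CE update.
import math

def em_gmm_1d(data, mu, sigma, p, iters=50):
    """EM for a 1-D Gaussian mixture; mu, sigma, p are initial lists."""
    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    for _ in range(iters):
        # E-step: responsibility of component j for each data point
        gam = [[p[j] * pdf(x, mu[j], sigma[j]) for j in range(len(p))]
               for x in data]
        for row in gam:
            tot = sum(row)
            for j in range(len(row)):
                row[j] /= tot
        # M-step: re-estimate weights, means, and variances
        for j in range(len(p)):
            nj = sum(row[j] for row in gam)
            p[j] = nj / len(data)
            mu[j] = sum(row[j] * x for row, x in zip(gam, data)) / nj
            var = sum(row[j] * (x - mu[j]) ** 2 for row, x in zip(gam, data)) / nj
            sigma[j] = math.sqrt(max(var, 1e-12))
    return mu, sigma, p
```

Adding the weights H_p(L_N(Y_k)) to the responsibility sums would turn this into the weighted update of (31).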
(37)

where $t$ denotes the iteration, $\big(z_{k,j}\big)_i$ denotes whether the $i$-th component of the $j$-th systemic factor's Gaussian mixture was the source of the sample $k$, $H_p(Y_k) := \mathbf{1}\big(L_N(Y_k) \ge \mathrm{VaR}_p(L_N)\big)$, and

$$w(Y_k) = \frac{f(Y_k)}{g_Y\big(Y_k; p^{t-1}, \mu^{t-1}, \sigma^{t-1}\big)}, \quad (38)$$

where $f(Y_k)$ is the pdf of the nominal distribution (the joint distribution of the independent normal distributions) and $g_Y\big(Y_k; p^{t-1}, \mu^{t-1}, \sigma^{t-1}\big)$ is the pdf of the IS Gaussian mixture distribution given by the parameters approximated in iteration $t - 1$.
Table 2. Variance of the VaR and ES estimates (three column groups, identified in the text as the crude MC, the IS with the normal distribution, and the IS with the Gaussian mixture; confidence levels p = 0.99995, 0.9995, 0.995 within each group).

Char.  Portf. idx.  0.99995   0.9995    0.995    | 0.99995   0.9995    0.995    | 0.99995   0.9995    0.995
VaR    1            4.95e-07  6.21e-08  4.88e-09 | 6.40e-09  2.42e-09  6.96e-10 | 6.71e-10  4.93e-10  1.90e-10
VaR    2            3.41e-07  2.10e-08  2.95e-09 | 4.98e-09  1.01e-09  6.39e-10 | 5.47e-10  1.59e-10  1.50e-10
VaR    3            1.14e-08  7.63e-10  5.73e-11 | 8.62e-11  2.33e-11  6.14e-12 | 2.81e-11  1.21e-11  4.79e-12
VaR    4            7.34e-09  6.02e-10  4.64e-11 | 8.31e-11  2.23e-11  5.77e-12 | 2.31e-11  9.36e-12  4.25e-12
ES     1            8.64e-07  1.02e-07  1.05e-08 | 4.24e-09  1.17e-09  4.65e-10 | 5.06e-10  3.69e-10  2.02e-10
ES     2            6.99e-07  6.03e-08  5.25e-09 | 2.78e-09  9.70e-10  3.37e-10 | 3.66e-10  1.53e-10  8.00e-11
ES     3            2.81e-08  1.89e-09  1.42e-10 | 6.88e-11  1.90e-11  5.17e-12 | 2.05e-11  1.00e-11  4.01e-12
ES     4            1.61e-08  1.37e-09  1.13e-10 | 7.12e-11  1.99e-11  4.52e-12 | 1.73e-11  7.84e-12  2.96e-12
Figure 8. Variance reduction achieved by IS: Gaussian mixture and normal distribution.
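The variance-reduction mechanism behind these results can be seen on a toy problem unrelated to the paper: estimating P(Z > 4) for Z ~ N(0,1), with a mean-shifted normal as the IS density (a shift of 0 reproduces crude MC).

```python
# Toy IS demonstration: same estimator, two sampling densities.
# The likelihood ratio f/g for a mean shift mu is exp(-mu*z + mu^2/2).
import math, random

def estimate(n, shift, seed=7):
    """IS estimate of P(Z > 4), sampling Z ~ N(shift, 1).
    Returns (estimate, per-sample variance of the weighted indicator)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)
        w = math.exp(-shift * z + 0.5 * shift * shift)  # ratio f(z)/g(z)
        h = w if z > 4.0 else 0.0
        total += h
        total_sq += h * h
    mean = total / n
    return mean, total_sq / n - mean * mean
```

With the shift placed near the rare region, almost every sample contributes a small weighted term instead of almost all samples contributing zero, which is exactly the effect the Gaussian-mixture IS amplifies for multimodal optimal densities.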
Let us proceed to the testing of the variance reduction. In Table 2 we can see the variance for all combinations of the tested confidence levels and portfolios, for the plain (crude) MC simulation, the IS using the normal distribution, and the IS using the Gaussian mixture. The variance is calculated as an empirical value over 1000 simulations consisting of 10^6 samples each.

For a more illustrative view of the achieved variance reduction see Figure 8. The figure shows a comparison of the variance reduction between the standard and the Gaussian mixture approach for all combinations of confidence levels and portfolios. Clearly, the IS using the Gaussian mixture achieves a better variance reduction in every test; this was expected, because the family of normal distributions is a subset of the family of Gaussian mixture distributions.

For an exact comparison of the two IS approaches, see Table 3, which shows the ratios of the variance reduction between the IS using the normal distribution and the IS using the Gaussian mixture.

Table 3. Variance reduction ratio, Gaussian mix./normal dist.

                             Confidence level p
Characteristic  Portf. idx.  0.99995  0.9995  0.995
VaR             1            9.54     4.90    3.65
VaR             2            9.10     6.35    4.26
VaR             3            3.06     1.91    1.28
VaR             4            3.58     2.38    1.35
ES              1            8.37     3.16    2.30
ES              2            7.59     6.34    4.20
ES              3            3.35     1.88    1.28
ES              4            4.11     2.54    1.52

The improvement of the IS by using the Gaussian mixture is given by the presence of systemic
5 CONCLUSION

The objective of this paper was to speed up the multi-factor Merton model MC simulation. This was fully accomplished by the GPU implementation and the IS application.

We presented three different GPU implementations, each better suited for a different purpose. Two of the GPU implementations solve the general multi-factor Merton model with a speed-up over the serial model in the range of 19x to 287x, depending on the structure of the portfolio; see Section 4.1. The third GPU implementation is specialized, taking input in the form of the structure described in Section 1.2. This implementation achieves a speed-up in the range of 209x to 1001x, depending on the portfolio structure.

For the IS we proposed a new approach using the Gaussian mixture distribution. Using this approach we achieved a significant variance reduction improvement for certain portfolio structures; see Section 4.2.2. In comparison to the standard IS approach we obtained from 9.5x down to 1.3x better results. The total achieved variance reduction was up to 1911x for the ES calculation and up to 737x for the VaR calculation.

The combination of the IS and the GPU implementation can bring a speed-up over the standard serial MC simulation on the order of hundreds of thousands for portfolios with a high dependence on systemic factors.

REFERENCES

Bishop, C.M. (2006). Pattern Recognition and Machine Learning. Springer.
Dubi, A. (2000). Monte Carlo Applications in Systems Engineering. Wiley.
Glasserman, P. & Li, J. (2005). Importance sampling for portfolio credit risk. Management Science 51(11), 1643-1656.
Kroese, D.P., Taimre, T., & Botev, Z.I. (2013). Handbook of Monte Carlo Methods. John Wiley & Sons.
Kurtz, N. & Song, J. (2013). Cross-entropy-based adaptive importance sampling using Gaussian mixture. Structural Safety 42, 35-44.
Lütkebohmert, E. (2008). Concentration Risk in Credit Portfolios. Springer Science & Business Media.
NVIDIA (2015). CUDA C Best Practices Guide. http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/. Version 7.5.
Redner, R.A. & Walker, H.F. (1984). Mixture densities, maximum likelihood and the EM algorithm. SIAM Review 26(2), 195-239.
Rubinstein, R.Y. & Kroese, D.P. (2011). Simulation and the Monte Carlo Method. John Wiley & Sons.
Rubinstein, R.Y. & Kroese, D.P. (2013). The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer Science & Business Media.
ABSTRACT: The simulation of highly reliable systems is a complex task that leads to a problem of rare event probability quantification. The basic Monte Carlo method is not a sufficiently powerful technique for solving this type of problem; therefore it is necessary to apply more advanced simulation methods. This paper offers an approach based on the importance sampling method, with the distribution parameters estimated via the cross-entropy method in combination with the screening algorithm. This approach is compared with another one, based on the Permutation Monte Carlo method, particularly in terms of the achieved variance reduction. The paper also explains how to apply these simulation methods to systems with independent components that can be represented by the use of the adjacency matrix. A new generalized algorithm for the system function evaluation, which takes into account an asymmetric adjacency matrix, is designed. The proposed simulation method is further parallelized in two ways: on GPU using the CUDA technology and on CPU using the OpenMP library. Both types of implementation are run from the MATLAB environment; the MEX interface is used for calling the C++ subroutines.
It is an unbiased estimator of ℓ, E(ℓ̂_MC) = ℓ; however, this approach is not suitable for highly reliable systems (Kleijnen, Ridder, & Rubinstein 2010). For the variance of ℓ̂_MC we can easily obtain Var(ℓ̂_MC) = σ²/N, where σ² = Var(H(X)). Using the central limit theorem we can determine a 1−α confidence interval for ℓ.

…see for example (Rubinstein & Kroese 2011, Kroese, Taimre, & Botev 2013). The principle of this technique is simple, but it can be difficult to find an appropriate IS pdf which will lead to a massive variance reduction. It is not a rule, but it is usual to select an IS pdf from the same family of distributions as the nominal one.
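The variance formula and the CLT-based confidence interval discussed above can be sketched as follows; this assumes the standard interval mean ± z·S/√N with S the sample standard deviation, a generic construction rather than the paper's exact formula.

```python
import math
import random

def mc_confidence_interval(samples, z=1.96):
    """CLT-based 95% confidence interval for the mean of i.i.d. samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                          # z * S / sqrt(N)
    return mean - half, mean + half

# Estimate p = P(U < 0.3) from indicator samples H(U), U ~ Uniform(0, 1).
rng = random.Random(7)
h = [1.0 if rng.random() < 0.3 else 0.0 for _ in range(10000)]
lo, hi = mc_confidence_interval(h)
print(lo, hi)  # a narrow interval around the true value 0.3
```

The interval half-width shrinks as 1/√N, which is exactly why crude Monte Carlo needs advanced variance-reduction techniques for rare events: σ/ℓ explodes as ℓ → 0.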
The system is considered operational if condition (12), formulated over the adjacency-matrix entries c_{k,i} linking the IN and OUT nodes, holds, and failed otherwise.

This representation is suitable, for example, for the reliability networks studied in (Kroese, Taimre, & Botev 2013), which are usually represented as undirected graphs with components represented by edges. This reference also describes a simulation method based on the Conditional Monte Carlo designed for reliability networks. This method will be modified for the more general system representation and compared with the simulation method based on the importance sampling.

3.2 System function evaluation algorithm

For the system function evaluation we created Algorithm 2, based on our previous results in (Briš & Domesová 2014). Even though the former algorithm was originally intended for systems with a symmetric adjacency matrix, it can be modified to reflect the more general case, i.e. the asymmetric adjacency matrix.

The IS-based simulation method presented in section 2, in combination with the evaluation of the function H, forms a useful tool for the simulation of systems specified by the adjacency matrix. Its principle is based on the generation of independent samples; therefore we can easily reduce the simulation time using parallel computing.

There are many ways to implement the method. For comfortable work with the simulation results in the form of graphs it is convenient to use the Matlab environment. However, to reduce the computing time it is better to focus on lower-level programming languages. It is possible to combine the advantages of both approaches: to use Matlab for the user-friendly work with results and to implement the most important algorithms in other languages. The MEX interface of Matlab allows functions written in C or C++ to be called from Matlab as easily as if they were usual Matlab functions.

After consideration of possible solutions, the two following alternatives of Matlab implementation acceleration were chosen:

1. parallel computing on CPU using the OpenMP library (via the MEX interface),
2. parallel computing on GPU using the CUDA technology (via the Parallel Computing Toolbox).

In the first alternative the source codes of the accelerated functions are written in the C++ language, and the Boost library, version 1.56, is used for random number generation. The second alternative uses source codes written in the CUDA C extension of the C language; random numbers are generated via the cuRAND library (NVIDIA 2014).
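The role of a system function H over an adjacency-matrix representation can be illustrated with a plain reachability check. This is a generic sketch, not the paper's Algorithm 2: the system is up when the OUT node is reachable from the IN node through currently functioning components, and an asymmetric (directed) adjacency matrix is handled naturally by following edge directions.

```python
from collections import deque

def system_works(adj, state, src, dst):
    """BFS reachability on a directed adjacency matrix.

    adj[i][j] -- component index of the edge i -> j, or None if absent
    state[c]  -- True if component c is currently functioning
    Returns True if dst is reachable from src over functioning edges.
    """
    n = len(adj)
    seen = [False] * n
    seen[src] = True
    queue = deque([src])
    while queue:
        i = queue.popleft()
        if i == dst:
            return True
        for j in range(n):
            c = adj[i][j]
            if c is not None and state[c] and not seen[j]:
                seen[j] = True
                queue.append(j)
    return False

# Asymmetric example: edge 0->1 (component 0), 1->2 (1), 0->2 (2);
# the reverse directions do not exist.
adj = [[None, 0, 2],
       [None, None, 1],
       [None, None, None]]
print(system_works(adj, [True, True, False], 0, 2))   # True via 0->1->2
print(system_works(adj, [False, True, True], 0, 2))   # True via 0->2
print(system_works(adj, [False, True, False], 0, 2))  # False
```

Because every Monte Carlo sample of the component states is evaluated independently by such a function, the evaluation loop parallelizes trivially, which is what the OpenMP and CUDA alternatives above exploit.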
Xuan-Xinh Nguyen
Wireless Communications Research Group, Ton Duc Thang University, Ho Chi Minh City, Vietnam
Dinh-Thuan Do
Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
ABSTRACT: The issue of energy consumption in wireless communication systems has attracted much attention from researchers in recent years. The problem of effective energy consumption for cellular networks has become an important key in the system design process. This paper proposes a new protocol for power transfer, named the Time Switching Aware Channel protocol (TSAC), in which the system can use awareness of the channel gain to adjust the proper time for the power transfer. This paper also investigates the throughput optimization problem of energy consumption in a Decode-and-Forward (DF) based cooperative network approach, in which an allocated relay power transmission is proposed. By assuming that the signal at a relay node is decoded correctly when there is no outage, the optimal throughput efficiency of the system is analytically evaluated.
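The decode-and-forward outage logic assumed in the abstract (the relay decodes correctly exactly when it is not in outage) can be sketched with a two-hop simulation. The Rayleigh-fading model and all rate and SNR values here are illustrative assumptions, not the paper's TSAC parameters.

```python
import random

def df_outage_prob(snr1, snr2, rate, n, rng):
    """Monte Carlo outage probability of a two-hop DF link.

    Each hop sees Rayleigh fading (exponential channel power gain, unit
    mean); DF is in outage when either hop's capacity drops below the
    target rate, i.e. when min(snr1*g1, snr2*g2) < 2**rate - 1.
    """
    thr = 2.0 ** rate - 1.0
    outages = 0
    for _ in range(n):
        g1 = rng.expovariate(1.0)
        g2 = rng.expovariate(1.0)
        if min(snr1 * g1, snr2 * g2) < thr:
            outages += 1
    return outages / n

rng = random.Random(3)
p = df_outage_prob(snr1=100.0, snr2=100.0, rate=1.0, n=50000, rng=rng)
# closed form for this toy model: 1 - exp(-thr/snr1 - thr/snr2), about 0.0198
print(p)
```

The throughput efficiency studied in the paper is then proportional to (1 − p) times the fraction of the block left for information transfer after the power-transfer slot.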
Figure 2. Energy harvesting system model.

y_{R,i} = √(P_S / d_1^m) · h_i · x_i + n_{R,i}    (6)

where γ_{R,i} and γ_{D,i} are the performances of the first and the second hop, given by (3) and (4), respectively.

2.2 Energy harvesting protocol

In the Wireless Power Transfer (WPT) time slot, we focus on the performance of the scheme that uses the new protocol. For the channel gain, the source and relay nodes have to obtain the CSI. At the beginning of each block (e.g. block time i), the CSI acquisition is achieved in two steps. In the first step, the source transmits its pilot signal to the relay, and the relay estimates h_i. In the second step, the relay R feeds back h_i to the source node, S. In order to reduce the feedback overhead, the relay can feed back a quantized version to the source.
3 THROUGHPUT ANALYSIS

In this section, the throughput and the outage probability of the proposed protocol are analyzed. In this mode, an outage occurs when the system performance, i.e. γ_{e2e,i}, drops below the threshold value γ_0, defined as γ_0 = 2^{R_c} − 1, where R_c is the transmission rate of the system. The expression for the outage probability can thus be obtained as

P_OP^i = Pr{ γ_{e2e,i} < γ_0 }    (10)

Because γ_{R,i} is independent of γ_{D,i}, the outage probability can be rewritten as

P_OP^i = 1 − Pr{ γ_{R,i} > γ_0 } · Pr{ γ_{D,i} > γ_0 }    (11)

In this transmission mode, i.e. the delay-constrained mode, the throughput efficiency of the system at time slot i is a function of the outage probability and the EH duration (12), and the average throughput efficiency of the system is

τ = E{ τ_i }    (13)

The optimal relay power is determined by solving

P_R^opt = arg max_{P_R ≥ 0} { τ(P_R) }    (14)

where Ei(·) is the exponential integral function, eq. 8.211 in (Jeffrey and Zwillinger, 2007).

Proof: Proposition 1 can be derived as follows. Substituting (9) and (11) into (12), the average throughput of the system can be expressed as the product of two expectation terms (16). The first term can be given by

t_1 = E_{g_i}{ 1{ γ_{D,i} > γ_0 } } = exp( −γ_0 d_2^m σ_D² / (λ_g P_R) )    (17)

and the second term is an integral over the first-hop channel gain that evaluates, via eq. 8.211 of (Jeffrey and Zwillinger, 2007), to an expression involving the exponential integral function.
4 NUMERICAL RESULTS
Tam Nguyen Kieu, Tuan Nguyen Hoang, Thu-Quyen T. Nguyen & Hung Ha Duy
Ton Duc Thang University, Ho Chi Minh City, Vietnam
D.-T. Do
Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
M. Voznak
VSB-Technical University of Ostrava, Ostrava, Czech Republic

ABSTRACT: In this paper, we compare the impact of some relay parameters on two relaying schemes: Amplify-and-Forward (AF) and Decode-and-Forward (DF) in full-duplex cooperative networks. In particular, closed-form expressions for the outage probability and the throughput of the system are derived. Furthermore, we evaluate the dependence of the system performance, in terms of the outage probability and throughput, on the noise at the nodes, the transmission distance and the relay transmission power.
Figure 2. The parameters of the TSR protocol.

The information process is separated into two stages. First, energy is transferred from the source to the relay within a duration of αT (0 ≤ α ≤ 1). The remaining time, (1 − α)T, is employed to convey information, where α is the time switching coefficient and T is the duration of one signal block.

During the energy harvesting phase, the received signal at the relay node can be expressed as

y_R = √(P_S / l_1^m) · g_1 x_S + n_R    (1)

where P_S is the source transmission power and m is the path loss exponent. In this work, we assume a normalized path loss model in order to show the path loss degradation effects on the system performance. For simplicity, n_R and n_d are zero-mean Additive White Gaussian Noise (AWGN) with variance 1.

Regarding the wirelessly received power, the harvested energy at the relay is given by

E_h = η α T P_S |g_1|² / l_1^m    (2)

where η is the energy conversion efficiency.

For the information transfer phase, assume that the source node transmits the signal x_S to R and R forwards the signal x_R to the destination node. Both signals have unit energy and zero mean, i.e., E[|x_i|²] = 1 and E[x_i] = 0 for i ∈ {S, R}. Therefore, the signal received at the relay under a self-interference source is rewritten with the amplification factor

K⁻¹ = √( (P_S / l_1^m)|g_1|² + P_R |g_r|² + σ_I² )    (4)

and the relay transmit signal

x_R(i) = K y_R[i] with AF;   x_R(i) = √(P_R/P_S) x_S[i − τ] with DF    (5)

where τ accounts for the time delay introduced by relay processing.

The harvested power then assists the transmission in the next stage; P_R is given by

P_R = E_h / ((1 − α)T) = κ P_S |g_1|² / l_1^m    (6)

where κ is defined as ηα/(1 − α).

Therefore, the received signal at the destination is given by

y_d(k) = √(P_R) (g_2 / √(l_2^m)) x_r[k] + n_d[k]    (7)

With AF, we have

y_D(k) = K√(P_R) (g_1 g_2 √(P_S) / √(l_1^m l_2^m)) x_S   [signal]
       + K√(P_R) (g_2 / √(l_2^m)) g_r x_R                 [RSI]
       + K√(P_R) (g_2 / √(l_2^m)) n_R + n_D               [noise]    (8)

With DF we obtain

y_D(t) = √(P_R) (g_2 / √(l_2^m)) x_S(t) + n_d(t)    (9)
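The energy bookkeeping in (2) and (6) can be checked with a small numeric sketch; all parameter values below are arbitrary illustrations, not values used in the paper.

```python
def harvested_relay_power(eta, alpha, p_s, g1_sq, l1, m):
    """Relay transmit power from the TSR energy budget:
    E_h = eta * alpha * T * P_S * |g1|^2 / l1^m is harvested during
    alpha*T and spent over the remaining (1 - alpha)*T, giving
    P_R = kappa * P_S * |g1|^2 / l1^m with kappa = eta*alpha/(1-alpha)."""
    kappa = eta * alpha / (1.0 - alpha)
    return kappa * p_s * g1_sq / (l1 ** m)

p_r = harvested_relay_power(eta=0.8, alpha=0.3, p_s=1.0, g1_sq=1.0,
                            l1=2.0, m=2.7)
print(p_r)
```

Note that the block time T cancels out of (6), which is why P_R depends only on the ratio κ = ηα/(1 − α) and the first-hop channel.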
The end-to-end signal-to-interference-plus-noise ratio of the DF scheme is the minimum of the SINRs of the two hops, γ_DF = min{γ_1, γ_2} (17), formed from P_S, l_1^m, l_2^m and the interference variance σ_I². The corresponding throughput follows from the outage probability and the effective information time (1 − α)T as in (18).
4 SIMULATION RESULTS
ABSTRACT: A wireless network using a relay node to harvest energy and process information simultaneously is considered in this paper. The relay node uses the energy harvested from the source signal, then amplifies and forwards that signal to the destination node. Based on two receiver architectures, namely time switching and power switching, this paper introduces stochastic models for the analysis of the Time Switching-based Relaying protocol (TSR) and the Time Power Switching-based Receiver (TPSR), respectively. To determine the throughput at the destination node, the analytical expression for the outage probability is derived for the delay-limited transmission mode. The numerical results confirm the effect of some system parameters on the optimal throughput at the destination node, such as the time fraction for energy harvesting, the power splitting ratio, the source transmission rate, the noise power, and the energy harvesting efficiency. In particular, we compare the throughput at the destination node between the TSR protocol and an ideal receiver, and between the TSR protocol and the TPSR receiver, for the delay-limited transmission mode.
…by D), through a relay node (denoted by R). The distances from the source to the relay node and from the relay node to the destination node are denoted by l_1 and l_2, respectively. The channel gains from the source to the relay node and from the relay node to the destination node are denoted by h and g, respectively.

Based on the system model and the time switching receiver architecture, this paper introduces the time switching-based relaying and the time power switching-based relaying receiver for energy harvesting and information processing from source to destination at the relay node in the delay-limited transmission mode. The delay-limited transmission mode means that the destination node has to decode the received information block-by-block and, as a result, the code length cannot be larger than the transmission block time (Liu, Zhang & Chua 2012). The figure of merit for the system under study is the throughput at the destination node, which is defined as the number of bits successfully decoded per unit time per unit bandwidth.

3 TIME SWITCHING-BASED RELAYING (TSR) PROTOCOL

The time switching-based protocol (TSR) for energy harvesting and information processing at the relay node can be seen in Figure 2. In Figure 2, T is the block time used to transmit information from S to D and α is the fraction of T in which the relay harvests energy. The received signal at the relay node is

y_r(t) = √(P_s / l_1^m) · h · s(t) + n_a^[r](t)    (1)

where h is the channel gain from the source to the relay node, l_1 is the distance from the source node to the relay node, P_s is the power transmitted from the source, m is the path loss exponent, and s(t) is the normalized information signal from the source node.

The energy harvested at the relay node, denoted by E_r, is defined by:

E_r = η P_s |h|² α T / l_1^m    (2)

where η is the energy conversion efficiency and 0 < η ≤ 1.

After harvesting energy from y_r(t), the information receiver converts y_r(t) to a baseband signal and processes it; this introduces an additive noise due to the conversion from the RF signal to the baseband signal, denoted as n_c^[r](t). The sampled baseband signal at the relay node after conversion is given by:

y_r(k) = √(P_s / l_1^m) · h · s(k) + n_{a,r}(k) + n_{c,r}(k)    (3)

where k is the index value, s(k) is the sampled normalized signal from the source, and n_{a,r}(k) is the AWGN noise introduced by the receiving antenna at the relay node.
The received signal at the destination node is

y_d(k) = g √(P_r l_1^m) y_r(k) / ( √(l_2^m) √(P_s|h|² + l_1^m(σ_{a,r}² + σ_{c,r}²)) ) + n_{a,d}(k) + n_{c,d}(k)    (6)

where n_{a,d}(k) and n_{c,d}(k) are the AWGN noises from R to D. By substituting y_r(k) from (3) into (6), we have:

y_d(k) = √(P_r P_s) h g s(k) / ( √(l_2^m) √(P_s|h|² + l_1^m σ_r²) ) + √(P_r l_1^m) g r(k) / ( √(l_2^m) √(P_s|h|² + l_1^m σ_r²) ) + d(k)    (7)

where r(k) is defined as r(k) ≜ n_{a,r}(k) + n_{c,r}(k) and d(k) is defined as d(k) ≜ n_{a,d}(k) + n_{c,d}(k), the overall AWGN noises at the relay and the destination node, respectively, and σ_r² ≜ σ_{a,r}² + σ_{c,r}² is the overall noise variance at the relay node.

From (2), we can calculate the power transmitted from the relay node as:

P_r = E_r / ((1 − α)T/2) = 2 η P_s |h|² α / ( l_1^m (1 − α) )    (8)

The resulting SNR at the destination node is

γ_D = 2ηα P_s² |h|⁴ |g|² / ( P_s |h|² l_1^m l_2^m σ_d² (1 − α) + l_1^{2m} l_2^m σ_r² σ_d² (1 − α) + 2ηα P_s l_1^m σ_r² |h|² |g|² )    (10)

where σ_d² ≜ σ_{d,a}² + σ_{d,c}².

The throughput at the destination node, denoted by τ, is determined by evaluating the outage probability, denoted as p_out, given a constant transmission rate from the source node of R bits/sec/Hz, R = log_2(1 + γ_0), where γ_0 is the threshold value of the SNR for data detection at the destination node. The outage probability at the destination node for the TSR protocol is given by:

p_out = p( γ_D < γ_0 )    (11)

where γ_0 = 2^R − 1. The outage probability at the destination node can be expressed analytically by:

p_out = 1 − (1/λ_h) ∫_{z=d/c}^{∞} exp( −(a z + b) / ((c z² − d z) λ_g) − z/λ_h ) dz    (12a)

≈ 1 − e^{−d/(c λ_h)} · u · K_1(u)    (12b)

where:
c = 2ηα P_s²    (13c)

d = 2ηα γ_0 P_s l_1^m σ_r²    (13d)

u = √( 4a / (c λ_h λ_g) )    (13e)

λ_h is the mean value of |h|², λ_g is the mean value of |g|², and K_1(·) is the first-order modified Bessel function (Gradshteyn & Ryzhik 1980). Finally, the throughput at the destination node is given by:

τ = (1 − p_out) R ((1 − α)T/2) / T = (1 − p_out) R (1 − α)/2    (14)

This is based on the fact that the transmission rate from the source is R bits/sec/Hz and (1 − α)/2 is the effective fraction of the block time used to transmit information from the source node to the destination node. The throughput depends on P_s, η, α, l_1, l_2, R, σ_r² and σ_d².

The following is the demonstration (Ali Nasir, Xiangyun Zhou, Salman Durrani & Rodney Kennedy 2013) for equations (12) and (13). Substituting the value of the SNR in (10) into (11), we get:

p_out = p( 2ηα P_s² |h|⁴ |g|² < γ_0 [ P_s |h|² l_1^m l_2^m σ_d² (1 − α) + l_1^{2m} l_2^m σ_r² σ_d² (1 − α) + 2ηα P_s l_1^m σ_r² |h|² |g|² ] )
      = p( |g|² < (a|h|² + b) / (c|h|⁴ − d|h|²) )    (A1)

where a ≜ γ_0 (1 − α) P_s l_1^m l_2^m σ_d², b ≜ γ_0 (1 − α) l_1^{2m} l_2^m σ_r² σ_d², and c and d are given in (13c) and (13d). Considering the sign of the denominator,

p_out = p( |g|² < (a|h|² + b) / (c|h|⁴ − d|h|²) ) for |h|² > d/c;   p_out = 1 for |h|² ≤ d/c    (A2)

The second leg in (A2) is due to the fact that if |h|² ≤ d/c, then c|h|⁴ − d|h|² is a negative number, and the probability of |g|² being greater than a negative number is always 1. Because of (A2), p_out is given by:

p_out = ∫_{z=0}^{d/c} f_h(z) dz + ∫_{z=d/c}^{∞} f_h(z) p( |g|² < (a z + b)/(c z² − d z) ) dz
      = ∫_{z=0}^{d/c} f_h(z) dz + ∫_{z=d/c}^{∞} f_h(z) ( 1 − e^{−(a z + b)/((c z² − d z) λ_g)} ) dz    (A3)

where f_h(·) is the PDF of |h|².
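The outage integral (12a) and the throughput (14) can be evaluated numerically. In the sketch below the a, b, c, d values are arbitrary positive constants standing in for the definitions in (13), the channel means are set to 1, and the infinite integral is truncated at a finite upper limit; it is an illustration of the structure of (12a), not a reproduction of the paper's numerical results.

```python
import math

def outage_12a(a, b, c, d, lam_h, lam_g, upper=200.0, steps=200000):
    """Numerical evaluation of the outage integral (12a):
    p_out = 1 - (1/lam_h) * integral_{d/c}^{inf}
            exp(-(a*z+b)/((c*z*z-d*z)*lam_g) - z/lam_h) dz,
    truncated at `upper` and computed with the midpoint rule."""
    lo = d / c
    h = (upper - lo) / steps
    total = 0.0
    for k in range(steps):
        z = lo + (k + 0.5) * h
        expo = -(a * z + b) / ((c * z * z - d * z) * lam_g) - z / lam_h
        total += math.exp(expo)
    return 1.0 - total * h / lam_h

def throughput_14(p_out, rate, alpha):
    """Delay-limited throughput (14): tau = (1 - p_out)*rate*(1 - alpha)/2."""
    return (1.0 - p_out) * rate * (1.0 - alpha) / 2.0

p = outage_12a(a=1.0, b=0.5, c=2.0, d=0.5, lam_h=1.0, lam_g=1.0)
tau = throughput_14(p, rate=1.0, alpha=0.3)
print(p, tau)
```

The integrand vanishes at the lower endpoint z = d/c (the exponent diverges to minus infinity there), matching the second leg of (A2), so the midpoint rule behaves well without special handling.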
ABSTRACT: Harvesting energy from Radio-Frequency (RF) signals is an emerging solution for prolonging the lifetime of wireless networks in which the relay node is energy-constrained. In this paper, an interference-aided energy harvesting scheme is proposed for cooperative relaying systems, where the relay harvests energy from the signals transmitted by the source and from co-channel interference, and then consumes that energy to forward the information signal to the destination. A Time Switching-based Relaying protocol (TSR) is proposed to allow energy harvesting and information processing at the relay. Applying the proposed approach to an amplify-and-forward relaying system with the three-terminal model (the source, the relay and the destination), approximate analytical results for the outage probability, expressed in closed form, are derived to analyze the performance of the system. Furthermore, the ergodic capacity, expressed in integral form, is derived in order to determine the achievable throughput. In addition, the achievable throughput of the system is investigated.
E_h = η_e ( P_S |h_S|² + Σ_{i=1}^{M} P_i |l_i|² ) α_r T    (3)

where η_e, with 0 < η_e ≤ 1, is the energy conversion efficiency, whose value depends upon the harvesting circuitry, and P_S = E{|s(t)|²} and P_i = E{|s_i(t)|²} are the transmit powers of the source and the interference sources, respectively.

The transmit power of the relay is

P_R = E_h / ((1 − α_r)T/2) = ( 2 η_e α_r / (1 − α_r) ) ( P_S |h_S|² + Σ_{i=1}^{M} P_i |l_i|² )    (4)

Before forwarding y_R^TSR(k) to D, the relay amplifies the received signal by multiplying it with the gain G, which can be expressed as:

G = √( P_R / ( P_S |h_S|² + Σ_{i=1}^{M} P_i |l_i|² + N_R ) )    (5)

where N_R ≜ N_a^[R] + N_c^[R] is the variance of the overall AWGN at the relay.

Hence, the received signal at the destination node after the sampling process, y_D^TSR(k), is given by

y_D^TSR(k) = y_R^TSR(k) h_D G + n_a^[D](k) + n_c^[D](k)    (6)

where n_a^[D](k) and n_c^[D](k) are the AWGNs at the destination node due to the antenna and the conversion, both with zero mean and variances N_a^[D] and N_c^[D], respectively, and n_D^TSR(k) ≜ n_a^[D](k) + n_c^[D](k) is the overall AWGN at the destination. By substituting y_R^TSR(k) from (2) into (6), y_D^TSR(k) is given by

y_D^TSR(k) = h_S s(k) h_D G + Σ_{i=1}^{M} l_i s_i(k) h_D G + n_R^TSR(k) h_D G + n_D^TSR(k)    (7)

As a result, the SINR of the decision statistic, γ_{S2D}^TSR, is given by (13) (see next page).

3.2 Outage probability

In this paper, the outage probability is defined as the probability that γ_{S2D}^TSR drops below an acceptable threshold γ_th:

P_outage = P( γ_{S2D}^TSR < γ_th ) = F_{γ_{S2D}^TSR}( γ_th )    (8)

At high SNR, the outage probability at the destination node is approximately given by a closed-form expression (9), which combines the exponential factor exp(−γ_th N_{R/D} / γ_1), the Bessel-function factor 2√(γ_th/(γ_1 κ_g)) K_1( 2√(γ_th/(γ_1 κ_g)) ), and a double sum over the characteristic coefficients χ_{i,j}(A) of the interference matrix, where A ≜ diag(μ_1, …, μ_M) with μ_i ≜ P_i λ_i, ρ(A) is the number of distinct diagonal elements of A, μ_⟨1⟩ > μ_⟨2⟩ > … > μ_⟨ρ(A)⟩ are the distinct diagonal elements in decreasing order, τ_i(A) is the multiplicity of μ_⟨i⟩, and χ_{i,j}(A) is the (i, j)th characteristic coefficient of A (Gu & Aissa 2014).

When the interfering signals are statistically independent and identically distributed (i.i.d.), i.e., μ_i = μ for i = 1, 2, …, M, then ρ(A) = 1 and τ_1(A) = M, and the outage probability P_outage reduces to the corresponding single-branch expression (10), where

γ_1 = P_S λ_S / N_D    (10a)

κ_g = 2 η_e α_r λ_D / (1 − α_r)    (10b)

N_{R/D} = N_R / N_D    (10c)

and γ_1 is defined as the average signal-to-noise ratio (SNR). Proof: see Appendix A.

3.3 Ergodic capacity and the achievable throughput

The second parameter used to evaluate the performance of the cooperative network is the ergodic capacity.
The end-to-end SINR in (13) is the ratio of the desired-signal term 2 η_e α_r |h_D|² P_S |h_S|² ( P_S |h_S|² + Σ_{i=1}^{M} P_i |l_i|² ) to the aggregate interference-plus-noise terms, which involve the interference power Σ_{i=1}^{M} P_i |l_i|², the noise variances N_R and N_D, and the factor (1 − α_r).    (13)

Using this method, the expression in (13) can be rewritten, and the achievable throughput follows from the ergodic capacity C_E as

τ_E = ( (1 − α_r)T/2 ) / T · C_E = ( (1 − α_r)/2 ) C_E    (16)

Unless otherwise stated, the variances are assumed to be identical, that is, N_a^[R] = N_c^[R] = N_a^[D] = N_c^[D].
In this paper, an interference-aided energy harvesting amplify-and-forward relaying system was proposed, where the energy-constrained relay harvests energy from the received information signal and the Co-Channel Interference (CCI) signals, then uses that harvested energy to forward the amplified signal to the destination. The time switching-based relaying protocol was adopted here for circuit simplicity. The achievable throughput of the system is numerically derived from the ergodic capacity and analytically derived from the outage capacity. The outage probability is calculated approximately at high SNR for simplicity.

APPENDIX A

The CDF of the end-to-end SINR is obtained by conditioning on the relay-destination gain z and the aggregate interference power y and averaging over their densities:

F_{γ_S2D}(γ_th) = ∫_0^∞ ∫_0^∞ P( γ_S2D < γ_th | y, z ) f_g(z) f_INF(y) dy dz    (A.2)

where f_g(z) and f_INF(y) denote the probability density functions (PDFs) of g and INF, respectively. The PDF of INF is given by (for details on this analysis, see Bletsas, H. Shin, and M. Z. Win (2007)):

f_INF(y) = Σ_{i=1}^{ρ(A)} Σ_{j=1}^{τ_i(A)} χ_{i,j}(A) · y^{j−1} / ( (j−1)! μ_⟨i⟩^j ) · exp( −y / μ_⟨i⟩ )    (A.3)
A. Aissani
Department of Computer Science, University of Science and Technology Houari Boumediene (USTHB),
El Alia, Bab-Ez-Zouar, Algiers, Algeria
ABSTRACT: The purpose of this paper is to provide a method for finding the probability distribution of the virtual waiting time of a customer in a closed queueing network with two stations in tandem and unreliable servers. We obtain the joint probability distribution of the server state (busy or out of order) and the residual work in each station. Then we derive the probability distribution of the virtual waiting time of a customer in the network in terms of its Laplace-Stieltjes transform. These results are useful for deriving performance metrics such as the utilization or the load of a central node (or base station) in physical networks such as mobile or wireless sensor networks (WSN), databases and other telecommunication or computer systems.
1 INTRODUCTION
Here ν_1(t) is the number of unavailable servers in the system S_2 and G_1^(k)(·) is the k-th order convolution of the function G_1(·). These equations can be obtained in the usual way (Gnedenko & Kovalenko 1989). The idea is to observe the evolution of the basic stochastic process during an infinitesimally small interval of time (t, t + h). The random event {e(t + h), ν(t + h) = 1, ξ(t + h) < x}, whose probability is F_01^(0)(x, t + h), occurs if and only if one of the following events holds. This is a simple application of the well-known formula of total probabilities. Now, we can divide both sides of the above equation by h and take the limit as h → 0; so we obtain the first equation. The other equations can be obtained by the same physical arguments.

Introduce the Laplace transforms and the boundary constants

φ_ijk(s) = ∫_0^∞ e^(−sx) F_ijk(x) dx,   a_ijk = (dF_ijk/dx)(0),   b_ijk = F_ijk(0).

Applying the Laplace transform to the system of Section 4, we obtain explicit expressions for φ_01^(0)(s) and φ_11^(0)(s) as rational functions of s that share a common denominator built from the rates λ_2, β_2 and γ_2.
Since the functions φ_01^(0)(·) and φ_11^(0)(·) are Laplace transforms of probability distributions, they are analytic for Re(s) > 0; hence their numerators must vanish at the positive root s_0 of the common denominator, which yields one linear condition on the unknown constants. Similarly, since the denominator of the function φ_11^(1)(·) vanishes at s = γ_2, we obtain a second linear condition.

Next, consider the auxiliary function f(s) constructed from λ_2, β_2 and g_1. It is not difficult to see that f(0) < 0 and lim_{s→∞} f(s) = +∞. So the function f(s) has at least one root in the domain Re(s) > 0, say s_1. Consequently, we have a system of four equations linking the constants a_01^(0), a_11^(0), a_01^(1) and a_11^(1) through r_1(s_1), g_1(s_1) and the boundary constants b_01^(0) and b_00^(1).

Consider now the linear system of algebraic equations, which can be written in the form Ax = b, where

x = (a_01^(0), a_11^(0), a_01^(1), a_11^(1)),   b = (0, 0, b_1, b_2),

the right-hand sides b_1 and b_2 are expressed through b_00^(1), b_01^(0), g_1 and r_1, and the matrix A has the following form.

6 VIRTUAL WAITING TIME

Now we are able to derive the distribution of the virtual waiting time of an arbitrary request in the station S_1. Denote by w(t) the virtual waiting time of such a request, i.e. the waiting time of a request which would arrive at time t. It is the period between the time t and the departure of all requests that arrived before t. If the server is available and free of requests, then w(t) = 0.

Also denote by F(x) = lim_{t→∞} P{w(t) < x} the limiting probability distribution of the virtual waiting time and by

ω(s) = lim_{t→∞} ∫_0^∞ e^(−sx) dP{w(t) < x}

its Laplace-Stieltjes transform.

The structure of such a stochastic process is the following. Let t_1 < t_2 < t_3 < … be the instants of requests of pure service (regular ones) and/or impure service (renewal of component failures, virus elimination, etc.). Then for t_n ≤ t < t_{n+1} the process {w(t)} can be defined as

w(t) = 0, if w(t_n) ≤ t − t_n;
w(t) = w(t_n) − (t − t_n), if w(t_n) > t − t_n.

For t = t_n, we have w(t_n) = w(t_n − 0) + η_n, where η_n is the service time of a regular customer and/or the renewal period of an interruption (due to a physical breakdown or a computer attack) which occurred at time t_n. Moreover, we assume the initial condition w(0) = 0. The process {w(t)} has stepwise linearly decreasing paths, as shown in Figure 2.

We have derived in the previous section the joint distribution {F_ij^(k)(x)} of the server states in S_1, S_2 and the residual work in the stationary regime.
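The stepwise linearly decreasing structure of the virtual waiting time process can also be simulated directly. The sketch below uses Poisson arrivals and exponential service times as illustrative choices (a plain M/M/1-type workload without breakdowns, unlike the unreliable-server model of the paper) and estimates F(0+), the probability that the virtual waiting time is zero, which is the quantity entering the utilization metric U = 1 − F(0+) used in the numerical illustrations.

```python
import random

def simulate_virtual_waiting(lam, mu, t_end, rng, probe_step=0.1):
    """Simulate the virtual waiting time w(t) of an M/M/1-type queue.

    Between arrivals w(t) decreases linearly at rate 1 (and sticks at 0);
    at each arrival instant t_n it jumps by the service time eta_n.
    Returns the fraction of probe points with w(t) == 0, estimating F(0+).
    """
    t, w = 0.0, 0.0
    zeros = probes = 0
    next_probe = 0.0
    while t < t_end:
        gap = rng.expovariate(lam)              # time to next arrival
        while next_probe < t + gap and next_probe < t_end:
            probes += 1
            if w <= next_probe - t:             # workload emptied before probe
                zeros += 1
            next_probe += probe_step
        w = max(0.0, w - gap) + rng.expovariate(mu)  # jump at the arrival
        t += gap
    return zeros / probes

rng = random.Random(5)
f0 = simulate_virtual_waiting(lam=0.5, mu=1.0, t_end=100000.0, rng=rng)
# For M/M/1 the long-run idle fraction is 1 - rho = 0.5, so U ~ 0.5
print(f0, 1.0 - f0)
```

Such a simulation gives a quick sanity check for the transform-based results: inverting ω(s) and evaluating at 0+ should reproduce the simulated idle fraction.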
Figure 4. Effect of δ on the utilization U.

The Laplace-Stieltjes transform ω(s) of the virtual waiting time is assembled in closed form (2) from the transforms φ_ij^(k)(s), the constants a_ij^(k) and b_ij^(k), and the functions r_1(s) and g_1(s) derived above. In the above computations we need to take into account the normalization condition.

7 APPLICATIONS AND NUMERICAL ILLUSTRATIONS

In this section, we give an application of the above results with some numerical illustrations. An interesting performance metric is the utilization of the central node (base station) S_1 against the rest of the network, U = 1 − F(0+). This metric is plotted as a function of N in Figure 3 for the following cases:

1. β_1 = 0.5: short dashed line;
2. β_1 = 2: long dashed line;
3. β_1 = 10: gray level line;

where β_1 is the breakdown rate in the central node S_1. For this experiment, the remaining rate parameters were fixed at the values 1, 2, 1, 1 and 2, respectively. We see how the utilization of the central node increases as N increases and as the corresponding rate of station S_2 decreases.

Denote by m_i the total sojourn time of a customer in the node S_i (i = 1, 2). Figure 4 shows the effect of the ratio δ = m_1/m_2 on the utilization U for different values of N:

1. N = 10: short dashed line;
2. N = 5: long dashed line;
3. N = 15: gray level line.

For this experiment we take the same numerical values, while the breakdown rate in the central…
ABSTRACT: Biogas represents an alternative source of energy with versatile utilization. This article summarizes information about the production and properties of biogas, storage possibilities and the utilization of biogas. For the assessment of the risks of biogas, major-accident scenarios were established. These scenarios were complemented by an analysis of the hazardous manifestations of biogas, namely fire and explosion, because biogas is formed mainly from methane, which is a highly flammable and explosive gas. The Fire & Explosion Index and the modelling program ALOHA were used for the analysis of methane.
Figure 1. Overview of the number of biogas plants in Europe, according to the EBA (EBA 2015).
Table 3. Representation of the gases in biogas and biomethane (Derychova 2014, Peterson et al. 2009).

Component          Raw biogas    Upgraded biogas
Methane            40-75 vol.%   95-99 vol.%
Carbon dioxide     25-55 vol.%   5 vol.%
Water vapor        0-10 vol.%    -
Nitrogen           0-5 vol.%     2 vol.%
Oxygen             0-2 vol.%     0.5 vol.%
Hydrogen           0-1 vol.%     0.1 vol.%
Ammonia            0-1 vol.%     3 mg/m3
Hydrogen sulfide   0-1 vol.%     5 mg/m3

…for short and long term storage. Figure 2 shows various types of biogas tanks.

The utilization of biogas energy is versatile. Biogas can be used to produce heat, cooling and electricity; it can further be used for cogeneration and trigeneration (not often used). For the use of biogas in transport and for distribution into the natural gas grid it is necessary to upgrade the biogas. Publications (Petersson et al. 2009) present various types of biogas purification. Biogas plants utilize about 20-40% of the produced heat to heat up the digesters (process heat); the other 60-80% of the heat is called waste heat and is further used for additional electricity production (Rutz et al. 2012).

Most often, biogas is converted directly into electricity or heat in cogeneration units at biogas plants. Surplus electricity can be delivered into the electric power grid, distributed through pipelines as heat or gas, or transported by road.

It is generally known that biogas is an unbreathable gas. The density of biogas is approximately 1.2 kg/m3. It is slightly lighter than air, which means that biogas rises rapidly and mixes with air (Derychova 2015). Biogas is a flammable gas which is, under certain conditions, also explosive. The conditions for an explosion, which must all be met, are a concentration of the explosive gas between the lower and upper explosive limits (biogas consisting of 65% methane and 35% carbon dioxide has explosive limits of 6-12 vol.%), the presence of the explosive mixture in an enclosed space, and reaching the ignition temperature (for biogas it is 620-700°C). Below the lower explosion limit the biogas does not ignite, and above the upper explosion limit biogas can only burn with a flame (Schulz et al. 2001). The critical pressure of biogas is in the range of 7.5-8.9 MPa and the critical temperature is -82.5°C. Biogas has a very slow diffusion combustion; the maximum propagation speed of the flame in air is 0.25 m/s, because of the CO2 content (Rutz et al. 2012).

Biogas has the ability to separate into its components (thermodiffusion). Therefore, it is appropriate and necessary to know the properties of the individual components of biogas; these gases have their own characteristic physical-chemical properties. E.g. carbon dioxide is heavier than air (1.53 kg/m3), which is why it sinks and adheres to the ground. Methane, which is lighter than air (0.55 kg/m3), rises into the atmosphere (Delsinne et al. 2010, Rutz et al. 2012).

3 HAZARDOUS MANIFESTATION OF BIOGAS
…the shaft, and was most likely intoxicated by methane. Accidents with casualties have occurred abroad as well. In Germany in 2009 a biogas plant exploded; one worker was killed and two others were injured (Derychova 2015). Graph 1 compares the number of events at biogas stations in the Czech Republic and in Europe from 1995 to 2013 (Casson et al. 2015, Ministry of the Interior 2014). From the graph one can read the events that occurred at biogas stations in the given period: incidents such as leaks, fires, explosions and other events (of unknown cause). The possible scenarios for accidents caused by biogas are identified in the following diagram (Figure 3) (Derychova 2015).

The possible consequences of accidents caused by biogas are heat radiation in the case of fire, a blast (shock) wave with flying fragments in the case of explosion, and the toxic effects of gases scattered into the atmosphere (Derychova 2015).

4 RISK ANALYSIS OF BIOGAS PLANTS

The safety of biogas plants has to be focused on the most commonly occurring risks, which are explosion (fire), leakage (poisoning, suffocation) and environmental pollution. As was mentioned, the composition of biogas depends on the type of biogas…
4.2.2 Scenario 2
The second modelled situation was a methane leakage from a stationary source (the bioreactor) through a short pipe with a diameter of 8 inches. The parameters of the bioreactor and the atmospheric conditions are given in Table 6. The graphical output for the leaking tank, where methane is not burning as it escapes into the atmosphere, can be seen in Figure 5. The potential hazards of this methane leakage are downwind toxic effects, a vapor cloud flash fire, or overpressure from a vapor cloud explosion.

The graph shows two regions with a flammable vapor cloud at different concentrations. In the red dotted area the concentration of methane is about 30,000 ppm; in the yellow area it is 5,000 ppm.

Figure 6. Range of explosion of a mixture of methane and carbon dioxide in air (Schroeder et al. 2014).
P. Kučera
VSB-Technical University of Ostrava, Ostrava, Czech Republic
H. Dvorská
OSH FM, Frydek-Mistek, Czech Republic
ABSTRACT: Many civil facilities commonly include active fire safety systems that help to create favour-
able conditions in the event of a fire. One of these active fire safety systems is equipment that removes
smoke and heat. The article therefore focuses on a variant solution for forced fire ventilation in a concrete
sports hall and the use of mathematical modelling of a fire (Fire Dynamics Simulator) to verify the effec-
tiveness of the designed forced fire ventilation system, including simulations of the logical consequences
of the system under consideration.
60 m × 30 m. The seating area reaches a height of 9.2 m above the ice surface. The maximum dimensions of the skin circumference above the seating area are 95 m × 65 m. The building is roofed with a steel-framed structure. A rendering of the roof cladding and the hall's exterior is shown in Figure 2.

3 DESCRIPTION OF THE EQUIPMENT FOR FORCED SMOKE AND HEAT REMOVAL

According to the requirements for fire safety in construction, the sports hall must be fitted with equipment for removing smoke and heat. The structural system of the building is non-combustible; the structural components are of the DP1 type. The entire building is equipped with an electrical fire alarm system. Permanent fire extinguishing equipment is not considered.

The equipment for forced smoke and heat removal (ZOKT) is designed with a forced gas outlet and a natural air intake.

Fire ventilation uses fire wall fans that correspond to operating temperatures of 200 °C for 120 min, with fire resistance class F200 120.

The forced fire ventilation is designed as a forced self-acting ventilation system according to the requirements of (ČSN 73 0802, 2009) and (ČSN 73 0831, 2011) in connection with (ČSN EN 12101-5, 2008).

3.1 Option 1

For the purpose of fire ventilation, the hall's fire zone areas are divided into four smoke sections. Smoke barriers (partitions) separating the different smoke sections are building structures that meet the requirement of E 15 DP1, i.e. the criterion for smoke barrier properties of D600. Wall fans for forced fire ventilation are installed on the third aboveground floor. Details of the location of the fans and smoke barriers are shown in Figure 3.

Figure 3. Floorplan drawing of the ZOKT design, option 1.

The supply of fresh air is provided through inlets from the outside on the first underground floor. In the event of a fire, the air supply and wall fans are activated automatically by the electrical fire alarm system. For the needs of energy balance, we consider the possible simultaneous operation of two ventilated sections.

The occurrence of a fire is expected only in one smoke section; therefore, the calculation is performed for the representative smoke section no. 1.

In each smoke section, the suction power will be provided by twelve wall fans with fire resistance class F200 120 (200 °C/120 min), V = 11.75 m³/s = 42,300 m³/hour, Δp = 200 Pa. The E 15 DP1 smoke wall has a D600 30 fire resistance.

3.2 Option 2

For the purpose of smoke and heat removal, the hall areas consist of five smoke sections. The first four sections are designed identically to option number one. The fifth section is situated in the middle of the hall and forms a ring with a radius of about 13 m.

Assuming that the electrical fire alarm system activates the relevant group of ZOKT fans, the ZOKT design does not contain smoke barriers in the space under the roof structure.

Fire ventilation of sections nos. 1–4 is provided by fire wall fans installed on the third aboveground floor. Section no. 5 uses fire roof radial fans installed at the roof of the hall. Details of the location of the fans and smoke barriers are shown in Figure 4.

The supply of fresh air is provided through inlets from the outside on the first underground floor. In the event of a fire, the air supply and wall fans, as well as the roof fans, are activated automatically by the electrical fire alarm system. For the needs of energy balance, we consider the possible simultaneous operation of two ventilated sections.
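The fan figures quoted for one smoke section are mutually consistent: 11.75 m³/s corresponds to 42,300 m³/hour (the printed value "42.30" appears to have lost its thousands separator). A small sketch, assuming twelve fans of this size serve one section:

```python
# Sanity check of the quoted fan sizing: per-fan volume flow and the
# combined extraction of one twelve-fan smoke section (assumed layout).

V_FAN_M3_S = 11.75       # extraction rate of one wall fan, m^3/s
FANS_PER_SECTION = 12

per_fan_m3_h = V_FAN_M3_S * 3600                     # m^3/hour per fan
section_total_m3_s = V_FAN_M3_S * FANS_PER_SECTION   # m^3/s per section

print(per_fan_m3_h, section_total_m3_s)  # 42300.0 141.0
```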
Figure 6. Graph of the time intervals in the first ZOKT design option.
Figure 7. Graph of the time intervals in the second ZOKT design option.
6 A COMPARISON OF THE
ZOKT OPTIONS
of 40 °C, we provided the following comparison images, see Figure 11. They demonstrate the distribution of temperatures in the area (i.e. temperatures of gases), in the middle of the sports hall and at the height (12 m above the playing surface) where the smoke barriers start to occur.

Figure 11 illustrates the temperatures in the area along a longitudinal plane. The temperature distribution in other planes is comparable; therefore, only one demonstration was selected. Nevertheless, despite this one-sided view, the temperatures are very distinct. In the first ZOKT design option, the space temperature reaches 115 °C (in red). In the second ZOKT design option, the reached temperature is slightly higher, i.e. 120 °C.

7 DISCUSSION

Using the Fire Dynamics Simulator simulation model, we looked at two different ZOKT design options for the protected area of the sports hall. Table 1 below presents the basic questions that were asked during the assessment of the two fire ventilation options.

The first ZOKT design option, where the guarded area is divided into four smoke sections using smoke barriers and where each section is ventilated by 12 fire fans, meets all the requirements of a proper project of equipment for smoke and heat removal. The only unsatisfactory aspect was the location of the 12 fire fans in the wall of the roof structure. The ZOKT system could achieve higher efficiency if the fire fans were installed in the ceiling of the roof structure, where the building structures are most thermally stressed. Such placement of fire fans is advisable to consult with experts over static drawings.

In the second ZOKT design option, the guarded area was divided into five smoke sections without using any smoke barriers; each section was ventilated by 5 fire fans; section 5 had a circular shape and its fire fans were installed in the ceiling of the roof structure, at the central highest point. Although this project meets the conditions of evacuation, it does not meet the conditions for effective fire-fighting activities. Thick smoke in the hall area at the time of the arrival of fire brigades would result in the slowing and hindering of such fire-fighting activities. It could also lead to changes in the material properties of steel structures due to their higher thermal stress.

8 CONCLUSION

Verification of the effectiveness of forced fire ventilation of a sports hall through mathematical modelling shows the first ZOKT design option to be optimistic, more practical, safer and more effective than the second ZOKT option.

Fire modelling is certainly a promising area, which will find its application in many practical situations and may also explain many ambiguities, especially in the case of interactions between fire safety systems. A combination of standard computational techniques and modelling seems to be the optimal approach that ultimately leads to financial savings and optimizations of project designs.

REFERENCES

ČSN 73 0802. 2009. Fire protection of buildings – Non-industrial buildings. Prague: Czech Office for Standards, Metrology and Testing, 122 p. (in Czech).
ČSN 73 0831. 2011. Fire protection of buildings – Assembly rooms. Prague: Czech Office for Standards, Metrology and Testing, 36 p. (in Czech).
ČSN EN 12101-1. 2006. Smoke and heat control systems – Part 1: Specification for smoke barriers. Prague: Czech Office for Standards, Metrology and Testing, 44 p. (in Czech).
ČSN EN 12101-3. 2003. Smoke and heat control systems – Part 3: Specification for powered smoke and heat exhaust ventilators. Prague: Czech Office for Standards, Metrology and Testing, 32 p. (in Czech).
ČSN EN 1991-1-2. 2004. Eurocode 1: Actions on structures – Part 1-2: General actions – Actions on structures
ABSTRACT: The methods of parametric synthesis (parameter sizing) for providing parametric reliability using a discrete approximation of the acceptability region with a regular grid are discussed. These methods are based on parametric optimization using a deterministic criterion for the case of a lack of information on parametric deviation trends and their distribution laws. The volume of a convex symmetrical figure inscribed into the acceptability region is considered as the objective function of this optimization task. Methods of inscribing these figures into the discrete approximation of the acceptability region, based on multidimensional implementations of the Moore and von Neumann neighbourhoods, are proposed in this work.
2 THE PARAMETRIC SYNTHESIS PROBLEM

Suppose that we have a system which depends on a set of n parameters x = (x_1, …, x_n)^T. We will say that the system is acceptable if y(x) satisfies the conditions (1):

X = X(x, t). (4)

The stochastic processes of parameter variations X represent the random manufacturing realization of the system's components and their upcoming degradation. Therefore, the conditions (1) can be met only with a certain probability
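The closing statement, that conditions (1) can be met only with a certain probability, can be illustrated with a small Monte Carlo sketch. The acceptability region (a disc around the nominal point) and the Gaussian parameter scatter below are hypothetical choices made purely for illustration:

```python
import random

def acceptable(x1: float, x2: float) -> bool:
    """Hypothetical acceptability region: unit disc around the nominal point (1, 1)."""
    return (x1 - 1.0) ** 2 + (x2 - 1.0) ** 2 <= 1.0

def parametric_reliability(sigma: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the probability that the acceptability conditions hold."""
    rng = random.Random(42)
    hits = 0
    for _ in range(trials):
        # random manufacturing/degradation scatter around the nominal parameters
        x1 = rng.gauss(1.0, sigma)
        x2 = rng.gauss(1.0, sigma)
        hits += acceptable(x1, x2)
    return hits / trials

print(parametric_reliability(0.3))
```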
5 CONCLUSIONS
ABSTRACT: We consider a one-memory self-exciting point process with a given intensity function. Properties of this point process are studied and the Mino distribution is introduced as the interarrival times distribution of the process. This distribution has a hazard function which decreases or increases over a period of time before becoming constant. A maximum likelihood procedure is carried out to obtain the MLE of the Mino distribution parameters. The quality of the estimates is investigated through generated data.
and the probability density function is

f(t) = μ(1 + ε e^(−βt)) exp{−μ[t + (ε/β)(1 − e^(−βt))]}. (3)

Proof: The proof follows from Snyder and Miller (1991), theorem 6.3.4, p. 314, and its corollary, p. 316.

Assuming ε > 0, the mean of X can be computed in closed form (Equation (5)); when ε < 0, an analogous expression involving |ε| holds (Equation (6)). Both are written in terms of the lower incomplete gamma function

γ(a; x) = ∫_0^x z^(a−1) e^(−z) dz. (4)

Some examples of expectation values are given in Table 1.

Figure 2. Intensity of an inhibited one-memory self-exciting point process with ε = −0.5, μ = 100, β = 250 and T = 0.1 ms.
The log-likelihood of a sample t_1, …, t_n is

log L(μ, ε, β) = n log μ + Σ_{i=1}^{n} log(1 + ε e^(−βt_i)) − μ Σ_{i=1}^{n} t_i − (με/β) Σ_{i=1}^{n} (1 − e^(−βt_i)). (7)

Remark that this expression is similar to the expression obtained by considering the logarithm of the sample function for a self-exciting point process given by theorem 6.2.2, p. 302, of Snyder and Miller (1991).

From (7), the likelihood equations, obtained by setting the partial derivatives with respect to μ, ε and β to zero, are:

n/μ − Σ_{i=1}^{n} t_i − (ε/β) Σ_{i=1}^{n} (1 − e^(−βt_i)) = 0 (8)

Σ_{i=1}^{n} e^(−βt_i)/(1 + ε e^(−βt_i)) − (μ/β) Σ_{i=1}^{n} (1 − e^(−βt_i)) = 0 (9)

(με/β²) Σ_{i=1}^{n} (1 − e^(−βt_i)) − (με/β) Σ_{i=1}^{n} t_i e^(−βt_i) − ε Σ_{i=1}^{n} t_i e^(−βt_i)/(1 + ε e^(−βt_i)) = 0 (10)–(11)

A Newton-Raphson algorithm is used to solve this system of equations, which does not admit an explicit solution. Existence and uniqueness of the MLE can be proven by applying theorem 2.6, p. 761, of Mäkeläinen et al. (1981). One needs to prove that the gradient vector vanishes in at least one point and that the Hessian matrix is negative definite at every point where the gradient vanishes. The Hessian matrix is

H = [∂² log L / ∂θ_j ∂θ_k], j, k = 1, 2, 3, with θ = (μ, ε, β).

Figure 4. Probability density function of a Mino distribution with μ = 100, ε = 0.8 and different values of β.

Figure 5. Hazard rate function of a Mino distribution with μ = 100.

Table 1. Examples of expected values for β = 500.

μ        ε       E(X)
100      −1      0.011828
100      −0.5    0.010872
100      1       0.007937
200      −1      0.006697
200      −0.5    0.005777
200      1       0.003770

4 PARAMETERS ESTIMATION

Mino (2001) proposes to use an EM algorithm to estimate the parameters. He introduces artificially two data models; one representing point
6 CONCLUSION

In this work we have investigated a particular self-exciting point process. We suggest considering this process as a renewal process and we define the interarrival times distribution, which we name the Mino distribution. Some properties of this distribution are explored. Statistical inference is carried out via the maximum likelihood approach. Results are obtained on simulated data. Further work will be conducted to develop a Bayesian approach and to consider goodness-of-fit tests.

Appendix A: Simulation of a Mino distribution

To obtain realisations of a random variable having a Mino distribution, we use the following well-known result: let F be a cumulative distribution function; then the cdf of the r.v. F⁻¹(U), where U is a uniform r.v. on [0, 1], is F. For the Mino distribution the inverse of the cdf cannot be obtained in closed form, since it would have to be deduced by expressing x from F(x) in equation (12).

REFERENCES

Bowsher, C. (2007). Modeling security market events in continuous time: intensity based, multivariate point process model. J. Econometrics 141, 876–912.
Hawkes, A. (1971). Spectra of some self-exciting and mutually exciting point process. Biometrika 58, 83–90.
Johnson, D. H. (1996). Point process models of single-neuron discharges. J. Comput. Neurosci. 3, 275–299.
Mäkeläinen, T., K. Schmidt, & G. Styan (1981). On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples. Ann. Statist. 9, 758–767.
Mino, H. (2001). Parameter estimation of the intensity process of self-exciting point processes using the EM algorithm. IEEE Trans. Instrum. Meas. 50(3), 658–664.
Ogata, Y. (1999). Seismicity analysis through point-process modelling: a review. Pure Appl. Geophys. 155, 471–507.
Ruggeri, F. & R. Soyer (2008). Advances in Bayesian software reliability modeling. In T. Bedford et al. (Eds), Advances in Mathematical Modeling for Reliability, IOS Press, 165–176.
Snyder, D. L. & M. I. Miller (1991). Random Point Processes in Time and Space. Springer.
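The inversion recipe in the appendix can be sketched numerically. Because the Mino cdf itself is not reproduced here, an exponential cdf with a known inverse stands in for F, which lets the bisection-based inversion be checked; the same routine applies to any continuous, strictly increasing cdf that can be evaluated numerically:

```python
import math
import random

def invert_cdf(cdf, u, lo=0.0, hi=50.0, tol=1e-12):
    """Solve cdf(x) = u for x by bisection (cdf continuous and increasing)."""
    a, b = lo, hi
    while b - a > tol:
        mid = (a + b) / 2
        if cdf(mid) < u:
            a = mid
        else:
            b = mid
    return (a + b) / 2

# Stand-in cdf with a known inverse, so the routine can be checked:
cdf = lambda x: 1 - math.exp(-2.0 * x)   # exponential cdf, rate 2

x = invert_cdf(cdf, 0.5)
print(x, math.log(2) / 2)                 # both ~0.3466

# Sampling: feed uniform draws through the numerical inverse
random.seed(1)
sample = [invert_cdf(cdf, random.random()) for _ in range(5)]
print(sample)
```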
M. Čepin
Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
ABSTRACT: The fault tree is a method for identification and assessment of combinations of the unde-
sired events that can lead to the undesired state of the system. The objective of the work is to present a
mathematical model of extended fault tree method with the house events matrix to enable integration of
several system models into one model. The mathematical model of the fault tree and of its extension with
the house events matrix is presented. The theory is supported by simple examples, which facilitate the
understanding of the approach.
1 INTRODUCTION

The fault tree is a widely used method for evaluation of reliability and safety (Vesely, 2002, Čepin, Kumamoto, Henley, 1996). It is applied in various industries and fields of application (Čepin, 1997, Čepin, 2005). Its repute is gained primarily when integrated with the event tree analysis as a part of the probabilistic safety assessment related to nuclear safety (Čepin, 2002, Martorel et al. 2006) and related to aircraft safety.

1.1 Objective

The objective of the work is to present a mathematical model of the extended fault tree method with the house events matrix to enable integration of several system models into one model.

Integration of several system models into one model may facilitate the extension of probabilistic safety assessment, which was originally made for power plant full power operation, to the consideration of other modes of operation (Kiper 2002, Čepin, Mavko, 2002, Čepin, Prosen, 2008).

2 METHOD

The fault tree is a method for identification and assessment of combinations of the undesired events that can lead to the undesired state of the system, which is either a system fault or a failure of a specific function of the system (Kumamoto, Henley, 1996, PRA Guide, 1982). The undesired state of the system is represented by a top event. The fault tree can be represented in graphical form or in the form of Boolean equations, which integrate the top event with logical gates and which integrate the logical gates with other logical gates and with the primary events (Ren, Dugan, 1998, Čepin, Mavko, 2002).

The primary events are the events which are not further developed and represent the components and their failure modes or human error events. Not all the component failure modes are considered. The analysis is focused on those which can cause failure of the system. In other words, the analysis is focused on those which can cause the top event.

The primary events can be basic events or house events. The basic events are the ultimate parts of the fault tree, which represent the undesired events on the level of the components, e.g. the component failures, the missed actuation signals, the human errors, effects of the test and maintenance activities, the common cause analysis contributions.

The house events represent the logic switches. They represent conditions set either to true or to false, which support the modelling of connections between the gates and the basic events and enable that the fault tree better represents the system operation and its environment.

The fault tree is mathematically represented by a set of Boolean equations:

Gi = f(Gp, Bj, Hs); i, p ∈ {1..P}, j ∈ {1..J}, s ∈ {1..S} (1)

Gi – logical gate i
Gp – logical gate p
Bj – basic event j
Hs – house event s
P – number of gates
J – number of basic events
S – number of house events
P_TOP = Σ_{i=1}^{Z} P(MCS_i) − Σ_{i<j} P(MCS_i ∩ MCS_j) + Σ_{i<j<k} P(MCS_i ∩ MCS_j ∩ MCS_k) − … + (−1)^(Z−1) P(MCS_1 ∩ MCS_2 ∩ … ∩ MCS_Z) (4)

P_TOP – probability of the top event (failure probability of a system or function, which is defined in the top event)
P_MCSi – probability of minimal cut set i
Z – number of minimal cut sets
n – number of basic events in the largest minimal cut set (related to the number of basic events representing a minimal cut set)

The probabilities of minimal cut sets should be calculated considering their possible dependency:

P_MCSi = P(B1) · P(B2|B1) · P(B3|B1 B2) · … · P(Bm|B1 B2 … Bm−1) (5)

Under the assumption that the basic events are mutually independent, the equation simplifies to:

P_MCSi = Π_{j=1}^{m} P_Bj (6)

P_MCSi – probability of minimal cut set i
P_Bj – probability of basic event Bj

House events disappear from the results equation, because their values of 0 or 1 are used in the Boolean reduction of the equations. In theory, different house event values in a set of house events may change the model significantly.

This is the key idea behind the house events matrix. Namely, in probabilistic safety assessment as it was initially used, it is possible that for a single safety system several fault trees are needed. They may differ because of different success criteria. For example, in some configuration we can rely on only one out of two system trains, if the other is in maintenance. In the other configuration, we have two system trains available. Fault trees may also differ due to different boundary conditions, as they are linked to different scenarios with different requirements.

For example: an auxiliary feedwater system with two motor driven pumps and one turbine driven pump is available in a nuclear power plant with a pressurized reactor. The complete system can be considered in the majority of conditions. In the conditions of station blackout, electrical power is not available and the motor driven pumps are not applicable, but the turbine pump and its related equipment are applicable. So, the model is much more applicable if the motor driven pumps and their related equipment are cut off from the model for the station blackout condition.

The house events matrix is introduced to list the house events and their respective values for the
Table 1. House event matrix for 4 modes of operation of a nuclear power plant for human failure events included in the fault trees of safety systems.

                        Mode:  1  2  3  4
Human Failure Events
HFE01                          1  1  0  0
HFE01A                         0  0  1  1
HFE02                          1  1  0  0
HFE02A                         0  0  1  1
HFE03                          1  1  0  0
HFE03A                         0  0  1  1
HFE04                          1  1  0  0
HFE04A                         0  0  1  1
HFE05                          1  1  0  0
HFE05A                         0  0  1  1
HFE06                          1  1  0  0
HFE06A                         0  0  1  1
HFE07                          1  1  0  0
HFE07A                         0  0  1  1
HFE08                          1  1  0  0
HFE08A                         0  0  1  1
HFE09                          1  1  0  0
HFE09A                         0  0  1  1
Table 2. House events matrix for 4 modes of operation of a nuclear power plant for initiating events only.
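The quantification of equations (4) and (6) can be sketched on a toy model: two minimal cut sets of mutually independent basic events, combined by inclusion-exclusion. Names and probabilities below are illustrative.

```python
from itertools import combinations

def p_and(events, p):
    """Probability that all basic events in `events` occur (independence assumed)."""
    prob = 1.0
    for b in events:
        prob *= p[b]
    return prob

def p_top(cut_sets, p):
    """Equation (4): inclusion-exclusion over the minimal cut sets."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            # all cut sets in `combo` occur iff every basic event
            # in their union occurs
            joint = frozenset().union(*combo)
            total += (-1) ** (r + 1) * p_and(joint, p)
    return total

p = {"B1": 0.1, "B2": 0.2, "B3": 0.3}
cut_sets = [frozenset({"B1", "B2"}), frozenset({"B1", "B3"})]
print(p_top(cut_sets, p))  # 0.1*0.2 + 0.1*0.3 - 0.1*0.2*0.3 = 0.044
```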
Tahani Coolen-Maturi
Durham University Business School, Durham University, Durham, UK
Gero Walter
School of Industrial Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
ABSTRACT: The survival signature has been introduced to simplify quantification of reliability of
systems which consist of components of different types, with multiple components of at least one of
these types. The survival signature generalizes the system signature, which has attracted much interest in
the theoretical reliability literature but has limited practical value as it can only be used for systems with
a single type of components. The key property for uncertainty quantification of the survival signature,
in line with the signature, is full separation of aspects of the system structure and failure times of the
system components. This is particularly useful for statistical inference on the system reliability based on
component failure times.
This paper provides a brief overview of the survival signature and its use for statistical inference for
system reliability. We show the application of generalized Bayesian methods and nonparametric predictive inference; both of these inference methods use imprecise probabilities to quantify uncertainty, where imprecision reflects the amount of information available. The paper ends with a discussion of related
research challenges.
There are C(m_k, l_k) state vectors x^k with Σ_{i=1}^{m_k} x_i^k = l_k. Let S_{l_k} denote the set of these state vectors for m_k components of type k, and let S_{l_1,…,l_K} denote the set of all state vectors for the whole system for which Σ_{i=1}^{m_k} x_i^k = l_k, k = 1, …, K. We also introduce the notation l = (l_1, …, l_K). Due to the exchangeability assumption for the failure times of the m_k components of type k, all the state vectors x^k ∈ S_{l_k} are equally likely to occur, hence (Coolen & Coolen-Maturi 2012)

Φ(l) = [Π_{k=1}^{K} C(m_k, l_k)^(−1)] Σ_{x ∈ S_{l_1,…,l_K}} φ(x)

Let C_t^k ∈ {0, 1, …, m_k} denote the number of components of type k in the system that function at time t > 0. Then, for the system failure time T_S,

P(T_S > t) = Σ_{l_1=0}^{m_1} ⋯ Σ_{l_K=0}^{m_K} Φ(l) P(∩_{k=1}^{K} {C_t^k = l_k})

There are no restrictions on the dependence of the failure times of components of different types, as the probability P(∩_{k=1}^{K} {C_t^k = l_k}) can take any form of dependence into account; for example, one can include common-cause failures quite straightforwardly into this approach (Coolen & Coolen-Maturi 2015). However, there is a substantial simplification if one assumes that the failure times of components of different types are independent, and even more so if one assumes that the failure times of components of type k are conditionally independent and identically distributed with CDF F_k(t); with these assumptions, the survival function simplifies substantially.

Such methods have the advantage that the imprecision for the system survival function reflects the amount of information available. The next two sections briefly discuss such methods of statistical inference for the system failure time. First we show an application of generalized Bayesian methods, with a set of prior distributions instead of a single prior distribution. This is followed by a brief discussion and application of nonparametric predictive inference (Coolen 2011), a frequentist statistical method which is based on relatively few assumptions, enabled through the use of imprecise probabilities, and which does not require the use of prior distributions. The paper ends with a brief discussion of research challenges, particularly with regard to upscaling the survival signature methodology for application to large-scale real-world systems and networks.

2 IMPRECISE BAYESIAN INFERENCE

The reliability of a system for which the survival signature is available is quite straightforwardly quantified through its survival function, as shown in the previous section. We briefly consider a scenario where we have test data that enable learning about the reliability of the components of different types in the system, where we assume independence of the failure times of components of different types. The numbers of components in the system, of each type, that are functioning at time t, denoted by C_t^k for k = 1, …, K, are the random quantities of main interest. One attractive statistical method to learn about these random quantities from test data is provided by the Bayesian framework of statistics, which can be
P(T_S > t) = Σ_{l_1=0}^{m_1} ⋯ Σ_{l_K=0}^{m_K} Φ(l) Π_{k=1}^{K} P(C_t^k = l_k)

Figure 2. System with 2 types of components.
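A minimal sketch of the survival signature and the resulting system survival function, for a hypothetical system with two types of components (not the Figure 2 system, whose structure is not reproduced here), assuming iid failure times within each type and independence between types:

```python
from itertools import product
from math import comb

# Hypothetical structure: two type-1 components in parallel, in series
# with two type-2 components in parallel.
def phi(x11, x12, x21, x22):
    return int((x11 or x12) and (x21 or x22))

def survival_signature(l1, l2, m1=2, m2=2):
    """Phi(l1, l2): fraction of equally likely state vectors with exactly
    l1 type-1 and l2 type-2 components functioning for which the system works."""
    vectors = [x for x in product([0, 1], repeat=m1 + m2)
               if sum(x[:m1]) == l1 and sum(x[m1:]) == l2]
    return sum(phi(*x) for x in vectors) / len(vectors)

def p_system(q1, q2, m1=2, m2=2):
    """P(T_S > t) for iid components within each type; q_k = F_k(t)."""
    total = 0.0
    for l1 in range(m1 + 1):
        for l2 in range(m2 + 1):
            p1 = comb(m1, l1) * (1 - q1) ** l1 * q1 ** (m1 - l1)
            p2 = comb(m2, l2) * (1 - q2) ** l2 * q2 ** (m2 - l2)
            total += survival_signature(l1, l2) * p1 * p2
    return total

print(survival_signature(1, 1))  # 1.0: one working component of each type suffices
print(p_system(0.5, 0.5))        # 0.75 * 0.75 = 0.5625
```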
Table 4. Lower and upper survival functions for the system in Figure 2 and two data orderings.

Ordering t_12 ≤ t_11 < t_22 ≤ t_21:

t in            lower P(T_S > t)   upper P(T_S > t)
(0, t_12)       0.553              1
(t_12, t_11)    0.458              1
(t_11, t_22)    0.148              0.553
(t_22, t_21)    0.100              0.458
(t_21, ∞)       0                  0.148

Ordering t_11 ≤ t_12 < t_21 ≤ t_22:

t in            lower P(T_S > t)   upper P(T_S > t)
(0, t_11)       0.553              1
(t_11, t_12)    0.230              0.667
(t_12, t_21)    0.148              0.553
(t_21, t_22)    0                  0.230
(t_22, ∞)       0                  0.148

Table 4 gives the lower and upper survival functions for the case with the test failure times ordered as t_11 ≤ t_12 < t_21 ≤ t_22. For the ordering t_12 ≤ t_11 < t_22 ≤ t_21, in the first interval in Table 4 we have not yet seen a failure in the test data, so the NPI upper probability that the system will function is equal to one, which is logical as we base the inferences on the data with few additional assumptions. In the second interval, one failure of type 2 has occurred, but we do not have any evidence from the data against the possibility that a component of type 1 will certainly function at times in this interval, so the NPI upper survival function remains one. In the fourth interval, both type 2 components have failed but only one component of type 1 has failed. In this interval, to consider the lower survival function, the system is effectively reduced to a series system consisting of three components of type 1, with one success and one failure as data, denoted by (2, 1). As such a series system only functions if all three components function, the NPI lower survival function within this fourth interval is equal to S_{T_S}(t) = (1/3)(2/4)(3/5) = 0.100, which follows by sequential reasoning, using that, based on n observations consisting of s successes and n − s failures, denoted as data (n, s), the NPI lower probability for the next observation to be a success is equal to s/(n + 1) (Coolen 1998). The NPI lower probability for the first component to function, given test data (2, 1), is equal to 1/3. Then the second component is considered, conditional on the first component functioning, which combines with the test data to two out of three components observed
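The sequential NPI reasoning quoted above, based on the lower probability s/(n + 1) for the next observation to be a success given data (n, s), can be reproduced exactly with rational arithmetic:

```python
from fractions import Fraction

def npi_lower_series(n: int, s: int, k: int) -> Fraction:
    """NPI lower probability that k further components all function,
    starting from test data (n, s) and conditioning sequentially."""
    prob = Fraction(1)
    for _ in range(k):
        prob *= Fraction(s, n + 1)  # lower probability of the next success
        n += 1                      # conditioning: one more observation...
        s += 1                      # ...which is a success
    return prob

p = npi_lower_series(n=2, s=1, k=3)
print(p, float(p))  # 1/10 0.1, i.e. (1/3)(2/4)(3/5) = 0.100
```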
4 DISCUSSION
ABSTRACT: There are different methods for the calculation of indices and measures in reliability analysis. Some of the most used indices are system availability/reliability and importance measures. In this paper, new algorithms for the calculation of system availability and some importance measures are developed based on parallel procedures. The principal step in the development of these algorithms is the construction of matrix procedures for the calculation of these indices and measures.
2.3 Binary decision diagram

A BDD is a directed acyclic graph representation of a Boolean function. This graph has two

Values of variables x1 x2 x3    Function value φ(x)
000                             0
001                             0
010                             0
011                             0
100                             0
101                             1
110                             1
111                             1

Figure 1. Truth vector of the structure function.
Figure 2. A simple series-parallel system.
φ(x) → A(x) = Σ_{k=0}^{7} a(k) x1^k1 x2^k2 x3^k3
= a(0) x1^0 x2^0 x3^0 + a(1) x1^0 x2^0 x3^1 + a(2) x1^0 x2^1 x3^0 + a(3) x1^0 x2^1 x3^1 + a(4) x1^1 x2^0 x3^0 + a(5) x1^1 x2^0 x3^1 + a(6) x1^1 x2^1 x3^0 + a(7) x1^1 x2^1 x3^1
= a(0) + a(1) x3 + a(2) x2 + a(3) x2 x3 + a(4) x1 + a(5) x1 x3 + a(6) x1 x2 + a(7) x1 x2 x3 (10)

φ(x) → A(x) = x1 x3 + x1 x2 − x1 x2 x3 (11)
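Equation (11) can be verified numerically: the arithmetical polynomial reproduces the structure function φ(x) = x1 AND (x2 OR x3), whose truth vector is [0 0 0 0 0 1 1 1], at every Boolean point:

```python
from itertools import product

def phi(x1, x2, x3):
    """Structure function of the example system: x1 AND (x2 OR x3)."""
    return int(x1 and (x2 or x3))

def A(x1, x2, x3):
    """Arithmetical polynomial form, equation (11)."""
    return x1 * x3 + x1 * x2 - x1 * x2 * x3

truth_vector = [phi(*x) for x in product([0, 1], repeat=3)]
poly_vector = [A(*x) for x in product([0, 1], repeat=3)]
print(truth_vector)                  # [0, 0, 0, 0, 0, 1, 1, 1]
print(truth_vector == poly_vector)   # True
```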
A = Σ_{k=0}^{2^n − 1} a(k) p1^k1 p2^k2 … pn^kn (14)

where a(k) are the coefficients of polynomial (3); pi (i = 1, …, n) is the probability of the working state of the i-th component; and pi^ki = 1 if ki = 0 and pi^ki = pi if ki = 1.

Proof. According to (Kucharev et al. 1990), the arithmetical polynomial form is a canonical form of Boolean function representation. Therefore all elements a(k) x1^k1 x2^k2 … xn^kn of the polynomial form (3) are mutually independent events. And the variables of a Boolean function are interpreted as independent events according to Kumar & Breuer 1981. Therefore, in the case of probabilistic analysis, the Boolean function variables can be replaced by the probabilities of these events.

For example, compute the availability of the system in Fig. 1 based on the arithmetical polynomial form (11) of this system's structure function. According to Theorem 1, the Boolean variables of this form are replaced by the probabilities pi of the system components functioning:

A = p1 p3 + p1 p2 − p1 p2 p3. (15)

In comparison, compute this system availability in the traditional way based on the AND-OR representation (2) of the structure function:

A = P{AND(x1, OR(x2, x3))} = p1 (p2 + p3 − p2 p3) = p1 p2 + p1 p3 − p1 p2 p3. (16)

4 IMPORTANCE ANALYSIS

The availability is one of the most important characteristics of any system. It can also be used to compute other reliability characteristics, e.g. mean time to failure, mean time to repair, etc. (Barlow & Proschan 1975, Schneeweiss 2009). But these do not permit identifying the influence of individual system components on the proper work of the system. For this purpose, there exist other measures that are known as Importance Measures (IMs). The IMs are used in the part of reliability analysis that is known as importance analysis. A comprehensive study of these measures has been performed in work [4]. IMs have been widely used for identifying system weaknesses and supporting system improvement activities from the design perspective. With the known values of the IMs of all components, proper actions can be taken on the weakest component to improve system availability at minimal cost or effort.

There exist a lot of IMs, but the most often used are the Structural Importance (SI), Birnbaum's Importance (BI) and Criticality Importance (CI) (Table 2).

Different mathematical methods and algorithms can be used to calculate these indices. One of them is Direct Partial Boolean Derivatives (DPBDs), which were introduced for importance analysis in the paper (Moret & Thomason 1984). In the paper (Zaitseva & Levashenko 2013), the mathematical background of the application of DPBDs has been considered, but an efficient algorithm for the computation of DPBDs has not been proposed. In this paper,
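As a numerical check of equations (15) and (16), using the component probabilities from Table 3 (p1 = 0.90, p2 = 0.70, p3 = 0.65), both forms give the same availability:

```python
# Availability of the example system computed in both forms; the component
# probabilities are those listed in Table 3.

p1, p2, p3 = 0.90, 0.70, 0.65

A_poly = p1 * p3 + p1 * p2 - p1 * p2 * p3   # equation (15)
A_andor = p1 * (p2 + p3 - p2 * p3)          # equation (16)

print(round(A_poly, 6), round(A_andor, 6))  # both 0.8055
```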
where ρ_i is the number of nonzero values of the DPBD ∂φ(1→0)/∂x_i(1→0) and 2^(n−1) is the size of the DPBD.

Similarly, the modified SI, which takes into account the necessary condition for a component being critical, can be defined as follows (Zaitseva & Levashenko 2013, Zaitseva 2012):

MSI_i = ρ_i / ρ_i(1), (19)

where ρ_i(1) is the number of state vectors for which φ(1_i, x) = 1.

The BI of component i defines the probability that the i-th system component is critical for system failure. Using DPBDs, this IM can be defined as the probability that the DPBD is nonzero (Zaitseva & Levashenko 2013):

BI_i = P{∂φ(1→0)/∂x_i(1→0) = 1}. (20)

The CI is computed from the BI (Zaitseva & Levashenko 2013, Zhu 2012):

CI_i = BI_i · q_i / U, (21)

where q_i is the component state probability (1) and U is the system unavailability.

To illustrate the calculation of all IMs using DPBDs, consider the system in Fig. 1. The values of the IMs for this system are computed in Table 3. According to these IMs, the first component has the most influence on the system failure from the point of view of the system structure, because the values of the SI, MSI and BI are greatest for this component. The CI is maximal for the second and third components and, therefore, it indicates the first component as non-important taking into account the probability of failure of this component (it is minimal for this component, i.e. q1 = 0.10). The FVIs imply that the second and third components contribute to system failure with the most probability.

So, DPBDs are one possible mathematical approach that can be used in importance analysis, and they allow us to calculate all often used IMs (Table 2). The mathematical background of their application for the definition of IMs has been considered in papers (Zaitseva & Levashenko 2013, Zaitseva 2012). In this paper a new algorithm for the calculation of DPBDs based on a parallel procedure is developed.

Table 3. Importance measures of the system components.

                                      i = 1   i = 2   i = 3
Probability of component state, p_i   0.90    0.70    0.65
SI_i                                  0.75    0.25    0.25
MSI_i                                 1.00    0.50    0.50
BI_i                                  0.90    0.32    0.27
CI_i                                  0.46    0.49    0.49

4.3 Parallel algorithm for the calculation of direct partial Boolean derivatives

One possible way for the formal development of parallel algorithms is to transform the mathematical background into matrix algebra. Therefore, consider the DPBD (17) in a matrix interpretation. As the first step in such a transformation, the initial data (the structure function) has to be presented as a vector or matrix.

In the matrix algorithm for the calculation of the DPBD, the structure function is defined as a truth vector (Fig. 1). The truth vector of the DPBD (the derivative vector) is calculated based on the truth vector of the structure function as:

∂φ(j→j̄)/∂x_i(a→ā) = (P(i,a) · φ(j)) ∧ (P(i,ā) · φ(j̄)), (22)

where P(i,l) is the differentiation matrix of size 2^(n−1) × 2^n, defined as:

P(i,l) = M(i−1) ⊗ l·l̄ ⊗ M(n−i), (23)

where M(w) is a diagonal matrix of size 2^w × 2^w, l·l̄ is the vector for which l = a for the matrix P(i,a) and l = ā for the matrix P(i,ā), and ⊗ is the Kronecker product (Kucharev et al. 1990).

Note that the calculation of φ(j) and φ(j̄) in (22) agrees with the definition of the state vectors for which the function value is j and j̄, respectively. The matrices P(i,a) and P(i,ā) allow indicating variables with values a and ā, respectively. The operation AND (∧) integrates these conditions. The DPBD ∂φ(j→j̄)/∂x_i(a→ā) does not depend on the i-th variable (Bochmann & Posthoff 1981). Therefore, the derivative vector (22) has size 2^(n−1).

Consider an example of the calculation of the derivative vector ∂φ(1→0)/∂x_1(1→0) for the structure function with the truth vector x = [0 0 0 0 0 1 1 1]^T (it is the truth vector of the structure function of the system depicted in Fig. 1). According to (22), the rule for the calculation of this derivative is:

∂φ(1→0)/∂x_1(1→0) = (P(1,1) · φ(1)) ∧ (P(1,0) · φ(0)) = [0 1 1 1]^T. (24)
220
P( 1)
= M ( 0)
0)
[1 0] M ( 2 )
and
P( 1)
= M ( 0)
0)
[0 1] M ( 2 ) .
221
222
L.K. Ha
Faculty of Mathematics and Computer Science University of Science,
Vietnam National University, Ho Chi Minh City, Vietnam
ABSTRACT: In this paper, we provide an extension of the Hlder regularity result by Range in (Range
1978) to a certain class of strict finite and infinite type convex domains in 2. A new notion of type is
introduced for arbitrary convex domains in 2 with smooth boundaries. This type generalizes the notion
of strict finite type in the original theory (Range 1978) as well as consists many cases of infinite type in
which Ranges method is fail to be applied.
225
so t
known that for each b , the complex hyper- 2015), we recall the weak f-Hlder space on
surface { ( , z ) } and the complex tangent as
space to b at are actually the same. Moreover,
the Leray map has the following properties: f ( ) = {u :|| u || f :=|| u ||
1. is of C1-class in ( ) . + sup
u z ,zz h f (| h |
1
). | u(z
u( z + h ) u(( z ) |< }.
2. ( ) is holomorphic on .
3. | ( , ) | A > 0 for z , | | c for some When f t ) = t , for 0 1 , we obtain the
constant c > 0. standard Hlder space H ( ) .
Definition 2.1. Let Type be a set of all smooth, The main results in this paper are follows.
increasing functions F : [ ,
, ) [ , ) such that
Theorem 2.3. Let be a bounded convex domain
( = 0;
1. F(0) in 2 with smooth boundary b. Let F T Type and
2. 0 | l ( 2 ) | dr < for some > 0; assume that is convex of maximal type F.
3. F ( r ) is increasing. Then, for every (0, 1) form
f whose coefficients
r
The convex domain is called of admitting belong to L ( ) and = 0 on in the weak
an maximal type F T Type at P bb if on the sense, there exists a function u L ( ) such that
neighborhood 0 | P |< c , for some 0 < c c,
we have =
226
1 l
where d is the surface measure of b.
T =
4
2 S [ ,1, ] k,
[ 0 ,1]
2 ,0 . (4)
Since | ( , ) | A > 0 for z fixed in , | | c
k =1 k
for some constant c > 0, it is enough to estimate the
Then, if = 0 on , we have integral over S B(B z , c ) . Based on Henkins tech-
niques, we re-introduce the following real coordi-
nate system t (t ,t
t3 ) (t(t1,tt2 ,t3 ) R 2 [ , )
( ) = on .
t1( z ) = t2 ( z ) = 0, here z b
Moreover,
f s | z z |= dist( z b ),
satisfie
t3 =| Im ( , ) |,
[ 0,1] ,
f
|| ||
227
G (| ( z ) |)
I1(| ( z ) |) | n((| ( z ) |) |2 (7) for all 0 R . As a consequence, F (t ) | l t |
| (z) | is finite for all 0 t F ( R ) and lim t | ln
2
l F ( ) | is
t 0
zero. These facts and the second hypothesis of F
for any G satisfying Lemma 3.3. imply
On the other hand, we also have
d F (t ) F (d )
I 2 (| ( z ) |)
R 1
dr 0 t
dt 0 y(ln F ( y2 dy
| ( z ) | F (r )
0 2
F (d )
F ( | ( z )| ) 1 = F (d ) ln d 0 ( n F ( y2 ))dy
d
= dr <
0 | ( z ) | F (r 2 )
R 1
+ dr, for d > 0 small enough.
F ( | ( z )| ) | ( z ) | F ( r 2 )
Hence, we have the conclusion that u f ( ) .
F (r )
Since r
is increasing, we have
REFERENCES
F (r 2 ) r2
for all r F (| ( z ) |) . Ahn, H. & H.R. Cho (2000). Optimal Hlder and Lp esti-
| ( z ) | F (| ( z ) |)
mates for b on boundaries of convex domains of
finite type. J. Math. Anal. Appl. 286(1), 281294.
Hence, Bruma, J. & J. del Castillo (1984). Hlder and Lp-estimates
for the -equation in some convex domains with real-
analytic boundary. Math. Ann. 296(4), 527539.
R 1 F (| ( z ) |)
F ( | ( z )| ) | dr . Chen, S.C. & M.C. Shaw (2001). Partial Differential
( z ) | F (r ) 2
4 | (z) | Equations in Several Complex Variables. AMS/IP,
Studies in Advanced Mathematics, AMS.
Fornaess, J. E. & L. Lee, Y. Zhang (2011). On supnorm
It is easy to see that estimates for on infinite type convex domains in
2
. J. Geom. Anal. 21, 495512.
F ( | ( z )| ) 1 F (| ( z ) |) Ha, L.K. & Khanh, T.V. & A. Raich (2014). Lp-estimates
0 | ( z ) | F (r ) 2
dr
| (z) |
. for the -equation on a class of infinite type domains.
Int. J. Math. 25, 1450106 [15pages].
Ha, L.K. & T.V. Khanh (2015). Boundary regularity of
Thus, the solution to the Complex Monge-Ampre equation
on pseudoconvex domains of infinite type. Math. Res.
Lett. 22(2), 467484.
F (| ( z ) |) Henkin, G.M. & A.V. Romanov (1971). Exact Hlder
I 2 (| ( z ) |) . estimates for the solutions of the -equation. Math
| (z) | USSR Izvestija, 5, 11801192.
228
229
J.I. Daz
Instituto de Matemtica Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
ABSTRACT: We study the global and non-global existence of solutions of degenerate singular para-
bolic equation with sources. In the case of global existence, we prove that any solution must vanish iden-
tically after a finite time if either the initial data or the source term or the measure of domain is small
enough.
1 INTRODUCTION t u + u u> }
{u> = f (u, x,t ) i ( , ),
This paper is to study the nonnegative solutions u = ( , ), (2)
of the source types of one dimensional degenerate u ( x, 0 ) = u ( x ) i ,
parabolic equations with a singular absorption 0
231
232
233
1 1 1 q
E (t ) = | ux (t ) | p + u (t ) u (t ) dx. Figure 2. Evolution of the maximal solution of equa-
I p 1 q tion (1).
3 SIMULATION RESULTS
Figure 3. Evolution of the maximal solution of equa-
In this part, we will illustrate our theoretical results tion (1).
with some numerical experiences. In the sequel, we
consider equation (1) and equation (3) for the case:
q = p = 2.3, = 0.8, I = (0, L), and u0(x) = x(L x), REFERENCES
and f(u) = .uq1.
We fix L = 3.1273. It follows then from (8) that Aris, R. (1975). The Mathematical Theory of Diffusion
I = 0.9999. and Reaction in Permeable Catalysts. Oxford Univer-
With = 1 > I (just a little bit difference), the sity Press.
unique solution of equation (3) blows up after Bandle, C. & C.M. Brauner (1986). Singular perturbation
t = 4286, see Figure 1. While = 1.269, the maxi- method in a parabolic problem with free boundary.
mal solution of equation (1) vanishes after t = 7.6, Boole Press Conf. Ser. 8, 714.
see Figure 2. Banks, H.T. (1975). Modeling and control in the bio-
With = 1.270, the maximal solution of equa- medical sciences. lecture notes in biomathematics.
Springer-Verlag, Berlin-New York 6.
tion (1) blows up at t = 23, see Figure 3. Intuitively, Biezuner, R.L., G.E. & E.M. Martins (2009). Computing
the absorption u X{ } supports the nonlin- the first eigenvalue of the p-laplacian via the inverse
ear diffusion an amount 0.up1, with 0 = 1.269 power method. Funct. Anal. 257, 243270.
0.9999 = 0.2691. By this reason, for any (0, Boccardo, L. & F. Murat (1992). Almost everywhere con-
1.269), the solutions of equation (1) exist globally vergence of the gradients of solutions to elliptic and
and they vanish after a finite time. parabolic equations. Nonlinear Anal. Theory, Methods
and Applications 19(6), 581597.
Boccardo, L. & T. Gallouet (1989). Nonlinear elliptic and
parabolic equations involving measure data. Funct.
Anal. 87, 149169.
Coddington, E. & N. Levinson (1955). Theory of Ordi-
nary Differential Equations. New York: McGraw-Hill.
Dao, A.N. & J.I. Diaz. A gradient estimate to a degen-
erate parabolic equation with a singular absorption
term: global and local quenching phenomena. To
appear Jour. Math. Anal. Appl..
Dao, A.N., J.I. Diaz, & P. Sauvy (2016). Quenching phe-
nomenon of singular parabolic problems with l1 initial
data. In preparation.
Dvila, J. & M. Montenegro (2004). Existence and
Figure 1. Evolution of the unique solution of equation asymptotic behavior for a singular parabolic equation.
(3). Transactions of the AMS 357, 18011828.
234
235
M.-P. Tran
Faculty of MathematicsStatistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
T.-N. Nguyen
Department of Mathematics, HCMC University of Education, Ho Chi Minh City, Vietnam
ABSTRACT: The main goal of this paper is to present a spectral decomposition of velocity and pres-
sure fields of the Stokes equations outside a unit ball. These expansions bases on the basis of vector
spherical harmonics. Moreover, we show that this basis diagonalises the Neumann to Dirichlet operator.
1 INTRODUCTION 1
G( ) =
8 r
(Id + er er ).
We consider the Stokes problem in the domain
0 B(0,1) where 0 := R3 B(0,1) . Given a
velocity field g defined on S 2 : B(0,1) , we seek We refer the readers to (Nguyen 2013) for more
the velocity and pressure fields ( ) satisfying detail properties of these operators. The main goal
of this paper is to give a spectral decomposition of
Neumann to Dirichlet operator in basis of vector
u + p = 0 i 0 B ( 0,1),
spherical harmonics.
u = 0 i 0 B ( 0,1), (1.1) The sequel of this paper is organized as follows.
u = g on S 2 . In the next section, we describe the basis of vec-
tor spherical harmonics. We follow the notation in
The well-posedness and regularity results of this (Ndlec 2001) where these objects are introduced
equation can be found in the book of Galdi (Galdi in the context of electromagnetism. In Section 3,
1994). If ( ) is sufficiently smooth, the surface we present the decomposition of the solution of the
density of forces applied by the boundary of B(0, Stokes problem. Eventually, using the decomposi-
1) is defined by tion of the velocity and pressure field, we obtain an
expansion of the corresponding Dirichlet to Neu-
(
f = u + u ) n. mann operator DN in vector spherical harmonics.
237
S 2Yl m + l (l + )Yl m = 0. S 2
| Tl ,m ( x ) |2 d l (l + 1),
S | I l ,m ( x ) | d
2
2
(l + 1)(2l + 3),
The eigenvalue l (l + ) has multiplicity 2l 1 .
By Theorem 2.1 and the Greens formula, we have S 2
| Nl ,m ( x ) |2 d = l( l ).
S 2 m 2
= l (l + 1). Let u L
L2 (S 2 , 3
) , then u decomposes as
l
L2
l l +1
We consider the Legendre polynomial Pl : u( ) il ,mTl ,m ( ) + jl ,m I l ,m ( )
l 1 m = l l 0 m = l 1
l 1
( )l d l
Pl ( x ) = ( x 2 )l , x [[ 1 1]. + kl ,m Nl ,m ( x ).
2 l! dx l
l
l m =
= l +1
238
we obtain
3 DECOMPOSITION OF VELOCITY AND
PRESSURE FIELD divu( ) ( )r (( )
(rjrjl ,m ( )
) jl ,m Yl ,m
u( ) glT,mr (
Tl ,m + glI,m r
) ( )
I l ,m +
and
(
)( )
glI 2,m (r 2 1 + glN,m r l 1)
Nl , m ,
2l u( ) = r ( + )
(riil,m 2liil,m )Tl ,m
l 1 l ( 2l 1)glI ,m
p( x ) = l +1
I l ,m ( x / r ) Nl + ,m ( x / r ) er . + r( + ) (rjl, ljl,m )Il ,m
m = l (l )r
+ r(l + ) (rkl ,m lkl ,m )Nl ,m .
(3.2)
Proof. We recall that by the regularity results in Identifying these expansions in vector spherical
(Galdi 1994), the velocity and pressure field can be harmonics, we obtain
decomposed in the basis of vector spherical har-
monics. Since p = 0 , we put
riil,m 2li
lil,m = 0, l , | m | l, (7)
l
p( x ) = l ,mr
(l )
( r ).
Yl ,m (x
l m = l
rjjl,m 2ljj
jl,m = 0, l , | m | l + 1, (8)
We decompose u in the form
u( ) il ,m (r ( )Tl ,m ( )) rk kl,m = l
kl,m 2lk ,m r, l 1, | m | l 1. (9)
+ jl ,m (r (( ) I l ,m ( ))
+ kl ,m (r ( l +1) Nl ,m ( )) .
We deduce from (3.5), (3.6), the boundary con-
dition u g on 0 and the condition of decay
at infinity that
This form is chosen because
r ( l + )Tl ,m ( x ), r (l( l + ) I l ,m ( x ) and r ( l + ) Nl ,m ( x ) are
harmonics. Using vector spherical harmonics and il ,m (r ) = glT,m l 1,| m | l ,
the formula
239
DN jumpg er
u + ut + per 2 . After simplifying and using again the formula
|S
of vector spherical harmonics, we obtain,
We have
)( ) = gl ,
( r )( lglI,m I l ,m
DN jumpg = DN exttg + DN iinttg, (4.2) l ,m
+ ( + )glN,m Nl ,m .
where DN N ext and DN N int correspond to the exte-
rior and interior solutions. With the same kind of computation, we obtain
Let us first decompose DN N ext . We have (see L. Halpern in (Halpern 2001))
240
2l 2 + 1
DN exttg( x ) (l per ( + T
)glT,mTl ,m ) er = Nl , m .
l 1
2
l +3 I (4.4)
+ gl ,m I l ,m + 2(l
2ll
(l
(l )glN,m Nl ,m ,
l+2 We remark that the coefficient of N1,0 is zero.
Eventually, DN
N int writes as:
To decompose DN N int , we solve the inte-
rior problem (1) in the unit ball B( ) with DN
N int g(x
x l )glT,mTl ,m + 2lglI,m I l ,m
g Tl , g = I l ,m and then g = Nl ,m .
2l 2 + 1 N
For g = Tl ,m , since + gl ,m Nl ,m .
l 1
x r lTl ,m ( x / r ) (4.5)
Finally, the decomposition of the Dirichlet to
is harmonic and divergence free, we have a solution Neumann operator is obtained by (4.2), (4.4) and
of the form u = lTl ,m ( / r ) and p = 0 . Using the (4.5),
above formulas, it is easy to check that
per ( + T
) er l )Tl ,m .
DN
N jump g( x ) ( l )glT,mTl ,m
2
l +3 I 4l 2 1 N (4.6)
+ gl ,m I l ,m +
4ll 4l
gl ,m Nl ,m .
For g = I l ,m , we still have a solution of the form l+2 l 1
u = l I l ,m ( / r ) and p = 0 . Similarly, we calculate
This is the desired decomposition.
per ( + T
) er l l ,m .
lI
REFERENCES
For g = Nl ,m , the mapping
Galdi, G.P. (1994). An introduction to the mathematical
x r l Nl , m ( x / r ) theory of the Navier-Stokes equations. Vol. I, Volume
38 of Springer Tracts in Natural Philosophy. New
York: Springer-Verlag. Linearized steady problems.
is not divergence free. Proceeding as in the case of Halpern, L. (2001). A spectral method for the Stokes
the exterior domain, we look for a solution of the problem in three-dimensional unbounded domains.
form, Math. Comp. 70(236), 14171436 (electronic).
Ndelec, J.C. (2001). Acoustic and electromagnetic equa-
u = r l Nl , m + r l 2 ( r 2 ) I l ,m ,
tions, Volume 144 of Applied Mathematical Sciences.
New York: Springer-Verlag. Integral representations
p = r l 1Yl ,m . for harmonic problems.
Nguyen, T.N. (2013). Convergence to equilibrium for dis-
The condition u = 0 yields crete gradient-like flows and An accurate method for
the motion of suspended particles in a Stokes fluid. Dis-
sertation. Ecole Polytechnique.
l( l + )
= .
(l )
2(l
241
M.-P. Tran
Faculty of MathematicsStatistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
(
1 INTRODUCTION 1 2
D( ) = d | m | dx
d H d ( ) mdx
d
2
)
Recently, some convergence results for solution
of gradient-like system have been studied by may 2 d Q (
mdx ) dx .
2
authors in both continuous and discrete problems,
such as (Haraux & Jendoubi 1998, Haraux 2012, We use the same notations as in (Alouges,
Haraux & Jendoubi 2015) for continuous problem Kritsikis, Steiner, & Toussaint 2014), i.e., the vec-
and (Grasselli & Pierre 2012, Alaa & Pierre 2013, tor field H ext models an applied magnetic field,
Merlet & Pierre 2010, Merlet & Nguyen 2013) for H aniso = Q(e m )e denotes the anisotropy field,
discrete problem. These results have many applica- the stray field H d is the magnetic field, d is the
tions in partial differential equations. Continue the exchange constant and Q is the anisotropy con-
works in (Merlet & Nguyen 2013), we apply some stant. It is supplemented with initial and boundary
convergence results in this paper to a discretization conditions
of the Landau-Lifschitz-Gilbertequations using a
precise finite element scheme which was proposed
m
by F. Alouges et al. in (Alouges, Kritsikis, Steiner, & = ,
Toussaint 2014). n
The Landau-Lifschitz-Gilbert equations was first m( x, 0 ) = m0 ( x ) S 2 .
proposed by Landau and Lifschitz in (Landau &
Lifschitz 1935). These equations describe the evo- Notice that, at least formally, this evolution sys-
lution of the magnetization m : ( , ) S2 tem preserves the constraint | ( , t ) | = 1, x .
inside a ferromagnetic body occupying an open We will consider a discretization of the follow-
region R3 . This system of equations reads ing variational formulation of (1.1),
t m t m = m H efff , in , (1.1)
t 2
,
(1.2)
where > 0 is a damping parameter and m( m) +
d ( m) ext + aniso ( m )) ,
denotes the three dimensional cross product. The
so-called effective magnetic field H efff is given by for every H 1( , R3 ) which furthermore satis-
the functional derivative of micromagnetic energy fies ( ) m( ) = 0 a.e. in . It is known that for
D, more precisely every initial data m0 H 1( , S 2 ) , this variational
formulation admits a solution for all time (see
D (Alouges 2008)).
H efff ( ) d2 Hd ( )
m The main idea comes from the fact that the Dirich-
+ H ext + Q(( )e,
)e let energy function D of Landau-Lifschitz-Gilbert
equation is a Lyapunov function for (1.2). Indeed,
where the energy D is given by considering a smooth solution m( x t ) , we compute,
243
In this section, we recall some abstract convergence Moreover, we also assume that the projec-
results in recent study in (Nguyen 2013). Let M be tion acts only at second order, that is there exists
a Riemannian manifold embedded in R d and the , R > 0 such that
inner product on every tangent space Tu M is the
restriction of the euclidian inner product on R d . || M ( )( ) || R || v ||2 , f || v ||< . (2.7)
We consider a tangent vector field G C( C M ,T
TM )
and a function F C C 1(M,
M ). We say that G and
F satisfy the angle and comparability condition Theorem 2.4. (Nguyen 2013) Let un be the sequence
if there exists a real number > 0 such that for all defined by the project -scheme (2.5) and assume
u M, that these above conditions (2.6) are satisfied. Then
the sequence un converges to .
G (u ), F (u ) || G (u ) ||2 || F (u ) ||2 . (2.1)
3 SPACE DISCRETIZATION
We assume that F is a strict Lyapunov function
for the gradient-like system We discretize the problem in space using P1-Finite
Elements. Let us introduce some notation. Let
u(t ) = G (u(t )), u(t ) M . (2.2) ( h )h be a regular family of conformal triangu-
lations of the domain parameterized by the
Theorem 2.1. (ojasiewicz 1971) If F : R d R space step h. Let ( ih )i be the vertices of h and
is real analytic in some neighborhood of a point (ih )1ii N( ) the set of associated basis functions of
then F satisfies the Lojasiewicz inequality at , that the so-called P1( h ) discretization. That is to say
means: there exist > 0 and [ 0, 1/ ) such the functions (ih )i are globally continuous and
that linear on each triangle (or tetrahedron in 3D) and
satisfy ih hj ij . We define
| (u ) ( ) |1 || F ( ) ||, u B(
B(( , ) . Nh
(2.3) Vh
m
miih : i, mi R3 ,
i =1
Theorem 2.2. (Nguyen 2013) Assume that G and
F satisfy the angle and comparability condi-
Mh : {m V h
i, mi S . }
tion 2.1 and let u be a global solution of 2.2 and
there exists such that F satisfies the Lojasiewicz Notice that Mh is a manifold isomorphic to
( 2 )Nh . For any m = i =1 miih M h , we intro-
N
inequality 2.3 at . Then u(t) converges to as t
goes to infinity. duce the tangent space
244
Nh
Remark 3.1. We have replaced the term bm ( p h , h ) ph h
((m
mih pih ) ih ih
( n n ) h in the original scheme of [?] by i =1
(3.4)
Nh
( min pin ) ih ih .
has a positive symmetric part. Using
we see that bm ( p h , p h )
= 0,
|| p h ||2L2 ( )2 and bm h is
pih pih
i =1
coercive on Tm h M h Tm h M h . So, by definition,
This modification is equivalent to using the quad-
m h C (R+ , M h ) solves the variational formula-
rature formula:
tion (3.1) if and only if
Nh
f ddx f ( xih ) ih , d h
m G ( m h ) t > 0,, m h ( ) m0h .
i =1 dt
for the computation of this integral. The conver- We now check that the hypotheses of Theo-
gence to equilibrium results below are still true with rem 2.2 hold.
an exact quadrature formula, but the proof is slightly
more complicated, see Remark 4.2. Theorem 3.2. The functions G and F defined above
We now interpret this variational formulation as satisfy the angle and comparability condition (2.1).
a gradient-like differential system of the form (2.2). Moreover, the Lyapunov function F satisfies a
For this we introduce the Lyapunov functional ojasiewicz inequality (2.3) in the neighborhood of
F : M h H 1( , 3 ) R defined by any point mh of the manifold M M h .
mh ( d (m
h
)+ ext + aniso ( m
h
)) h Moreover, we can obtain the same estimate for
= : A m , h
h h
2
. applied field H ext and anisotropy field H aniso,
L
where the constant C depends on Q and | | . Then
(3.2) the Cauchy-Schwarz inequality, the identities
245
246
247
N
Some results on the viscous Cahn-Hilliard equation in
L.T.T. Bui
Faculty of Mathematics and Computer Science, University of Science, Vietnam National University,
Ho Chi Minh City, Vietnam
N.A. Dao
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
ABSTRACT: We study existence and uniqueness of solution for the viscous Cahn-Hilliard equation
under weak growth assumptions on the nonlinearity and in whole domain N. We also address a priori
estimates which are sufficient to investigate singular passage to the limit over different small parameters.
which is parabolic if |w| < 1 and backward para- vut = [ u u + ut ] , ,v > ), (4)
bolic if |w| > 1. Similarly, the equation
choosing either = or = ; here (u) = u3 u
or ( ) = 1+ u2 for equation (2), whereas in general
u
u
ut = (2)
1 + u 2 it denotes a non-monotonic function.
Equation (4) has been derived by several authors
is parabolic if |u| < 1 and backward parabolic if using different physical considerations (in par-
|u| > 1. Observe that in one space dimension the ticular, see (Gurtin (1996), Jckle & Frisch (1986),
above equations are formally related setting u = wx. Novick-Cohen (1988))). It is worth mentioning the
A different well-known equation of application in wide literature concerning both the relationship
theory of phase transitions is between the viscous Cahn-Hilliard equation and
phase field models, and generalized versions of the
equation suggested in (Gurtin (1996)).
ut (u ) (3) Concerning equation (4) with v = 1 , the
existence results were obtained under suitable
where the famous choice of nonlinearity is (u) = nonlinearity in bounded smooth domain of N
u3 u. (see (Carvalho & Dotko (2007)), (Elliott &
Clearly, forward-backward parabolic equations Stuart (1996)), (Bui et al. 2014b)). Moreover, in
lead to ill-posed problems. Often a higher order the latter reference authors give us the rigorous
term is added to the right-hand side to regularize proof of convergence to solutions of either the
the equation. Two main classes of additional terms Cahn-Hilliard equation, or of the Allen-Cahn
249
250
251
ABSTRACT: In this paper, we solve an initial inverse problem for an inhomogeneous heat equation.
The problem is ill-posed, as the solution exhibits unstable dependence on the given data functions. Up to
now, most of studies are focused on the homogeneous problem, and with constant coefficients. Recently,
we solved the heat problem with time-dependent coefficients in 1-D for a homogeneous heat equation.
This work is a continuous expansion of previous results (See (Quan 2011)) (N.H. Tuan & Triet 2013).
Herein we introduce two efficient regularization methods, the quasi-boundary-type and high frequency
truncation methods. Some error estimates between the regularization solutions and the exact solution are
obtained even in the Sobolev space H1 and H2.
Keywords: ill-posed problems; boundary value method; truncation method; heat equation; regularization
255
( )
2 MATHEMATICAL INITIAL INVERSE
T
PROBLEM OF HEAT CONDUCTION exp p b s ds
=1
Let b : [ , T ] R be a continuous function on
( )
2
T
T
[ 0 ] satisfying 0 1 b( ) 2, [ 0,
0 T ]; it is
assumed to be differentiable for every t and satisfy
gp
p s b d f p ( s )dds < , (5)
0
0 < b(t ) C1 for t ( ,T ).
Throughout this article, we denote the L2-norm where gp = g(x)Xp(x)dx, fp(x) = f(x, s)Xp(x)dx.
by || . || , and the inner product on L2 ( ) by Proof: Suppose the problem (1) ( ) has an exact
<, >. We also suppose that f L2 (( , T ); L2 ( )) solution u C ([ 0, T ]];; H 01 ( )) C 1((0, T ); L2 ( )).
and g L2 ( ) . First, we state a few properties of We have
the eigenvalues of the operator on the open,
bounded and connected domain with Dirichlet u
< , Xp > = b(t ) < u, X p (.) >
boundary conditions, which can also be referred to t
Section 6.5 in (Evans 1997). +< f t ), X p (.) > . (6)
256
( )
(7) T
= p < u( t ), X p (.) >. T
gp
p p s b( ))d f p ( s )dds X p ( x ). (14)
0
Combining (6) and (7), we obtain
up (t ) + b t ) pu p (t ) = f p t ) which is equivalent to It is easy to see that v L2 ( ). Then, we con-
sider the problem of finding u from the forward
heat problem
p tT b )d
e u p (t ) (t )
u
t = b(t )u + f ( x, t ),
T
b d
=e f (t ). (8)
p u | = , t ( , T ) (15)
where upon u( x, 0 ) = v( x ), x .
The problem (15) is the forward problem so it
T p sT b )d
t e
u p ( s ) ( s )ds
has a unique solution u (See (Evans 1997)). We
have
( )
T
T
=
( )
p s b d f p s) , (9)
t
t u ( x, t ) [e p p b( ))d ) X p (x ) >
v( x ),
p =1
( )
or t
t
+ exp
e p
p s b( ) f p ( s)) ]X p ( ).
gp ( p s
T
b )
d u p (t )
0
(16)
( )
T
T
= p s b d f p s )ds. (10) Thus
t
( )
e
T
Hence u ( x, T ) p p b( )d ) X p (x ) >
v( x ),
v(x
p =1
( ) ( )
T
T T
u p (t ) = < u( , t ), X p (.) e p p t b( )d + exp
e p p s b( ) f p ( )ds X p ( ).
( )
0
T
T
gp
p
p s b( )d f p s )ds .
(11) (17)
t
Combining (14), (17), and by a simple computa-
Letting t = 0 in (11), we have tion, we get
( )
T
up p 0 b( s )d
ds u( x T ) g pX p ( x ) = g( x ). (18)
p =1
( )
T
T
gp
p p s b( ))d f p ( s )dds .
(12)
Hence, u is the unique solution of the problem (1).
0
Remark 1:
Then 1. When b(t ) = 1, f x t ) = 0 , the problem (1) has
a unique solution if and only if g satisfies the fol-
( )
lowing strong regularity assumption
T
(., ) ||2
|| u(., exp
e p p 0 ( s )ds
p =1
e2TT | < g(.), X p (.) > |2 < .
( )
2
T
T (19)
gp
expp p s ( ))d f p s )ds < .
(13) p =1
0
This assumption is also given by Lemma 1 in
If (5) holds, then we define (G.W. Clark & S.F. Oppenheimer 1994).
257
( )
We have
T
u( x t ) e p p t b( )d
p =1
C1
( )
T
T G (t )G (t ) (G (t ))2 G (t )G (t )
gp
p p s b( ))d f p ( s )dds X p ( x ). (20)
B1
t = 4 w dx
d wt2dx
2b(t ) w 2dx (w( x, t ))2 dx
In spite of the uniqueness, the problem (1) is still
( )
2
ill-posed and some regularization methods are nec-
4 w x t wt x t dx
essary. In next sections, we propose two approxi-
mating problems. C1
+2 b(t ) w 2dx ( ( x, ))2 dx
Proof: The proof of this Theorem is divide into B1
d 4( )
two steps. 2
= 4 wt dx
2
( , ). ( , )
Step 1. The problem (1) has at most one solution.
Let u(x, t), v(x, t) be two solutions of C
+ 2 1 b( ) b( ) w dx ( ( x, )))2 .
the problem
p (1) such that u, v C([0, T]; B1
(0 T ); L2 ( )) . Put w(x, t) = u(x, t)
H 01 ( )) C 1((0
v(x, t); then w satisfies the problem (27)
Using the Hlder inequality, we have
wt b(t )w = 0,
w | = 0, t ( , T ), (21)
w( x, T ) 0, x . 4 2
wt dx
2
( )
2
4 0. (28)
Now, setting G (t ) = w (x ( x t )ddx ( t T ), 2
and by taking the derivative of G(t ), we have C
Since 0 1 b(t 0 < b(t ) C1 , we get B1 b(t )
b(t) 0. Then
1
G(t ) = 2 w( x, t ) . wt ( x, t )dx
= 2b(t ) w( x, t ) . w( x, t )dx (22) C1
G (t )G (t ) (G (t ))2 G (t )G (t ) 0. (29)
= 2 w( x, t ) t ( x , t )dx.
B1
C1
t
Using the Green formula, we obtain We define the function m(t ) = e B1 , and then
regard G as a function of m. Let us introduce an
auxiliary function
G(t ) = 2b(t ) ( ( , t ))2 ddx. (23)
F ( m ) = ln[G (t( m ))]. (30)
Hence
B1
Since t C1
l m , we have
G(t ) = 4b(t ) ( , )
wt ( x, t )dx
G(t( m ))t ( m ) B G(t( m ))
2b(t ) (w
( x, t ))2 dx
d . (24) F ( m ) = = 1 . (31)
G (t( m )) C1m G (t( m ))
Moreover, using the integration by parts and
and
wt ( x t ) b(t )w(x
( t ), we get
B1 G(t( m )) B2
( ) w ( x , t )
wt ( x, t )dx F ( m ) = + 21 2
2
C1m G (t( m )) C1 m
= 4b(t ) w( x, t ) wt ( x, t ) (25)
G (t( m ))G (t( m )) [G
G (t( m ))]2
= 4b (t ) (
2
( x, )) 2
. 2
G (t( m ))
258
( )
F ( m ) =
( )
2 T
exp p b d
u ( x, T ) =
( )
C (37)
B1 G (t( m ))G (t( m )) p =1 k p p
+ exp
T
1 (32)
( )
2 0
C
G (t( m )) B m g pX p ( x ), x .
m ) 0. Hence
Using (29) and (32), we obtain F ( where 0 1, k 1 and f p t ), g p are defined by
F is a convex function on the interval 1 m m1
f p t ) = f x t )X p ( x )ddx,
C1
T
with m1 e B1 . According to the convex property (38)
of function F(m), we have gp g(x d .
( x )X p ( x )dx
m 1 m m
F (m) F (m ) + 1 F ( ). (33) The idea of the problem (36) has a long mathe-
m1 1 m1 1 matical history going back to (Showalter 1983,
G.W. Clark & S.F. Oppenheimer 1994, Denche &
In addition, from (30), inequality (33) is equiva- Bessila 2005). Adding an appropriate corrector
lent to into the given data u( x T ) is a key idea in the
theory of the quasi-boundary value method (or
m 1 m1 m
modified quasi-boundary value method). Using
G (t ) [G (T )] m11 [G (0 )] m11 . (34)
this method, Clark and Oppenheimer (G.W.
Clark & S.F. Oppenheimer 1994), and Denche
Since G(T ) = 0, we conclude that G(t ) = 0 for and Bessila (Denche & Bessila 2005), regularized
0 t T . This implies that u( x t ) v( x t ) . The a similar backward problem by replacing the given
proof of the step 1 is completed. condition by
Step 2. The problem (1) has a solution which is
u(T ) + u( ) g (39)
defined in (20).
Using (11), we have (20).
In spite of the uniqueness, the problem (1) is still and
ill-posed and some regularization methods are nec-
essary. In next sections, we propose two approxi- u(T ) u ( ) g, (40)
mating problems.
respectively. Tuan and Trong (Trong & Tuan
2008) presented a different perturbation of g
3 A MODIFIED QUASI-BOUNDARY by a new term u( x, T ) A(, T )g , where A( , g )
VALUE METHOD AND ERROR satisfies some suitable conditions. The prob-
ESTIMATES IN L2 lem (36) is a generalized version of the
regularized problem given in (Trong & Tuan
In practice, we get the given data g by measuring at 2008).
discrete data. Hence, instead of g, we shall get an In the next Theorem, we shall study the exist-
inexact data g L2 ( ) satisfying ence, uniqueness and stability of a (weak) solution
of the problem (36).
|| g g || . (35)
Theorem 3.1 The problem (46) has a unique solu-
In this section, we shall regularize the problem tion u C([0,( T]; L2()) L2((0, T); H01())
(1) in the following one C 1((0, T ) H 01 ( )). The solution depends continu-
ously on g in L2 ( ).
In Step 2, the stability of the solution is given.
u First, we state the following Lemma
= b(t )u
t
+
exp ( )
f p t )X p ( x ) Proof:
The proof is divided into two steps. In Step 1,
( )
, (36)
T the existence and the uniqueness of a solution of
p =1 p p
k + exp
0 (36) is showed; the (unique) solution u of prob-
( x, t ) (0, T ) lem (36) is given by
259
Lemma. Let M > 0, k ≥ 1 and let ε > 0 be small enough that kε < M^k. Then, for every x ≥ 0,

1/(εx^k + e^{−Mx}) ≤ (kM)^k / (ε ln^k(M^k/(kε))).  (42)

Proof: Let f(x) = 1/(εx^k + e^{−Mx}); we have

f′(x) = −(εkx^{k−1} − Me^{−Mx}) / (εx^k + e^{−Mx})².  (43)

The equation f′(x) = 0 gives a unique solution x₀, at which f attains its maximum, and

f(x₀) = 1/(εx₀^k + e^{−Mx₀}).  (44)

Since e^{−Mx₀} = εkx₀^{k−1}/M, we have

f(x₀) = 1/(εx₀^k + εkx₀^{k−1}/M) = M/(εx₀^{k−1}(Mx₀ + k)) ≤ 1/(εx₀^k).  (45)

By using the inequality Mx₀ ≤ e^{Mx₀}, we get

M/(εk) = x₀^{k−1}e^{Mx₀} = (Mx₀)^{k−1}e^{Mx₀}/M^{k−1} ≤ e^{kMx₀}/M^{k−1}.  (46)

This gives e^{kMx₀} ≥ M^k/(kε), or kMx₀ ≥ ln(M^k/(kε)). Therefore x₀ ≥ (1/(kM)) ln(M^k/(kε)). Hence, we obtain

f(x) ≤ f(x₀) ≤ 1/(εx₀^k) ≤ (kM)^k / (ε ln^k(M^k/(kε))).  (47)

The Lemma is completely proved.

Now we pass to the proof of Theorem 3.1. Denote W = C([0,T]; L²(Ω)) ∩ L²((0,T); H₀¹(Ω)) ∩ C¹((0,T); H₀¹(Ω)), and write M = ∫₀ᵀ b(τ)dτ.

Part A. The function

u_ε(x,t) = Σ_{p=1}^∞ [exp(−λ_p ∫₀ᵗ b(τ)dτ) / (ελ_p^k + e^{−λ_p M})] (g_p − ∫_t^T e^{−λ_p ∫_s^T b(τ)dτ} f_p(s) ds) X_p(x)  (41)

belongs to W. Differentiating term by term, we have

d/dt <u_ε(·,t), X_p(·)> = −λ_p b(t) <u_ε(·,t), X_p(·)> + [e^{−λ_p M}/(ελ_p^k + e^{−λ_p M})] f_p(t)
= b(t) <Δu_ε(·,t), X_p(·)> + [e^{−λ_p M}/(ελ_p^k + e^{−λ_p M})] f_p(t).  (48)

This implies that

∂u_ε/∂t = b(t) Δu_ε + Σ_{p=1}^∞ [e^{−λ_p M}/(ελ_p^k + e^{−λ_p M})] f_p(t) X_p(x).  (49)

By letting t → T in (41), we get

u_ε(x,T) = Σ_{p=1}^∞ [e^{−λ_p M}/(ελ_p^k + e^{−λ_p M})] g_p X_p(x).

Therefore, u_ε is the solution of the problem (36).

Part B. The problem (36) has at most one solution in W. We can prove this part in a similar way as in Step 1 of Theorem 2.2. This ends Step 1.

Step 2. The solution of the problem (36) depends continuously on g in L²(Ω). Let w and v be two solutions of (36) corresponding to the given values g and h.
By (41),

w(x,t) = Σ_{p=1}^∞ [exp(−λ_p ∫₀ᵗ b(τ)dτ) / (ελ_p^k + exp(−λ_p ∫₀ᵀ b(τ)dτ))] (g_p − ∫_t^T e^{−λ_p ∫_s^T b(τ)dτ} f_p(s) ds) X_p(x),  (50)

v(x,t) = Σ_{p=1}^∞ [exp(−λ_p ∫₀ᵗ b(τ)dτ) / (ελ_p^k + exp(−λ_p ∫₀ᵀ b(τ)dτ))] (h_p − ∫_t^T e^{−λ_p ∫_s^T b(τ)dτ} f_p(s) ds) X_p(x),  (51)

where g_p = ∫_Ω g(x)X_p(x)dx and h_p = ∫_Ω h(x)X_p(x)dx. It follows that

||w(·,t) − v(·,t)||² = Σ_{p=1}^∞ [exp(−λ_p ∫₀ᵗ b(τ)dτ) / (ελ_p^k + exp(−λ_p ∫₀ᵀ b(τ)dτ))]² (g_p − h_p)²  (52)
≤ [(kM)^k/(ε ln^k(M^k/(kε)))]² Σ_{p=1}^∞ (g_p − h_p)²,

by the Lemma with M = ∫₀ᵀ b(τ)dτ ≤ B₂T. Hence ||w(·,t) − v(·,t)|| ≤ (kM)^k/(ε ln^k(M^k/(kε))) ||g − h||, which proves the continuous dependence on the data.

Next we show that u_ε(·,T) → g in L²(Ω). From (41),

||u_ε(·,T) − g||²_{L²(Ω)} = Σ_{p=1}^∞ ε²λ_p^{2k} g_p² / (ελ_p^k + exp(−λ_p ∫₀ᵀ b(τ)dτ))².  (55)

Using the estimate

(ελ_p^k + exp(−λ_p ∫₀ᵀ b(τ)dτ))² > 2ελ_p^k exp(−λ_p ∫₀ᵀ b(s)ds),  (56)

we get, for each fixed N,

||u_ε(·,T) − g||²_{L²(Ω)} ≤ Σ_{p=1}^N (ε/2) λ_p^k g_p² exp(λ_p ∫₀ᵀ b(s)ds) + Σ_{p=N+1}^∞ g_p².  (57)

By taking N such that Σ_{p=N+1}^∞ g_p² < η/2, and then ε small enough that the first sum is smaller than η/2, we conclude that ||u_ε(·,T) − g||_{L²(Ω)} → 0 as ε → 0.
From (41) and the representation of the exact solution, for every p,

<u_ε(·,t) − u(·,t), X_p(·)> = −[ελ_p^k / (ελ_p^k + exp(−λ_p ∫₀ᵀ b(τ)dτ))] <u(·,t), X_p(·)>.  (58)

Since ελ_p^k/(ελ_p^k + e^{−λ_p M}) ≤ ελ_p^k (kM)^k/(ε ln^k(M^k/(kε))) by the Lemma, we obtain

||u_ε(·,t) − u(·,t)||²_{L²} ≤ [kM/ln(M^k/(kε))]^{2k} Σ_{p=1}^∞ λ_p^{2k} |<u(·,t), X_p(·)>|² = [kM/ln(M^k/(kε))]^{2k} ||u(·,t)||²_{S_k(Ω)}.  (59)

Hence lim_{ε→0} ||u_ε(·,t) − u(·,t)||_{L²} = 0 for every t; in particular ||u_ε(·,0) − u₀(·)||_{L²} → 0, where u₀ = u(·,0) and u₀p = ∫_Ω u₀(x)X_p(x)dx. Using Theorem 3.2, we have lim_{ε→0} ||u_ε(·,T) − g(·)||_{L²} = 0. Therefore u(x,T) = g(x). Hence, u(x,t) is the unique solution of the problem (1). From (59), we also conclude that u_ε(x,t) converges to u(x,t) uniformly in t.

Theorem 3.4 Let f ∈ L²(0,T; L²(Ω)) and g ∈ L²(Ω). Suppose that the problem (1) has a unique solution u(x,t) in C([0,T]; H₀¹(Ω)) ∩ C¹((0,T); L²(Ω)) which satisfies u(·,t) ∈ S_k(Ω) for every t ∈ [0,T]. Let g_ε ∈ L²(Ω) be measured data such that ||g_ε − g|| ≤ ε. Then there exists a function v_ε satisfying

||v_ε(·,t) − u(·,t)||_{L²} ≤ (C + 1) [kM/ln(M^k/(kε))]^k,  t ∈ [0,T],  (60)

where C = sup_{t∈[0,T]} ||u(·,t)||_{S_k(Ω)}.

Indeed, let v_ε be the solution of (36) corresponding to the measured data g_ε. By (58),

Σ_{p=1}^∞ [ελ_p^k/(ελ_p^k + e^{−λ_p M})]² |<u(·,t), X_p(·)>|² ≤ [kM/ln(M^k/(kε))]^{2k} C²,  (61)

so that

||u_ε(·,t) − u(·,t)||_{L²} ≤ C [kM/ln(M^k/(kε))]^k.  (62)

Using (62) and Step 2 of Theorem 3.1, we get

||v_ε(·,t) − u(·,t)||_{L²} ≤ ||v_ε(·,t) − u_ε(·,t)||_{L²} + ||u_ε(·,t) − u(·,t)||_{L²}
≤ [(kM)^k/(ε ln^k(M^k/(kε)))] ||g_ε − g|| + C [kM/ln(M^k/(kε))]^k ≤ (C + 1) [kM/ln(M^k/(kε))]^k.  (63)
Remark 2.
If k = 1 then the condition u(·,t) ∈ S_k(Ω) is equivalent to the condition Δu(·,t) ∈ L²(Ω). Hence, this condition is natural and acceptable. To estimate the error in higher Sobolev spaces such as H¹ and H², we cannot continue to use the modified quasi-boundary value method. We present a truncation method in the next Section.

4 REGULARIZATION BY THE TRUNCATION METHOD AND ERROR ESTIMATES IN L², H1, H2

The exact solution of problem (1) has the representation

u(x,t) = Σ_{p=1}^∞ [exp(λ_p ∫_t^T b(τ)dτ) g_p − ∫_t^T exp(λ_p ∫_t^s b(τ)dτ) f_p(s) ds] X_p(x).  (64)

Let

R_ε = {p ∈ N : λ_p ≤ M_ε},  Q_ε = {p ∈ N : λ_p > M_ε}.  (65)

In spite of the uniqueness, the problem is still ill-posed, and a regularization is necessary. For each ε > 0, we introduce the truncation mapping P_ε : L²(Ω) → C^∞(Ω) ∩ H₀¹(Ω),

P_ε w(x) = Σ_{p∈R_ε} <w, X_p(·)> X_p(x).  (66)

In fact, P_ε is a finite-dimensional orthogonal projection on L²(Ω). We shall approximate the original problem by the following well-posed problem.

Theorem 4.1
For each f ∈ L²((0,T); L²(Ω)) and g ∈ L²(Ω), the function

u_ε(x,t) = Σ_{p∈R_ε} [exp(λ_p ∫_t^T b(τ)dτ) <P_ε g, X_p(·)> − ∫_t^T exp(λ_p ∫_t^s b(τ)dτ) <P_ε f(·,s), X_p(·)> ds] X_p(x)  (67)

solves the truncated problem and depends continuously on the data.

Proof: Note that w(t) is well-defined because <w(t), X_p(·)> = 0 if p ∈ Q_ε. This fact also implies that w = P_ε w. Now for two solutions w₁, w₂ we have

||w₁(·,t) − w₂(·,t)||²_{L²(Ω)} = Σ_{p∈R_ε} exp(2λ_p ∫_t^T b(τ)dτ) |<g₁ − g₂, X_p(·)>|²  (68)
≤ e^{2B₂(T−t)M_ε} ||g₁(·) − g₂(·)||²_{L²(Ω)}.

Theorem 4.2
Assume that the problem (1) has at most one (weak) solution u ∈ C([0,T]; L²(Ω)) ∩ C¹((0,T); L²(Ω)) corresponding to f ∈ L²((0,T); L²(Ω)) and g ∈ L²(Ω). Let g_ε be measured data such that ||g_ε − g||_{L²(Ω)} ≤ ε. Define the regularized solution u_ε ∈ L²((0,T); L²(Ω)) from g_ε as in (67). Then for each t ∈ [0,T], u_ε(t) ∈ C^∞(Ω) ∩ H₀¹(Ω) and lim_{ε→0} u_ε(t) = u(t) in L²(Ω) if we choose M_ε = ln(1/ε)/(2TB₂).

Proof:
Note that u_ε(t) = P_ε u_ε(t) ∈ C^∞(Ω) ∩ H₀¹(Ω) as in Remark 1. Moreover, using the stability in Theorem 4.1 we find that

||u_ε(·,t) − u(·,t)||_{L²(Ω)} ≤ ||P_ε u_ε(t) − P_ε u(t)||_{L²(Ω)} + ||P_ε u(t) − u(t)||_{L²(Ω)}  (71)
≤ ε^{t/(2T)} + (Σ_{p∈Q_ε} |<u(·,t), X_p(·)>|²)^{1/2} → 0,

since Σ_p |<u(·,t), X_p(·)>|² ≤ ||u(·,t)||²_{L²(Ω)} < ∞ and M_ε → ∞ as ε → 0.
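The truncation idea of (66)-(67) can be illustrated on the same kind of toy problem (b ≡ 1, f ≡ 0 on (0,π), λ_p = p², X_p = sin(px)): project the measured data onto the modes with λ_p ≤ M_ε and only then apply the (now harmless) backward exponential. All numerical values below are illustrative assumptions.

```python
import numpy as np

# Truncation regularization: keep only eigenmodes with lambda_p <= M_eps,
# M_eps = ln(1/eps)/(2*T*B2). Here b = 1 (so B2 = 1); values illustrative.
T, B2, eps = 1.0, 1.0, 1e-8
M_eps = np.log(1.0 / eps) / (2 * T * B2)

P = 20
lam = np.arange(1, P + 1, dtype=float) ** 2
keep = lam <= M_eps                      # index set R_eps; Q_eps is the rest

u0_p = np.zeros(P); u0_p[0], u0_p[1] = 1.0, 0.5
g_eps = u0_p * np.exp(-lam * T) + 1e-10 * np.random.default_rng(4).standard_normal(P)

u_eps0 = np.where(keep, g_eps * np.exp(lam * T), 0.0)   # P_eps, then backward step
print(int(keep.sum()), np.linalg.norm(u_eps0 - u0_p))
```

The cutoff M_ε grows only logarithmically as the noise level ε decreases, which is exactly the trade-off expressed by the error estimates of this section.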
Similarly, from (79) and the estimate

Σ_{p: λ_p > M_ε} |<u(·,t), X_p(·)>|² ≤ (1/M_ε²) Σ_{p=1}^∞ λ_p² |<u(·,t), X_p(·)>|² ≤ (1/M_ε²) ||u(·,t)||²_{H²(Ω)},  (82)

we find that

||u_ε(·,t) − u(·,t)||_{L²(Ω)} ≤ ε^{t/(2T)} + (2TB₂/ln(1/ε)) ||u(·,t)||_{H²(Ω)}.  (83)

Using the inequality √(a + b) ≤ √a + √b, we estimate

Σ_{p=1}^∞ λ_p² <u_ε(·,t) − u(·,t), X_p(·)>² ≤ M_ε² Σ_{p∈R_ε} <u_ε(·,t) − u(·,t), X_p(·)>² + Σ_{p: λ_p > M_ε} λ_p² <u(·,t), X_p(·)>²  (84)
≤ M_ε² ||P_ε u_ε(·,t) − P_ε u(·,t)||²_{L²(Ω)} + Σ_{p: λ_p > M_ε} λ_p² <u(·,t), X_p(·)>²
≤ (ln(1/ε)/(2TB₂))² ε^{t/T} + Σ_{p: λ_p > M_ε} λ_p² <u(·,t), X_p(·)>² → 0

as ε → 0, due to the convergence in (80).

Remark 3.
In subsection (ii) of Theorem 4.3, an error estimate in H²(Ω) is not given because we only obtain the estimates with k = 1, 2, respectively. We shall see that from the condition (41) above with k > 2 we may improve the estimate, and in particular give an error estimate in H²(Ω). We next consider a stronger condition; although it is quite strict for the linear case, as we discussed in the previous section, if the condition (85) holds then we have a better convergence rate.

Theorem 4.4
Let u, u_ε, M_ε be as in Theorem 4.2 and let t ∈ [0,T].

i. Assume that

||u(·,t)||²_{S_k(Ω)} = Σ_{p=1}^∞ λ_p^{2k} |<u(·,t), X_p(·)>|² < ∞.  (86)

Then

||u_ε(·,t) − u(·,t)||_{L²(Ω)} ≤ ε^{t/(2T)} + (2TB₂/ln(1/ε))^k ||u(·,t)||_{S_k(Ω)},  (88)

and

||u_ε(·,t) − u(·,t)||_{H²(Ω)} ≤ 3 (ln(1/ε)/(2TB₂)) ε^{t/(2T)} + 3 (2TB₂/ln(1/ε))^{k−1} ||u(·,t)||_{S_k(Ω)}.  (89)

Here we assume ε ≤ e^{−2TB₂}, so that M_ε ≥ 1, for the estimate in H²(Ω).

ii. Assume that

F_r(t) = Σ_{p=1}^∞ e^{2rλ_p} |<u(·,t), X_p(·)>|² < ∞

for some constant r > 0. Then
Using

||w||_{H²(Ω)} ≤ ||w||_{L²(Ω)} + ||w||_{H₀¹(Ω)} + ||Δw||_{L²(Ω)} ≤ 3 ||Δw||_{L²(Ω)}  (94)

and M_ε ≥ 1, we conclude the desired estimate in H²(Ω).

ii. From (71) and

Σ_{p: λ_p > M_ε} |<u(·,t), X_p(·)>|² ≤ e^{−2rM_ε} Σ_{p=1}^∞ e^{2rλ_p} |<u(·,t), X_p(·)>|² = e^{−2rM_ε} F_r(t),  (95)

we get

||(u_ε − u)(·,t)||_{L²} ≤ ε^{t/(2T)} + e^{−rM_ε} √(F_r(t)).  (99)

Hence

||(u_ε − u)(·,t)||_{L²} ≤ ε^{t/(2T)} + ε^{r/(2TB₂)} √(F_r(t)),  (100)

since e^{−rM_ε} = ε^{r/(2TB₂)} for M_ε = ln(1/ε)/(2TB₂).
ABSTRACT: In this paper, we study the numerical solution of the 3D homogeneous Helmholtz equation with Dirichlet boundary condition. The discretization of the problem is carried out using the Boundary Element Method (BEM). The problem in the whole domain is first formulated in terms of the interior and exterior boundary integral equations. Based on the Green formula, the analytical solution of the Helmholtz equation is represented in terms of the boundary data. Finally, we also apply the Finite Difference Method (FDM) in order to give comparative results. Numerical experiments are then presented to indicate the validity of our method.
1 INTRODUCTION

The Helmholtz equation, which carries the name of the physicist Hermann Ludwig Ferdinand von Helmholtz, has contributed to mathematical acoustics and electromagnetics. It is an important equation to be solved in numerical electromagnetic problems, for instance the waveguide problems in physical phenomena, acoustic radiation, heat conduction, wave propagation and electrolocation/echolocation problems. The Helmholtz equation in 2D and 3D is studied in many works, such as (Colton 1998), (E.A. Spence & Fokas 2009), (Olaf & Sergej 2007), (Burton & Miller 1971), (Ihlenburg & Babuska 1997), (Liu, Nakamura, & Potthast 2007), (Goldstein 1982), (Jan. 2009) and the references therein. A variety of problems require solutions to the Helmholtz equation in both interior and exterior domains. In the case of no source, the three-dimensional (3D) time-independent linear Helmholtz equation is considered as:

Δu + κ²u = 0,  x ∈ R³,  (1.1)

where κ > 0 is the wave number given by:

κ = ω/c = 2πf/c = 2π/λ.  (1.2)

We introduce here λ = c/f, the wavelength of plane waves of frequency f. The governing Dirichlet boundary condition is considered on the boundary Γ of a bounded Lipschitz domain Ω ⊂ R³ as:

u = g on Γ,  (1.3)

where g is a continuous function defined on Γ.

The full 3D Helmholtz problem (1.1) is divided into two subproblems, called the interior and exterior problems accordingly.

The interior Helmholtz problem with Dirichlet boundary condition is presented as:

Δu + κ²u = 0, in Ω,
u = g, on Γ,  (1.4)

in which, let us note that κ² cannot be a Dirichlet eigenvalue of −Δ on Ω; for frequencies f in (1.2) small enough this holds, and therefore the continuous problem has a unique solution. Without this assumption, the Helmholtz operator is singular and there is either no solution or an infinite set of solutions to (1.4). Thus, it is important to use sufficiently accurate discrete approximations.

The exterior Helmholtz problem, also called the scattering problem, is posed on the unbounded domain Ω_ext = R³ \ Ω̄, with Dirichlet boundary condition; the additional Sommerfeld radiation condition holds at infinity as follows:

Δu + κ²u = 0, in Ω_ext,
u = g, on Γ,
∂u/∂n (x) − iκ u(x) = O(1/|x|²),  |x| → ∞.  (1.5)
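The boundary integral formulation used below is built on the free-space fundamental solution G_κ(x,y) = e^{iκ|x−y|}/(4π|x−y|) of (1.1). As a quick sanity check (not part of the paper's method), one can verify numerically with central finite differences that G_κ satisfies the Helmholtz equation away from the source point; the wave number and evaluation points are arbitrary choices.

```python
import numpy as np

KAPPA = 2.0  # wave number (assumed value for illustration)

def G(x, y):
    """Free-space fundamental solution of the 3D Helmholtz equation."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * KAPPA * r) / (4 * np.pi * r)

def laplacian_G(x, y, h=1e-3):
    """Second-order central-difference Laplacian of G with respect to x."""
    acc = 0.0 + 0.0j
    for d in range(3):
        e = np.zeros(3); e[d] = h
        acc += G(x + e, y) - 2 * G(x, y) + G(x - e, y)
    return acc / h**2

x = np.array([1.0, 0.5, -0.3])
y = np.array([0.0, 0.0, 0.0])            # source point, x != y
residual = laplacian_G(x, y) + KAPPA**2 * G(x, y)
print(abs(residual))                     # ~0 away from the source point
```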
(S̃_κ φ)(x) = ∫_Γ G_κ(x, y) φ(y) ds(y),  (3.1)

(D̃_κ φ)(x) = ∫_Γ [∂G_κ(x, y)/∂n_y] φ(y) ds(y),  for x ∈ R³ \ Γ,  (3.2)

(K_κ φ)(x) = ∫_Γ [∂G_κ(x, y)/∂n_y] φ(y) ds(y),  for x ∈ Γ.  (3.3)

The adjoint double layer potential operator K′_κ is defined by:

(K′_κ φ)(x) = ∫_Γ [∂G_κ(x, y)/∂n_x] φ(y) ds(y),  for x ∈ Γ.  (3.4)

As a result, let us recall the definitions of the Dirichlet and Neumann trace operators γ₀, γ₁, which were proposed in (Jan. 2009). The Dirichlet trace operator γ₀ is

γ₀ : H¹_loc(Ω) → H^{1/2}(Γ),

with interior and exterior versions

γ₀int : H¹_loc(Ω) → H^{1/2}(Γ),  γ₀ext : H¹_loc(Ω_ext) → H^{1/2}(Γ),

such that

γ₀int v = v|_Γ for v ∈ C^∞(Ω̄),  γ₀ext v = v|_Γ for v ∈ C^∞(Ω̄_ext),  (3.8)

and γ₁int, γ₁ext are the interior and exterior Neumann trace operators, defined respectively such that

γ₁int v = ∂v/∂n for v ∈ C^∞(Ω̄),  γ₁ext v = ∂v/∂n for v ∈ C^∞(Ω̄_ext).  (3.9)

The double layer potential maps D̃_κ : H^{1/2}(Γ) → H¹_loc(R³ \ Γ). Combining the operator γ₀ with S̃_κ one has the single layer potential operator:

S_κ : H^{−1/2}(Γ) → H^{1/2}(Γ),  S_κ = γ₀ S̃_κ.  (3.5)

And the Neumann trace operator is given as:

γ₁ : H¹_loc(Ω) → H^{−1/2}(Γ).

Combining the operator γ₁ with the single layer potential one has the linear continuous mapping:

γ₁ S̃_κ : H^{−1/2}(Γ) → H^{−1/2}(Γ).

Theorem 3.2. Let γ₀ and γ₁ be the Dirichlet and Neumann trace operators. Then we have on H^{−1/2}(Γ) and H^{1/2}(Γ):

γ₀ext S̃_κ − γ₀int S̃_κ = 0,
γ₁ext S̃_κ − γ₁int S̃_κ = −I,
γ₀ D̃_κ = γ₀ext D̃_κ − γ₀int D̃_κ = I,
γ₁ D̃_κ = γ₁ext D̃_κ − γ₁int D̃_κ = 0.

Theorem 3.3. (Jan. 2009) For φ ∈ H^{−1/2}(Γ) there holds

γ₁int (S̃_κ φ)(x) = (1/2) φ(x) + (K′_κ φ)(x),  x ∈ Γ,  (3.10)

γ₁ext (S̃_κ φ)(x) = −(1/2) φ(x) + (K′_κ φ)(x),  x ∈ Γ.  (3.11)

In addition, let us also introduce here the hypersingular integral operator, which is defined as the negative Neumann trace of the double layer potential D̃_κ, denoted as E_κ.
(3.18)

where the Neumann trace of the double layer potential term γ₁(D̃_κ φ) is given on the boundary as:

γ₁(D̃_κ φ) = γ₁int(D̃_κ φ) = γ₁ext(D̃_κ φ),  φ ∈ H^{1/2}(Γ).  (3.14)

3.1 Interior boundary value problem

Theorem 3.4. If u is a solution to the interior Dirichlet Helmholtz problem:

Δu + κ²u = 0, in Ω,
γ₀int u = g, on Γ,  (3.15)

with a bounded Lipschitz domain Ω and Dirichlet boundary condition g ∈ H^{1/2}(Γ), then the Neumann trace γ₁int u satisfies the boundary integral equation

S_κ (γ₁int u)(x) = (1/2) g(x) + (K_κ g)(x),  x ∈ Γ.  (3.16)

(3.17)

Since γ₀int u = g on Γ, this implies the Fredholm boundary integral equation of the first kind:

S_κ (γ₁int u)(x) = (1/2) g(x) + (K_κ g)(x),  x ∈ Γ.

Taking the Neumann trace instead, one obtains the Fredholm boundary integral equation of the second kind:

((1/2) I − K′_κ)(γ₁int u)(x) = (E_κ g)(x),  x ∈ Γ.  (3.19)

According to the above equations, this gives the variational problems

S_κ γ₁int u = ((1/2) I + K_κ) g in H^{1/2}(Γ),  (3.20)

and

((1/2) I − K′_κ) γ₁int u = E_κ g in H^{−1/2}(Γ).  (3.21)

For the exterior problem one obtains analogously

S_κ (γ₁ext u)(x) = −(1/2) g(x) + (K_κ g)(x),  x ∈ Γ,  (3.23)

and u has the representation formula (3.7). Conversely, if γ₁int u satisfies the boundary integral equation (3.23), then the representation formula (3.7) defines a solution u to the interior Dirichlet problem (3.22).
g ≈ g_h = Σ_{l=1}^N g_l ψ_l.

The linear space T_h(Γ) is applied for the approximation of the Dirichlet data, i.e., the values of the solution on Γ.

4.1.2 Discrete solution to interior problem
Let us denote by u_h the approximated solution to the interior problem (1.4), and the discrete unknown Neumann data as:

W := γ₁int u_h.  (4.3)

From (3.20), for all ψ_h ∈ H_h, we have

<S_κ W, ψ_h> = <((1/2) I + K_κ) g_h, ψ_h>.  (4.4)

We find the approximate forms in terms of the functional bases as:

W ≈ Σ_{k=1}^M W_k φ_k,  g_h = Σ_{l=1}^N g_l ψ_l ∈ T_h(Γ).

This gives the discrete variational problem

V W = ((1/2) R + P) g_h,  (4.5)

or, for simplicity,

C W = D,  (4.6)

where

R(j, m) = ∫_Γ φ_j(x) ψ_m(x) ds(x),
P(j, m) = ∫_Γ ∫_Γ φ_j(x) [∂G_κ(x, y)/∂n(y)] ψ_m(y) ds(y) ds(x),

and

u(x) ≈ u_h(x) = Σ_{k=1}^M W_k ∫_Γ G_κ(x, y) φ_k(y) ds(y) − Σ_{l=1}^N (g_h)_l ∫_Γ [∂G_κ(x, y)/∂n(y)] ψ_l(y) ds(y).  (4.8)

4.1.3 Discrete solution to exterior problem
Similarly, we denote by u_h the approximated solution to the exterior problem (1.5), and the unknown exterior Neumann data W := γ₁ext u_h. From (3.26), for all ψ_h ∈ H_h, we have

<S_κ W, ψ_h> = <(−(1/2) I + K_κ) g_h, ψ_h>.  (4.9)

We find the approximate forms in terms of the functional bases as:

W ≈ Σ_{k=1}^M W_k φ_k,  g_h = Σ_{l=1}^N g_l ψ_l ∈ T_h(Γ).

For every i = 1, 2, ..., M, (4.9) gives the discrete variational problem (4.10). Finally, we obtain the linear system to find the approximation of the Neumann data W:

V W = (−(1/2) R + P) g_h,  (4.11)

and

u(x) ≈ u_h(x) = Σ_{k=1}^M W_k ∫_Γ G_κ(x, y) φ_k(y) ds(y) − Σ_{l=1}^N (g_h)_l ∫_Γ [∂G_κ(x, y)/∂n(y)] ψ_l(y) ds(y).  (4.12)
Au = b,  A ∈ C^{N×N},  u, b ∈ C^N.  (4.14)

Figure 1. Domain for numerical experiments.
The values u(x, y, z) are then displayed in a three-dimensional coordinate system. Otherwise, on the cube, Figures 4 and 6 show the simulation results for the numerical solutions.

Figure 5. Solution to the interior problem by FDM inside the cube.

6 CONCLUSION

REFERENCES

Burton, A.J. & G.F. Miller (1971). The application of integral equation methods to the numerical solution of some exterior boundary-value problems. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences 323(1553), 201-210.
Colton, D. (1998). Inverse acoustic and electromagnetic scattering theory, second edition.
R. Blaheta
Institute of Geonics of the CAS, Ostrava, Czech Republic
ABSTRACT: The paper provides an overview of the stochastic Finite Element Method (FEM) for the
investigation of the flow in heterogeneous porous materials with a microstructure being a Gaussian ran-
dom field. Quantities characterizing the flow are random variables and the aim is to estimate their prob-
ability distribution. The integral mean of the velocity over the domain is one of these quantities, which is
numerically analyzed for a described model problem. The estimation of those quantities is realized using
the standard Monte Carlo method and the multilevel Monte Carlo method. The paper also concerns the
use of the mixed finite element method for the solution of the Darcy flow and efficient assembling and
solving of the arising linear systems.
C = E(X Xᵀ),  C_ij = c(x^(i), x^(j)).  (10)
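A sample of such a Gaussian random field can be drawn by factorizing the covariance matrix (10), for instance with a Cholesky decomposition C = LLᵀ, and setting the field to LZ with Z a vector of independent standard normals. The 1D sketch below assumes an exponential covariance function; the variance, correlation length and grid size are illustrative values only.

```python
import numpy as np

# Sample a Gaussian random field on a 1D grid via Cholesky factorization of
# the covariance matrix C_ij = c(x_i, x_j). Exponential covariance with
# variance sigma^2 and correlation length lam (illustrative values).
sigma, lam, n = 1.0, 0.1, 200
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)

L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for positive definiteness
rng = np.random.default_rng(0)
Z = rng.standard_normal(n)
field = L @ Z          # one realization: zero mean, covariance ~ C
k = np.exp(field)      # log-normal permeability sample
```

Exponentiating the Gaussian realization yields a strictly positive (log-normal) permeability field, which is the form used for the Darcy flow samples.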
Figure 3. Random field for parameters: = 2, = 0.1.
Figure 4. Random field for parameters: = 4, = 0.1.
Figure 6. Random field for parameters: = 2, = 0.3.
Figure 7. Random field for parameters: = 4, = 0.3.
material microstructure.

Figures 2, 3, 4, 5, 6 and 7 show that changes of the first parameter affect only the logarithmic scale of the values, which is caused by the linear relation between log k and σ². The influence of the second parameter can be observed as a smoother material with its growing value.

3.2 Multilevel Monte Carlo method

For the mean value E(ℓ_L) of a random variable ℓ_L we can write

E(ℓ_L) = E(ℓ_0) + Σ_{l=1}^L E(ℓ_l − ℓ_{l−1}).  (15)

… to the 95% confidence interval for the estimated value. For the random variable k_eff the same estimation as for the mean of u_x was obtained. The graphs in …

… a specific sample k^(d) of the Gaussian random field, which was obtained for a random vector X, where X_i ∼ N(0,1), i ∈ {1, …, d²}. To obtain the value ℓ_{l−1} we first create a coarse material k^(d/2) from a …
(table continued from the previous page)
= 1   0.6024    0.0059    0.2335   0.0023
= 2   2.34      0.0229    0.5695   0.0056
= 4   67.3806   0.6604    2.9495   0.0289

Table 7. Sample average of u_x2.

        = 0.3               = 0.1
= 1     0.0005   0.0015     0.0001   0.001
= 2     0.003    0.0062     0.0002   0.0022
= 4     0.02     0.1623     0.0023   0.0109

Table 8. Sample standard deviation of u_x2.

        = 0.3               = 0.1
= 1     0.1098   0.0011     0.069    0.0007
= 2     0.4499   0.0044     0.1596   0.0016
= 4     11.7075  0.1147     0.7865   0.0077

(weighted arithmetic mean), etc. This approach ensures that the values Y_i follow the N(0,1) distribution, therefore the obtained material k^(d/2) is also a Gaussian random field. The value of ℓ^(n)_{l−1} is then calculated on the coarse grid and remains correlated with the value of ℓ^(n)_l.

Procedure 2: Coarse grid approximation as arithmetic mean of correlated random field.
In this case we use a similar approach as in Procedure 1, but the key difference is that the smoothing is applied to the correlated values,

Y₁^c = (1/4)(X₁^c + X₂^c + X^c_{d+1} + X^c_{d+2})  (20)

(arithmetic mean). A disadvantage is that this coarse grid approximation is not the same random field, so on the lower MLMC levels we always need to construct a new covariance matrix and its Cholesky factorization. The new covariance matrix is created by averaging of elements of the fine grid covariance matrix according to the fine-grid-to-coarse-grid element mapping, this con-

The MLMC method was tested on the model problem with grid size 200 × 200; therefore it was possible to use three coarser grids of dimensions 100 × 100, 50 × 50 and 25 × 25. The numbers of samples N_l to be performed on the specific levels were calculated from a preliminary simulation run. In this run the same number of samples was performed on each level, and then the values of computation time T_l and sample standard deviation s_l were estimated for each level. The values of N_l were then calculated according to (Cliffe, Giles, Scheichl, & Teckentrup 2011) as N_l = N √(s_l²/T_l), where N is a constant common to all the levels.

Table 9 presents the results of the MLMC method, which can be compared with the MC method results (Table 5).

Table 9. MLMC method results for u_x1.

        = 0.3               = 0.1
= 1     1.1302   0.0039     1.0189   0.0007
= 2     1.6744   0.0189     1.1003   0.0021
= 4     9.6647   0.5259     1.745    0.0152

The MLMC results were calculated with a different number of samples (i.e. different computation time) than the MC results; therefore we propose the following indicator for comparison of the efficiency. The efficiency of the MLMC estimator in comparison to the MC estimator is calculated via (19), where T_MC is the total time of the MC simulation and T_MLMC the time of the MLMC simulation; see Table 10. The value 1 for = 4 and = 0.3 in Table 10 is caused by the fact that in this case it was evaluated in the preliminary run that only one level should be used, i.e. it is the standard MC method. In the remaining cases all of the 4 levels were used.

Table 10. MLMC/MC efficiency calculated via (19).

        = 0.3     = 0.1
= 1     1.9382    7.7148
= 2     1.1841    5.7974
= 4     1         3.0011

Table 11 shows the values of s_l² on each of the levels l ∈ {1, …, 4} calculated in the preliminary run (level l = 1 corresponds to the coarsest grid, while the remaining values present the difference between the fine and coarse grid on the given level). We used these values to calculate the numbers of samples to be executed on each of the MLMC levels. The following table shows the ratio of the numbers of samples that were used on different levels.
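The telescoping identity (15), with coupled coarse/fine evaluations per level and more samples on the cheap coarse levels, can be sketched in a few lines. The toy quantity of interest below merely stands in for the FEM functional; the function names, level model and sample counts are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Multilevel Monte Carlo sketch for E[P_L]. Toy model: P(u) = exp(u) is
# approximated on "level" l by P_l(u) = (1 + u/n_l)**n_l with n_l = 2**l,
# so the corrections P_l - P_{l-1} shrink with l (stand-in for grid levels).
def P_l(u, l):
    n = 2 ** l
    return (1.0 + u / n) ** n

def mlmc(L, N_l, rng):
    """Telescoping estimator E(P_0) + sum_l E(P_l - P_{l-1}), eq. (15)."""
    est = 0.0
    for l in range(L + 1):
        u = rng.standard_normal(N_l[l])        # same sample on both levels
        if l == 0:
            est += P_l(u, 0).mean()
        else:
            est += (P_l(u, l) - P_l(u, l - 1)).mean()  # coupled correction
    return est

rng = np.random.default_rng(1)
# more samples on cheap coarse levels, fewer on expensive fine levels
est = mlmc(L=4, N_l=[40000, 10000, 2500, 600, 150], rng=rng)
print(est)
```

Because the level corrections have small variance, most of the work is shifted to the coarse levels, which is exactly the mechanism behind the efficiency ratios reported in Table 10.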
N4 N3 N2 N1
A = [ M    Bᵀ
      B    0  ],  (23)

P₁ = [ M̂ + BᵀW⁻¹B    Bᵀ
       0              −W ],  (24)

with M̂ being a suitable approximation to M and W being a block independent of sampling, e.g. W = r⁻¹I, where r is a (large) regularization parameter. Special cases are M̂ being a …

Figure 11. Discretized velocity u (first coordinate).
ACKNOWLEDGEMENT
M.-P. Tran
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
ABSTRACT: In numerical analysis, the Schur complement lies at the heart of domain decomposition methods, and many promising results have been derived to establish its mathematical basis. In this paper, we propose a numerical method for solving the Schur interface problem using the conjugate gradient method. The parallel STD algorithm is also described to give comparable results with the proposed method. Numerical experiments have been performed that show good parallel efficiency and convergence of our method. Efficient parallel computations complete in a few minutes and are much less tightly coupled than direct solvers. In the rest of this paper, numerical results are presented to show the convergence properties of the method, and some open problems are also discussed.
1 INTRODUCTION

Domain decomposition methods are important techniques for numerical simulation. Domain decomposition can be used in the framework of several discretization methods for Partial Differential Equations (PDEs) to get more efficient solutions on parallel computers. The basic idea in domain decomposition methods is to split the domain of study into non-overlapping subdomains, on which the discretized problems are simple and convenient to solve. Many variants of the domain decomposition method have been proposed and investigated in (Milyukova 2001, B. Smith 1996) and the references therein. In numerical analysis, the Schur complement method is the basic non-overlapping domain decomposition method and one of the most popular linear solvers. The Schur complement is a direct parallel method that can be applied to solve any sparse system of linear equations. For instance, the parallel Schur complement method was followed by (Mansfield 1990, S. Kocak 2010) and much recent literature. In many practical applications, the preconditioned conjugate gradient method is used because of its simplicity; one can refer to (Meyer 1990). Therefore, it is a convenient framework for the solution of our sparse matrix systems.

In this paper, we present a simple Schur complement method using conjugate gradients for solving the one-dimensional Poisson equation. We only consider this classical PDE in order to demonstrate the parallel efficiency of the proposed method on a simple system of linear equations. The idea of solving other large systems of equations using the proposed method carries over in the same way.

Let Ω ⊂ R^d be an open bounded domain with boundary Γ = ∂Ω. Suppose that we want to solve the following Poisson equation:

−Δu = f in Ω,  (1.1)

with Dirichlet boundary condition:

u = g on Γ.  (1.2)

The Schur complement method splits up the linear system into subproblems. To do so, let us divide Ω into p subdomains Ω₁, Ω₂, …, Ω_p with shared interfaces Γ₁, Γ₂, …. One divides the entire problem into smaller non-overlapping subdomain problems, then solves the subdomain problems to form the interface problem and solves it. This paper will discuss the Schur complement while proposing parallel implementations for general sparse linear systems. One considers the parallel solution in the one-dimensional case (1D), where the Schur complement system on the subdomain interfaces is solved by the conjugate gradient method.

The problems (1.1) and (1.2) are discretized to get the system:

AU = F,  (1.3)

where the stiffness matrix A, the load vector F and the approximate solution U can be decomposed into p groups, corresponding to the subdomains Ω₁, Ω₂, …, Ω_p. In this study, we have just treated the following 1D problem. For a general elliptic problem, the Schur complement is more complicated, so that
{ −u_xx = G(x), on Ω = (a, b),
{ u(a) = u(b) = 0.  (1.6)

Therefore, without loss of generality, in this paper we only study the following problem with homogeneous Dirichlet boundary condition, the same as (1.6):

{ −u_xx = F(x), on Ω = (a, b),
{ u(a) = u(b) = 0,  (1.7)

where u is unknown and F represents a continuous source term. The problem (1.7) can be rewritten in terms of the linear system:

Au = B,  (1.8)

where A is a sparse matrix. By using the Schur method proposed in this paper, the Schur complement matrix S is introduced and the system (1.8) is rewritten in the abbreviated form:

SU = G,  (1.9)

where the matrix S of the interface problem is related to the entire problem (1.8). This paper also presents a proposed method that allows Matlab users to take advantage of the Message Passing Interface (MPI) to design parallel programs. In particular, only the basic send and receive operations MPI_Send and MPI_Recv, which are both blocking calls, are used. These calls are used to implement programs across multiple processors for parallel computation.

2 THE SCHUR INTERFACE PROBLEM

In this section, we present the Schur interface complement method that is applied to solve the problem (1.7). One refers to (Mansfield 1990) for the Schur complement method, given in the following steps:

1. The domain Ω of problem (1.7) is subdivided into non-overlapping subdomains using parallel graph partitioning;
2. Rewrite the stiffness matrix A in the linear system (1.8) in each subdomain and on the interfaces;
3. Solve the subdomain problems to calculate the Schur matrix S from each known submatrix;
4. Solve the Schur complement system SU = G;
5. Solve the subdomain systems to obtain the solution in the whole domain by a parallel algorithm.

In (1.8), one subdivides the problem into p parts (p ≥ 2). The vector solution U can be decomposed into p groups, that is, U = (U₁, U₂, …, U_p), where the U_i (i = 1, …, p) correspond to the subdomains Ω₁, Ω₂, …, Ω_p, respectively. It is important to notice that the decomposition of Ω into the subdomains Ω_i does not have any cross point. In this study, we also discuss two models of domain decomposition and propose parallel schemes for solving problem (1.8) numerically, for the case p = 2 and for general p ≥ 2.

Suppose that the approximation resulting from the weak formulation of (1.1) and (1.2) is of the form (1.8); the stiffness matrix A is given as the following sparse matrix:
A = [ K11   K12   0
      K21   K22   K23
      0     K32   K33 ],  (2.3)

where the K_ij are defined for i, j = 1, 2, 3 as follows. K11 and K33 are matrices of order k; one denotes K11, K33 ∈ R^{k×k} as below:

K11 = K33 = (1/h²) [  2  −1   0  …   0
                     −1   2  −1  …   0
                      0  −1   2  …   0
                      …
                      0   0   0  …   2 ].  (2.4)

K12 and K32 ∈ R^{k×1}.

Then, the original linear system (1.8) is divided into three subproblems given as:

K11 U1 + K12 U2 = F1,  (2.10)
K21 U1 + K22 U2 + K23 U3 = F2,  (2.11)
K32 U2 + K33 U3 = F3.  (2.12)

From the first and third equations (2.10) and (2.12), one obtains:

U1 = K11⁻¹ (F1 − K12 U2),
U3 = K33⁻¹ (F3 − K32 U2).  (2.13)

Substituting into the second equation (2.11), we arrive at the Schur complement equation:

S U2 = G,  (2.14)

where S = K22 − K21 K11⁻¹ K12 − K23 K33⁻¹ K32 and G = F2 − K21 K11⁻¹ F1 − K23 K33⁻¹ F3.
A = [ K11   K12   0     …     0
      K21   K22   K23   …     0
      0     K32   K33   …     0
      …
      0     …     K(2p−2)(2p−3)   K(2p−2)(2p−2)   K(2p−2)(2p−1)
      0     0     …               K(2p−1)(2p−2)   K(2p−1)(2p−1) ].  (2.17)
The interface blocks are

K(2i)(2i−1) = (1/h²) [0 … 0 −1],
K(2i)(2i+1) = (1/h²) [−1 0 … 0],
K(2i)(2i) = 2/h²,

and the interface right-hand sides

G_i = F_{2i} − K(2i)(2i−1) K(2i−1)(2i−1)⁻¹ F_{2i−1} − K(2i)(2i+1) K(2i+1)(2i+1)⁻¹ F_{2i+1},

with U_Γ = (U₂, U₄, …, U_{2(p−1)}).

It can be noticed that since the subdomains Ω_i are disconnected from each other (non-overlapping), the corresponding block matrices K_ij are also decoupled from each other. This allows an easy parallelization. The linear sparse system (1.8) is split into several particular blocks:

K11 U1 + K12 U2 = F1,  (2.22)
K21 U1 + K22 U2 + K23 U3 = F2,  (2.23)
K32 U2 + K33 U3 + K34 U4 = F3,  (2.24)
…
K(2p−1)(2p−2) U_{2p−2} + K(2p−1)(2p−1) U_{2p−1} = F_{2p−1}.  (2.25)

The problem of solving (2.27) is called the Schur interface problem, and the assembly and solution of the sub-matrices in (2.26) can be performed in parallel by different processors. The parallel implementation of the Schur conjugate gradient method can be presented in three steps:

1. Step 1: Calculate the vector G by the parallel scheme in Figure 5;
2. Step 2: Calculate Sx by the parallel scheme in Figure 6;
3. Step 3: Solve the equation SU = G by the preconditioned conjugate gradient method. The same procedure is applied to the independent subproblems. Figure 3 also shows the pseudocode to solve it numerically.
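Step 3 can be sketched with a matrix-free conjugate gradient: S is never assembled, and each application of S performs only independent subdomain solves, which is what the parallel scheme distributes over processors. The toy setup below (serial stand-in, 1D blocks, p = 2, so the interface is a single point and CG converges immediately) uses illustrative sizes only.

```python
import numpy as np

# Matrix-free CG for the Schur system S U = G (Step 3), 1D Poisson blocks.
k, h = 4, 1.0 / 10
T = (2 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)) / h**2
K11 = K33 = T
K12 = np.zeros((k, 1)); K12[-1, 0] = -1.0 / h**2
K32 = np.zeros((k, 1)); K32[0, 0] = -1.0 / h**2
K21, K23, K22 = K12.T, K32.T, np.array([[2.0]]) / h**2

def apply_S(v):
    """S v via local subdomain solves; these solves can run in parallel."""
    return (K22 @ v - K21 @ np.linalg.solve(K11, K12 @ v)
            - K23 @ np.linalg.solve(K33, K32 @ v))

def cg(apply_A, b, tol=1e-12, maxit=100):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

F = np.ones(2 * k + 1)
F1, F2, F3 = F[:k], F[k:k + 1], F[k + 1:]
G = F2 - K21 @ np.linalg.solve(K11, F1) - K23 @ np.linalg.solve(K33, F3)
U_interface = cg(apply_S, G)
print(U_interface)   # exact value at x = 0.5 for -u'' = 1 is 0.125
```

With p subdomains the interface vector has p − 1 entries and the same loop applies unchanged; only `apply_S` gains more (independent) subdomain solves.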
where U = (U₁, U₂, …, U_{2p−1}). Suppose that we have comm = p; that means the work is shared among p processors. Computation of the quantities in the system in each subdomain can be done by the parallel scheme in

Figure 5. Parallel scheme to calculate G.

A = [ A11   A12   0     …     0
      A21   A22   …     …     0
      0     A32   …     …     0
      …
      0     …     A(m−1)(m−2)   A(m−1)(m−1)   A(m−1)m
      0     …     0             Am(m−1)       Amm ].  (3.1)
u( x ) = e 2 x 4 sin x + cos x + 2 x, (4.1)
3 4
Figure 10. Numerical solution to Example 1, for different cases of p and k. (a): By STD algorithm, (b) and (c): By
Schur conjugate gradient method. Red: the exact solution. Blue: The approximate solution.
Figure 12. Numerical solution for case p = 3 and k = 30. (a) By conjugate gradient method, (b) By STD algorithm.
Red: the exact solution. Blue: The approximate solution.
4.2 Example 2

Let us consider the function

u(x) = sin(πx/2) + x²,  (4.2)

which is the exact solution of (1.4). The numerical results have been obtained with the tolerance tol = 10⁻⁶ and maxit = 200 iterations. The approximate solutions by the conjugate gradient method and by the parallel STD implementation are shown in Figure 11. One remarks that in this case four processors have been used, that is, p = 4.

As in Example 1, the parallel Schur conjugate gradient method gives a convergent solution within only the first two iterations. Nevertheless, as Figures 10 and 11 show, with the STD algorithm convergence is still not achieved after maxit = 200 iterations. Let us consider an additional example, where the exact solution of (1.4) is u(x) = 3x³ − 5x² + 2x; the numerical simulations of both parallel schemes are presented in Figure 12 for comparison.
P. Kurov
VŠB–Technical University of Ostrava, Ostrava, Czech Republic
ABSTRACT: This article models the Phadiatop test. The study was created with support from the Clinic of Occupational and Preventive Medicine to try to avoid unnecessary and costly testing. The estimation used statistical methods, specifically logistic regression to classify a patient into a particular group, and contingency tables to verify other dependences between the patients' characteristics. Patients were categorized only on the basis of their personal and family anamnesis, age and sex. Patients were put into the correct group (healthy or sick) with 64% probability. Testing based on age groups of the patients was also done using this database. A positive Phadiatop test was most common for people born between 1972 and 1981, where the genetic predisposition for a positive Phadiatop test result is about 55%.
π(x) = P(Y = 1 | X = x) = e^{β₀ + βx} / (e^{β₀ + βx} + 1).  (2)

OR_MH = [Σ_{l=1}^L n_{l,11} n_{l,22} / n_l] / [Σ_{l=1}^L n_{l,12} n_{l,21} / n_l].  (7)
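The two statistics above can be sketched in a few lines: the logistic model probability (2) and the Mantel-Haenszel common odds ratio (7). The coefficients and contingency tables below are invented illustrative numbers, not the fitted values from the patient database.

```python
import numpy as np

def logistic_prob(beta0, beta, x):
    """P(Y = 1 | X = x) for the logistic regression model (2)."""
    z = beta0 + np.dot(beta, x)
    return np.exp(z) / (np.exp(z) + 1.0)

def mantel_haenszel_or(tables):
    """Common odds ratio over L strata; each table is [[n11, n12], [n21, n22]]."""
    num = sum(t[0][0] * t[1][1] / (t[0][0] + t[0][1] + t[1][0] + t[1][1])
              for t in tables)
    den = sum(t[0][1] * t[1][0] / (t[0][0] + t[0][1] + t[1][0] + t[1][1])
              for t in tables)
    return num / den

# hypothetical patient with two binary predictors and made-up coefficients
p = logistic_prob(-0.5, np.array([1.2, 0.8]), np.array([1.0, 0.0]))
# two made-up 2x2 strata (e.g. sex-stratified test result vs. exposure)
orr = mantel_haenszel_or([[[10, 20], [15, 40]], [[8, 12], [6, 30]]])
print(round(p, 3), round(orr, 3))
```

An odds ratio near 1 across strata is what the conditional independence tests in Section 3 look for.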
Number of patient     5      16     35     40     41
Sex                   m      m      m      m      w
Year of birth         1973   1970   1974   1986   1991
PA_asthma             0      1      1      0      0
PA_allergic rhinitis  1      1      0      0      0
PA_eczema             1      0      0      0      0
PA_others             0      0      0      1      0
FA_asthma             0      1      0      0      0
FA_allergic rhinitis  1      0      0      0      0
FA_eczema             1      0      0      0      0
FA_others             1      0      0      1      0

Figure 1. Depiction of age groups contingent on the Phadiatop test result.

Independent variables | Estimate | Wald's | Significant
Chi-Square statistics, Table 5. If we examine the dependence for groups of patients by their year of birth, we can see from the stated values that in none of the groups do we reject the null hypothesis of independence of the Phadiatop test result on sex at a 5% level of significance.

To determine a more accurate result of the Phadiatop test contingent on a patient's sex, depending on whether we also consider a patient's age, we perform a conditional dependence test by means of the Cochran and Mantel-Haenszel statistics, Table 6.

Based on the performed tests, we do not reject the null hypothesis of independence of the Phadiatop result on sex at a 5% level of significance.

According to the above calculated statistics, it was verified that a patient's sex and age have no influence on the Phadiatop test result. The illnesses influencing the Phadiatop test result are asthma, allergic rhinitis and eczema in a patient's personal anamnesis.

4 THE PHADIATOP TEST RESULTS EVALUATION BASED ON THE AGE AND GENETIC PREDISPOSITIONS

Based on the above stated information about a patient's age and the categorization of patients into age groups, we will perform the search for genetic predispositions to a positive Phadiatop test. The positive (II to VI) and the negative (0 and I) Phadiatop test results are stated.

Based on the mentioned age groups and the division into positive and negative Phadiatop tests, it is clear that the proportional representation of the diseased patients is more or less equal. The only group that differs and has the most positive patients is the age group born 1981-1972, thus the young patients. According to the available information, the genetic predisposition for atopy (positive Phadiatop test II to VI) should be about 30% for the inhabitants of the Czech Republic.

For this analysis, a patient with genetic predispositions was one who entered the statistically significant positive diseases of the Phadiatop test results into his family anamnesis. Thus, a patient with genetic predispositions included into his/her family anamnesis:
- positive asthma or allergic rhinitis (other records then did not need to be taken into account);
- positive eczema and other diseases at once, if asthma and rhinitis were negative.

Year of birth | Negative | Positive | PredispositionGenetic | Predisposed and positive | Proportion
To 1982 | 28 | 22 | 26 | 11 | 0.21
1981 to 1972 | 54 | 33 | 45 | 18 | 0.31
1962 to 1971 | 44 | 26 | 27 | 11 | 0.24
1952 to 1961 | 33 | 17 | 23 | 9 | 0.16
Before 1952 | 8 | 9 | 9 | 4 | 0.08
Sum | 167 | 107 | 130 | 53 | 1

Based on Table 7, it is obvious that most patients with a genetic predisposition are from the age group born between 1981 and 1972: 45 patients out of the total of 130 patients who were found to have a genetic predisposition to a positive result of the Phadiatop test, Table 7 (the column PredispositionGenetic). The proportional representation of the patients with genetic predisposition is 130/274 = 0.47.

This proves the division of the data file, where there are 107 positive patients out of 274, about 39%. Based on the family predispositions, 47% of all patients should be in the positive Phadiatop test group. Nevertheless, there are 53/107 = 0.49 genetically predisposed patients with the positive Phadiatop test (107 records).

5 CONCLUSION

Knowledge of the Phadiatop test is very significant, both for patient examination in offices of occupational and preventive medicine and for correct patient care (e.g. travellers). Unfortunately, performing the Phadiatop test is expensive; therefore, there is an effort to model its result as accurately as possible using characteristics that are easy to discover, such as a patient's personal and family anamnesis, and also e.g. age and sex.

The tested database came from the years 2010-2012 from the University Hospital of Ostrava; there were 274 patient entries available for the so-called control group of patients, i.e. patients who have no specific illness and for whom the test is performed preventively (e.g. travellers etc.). The testing used logistic regression; on the basis of the performed
S. Ly
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
U.H. Pham
Faculty of Economic Mathematics, University of Economics and Laws, Ho Chi Minh City, Vietnam
R. Briš
VSB-Technical University of Ostrava, Ostrava, Czech Republic
ABSTRACT: The distortion risk measure is a very effective tool for quantifying losses in finance and insurance, while copulas play an important role in modeling the dependence structure of random vectors. In this paper, we propose a new method to estimate distortion risk measures and use copulas to find the distribution of a linear combination of two dependent continuous random variables. As a result, partial risks as well as the aggregate risk are estimated via distortion risk measures using the copula approach.
1 INTRODUCTION

Suppose that we have a portfolio Y consisting of two assets X_1 and X_2 as follows:

Y = w_1 X_1 + w_2 X_2,  (1)

where w_i denotes the weight of asset i, i = 1, 2. Let F_{X_1}, F_{X_2} and F_Y be the distribution functions of X_1, X_2 and Y, respectively, where X_1 and X_2 are not independent. Here, our goal is to calculate the risk of the portfolio Y under a distortion risk measure, see Wang (2000), given by

R_g[Y] = \int_0^{\infty} g(\bar{F}_Y(y)) \, dy + \int_{-\infty}^{0} [g(\bar{F}_Y(y)) - 1] \, dy,  (2)

where g is a distortion function and \bar{F}_Y(y) = 1 - F_Y(y) is the survival function of Y. The risk measure R_g is formed using the Choquet integral, see Wang (2000). In some cases Y denotes a non-negative loss; then the distortion risk measure only has the first part in (2). As we can see, the important thing is that we have to derive the distribution of Y.

Recall that if Y = X_1 + X_2 and X_1, X_2 are independent, then it is well known that the solution can be obtained through the convolution product of the two density functions f_{X_1} and f_{X_2}, given by

f_Y(y) = (f_{X_1} * f_{X_2})(y) = \int f_{X_1}(x) f_{X_2}(y - x) \, dx.  (3)

In Cherubini et al. (2011), the authors consider the case where X_1 and X_2 are not independent. In their approach, a copula is used to define a C-convolution given by

F_{X_1 + X_2}(y) = (F_{X_1} \overset{C}{*} F_{X_2})(y) = \int_0^1 \partial_u C(u, F_{X_2}(y - F_{X_1}^{-1}(u))) \, du,  (4)

where C is a copula capturing the dependence structure of X_1 and X_2.

In this article, we consider a more general case in the sense that a copula is used to find the distribution of the portfolio Y as a combination of two continuous variables. After that, we will conduct an estimation of the risk of Y using a distortion risk measure.

The paper is organized as follows. The introduction is presented in Section 1. The preliminaries about distortion risk measures and copulas are briefly recalled in Sections 2 and 3. After that, in Section 4, we propose a new formula for estimating the risk. In Section 5, a copula-based method for finding the distribution of a linear combination of random variables is established. Next, we show the applications in Section 6, and the conclusions are stated in the last section.
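The C-convolution (4) is easy to check numerically. Under the independence copula C(u, v) = uv, where ∂_u C(u, v) = v, the formula reduces to the ordinary convolution; for two independent Exp(1) variables the sum is Gamma(2, 1), whose CDF 1 − e^{-y}(1 + y) gives a closed-form benchmark. A minimal midpoint-rule sketch (the function names are ours):

```python
import math

def c_convolution_cdf(y, F2, F1_inv, du_C, n=20000):
    """F_{X1+X2}(y) = integral_0^1 du_C(u, F2(y - F1^{-1}(u))) du, midpoint rule."""
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        total += du_C(u, F2(y - F1_inv(u)))
    return total / n

# Independence copula: C(u, v) = u*v, hence the partial derivative in u is v.
du_C = lambda u, v: v
# X1, X2 ~ Exp(1): F(x) = 1 - e^{-x} for x > 0, with quantile F^{-1}(u) = -ln(1 - u).
F2 = lambda x: 1.0 - math.exp(-x) if x > 0 else 0.0
F1_inv = lambda u: -math.log(1.0 - u)

est = c_convolution_cdf(2.0, F2, F1_inv, du_C)
exact = 1.0 - math.exp(-2.0) * (1.0 + 2.0)   # Gamma(2, 1) CDF at y = 2
```

A dependent case only changes `du_C` (the u-partial derivative of the chosen copula); the integration scheme stays the same.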
By the substitution y = t/(1 - t), the risk measure can be written as

R_g[Y] = \int_0^1 g\left( \bar{F}_Y\left( \frac{t}{1-t} \right) \right) \frac{1}{(1-t)^2} \, dt.

Let k(t) = g(\bar{F}_Y(t/(1-t))) \, (1/(1-t))^2 and apply the composite trapezoidal rule; we have the approximation

R_g[Y] = \int_0^1 k(t) \, dt \approx \frac{1}{n} \left[ \frac{k(0) + k(1)}{2} + \sum_{i=1}^{n-1} k(i/n) \right].

F_Y(y) = \mathrm{sgn}(w_2) \int_0^1 \partial_u C\left( u, F_2\left( \frac{y - w_1 F_1^{-1}(u)}{w_2} \right) \right) du,  (17)

where C is a copula with density c, and sgn(x) is the sign function of x,

sgn(x) = 1 if x > 0, -1 if x < 0.
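The substitution and the composite trapezoidal rule above can be sketched directly. The loss distribution (Exp(1)) and the distortion g(s) = s^{1/2} below are illustrative choices of ours, picked because the exact value is then R_g[Y] = ∫_0^∞ e^{-y/2} dy = 2:

```python
import math

def distortion_risk(g, survival, n=20000):
    """R_g[Y] = integral_0^1 k(t) dt with k(t) = g(S_Y(t/(1-t))) / (1-t)^2,
    by the composite trapezoidal rule. k(1) is taken as its limit 0, which is
    valid when g(S_Y(y)) decays faster than 1/y^2 as y grows."""
    def k(t):
        if t >= 1.0:
            return 0.0
        return g(survival(t / (1.0 - t))) / (1.0 - t) ** 2
    total = 0.5 * (k(0.0) + k(1.0))
    for i in range(1, n):
        total += k(i / n)
    return total / n

g = lambda s: math.sqrt(s)          # proportional-hazard distortion (illustrative)
survival = lambda y: math.exp(-y)   # Exp(1) loss: S_Y(y) = e^{-y}
est = distortion_risk(g, survival)  # exact value is 2 for these choices
```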
so the Jacobian of the transformation satisfies J = 1/w_2 ≠ 0.
w_1 = 3% and w_2 = 9%.
Figure 3. \hat{F}_1 approximates the normal CDF.

Copula | Statistic | Parameter | P-value
C | 0.015919 | 0.45947 | 0.6658

H(x_1, x_2) = C(F_1(x_1), F_2(x_2)),  (32)

where F_1 and F_2 approximate normal distributions as shown in Figures 3 and 4; C is a Student copula with parameters ρ = 0.46 and ν = 9, given by

C(u, v) = t_{ρ,ν}( t_ν^{-1}(u), t_ν^{-1}(v) ).
7 CONCLUSIONS
ACKNOWLEDGEMENT
Ron S. Kenett
The KPA Group, Raanana, Israel, Department of Mathematics G. Peano, University of Turin, Italy
Institute for Drug Development, The Hebrew University of Jerusalem, Israel
Faculty of Economics, Ljubljana, Slovenia, Center for Risk Engineering, NYU Tandon School of Engineering,
New York, USA
ABSTRACT: The literature on statistical process control has focused on the Average Run Length
(ARL) to an alarm, as a performance criterion of sequential schemes. When the process is in control,
ARL0 denotes the ARL to false alarm and represents the in-control operating characteristic of the proce-
dure. The average run length from the occurrence of a change to its detection, typically denoted by ARL1,
represents the out-of-control operating characteristic. These indices however do not tell the whole story.
The concept of Information Quality (InfoQ) is defined as the potential of a dataset to achieve a specific
(scientific or practical) goal using a given empirical analysis method. InfoQ is derived from the Utility (U)
of applying an analysis (f) to a data set (X) for a given purpose (g). Formally, the concept of Information
Quality (InfoQ) is defined as: InfoQ(f, X, g) = U(f(X | g)). These four components are deconstructed into
eight dimensions that help assess the information quality of empirical research in general. In this paper,
we suggest that the use of the Probability of False Alarm (PFA) and the Conditional Expected Delay (CED) as an alternative to ARL0 and ARL1 enhances the Information Quality (InfoQ) of statistical process control
methods. We then review statistical process control methods from a perspective of the eight InfoQ dimen-
sions. As an extension, we discuss the concept of a system for statistical process control.
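As a toy illustration of the two proposed indices (all parameters here are our own illustrative choices, not from the paper): monitor a series that is N(0, 1) up to a change point and N(2, 1) afterwards with a simple three-sigma rule, and estimate by Monte Carlo the probability of an alarm before the change (PFA) and the mean detection delay after it (CED):

```python
import random
import statistics

random.seed(3)

def run_chart(n=30, change_at=10, shift=2.0, limit=3.0):
    """One monitored series: N(0,1) before the change point, N(shift,1) after.
    Returns the first time |observation| exceeds the control limit, else None."""
    for t in range(1, n + 1):
        x = random.gauss(shift if t > change_at else 0.0, 1.0)
        if abs(x) > limit:
            return t
    return None

# Monte Carlo estimates of PFA and CED for this Shewhart-type rule
alarms = [run_chart() for _ in range(4000)]
false_alarms = sum(1 for t in alarms if t is not None and t <= 10)
delays = [t - 10 for t in alarms if t is not None and t > 10]
pfa = false_alarms / 4000          # P(alarm before the change occurs)
ced = statistics.mean(delays)      # mean delay, conditional on detection
```

Unlike a single ARL number, the pair (pfa, ced) separates the in-control behaviour before the change from the detection behaviour after it.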
3 DETECTION OF CHANGE

Change happens and, invariably, its early detection is of importance. Usually we are not given

Figure 1. Four time series with change point at the 10th observation.
Nabendu Pal
Department of Mathematics, University of Louisiana at Lafayette, Louisiana, USA
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
ABSTRACT: We are familiar with the two-way Analysis Of Variance (ANOVA) using the normal
model where we assume the additivity of the factor effects on the mean of the response variable assuming
homoscedasticity (i.e., variances are all equal across the factor levels). But this type of normal set-up is
not applicable in many problems, especially in engineering and biological studies where the observations
are non-negative to begin with and likely to be positively skewed. In such situations one may use the
Gamma model to fit the data, and proceed with further inferences. However, a normal type inference
based on the decomposition of total Sum of Squares (SS) is not possible under the Gamma model, and
further, the sampling distributions of the SS components are intractable. Therefore, we have looked into this problem from scratch and developed a methodology where one can test the effects of the factors. Our approach to tackling this interesting problem depends heavily on computations and simulation, which brings a host of other challenges.
f(x_{ijk} \mid \alpha_{ij}, \beta_{ij}) = \frac{1}{\Gamma(\alpha_{ij}) \, \beta_{ij}^{\alpha_{ij}}} \, x_{ijk}^{\alpha_{ij}-1} \exp(-x_{ijk}/\beta_{ij}).  (1.2)

The mean and variance of X_{ijk} are given as

E(X_{ijk}) = \mu_{ij} \text{ (say)} = \alpha_{ij} \beta_{ij}, and V(X_{ijk}) = \sigma_{ij}^2 \text{ (say)} = \alpha_{ij} \beta_{ij}^2.  (1.3)

To study whether the factor levels, individually or jointly, have any significant influence on the means, or whether the two factors have any interaction, we are going to consider the following four hypothesis testing problems.

Problem 1: Test H_0^{(1)}: \alpha_{ij} = \alpha_{ij'} vs. H_A^{(1)}: \alpha_{ij} \neq \alpha_{ij'}, for some j \neq j', 1 \leq j, j' \leq b.

Problem 2: Test H_0^{(2)}: \alpha_{ij} = \alpha_{i'j} vs. H_A^{(2)}: \alpha_{ij} \neq \alpha_{i'j}, for some i \neq i', 1 \leq i, i' \leq a.

Problem 3: Test H_0^{(3)}: \alpha_{ij} = \alpha \; (\forall (i, j)) vs. H_A^{(3)}: \alpha_{ij} \neq \alpha_{i'j'}, for some i \neq i' and/or j \neq j', 1 \leq i, i' \leq a, 1 \leq j, j' \leq b.

Table 1. Enzyme activity data according to gender and genotype.

Genotype | Male | Female
FF | 1.884; 2.283; 4.939; 3.486 | 2.838; 4.216; 2.889; 4.198
FS | 2.396; 2.956; 3.105; 2.649 | 3.550; 4.556; 3.087; 1.943
SS | 2.801; 3.421; 4.275; 3.110 | 3.620; 3.079; 3.586; 2.669

The most general case, i.e., where all scale parameters are unknown and possibly unequal, will be considered in a later phase of our study. The above assumption (1.4) helps us in developing the ideas and concepts, as well as the computational tools, which will be generalized rather easily in the next phase of our study.

Under the assumption of equal scale (i.e., (1.4)), the four hypothesis testing problems essentially boil down to studying the effects of the two factors on the shape parameters \alpha_{ij}'s only.

In Section 2, we develop the test procedures for Problem 1 (i.e., testing the significance of Factor 2), followed by a comprehensive simulation. In Section 3, we consider the test procedures for Problem 3 (i.e., testing the joint significance of Factor 1 and Factor 2). In each of the above two problems we first derive the Asymptotic Likelihood Ratio Test (ALRT). As we will see from our simulation, the ALRT performs poorly in maintaining the nominal level for small sample sizes, and hence an improvement is presented in terms of a Parametric Bootstrap (PB) version of the test based on the likelihood ratio statistic, henceforth called PBLRT.

In a series of recent papers (see Pal et al. (2007), Chang et al. (2008), Chang et al. (2010), Lin et al. (2015)), it has been shown that the PBLRT works much better than the ALRT for many other problems where an exact optimal test either does not exist or is hard to find due to a complicated sampling distribution. Although the PBLRT performs better than the ALRT in terms of maintaining the level (i.e., probability of type I error) condition, especially for small to moderate sample sizes, it is heavily
DN | 7.0; 7.8; 13.0; 12.9; 14.7; 7.8; 8.0; 12.6 | 8.0; 7.7; 11.3; 13.2; 14.8; 7.9; 8.1; 12.7 | 5.9; 6.4; 9.0; 8.2; 9.6; 6.7; 6.8; 9.0 | 6.0; 6.5; 8.9; 8.1; 9.6; 5.8; 5.9; 8.1
SG | 8.0; 9.5; 13.2; 13.0; 13.8; 26.8; 140.0; 17.2; 56.4; 23.8; 20.7; 162.0; 67.4; 17.5; 17.0; 11.3 | 8.0; 9.5; 13.2; 13.0; 13.8; 26.9; 141.0; 17.3; 56.5; 23.8; 18.9; 149.6; 59.9; 15.7; 18.8; 11.3 | 7.0; 8.1; 9.8; 12.4; 12.5; 21.9; 140.0; 15.5; 53.5; 20.8; 19.1; 132.0; 58.0; 15.6; 14.9; 9.2 | 7.3; 8.3; 10.1; 12.8; 12.9; 19.6; 122.0; 14.0; 47.3; 21.7; 18.2; 134.0; 59.5; 14.2; 15.5; 9.2
HL | 10.4; 15.0; 8.0; 7.5; 7.1; 6.0; 7.6; 7.4; 6.3; 27.7; 22.9; 11.2 | 9.3; 15.2; 7.9; 7.6; 7.4; 5.4; 7.7; 7.4; 6.3; 27.7; 29.9; 11.6 | 7.8; 8.7; 7.0; 6.2; 5.7; 4.4; 5.5; 5.7; 4.5; 19.1; 18.9; 9.4 | 6.8; 9.4; 6.5; 6.4; 5.8; 4.5; 5.6; 5.7; 4.5; 19.1; 18.9; 9.4
dependent on computations. But given the computational resources available today, implementation of the PBLRT should not present any difficulty.

2 TESTING THE SIGNIFICANCE OF FACTOR 2 (PROBLEM 1)

percentile value of the \chi^2-distribution. But one must note that this test based on the Chi-square distribution is not very good (or accurate) when the n_{ij}'s are small. But first we are going to see the details of this LRT method.

Given the independent observations X_{ijk} \sim \text{Gamma}(\alpha_{ij}, \beta), the likelihood function L is given as
and then obtain \hat{\beta} as

\hat{\beta} = \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} \bar{X}_{ij} \Big/ \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} \hat{\alpha}_{ij}.  (2.11)

Thus,

\sup L = L(\hat{\alpha}_{ij}, \hat{\beta}, 1 \leq i \leq a, 1 \leq j \leq b \mid \{x_{ijk}, (i, j, k)\}).  (2.12)

The log-likelihood function under H_0^{(1)}, henceforth denoted by L_0^{(1)}, is

\ln L_0^{(1)} = \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} \left\{ -\ln \Gamma(\alpha_i) - \alpha_i \ln \beta - (1/\beta) \bar{X}_{ij} + (\alpha_i - 1) \overline{\ln X}_{ij} \right\}.  (2.13)

Differentiating \ln L_0^{(1)} w.r.t. \alpha_i and \beta, and then setting the derivatives equal to zero, yields

\psi(\hat{\alpha}_i^0) = \sum_{j=1}^{b} n_{ij} \left\{ \overline{\ln X}_{ij} - \ln \hat{\beta}^0 \right\} \Big/ \sum_{j=1}^{b} n_{ij}, \quad 1 \leq i \leq a,

where \psi(\cdot) is the digamma function, and

\hat{\beta}^0 \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} \hat{\alpha}_i^0 = \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} \bar{X}_{ij}.  (2.14)

Define the total sample size subject to the ith level of Factor 1, and the corresponding sampling proportion, as

As stated earlier, for moderately large n_{ij} values, \Lambda follows \chi^2_{\nu} under H_0^{(1)}, with \nu = a(b - 1). We will see later that for small n_{ij}'s, the size of the ALRT is higher than \alpha, whereas the proposed PBLRT keeps it within \alpha. The beauty of the PBLRT is that it is a purely computational technique where one does not need to know the sampling distribution of the test statistic (which is the LRT statistic in this case), and the critical value is derived through a simulation. Before discussing further the pros and cons of the PBLRT, we first describe how it is implemented through a series of steps as given below.

Steps of the proposed PBLRT

Step 1: Given the original data \{x_{ijk}, (i, j, k)\}, obtain the unrestricted MLEs (\hat{\alpha}_{ij}, \hat{\beta}) as well as the restricted MLEs (\hat{\alpha}_i^0, \hat{\beta}^0) (under H_0^{(1)}). Compute \Lambda using (2.12) and (2.18).

Step 2:
i. Assuming that H_0^{(1)} is true, generate artificial (bootstrap) observations in an internal loop of M replications. In the mth replication we generate X_{ijk}^{(m)} from \text{Gamma}(\hat{\alpha}_i^0, \hat{\beta}^0).
ii. With the artificial observations \{x_{ijk}^{(m)}, (i, j, k)\}, compute (\hat{\alpha}_{ij}, \hat{\beta}) and (\hat{\alpha}_i^0, \hat{\beta}^0) as done in Step 1, and call them (\hat{\alpha}_{ij}^{(m)}, \hat{\beta}^{(m)}) and (\hat{\alpha}_i^{0(m)}, \hat{\beta}^{0(m)}), respectively. Then obtain the \Lambda value as done in Step 1, and call it \Lambda_m.
iii. By repeating (i)-(ii) above for m = 1, 2, \ldots, M, we have \Lambda_1, \Lambda_2, \ldots, \Lambda_M. Order these \Lambda_m values as \Lambda_{(1)} \leq \Lambda_{(2)} \leq \ldots \leq \Lambda_{(M)}.
α | n_ij | P_ALRT P_PBLRT | P_ALRT P_PBLRT | P_ALRT P_PBLRT | P_ALRT P_PBLRT | P_ALRT P_PBLRT
0.01 | 5 | 0.019 0.011 | 0.018 0.011 | 0.021 0.013 | 0.022 0.014 | 0.024 0.016
0.01 | 10 | 0.015 0.009 | 0.015 0.008 | 0.014 0.007 | 0.017 0.009 | 0.015 0.006
0.01 | 25 | 0.011 0.006 | 0.012 0.007 | 0.013 0.005 | 0.017 0.006 | 0.011 0.006
0.01 | 50 | 0.012 0.005 | 0.012 0.006 | 0.010 0.005 | 0.011 0.006 | 0.010 0.005
0.05 | 5 | 0.076 0.047 | 0.075 0.047 | 0.079 0.053 | 0.091 0.058 | 0.088 0.054
0.05 | 10 | 0.076 0.045 | 0.071 0.046 | 0.069 0.039 | 0.070 0.042 | 0.070 0.040
0.05 | 25 | 0.051 0.029 | 0.055 0.031 | 0.053 0.030 | 0.064 0.035 | 0.054 0.029
0.05 | 50 | 0.059 0.035 | 0.055 0.032 | 0.050 0.025 | 0.054 0.028 | 0.049 0.026
0.10 | 5 | 0.142 0.097 | 0.142 0.093 | 0.144 0.100 | 0.154 0.110 | 0.156 0.113
0.10 | 10 | 0.133 0.093 | 0.129 0.086 | 0.127 0.081 | 0.129 0.082 | 0.126 0.082
0.10 | 25 | 0.106 0.063 | 0.110 0.071 | 0.104 0.063 | 0.118 0.072 | 0.111 0.061
0.10 | 50 | 0.109 0.071 | 0.110 0.068 | 0.106 0.061 | 0.108 0.060 | 0.098 0.054
4 TESTING THE JOINT SIGNIFICANCE OF FACTOR 1 AND FACTOR 2

Our goal in this section is to consider Problem 3, with the hypothesis testing of H_0^{(3)}: \alpha_{ij} = \alpha \; (\forall (i, j)) against the alternative which negates it. The null hypothesis states that both factors have no influence on the mean response when they act simultaneously, which under the assumption of equality of scales (\beta_{ij} = \beta \; \forall (i, j)) can be written as

H_0^{(3)}: \mu_{ij} = \mu \; (\forall (i, j)) for some suitable \mu,  (4.1)

where \mu can be thought of as (\alpha\beta). Similar to Section 2, the likelihood ratio test statistic is given as

\Lambda = -2 \ln \left( \sup_{H_0^{(3)}} L \Big/ \sup L \right).  (4.2)

With all n_{ij} moderately large, we can approximate the sampling distribution of \Lambda under H_0^{(3)} as \chi^2.

The MLEs of \alpha and \beta under H_0^{(3)}, denoted by \hat{\alpha}^0 and \hat{\beta}^0, are obtained as follows. First obtain \hat{\alpha}^0 by solving the following equation

\ln \alpha - \psi(\alpha) = \ln(\bar{X}) - \frac{1}{n} \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} \overline{\ln X}_{ij},  (4.5)

and then obtain \hat{\beta}^0 as

\hat{\beta}^0 = \bar{X} / \hat{\alpha}^0.  (4.6)

Thus,

\sup_{H_0^{(3)}} L = L(\hat{\alpha}^0, \hat{\beta}^0 \mid \{x_{ijk}, (i, j, k)\}).  (4.7)

Similar to Section 2, a PBLRT version of the test can be constructed based on the LRT statistic. A comprehensive simulation has been carried out, and it has been noted that the PBLRT adheres to the level \alpha much more closely than the ALRT. Table 4 shows the simulated size values of the two tests (ALRT and PBLRT).
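Equations (4.5)-(4.6) can be solved with any scalar root-finder, since ln α − ψ(α) is strictly decreasing in α. A stdlib-only sketch for a single ungrouped sample (the bisection and the asymptotic-series digamma are our own choices; the paper does not prescribe a particular solver):

```python
import math
import random

def digamma(x):
    """psi(x) via the recurrence psi(x) = psi(x+1) - 1/x plus an asymptotic series."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def gamma_mle(xs, tol=1e-10):
    """Solve ln(a) - psi(a) = ln(mean x) - mean(ln x) by bisection (cf. (4.5)),
    then set beta = mean / a (cf. (4.6)). Works because ln(a) - psi(a) is
    strictly decreasing from +inf to 0 on (0, inf), and the RHS is > 0."""
    n = len(xs)
    c = math.log(sum(xs) / n) - sum(math.log(x) for x in xs) / n
    lo, hi = 1e-8, 1e8
    while hi - lo > tol * (1.0 + lo):
        mid = 0.5 * (lo + hi)
        if math.log(mid) - digamma(mid) > c:
            lo = mid        # value too large -> root lies to the right
        else:
            hi = mid
    a = 0.5 * (lo + hi)
    return a, (sum(xs) / n) / a

random.seed(1)
xs = [random.gammavariate(2.0, 3.0) for _ in range(20000)]
alpha_hat, beta_hat = gamma_mle(xs)   # should approach (2, 3)
```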
Table 6. Results for the Enzyme data (gamma model).

Table 11. Results for the BC data (gamma model).
Table 13. Results for the Brightness data (normal model).

Source | F-value | p-value
Time | 9.692 | 0.009
Temperature | 2.606 | 0.115
Interaction | 0.111 | 0.896

We sincerely thank Mr. Huynh Van Kha of the Faculty of Mathematics and Statistics, Ton Duc Thang University, for his generous help with the numerical computations of this work.

We are indebted to our colleague Dr. Pham Anh Duc of the Faculty of Environment and Labour Safety, Ton Duc Thang University, for providing the BOD dataset.
Author index