Uncertainty Theory
Fourth Edition
Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu
http://orsc.edu.cn/liu/ut.pdf
© 2013 by Uncertainty Theory Laboratory
4th Edition
© 2010 by Springer-Verlag Berlin
3rd Edition
© 2007 by Springer-Verlag Berlin
2nd Edition
© 2004 by Springer-Verlag Berlin
1st Edition
Contents

Preface  xi

Toward Uncertainty Theory  1

1 Uncertain Measure  9
1.1 Events  9
1.2 Uncertain Measure  10
1.3 Uncertainty Space  16
1.4 Product Uncertain Measure  16
1.5 Independence  20
1.6 Polyrectangular Theorem  23
1.7 Conditional Uncertain Measure  24
1.8 Bibliographic Notes  27

2 Uncertain Variable  29
2.1 Uncertain Variable  29
2.2 Uncertainty Distribution  31
2.3 Independence  44
2.4 Operational Law  46
2.5 Expected Value  68
2.6 Variance  77
2.7 Moments  80
2.8 Entropy  83
2.9 Distance  89
2.10 Inequalities  91
2.11 Sequence Convergence  93
2.12 Conditional Uncertainty Distribution  98
2.13 Uncertain Vector  101
2.14 Bibliographic Notes  105

3 Uncertain Programming  107
3.1 Uncertain Programming  107
3.2 Numerical Method  110
3.3 Machine Scheduling Problem  112
3.4 Vehicle Routing Problem  115
3.5 Project Scheduling Problem  119
3.6 Uncertain Multiobjective Programming  122
3.7 Uncertain Goal Programming  124
3.8 Uncertain Multilevel Programming  125
3.9 Bibliographic Notes  126

4 Uncertain Statistics  127
4.1 Experts' Experimental Data  127
4.2 Questionnaire Survey  128
4.3 Empirical Uncertainty Distribution  129
4.4 Principle of Least Squares  130
4.5 Method of Moments  132
4.6 Multiple Domain Experts  133
4.7 Delphi Method  134
4.8 Bibliographic Notes  135

5 Uncertain Risk Analysis  137

6 Uncertain Reliability Analysis  149

7 Uncertain Propositional Logic  155

8 Uncertain Entailment  163
8.1 Uncertain Entailment Model  163
8.2 Uncertain Modus Ponens  165
8.3 Uncertain Modus Tollens  166
8.4 Uncertain Hypothetical Syllogism  168
8.5 Bibliographic Notes  169

9 Uncertain Set  171
9.1 Uncertain Set  171
9.2 Membership Function  177
9.3 Independence  186
9.4 Set Operational Law  190
9.5 Arithmetic Operational Law  193
9.6 Expected Value  198
9.7 Variance  204
9.8 Entropy  205
9.9 Distance  209
9.10 Conditional Membership Function  209
9.11 Uncertain Statistics  210
9.12 Bibliographic Notes  213

10 Uncertain Logic  215
10.1 Individual Feature Data  215
10.2 Uncertain Quantifier  216
10.3 Uncertain Subject  223
10.4 Uncertain Predicate  226
10.5 Uncertain Proposition  229
10.6 Truth Value  230
10.7 Algorithm  234
10.8 Linguistic Summarizer  237
10.9 Bibliographic Notes  240

11 Uncertain Inference  241
11.1 Uncertain Inference Rule  241
11.2 Uncertain System  245
11.3 Uncertain Control  249
11.4 Inverted Pendulum  249
11.5 Bibliographic Notes  251

12 Uncertain Process  253
12.1 Uncertain Process  253
12.2 Uncertainty Distribution  254
12.3 Independence  258
12.4 Independent Increment Process  259
12.5 Stationary Independent Increment Process  262
12.6 Extreme Value Theorem  267
12.7 First Hitting Time  271
12.8 Time Integral  274
12.9 Bibliographic Notes  276

13 Uncertain Renewal Process  277

14 Uncertain Calculus  293
14.1 Liu Process  293
14.2 Liu Integral  297
14.3 Fundamental Theorem  302
14.4 Chain Rule  303
14.5 Change of Variables  304
14.6 Integration by Parts  305
14.7 Bibliographic Notes  306

15 Uncertain Differential Equation  307

16 Uncertain Finance  335
16.1 Uncertain Stock Model  335
16.2 Uncertain Interest Rate Model  346
16.3 Uncertain Currency Model  347
16.4 Bibliographic Notes  351

A Probability Theory  353
A.1 Probability Measure  353
A.2 Random Variable  354
A.3 Probability Distribution  355
A.4 Independence  357
A.5 Operational Law  358
A.6 Expected Value  360
A.7 Variance  362
A.8 Law of Large Numbers  363
A.9 Conditional Probability  363
A.10 Stochastic Process  364
A.11 Ito's Stochastic Calculus  366
A.12 Stochastic Differential Equation  367

B Chance Theory  369
B.1 Chance Measure  369
B.2 Uncertain Random Variable  373
B.3 Chance Distribution  375
B.4 Operational Law  377
B.5 Expected Value  384
B.6 Variance  388
B.7 Law of Large Numbers  390
B.8 Uncertain Random Programming  391
B.9 Uncertain Random Risk Analysis  394
B.10 Uncertain Random Reliability Analysis  397
B.11 Uncertain Random Graph  398
B.12 Uncertain Random Network  402
B.13 Bibliographic Notes  403

Bibliography  415

Index  429
Preface
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will occur. Perhaps some people think that a belief degree is a subjective probability or a fuzzy concept. However, this is usually inappropriate, because both probability theory and fuzzy set theory may lead to counterintuitive results in this case. In order to deal with belief degrees rationally, an uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling human uncertainty.
Uncertain Measure
The most fundamental concept is the uncertain measure, a type of set function satisfying the axioms of uncertainty theory. It is used to indicate the belief degree that an uncertain event may occur. Chapter 1 will introduce the normality, duality, subadditivity, and product axioms. From those four axioms, this chapter will also present uncertain measure, product uncertain measure, and conditional uncertain measure.
Uncertain Variable
An uncertain variable is a measurable function from an uncertainty space to the set of real numbers. It is used to represent quantities with uncertainty. Chapter 2 is devoted to the uncertain variable, uncertainty distribution, operational law, expected value, variance, and so on.
Uncertain Programming
Uncertain programming is a type of mathematical programming involving uncertain variables. Chapter 3 will provide a type of uncertain programming model with applications to the machine scheduling problem, the vehicle routing problem, and the project scheduling problem. In addition, uncertain multiobjective programming, uncertain goal programming, and uncertain multilevel programming are also documented.
Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting experts' experimental data by uncertainty theory. Chapter 4 will present a questionnaire survey for collecting experts' experimental data. In order to determine uncertainty distributions from those data, Chapter 4 will also introduce the empirical uncertainty distribution, the principle of least squares, the method of moments, and the Delphi method.
Uncertain Risk Analysis
The term risk has been used in different ways in the literature. In this book
the risk is defined as the accidental loss plus the uncertain measure of such
loss, and a risk index is defined as the uncertain measure that some specified
loss occurs. Chapter 5 will introduce uncertain risk analysis that is a tool
to quantify risk via uncertainty theory. As applications of uncertain risk
analysis, Chapter 5 will also discuss structural risk analysis and investment
risk analysis.
Uncertain Reliability Analysis
Reliability index is defined as the uncertain measure that some system is
working. Chapter 6 will introduce uncertain reliability analysis that is a tool
to deal with system reliability via uncertainty theory.
Uncertain Propositional Logic
Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. Chapter 7 will present a framework of uncertain propositional logic. In
addition, uncertain entailment is a methodology for determining the truth
value of an uncertain proposition via the maximum uncertainty principle
when the truth values of other uncertain propositions are given. Chapter 8
will discuss an uncertain entailment model from which uncertain modus ponens, uncertain modus tollens and uncertain hypothetical syllogism are deduced.
Uncertain Set
An uncertain set is a set-valued function on an uncertainty space, and attempts to model unsharp concepts. The main difference between an uncertain set and an uncertain variable is that the former takes set values while the latter takes point values. Uncertain set theory will be introduced in Chapter 9. In
order to determine membership functions, Chapter 9 will also provide some
methods of uncertain statistics.
Uncertain Logic
Some knowledge in the human brain is actually an uncertain set. This fact encourages us to design an uncertain logic that is a methodology for calculating
the truth values of uncertain propositions via uncertain set theory. Uncertain
logic may provide a flexible means for extracting linguistic summary from a
collection of raw data. Chapter 10 will be devoted to uncertain logic and
linguistic summarizer.
Uncertain Inference
Uncertain inference is a process of deriving consequences from human knowledge via uncertain set theory. Chapter 11 will present a set of uncertain
inference rules, uncertain system, and uncertain control with application to
an inverted pendulum system.
Uncertain Process
An uncertain process is essentially a sequence of uncertain variables indexed
by time. Thus an uncertain process is usually used to model uncertain phenomena that vary with time. Chapter 12 is devoted to basic concepts of
uncertain process as well as independent increment process, and stationary
independent increment process. In addition, extreme value theorem, first
hitting time and time integral of uncertain processes are also introduced.
Chapter 13 deals with uncertain renewal process, delayed renewal process,
renewal reward process, alternating renewal process and uncertain insurance
model.
Uncertain Calculus
Uncertain calculus is a branch of mathematics that deals with differentiation
and integration of uncertain processes. Chapter 14 will introduce Liu process
that is a stationary independent increment process whose increments are
normal uncertain variables, and discuss Liu integral that is a type of uncertain
integral with respect to Liu process. In addition, the fundamental theorem of
uncertain calculus will be proved in this chapter from which the techniques
of chain rule, change of variables, and integration by parts are also derived.
Uncertain Differential Equation
Uncertain differential equation is a type of differential equation involving
uncertain processes. Chapter 15 will discuss the existence, uniqueness and
stability of solutions of uncertain differential equations, and will introduce
Yao-Chen formula that represents the solution of an uncertain differential
equation by a family of solutions of ordinary differential equations. On the
basis of this formula, a numerical method for solving uncertain differential
equations is designed. In addition, extreme value, first hitting time and time
integral of solutions are provided.
Uncertain Finance
As applications of uncertain differential equation, Chapter 16 will discuss
uncertain stock model, uncertain interest rate model, and uncertain currency
model.
Law of Truth Conservation
The law of excluded middle tells us that a proposition is either true or false,
and the law of contradiction tells us that a proposition cannot be both true
and false. In the state of indeterminacy, some people said, the law of excluded
middle and the law of contradiction are no longer valid because the truth
degree of a proposition is no longer 0 or 1. To a certain extent I cannot gainsay this viewpoint, but it does not mean that anything goes.
The truth values of a proposition and its negation should sum to unity. This is
the law of truth conservation that is weaker than the law of excluded middle
and the law of contradiction. Furthermore, the law of truth conservation
agrees with the law of excluded middle and the law of contradiction when
the uncertainty vanishes.
Maximum Uncertainty Principle
An event has no uncertainty if its uncertain measure is 1 because we may
believe that the event occurs. Likewise, an event has no uncertainty if its uncertain measure is 0, because we may believe that the event does not occur. An event
is the most uncertain if its uncertain measure is 0.5 because the event and
its complement may be regarded as equally likely. In practice, if there is
no information about the uncertain measure of an event, we should assign
0.5 to it. Sometimes, only partial information is available. In this case, the
value of uncertain measure may be specified in some range. What value does
the uncertain measure take? For any event, if there are multiple reasonable
values that an uncertain measure may take, then the value as close to 0.5 as
possible is assigned to the event. This is the maximum uncertainty principle.
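The maximum uncertainty principle amounts to a simple clamp toward 0.5. The following Python sketch is my own illustration (the function name and the interval representation are not from the book): given the range of reasonable values for the uncertain measure of an event, it returns the admissible value closest to 0.5.

```python
def max_uncertainty(lo, hi):
    """Return the value in [lo, hi] closest to 0.5,
    per the maximum uncertainty principle."""
    return min(max(lo, 0.5), hi)

# No information at all: the whole range [0, 1] is admissible.
print(max_uncertainty(0.0, 1.0))  # -> 0.5
# Partial information: the measure is known to lie in [0.6, 0.9].
print(max_uncertainty(0.6, 0.9))  # -> 0.6
```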
Matlab Uncertainty Toolbox
Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) is a collection of functions built on Matlab for many methods of uncertainty theory,
including uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain logic, uncertain inference, uncertain differential equation, scheduling, logistics, data mining, control, and
finance.
Lecture Slides
If you need lecture slides for uncertainty theory, please download them from
the website at http://orsc.edu.cn/liu/resources.htm.
Uncertainty Theory Online
If you want to read more papers related to uncertainty theory and applications, please visit the website at http://orsc.edu.cn/online.
Purpose
The purpose is to equip the readers with an axiomatic approach to deal
with human uncertainty. The book is suitable for researchers, engineers, and
students in the field of mathematics, information science, operations research,
industrial engineering, computer science, artificial intelligence, automation,
economics, and management science.
Acknowledgment
This work was supported by a series of grants from National Natural Science
Foundation, Ministry of Education, and Ministry of Science and Technology
of China.
Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
September 15, 2013
Chapter 0

Toward Uncertainty Theory
Real decisions are usually made in the state of indeterminacy. For modeling indeterminacy, there exist two mathematical systems: one is probability theory (Kolmogorov, 1933) and the other is uncertainty theory (Liu, 2007).
Probability is interpreted as frequency, while uncertainty is interpreted as
personal belief degree.
What is indeterminacy? What is frequency? What is belief degree? This
chapter will answer these questions, and show in what situation we should use
probability theory and in what situation we should use uncertainty theory.
0.1 Indeterminacy

0.2 Frequency

0.3 Belief Degree
It has been shown, among others by Kahneman and Tversky [69], that human beings usually overweight unlikely events. This fact makes the belief degree function deviate far from the long-run cumulative frequency. More precisely, the belief degree function may have much larger variance than the long-run cumulative frequency. In this case, Liu [122] declared that it is inappropriate to use probability theory because it may lead to counterintuitive results.
Figure 2: A Truck is Crossing over a Bridge (labels: truck weight "exactly 90 tons"; bridge of "unknown strength")
Consider a counterexample presented by Liu [122]. Assume there is one
truck and 50 bridges in an experiment. Also assume the weight of the truck is
90 tons and the 50 bridge strengths are iid normal random variables N (100, 1)
in tons (I am afraid this fact cannot be verified without the help of God).
For simplicity, suppose a bridge collapses whenever its real strength is less
than the weight of the truck. Now let us have the truck cross over the 50
bridges one by one. It is easy to verify that
Pr{the truck can cross over the 50 bridges} ≈ 1.    (1)
That is to say, the truck may cross over the 50 bridges successfully.
However, when there do not exist any observed samples for the bridge
strength at the moment, we have to invite some bridge engineers to evaluate
the belief degree function about it. As we stated before, usually the belief
degree function has much larger variance than the real bridge strengths. Assume the belief degree function looks like a normal probability distribution
N (100, 100). Let us imagine what will happen if the belief degree function
is treated as a probability distribution. At first, we have no choice but to
regard the 50 bridge strengths as iid normal random variables with expected
value 100 and variance 100 in tons. If we have the truck cross over the 50
bridges one by one, then we immediately have
Pr{the truck can cross over the 50 bridges} ≈ 0.    (2)
Thus it is almost impossible for the truck to cross over the 50 bridges successfully. Unfortunately, the results (1) and (2) are at opposite poles. This conclusion seems unacceptable, and therefore the belief degree function cannot be treated as a probability distribution.
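Both probabilities can be checked in a few lines of Python. The sketch below (function names are my own) evaluates the standard normal CDF via `math.erf` and computes the probability of crossing all 50 bridges under each assumption: true strengths N(100, 1), versus belief degrees treated as N(100, 100), i.e. standard deviation 10.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_cross_all(weight, mean, std, n_bridges=50):
    # A bridge survives iff its strength is at least the truck's weight;
    # the strengths are iid, so the survival probabilities multiply.
    p_survive = 1.0 - phi((weight - mean) / std)
    return p_survive ** n_bridges

p_true = p_cross_all(90, 100, 1)     # result (1): essentially 1
p_belief = p_cross_all(90, 100, 10)  # result (2): essentially 0
print(p_true, p_belief)
```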
How to obtain belief degrees?
First, we have to admit that no destructive experiment is allowed for
a real bridge. Thus we have no samples about the bridge strength. In this
case, there do not exist any statistical methods to estimate its probability
distribution. How do we deal with it? It seems that we have no choice
but to invite some bridge engineers to evaluate the belief degree about the
bridge strength. In practice, it is almost impossible for bridge engineers to
give a perfect description of the belief degree function. Instead, they can only
provide some statements about the belief degrees. For example, the following
statements are given by the bridge engineers:
(a) I'm 10% sure that the bridge strength does not exceed 80 tons;
(b) I'm 20% sure that the bridge strength does not exceed 90 tons;
(c) I'm 50% sure that the bridge strength does not exceed 100 tons;
(d) I'm 70% sure that the bridge strength does not exceed 110 tons;
(e) I'm 90% sure that the bridge strength does not exceed 120 tons.
From these statements, we may obtain a set of experts' experimental data as follows:
(80, 0.1), (90, 0.2), (100, 0.5), (110, 0.7), (120, 0.9).    (3)
Then some methods (e.g. the principle of least squares) have been invented to determine an uncertainty distribution from experts' experimental data like (3). If you believe your estimated uncertainty distribution is close enough
like (3). If you believe your estimated uncertainty distribution is close enough
to the belief degree function hidden in the mind of the domain experts, then
you may use uncertainty theory to deal with your problem on the basis of
your estimated uncertainty distributions.
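As an illustration of the principle of least squares, the sketch below fits Liu's normal uncertainty distribution, Φ(x) = (1 + exp(π(e − x)/(√3 σ)))⁻¹, to the data (3) by minimizing the sum of squared errors. The coarse grid search and the function names are my own choices, not the book's method in detail.

```python
import math

def normal_ud(x, e, sigma):
    """Liu's normal uncertainty distribution evaluated at x."""
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3.0) * sigma)))

# Experts' experimental data (3): pairs (x_i, belief degree alpha_i).
data = [(80, 0.1), (90, 0.2), (100, 0.5), (110, 0.7), (120, 0.9)]

def sse(e, sigma):
    """Sum of squared errors between the fitted distribution and the data."""
    return sum((normal_ud(x, e, sigma) - a) ** 2 for x, a in data)

# Coarse grid search for the least-squares parameters (illustrative only).
best = min(((sse(e, s), e, s)
            for e in range(80, 121)
            for s in [2 * k for k in range(1, 26)]),
           key=lambda t: t[0])
_, e_hat, s_hat = best
print(e_hat, s_hat)
```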
Uncertainty theory is applicable when belief degrees are available!
In order to rationally deal with belief degrees, an uncertainty theory was
founded by Liu [113] in 2007 and subsequently studied by many researchers.
Nowadays, uncertainty theory has become a branch of axiomatic mathematics
for modeling human uncertainty.
When no samples are available, we have to invite some domain experts to evaluate the belief degree function of the indeterminate quantity. Since the belief degree function has much larger variance than the long-run cumulative frequency, probability theory is no longer applicable. Liu [122] declared that uncertainty theory is the only legitimate approach when only belief degrees are available.
[Figure 3: two histograms with fitted curves, panels labeled "Probability" (left) and "Uncertainty" (right).]
Figure 3: When the sample size is large enough, the estimated probability distribution (left curve) may be close enough to the long-run cumulative frequency (left histogram). In this case, probability theory is the only legitimate approach. When only belief degrees are available (no samples), the estimated uncertainty distribution (right curve) may have much larger variance than the long-run cumulative frequency (right histogram). In this case, uncertainty theory is the only legitimate approach.
0.4

$$\bigcup_{i=1}^{n} \Lambda_i \in \mathcal{L}, \tag{4}$$

$$\bigcup_{i=1}^{\infty} \Lambda_i \in \mathcal{L}. \tag{5}$$
Example 0.3: Let $\mathcal{L}$ be the collection of all finite disjoint unions of intervals of the form
$$(-\infty, a], \qquad (a, b], \qquad (b, +\infty). \tag{6}$$
Then $\mathcal{L}$ is an algebra over $\Re$ (the set of real numbers), but not a $\sigma$-algebra, because $\Lambda_i = (0, (i-1)/i] \in \mathcal{L}$ for all $i$ but
$$\bigcup_{i=1}^{\infty} \Lambda_i = (0, 1) \notin \mathcal{L}. \tag{7}$$
$$\bigcup_{i=1}^{\infty} \Lambda_i \in \mathcal{L}; \qquad \bigcap_{i=1}^{\infty} \Lambda_i \in \mathcal{L}; \qquad \Lambda_1 \setminus \Lambda_2 \in \mathcal{L}; \qquad \lim_{i \to \infty} \Lambda_i \in \mathcal{L}.$$
where $\Lambda_i \in \mathcal{L}_i$ for all $i$ and $\Lambda_i = \Omega_i$ for all but finitely many $i$. The smallest $\sigma$-algebra containing all measurable rectangles of $\Omega = \Omega_1 \times \Omega_2 \times \cdots$ is called the product $\sigma$-algebra, denoted by
$$\mathcal{L} = \mathcal{L}_1 \times \mathcal{L}_2 \times \cdots \tag{13}$$
Example 0.10: Any continuous function $f$ from $\Re$ to $\Re$ is also measurable.
Example 0.11: Assume $\Lambda$ is a subset of $\Omega$. Then its characteristic function
$$f(x) = \begin{cases} 1, & \text{if } x \in \Lambda \\ 0, & \text{if } x \notin \Lambda \end{cases} \tag{14}$$
is measurable if $\Lambda$ is a measurable set, and is not measurable if $\Lambda$ is not.
Example 0.12: Let $f$ and $g$ be two measurable functions. Then their sum $f+g$, product $f \cdot g$, and compound function $f \circ g$ are all measurable functions.
Example 0.13: Let $f$ be a measurable function. Then its positive part
$$f^{+}(\gamma) = \begin{cases} f(\gamma), & \text{if } f(\gamma) > 0 \\ 0, & \text{otherwise} \end{cases} \tag{15}$$
and negative part
$$f^{-}(\gamma) = \begin{cases} -f(\gamma), & \text{if } f(\gamma) < 0 \\ 0, & \text{otherwise} \end{cases} \tag{16}$$
are also measurable functions. Note that both of them are nonnegative.
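The decomposition behind Example 0.13 satisfies the identities $f = f^{+} - f^{-}$ and $|f| = f^{+} + f^{-}$, which can be checked numerically. A small Python sketch (the helper names are mine):

```python
def pos_part(v):
    """Positive part: v if v > 0, else 0."""
    return v if v > 0 else 0.0

def neg_part(v):
    """Negative part: -v if v < 0, else 0 (always nonnegative)."""
    return -v if v < 0 else 0.0

# f = f+ - f-  and  |f| = f+ + f-  hold pointwise.
for v in [-2.5, 0.0, 3.0]:
    assert pos_part(v) - neg_part(v) == v
    assert pos_part(v) + neg_part(v) == abs(v)
```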
Example 0.14: Let $f_1, f_2, \ldots$ be a sequence of measurable functions. Then the pointwise supremum, pointwise infimum, and pointwise limit
$$\sup_{1 \le i < \infty} f_i(\gamma), \qquad \inf_{1 \le i < \infty} f_i(\gamma), \qquad \lim_{i \to \infty} f_i(\gamma) \tag{17}$$
are also measurable functions.
Chapter 1
Uncertain Measure
Uncertainty theory was founded by Liu [113] in 2007 and subsequently studied by many researchers. Nowadays uncertainty theory has become a branch
of axiomatic mathematics for modeling human uncertainty. This chapter
will present normality, duality, subadditivity and product axioms of uncertainty theory. From those four axioms, this chapter will also introduce the
uncertain measure that is a fundamental concept in uncertainty theory. In
addition, product uncertain measure and conditional uncertain measure will
be explored at the end of this chapter.
1.1 Events
Also assume the second event we are concerned about corresponds to the proposition "the bridge strength is more than 100 tons". Then it may be represented by
$$\Lambda_2 = (100, 150]. \tag{1.3}$$
If we are only concerned about the above two events, then we may construct a $\sigma$-algebra $\mathcal{L}$ containing the two events $\Lambda_1$ and $\Lambda_2$, for example,
$$\mathcal{L} = \{\emptyset, \Lambda_1, \Lambda_2, \Omega\}. \tag{1.4}$$
1.2 Uncertain Measure
$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty} \Lambda_i\right\} \le \sum_{i=1}^{\infty} \mathcal{M}\{\Lambda_i\}. \tag{1.5}$$
0.6, then all of us will think that the proposition is false with belief degree
0.4.
Remark 1.3: Given two events with known belief degrees, it is frequently asked how the belief degree of their union is generated from those of the individual events. Personally, I do not think there exists any rule to make it. A lot
of surveys showed that, generally speaking, the belief degree of the union is
neither the sum of individuals (e.g. probability measure) nor the maximum
(e.g. possibility measure). Perhaps there is no explicit relation between the
union and individuals except for the subadditivity axiom.
Remark 1.4: Pathology occurs if subadditivity axiom is not assumed. For
example, suppose that a universal set contains 3 elements. We define a set
function that takes value 0 for each singleton, and 1 for each set with at least
2 elements. Then such a set function satisfies all axioms but subadditivity.
Do you think it is strange if such a set function serves as a measure?
Remark 1.5: Although probability measure satisfies the above three axioms,
probability theory is not a special case of uncertainty theory because the
product probability measure does not satisfy the fourth axiom, namely the product axiom introduced in Section 1.4.
Definition 1.1 (Liu [113]) The set function M is called an uncertain measure if it satisfies the normality, duality, and subadditivity axioms.
Exercise 1.1: Let Γ = {γ1, γ2, γ3}. It is clear that there exist 8 events in the σ-algebra

    L = {∅, {γ1}, {γ2}, {γ3}, {γ1, γ2}, {γ1, γ3}, {γ2, γ3}, Γ}.    (1.6)

Define

    M{γ1} = 0.6,     M{γ2} = 0.3,     M{γ3} = 0.2,
    M{γ1, γ2} = 0.8, M{γ1, γ3} = 0.7, M{γ2, γ3} = 0.4,
    M{∅} = 0,        M{Γ} = 1.    (1.7)
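As a quick sanity check, the measure in (1.7) can be verified against the first three axioms numerically. The sketch below is our own illustration, not part of the book; events over Γ = {γ1, γ2, γ3} are encoded as Python frozensets of indices.

```python
from itertools import combinations

# Events over Γ = {γ1, γ2, γ3} encoded as frozensets; M is the measure of (1.7).
M = {
    frozenset(): 0.0,
    frozenset({1}): 0.6, frozenset({2}): 0.3, frozenset({3}): 0.2,
    frozenset({1, 2}): 0.8, frozenset({1, 3}): 0.7, frozenset({2, 3}): 0.4,
    frozenset({1, 2, 3}): 1.0,
}
universe = frozenset({1, 2, 3})

assert M[universe] == 1.0                       # normality: M{Γ} = 1
for event, value in M.items():                  # duality: M{Λ} + M{Λᶜ} = 1
    assert abs(value + M[universe - event] - 1.0) < 1e-9
for a, b in combinations(M, 2):                 # subadditivity (finite form)
    assert M[a | b] <= M[a] + M[b] + 1e-9
```

The same loop also confirms monotonicity implicitly, since subadditivity together with duality implies it (Theorem 1.1 below).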
    sup_{x≠y} (λ(x) + λ(y)) ≤ 1.    (1.8)
    M{Λ} = ∫_Λ ρ(x)dx,        if ∫_Λ ρ(x)dx < 0.5
           1 − ∫_{Λᶜ} ρ(x)dx, if ∫_{Λᶜ} ρ(x)dx < 0.5    (1.10)
           0.5,               otherwise
    M{Λ} =
        sup_{x∈Λ} λ(x) + ∫_Λ ρ(x)dx,          if sup_{x∈Λ} λ(x) + ∫_Λ ρ(x)dx < 0.5
        1 − sup_{x∈Λᶜ} λ(x) − ∫_{Λᶜ} ρ(x)dx,  if sup_{x∈Λᶜ} λ(x) + ∫_{Λᶜ} ρ(x)dx < 0.5    (1.11)
        0.5,                                  otherwise

is an uncertain measure on ℜ.
Monotonicity Theorem

Theorem 1.1 (Monotonicity Theorem) Uncertain measure M is a monotone increasing set function. That is, for any events Λ1 ⊂ Λ2, we have

    M{Λ1} ≤ M{Λ2}.    (1.12)

Proof: The normality axiom says M{Γ} = 1, and the duality axiom says M{Λ1ᶜ} = 1 − M{Λ1}. Since Λ1 ⊂ Λ2, we have Γ = Λ1ᶜ ∪ Λ2. By using the subadditivity axiom, we obtain

    1 = M{Γ} ≤ M{Λ1ᶜ} + M{Λ2} = 1 − M{Λ1} + M{Λ2}.

Thus M{Λ1} ≤ M{Λ2}.
Theorem 1.2 Suppose that M is an uncertain measure. Then the empty set has an uncertain measure of zero, i.e.,

    M{∅} = 0.    (1.13)

Proof: Since ∅ = Γᶜ and M{Γ} = 1, it follows from the duality axiom that M{∅} = 1 − M{Γ} = 1 − 1 = 0.
Theorem 1.3 Suppose that M is an uncertain measure. Then for any event Λ, we have

    0 ≤ M{Λ} ≤ 1.    (1.14)

Proof: It follows from the monotonicity theorem that 0 ≤ M{Λ} ≤ 1 because ∅ ⊂ Λ ⊂ Γ and M{∅} = 0, M{Γ} = 1.
Null-Additivity Theorem
Null-additivity is a direct deduction from the subadditivity axiom. We first
prove a more general theorem.
Theorem 1.4 Let Λ1, Λ2, ⋯ be a sequence of events with M{Λi} → 0 as i → ∞. Then for any event Λ, we have

    lim_{i→∞} M{Λ ∪ Λi} = lim_{i→∞} M{Λ\Λi} = M{Λ}.    (1.15)

For any events Λ1, Λ2, ⋯ we further have the asymptotic properties

    lim_{i→∞} M{Λi} > 0,  if Λi ↑ Γ,    (1.16)
    lim_{i→∞} M{Λi} < 1,  if Λi ↓ ∅.    (1.17)
Assume Λi ↑ Γ. Since Γ = ⋃_{i=1}^∞ Λi, it follows from the subadditivity axiom that

    1 = M{Γ} ≤ Σ_{i=1}^∞ M{Λi}.

Consider the set function on Γ = ℜ given by

    M{Λ} = 0,      if Λ = ∅
           ρ,      if Λ is upper bounded
           0.5,    if both Λ and Λᶜ are upper unbounded    (1.18)
           1 − ρ,  if Λᶜ is upper bounded
           1,      if Λ = Γ.

It is easy to verify that M is an uncertain measure. Write Λi = (−∞, i] for i = 1, 2, ⋯ Then Λi ↑ Γ and lim_{i→∞} M{Λi} = ρ. Furthermore, Λiᶜ ↓ ∅ and lim_{i→∞} M{Λiᶜ} = 1 − ρ.
Extension Theorem

Let c1 and c2 be nonnegative numbers with c1 + c2 = 1. Then there exists an uncertain measure M on the universal set {γ1, γ2} such that

    M{Λ} = c1, if Λ = {γ1}
           c2, if Λ = {γ2}.    (1.19)

Furthermore, if M is an uncertain measure on the universal set {γ1, γ2, γ3} and c1, c2, c3 are nonnegative numbers satisfying the consistency condition

    ci + cj ≤ 1 ≤ c1 + c2 + c3,   i ≠ j,    (1.20)

then

    M{Λ} = c1, if Λ = {γ1}
           c2, if Λ = {γ2}    (1.21)
           c3, if Λ = {γ3}

and

    M{γ1, γ2} = 1 − c3,   M{γ1, γ3} = 1 − c2,   M{γ2, γ3} = 1 − c1.    (1.22)
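The three-point construction in (1.21)-(1.22) is mechanical enough to script. The sketch below is our own encoding (not the book's): it builds the measure from c1, c2, c3 and fills in the pairs by duality.

```python
def three_point_measure(c1, c2, c3):
    # Assumes the consistency condition ci + cj <= 1 <= c1 + c2 + c3 of (1.20).
    cs = {1: c1, 2: c2, 3: c3}
    gamma = frozenset({1, 2, 3})
    M = {frozenset(): 0.0, gamma: 1.0}
    for i, ci in cs.items():
        M[frozenset({i})] = ci        # singletons as in (1.21)
        M[gamma - {i}] = 1.0 - ci     # pairs by duality as in (1.22)
    return M

M = three_point_measure(0.6, 0.3, 0.2)          # the weights of Exercise 1.1
assert abs(M[frozenset({1, 2})] - 0.8) < 1e-9   # 1 - c3, matching (1.7)
```

With the weights of Exercise 1.1 this reproduces the full measure given there, which illustrates why three-point measures are determined by their singletons.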
However, when there are four or more elements in the universal set, the uncertain measure cannot be uniquely determined by the singletons. In this case, we have the following theorem if the maximum uncertainty principle is assumed.

Theorem 1.6 Let M be an uncertain measure on {γ1, γ2, ⋯, γn}. Then we have

    M{γi} + M{γj} ≤ 1 ≤ M{γ1} + M{γ2} + ⋯ + M{γn},   i ≠ j.    (1.23)

Conversely, if c1, c2, ⋯, cn are nonnegative numbers satisfying

    ci + cj ≤ 1 ≤ c1 + c2 + ⋯ + cn,   i ≠ j,    (1.24)

then there exists an uncertain measure M such that

    M{Λ} = c1, if Λ = {γ1}
           c2, if Λ = {γ2}
           ⋮    (1.25)
           cn, if Λ = {γn}
and, for a general event Λ, the maximum uncertainty principle yields

    M{Λ} =
        ⋁_{γi∈Λ} ci,        if ⋁_{γi∈Λ} ci > 0.5 and ⋁_{γi∈Λ} ci + Σ_{γi∉Λ} ci ≥ 1
        1 − Σ_{γi∉Λ} ci,    if ⋁_{γi∈Λ} ci > 0.5 and ⋁_{γi∈Λ} ci + Σ_{γi∉Λ} ci < 1
        1 − ⋁_{γi∉Λ} ci,    if ⋁_{γi∉Λ} ci > 0.5 and Σ_{γi∈Λ} ci + ⋁_{γi∉Λ} ci ≥ 1
        Σ_{γi∈Λ} ci,        if ⋁_{γi∉Λ} ci > 0.5 and Σ_{γi∈Λ} ci + ⋁_{γi∉Λ} ci < 1
        1 − Σ_{γi∉Λ} ci,    if ⋁_{1≤i≤n} ci ≤ 0.5 and Σ_{γi∉Λ} ci < 0.5
        Σ_{γi∈Λ} ci,        if ⋁_{1≤i≤n} ci ≤ 0.5 and Σ_{γi∈Λ} ci < 0.5
        0.5,                otherwise.    (1.26)
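Our reading of (1.26) can be exercised numerically. The function below is a best-effort transcription (the case structure was recovered from a damaged scan, so treat it as a sketch), and the loop checks that the resulting set function is dual for weights satisfying (1.24).

```python
from itertools import combinations

def extension_measure(c, event):
    # Best-effort transcription of (1.26); c maps points to weights ci,
    # `event` is a set of points.
    inside = [c[i] for i in c if i in event]
    outside = [c[i] for i in c if i not in event]
    v_in, s_in = max(inside, default=0.0), sum(inside)
    v_out, s_out = max(outside, default=0.0), sum(outside)
    if v_in > 0.5:
        return v_in if v_in + s_out >= 1 else 1 - s_out
    if v_out > 0.5:
        return 1 - v_out if s_in + v_out >= 1 else s_in
    if s_out < 0.5:
        return 1 - s_out
    if s_in < 0.5:
        return s_in
    return 0.5

c = {1: 0.3, 2: 0.3, 3: 0.2, 4: 0.2}            # satisfies (1.24)
points = set(c)
for r in range(len(points) + 1):
    for ev in combinations(points, r):
        m = extension_measure(c, set(ev))
        mc = extension_measure(c, points - set(ev))
        assert abs(m + mc - 1.0) < 1e-9         # duality holds for every event
```

Note how the consistency condition (1.24) is what prevents the last two cases from conflicting: when c1 + ⋯ + cn ≥ 1, the sums over Λ and Λᶜ cannot both fall below 0.5.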
1.3 Uncertainty Space

Let Γ be a nonempty set, L a σ-algebra over Γ, and M an uncertain measure. Then the triplet

    (Γ, L, M)    (1.27)

is called an uncertainty space.
1.4 Product Uncertain Measure

Product uncertain measure was defined by Liu [116] in 2009, thus producing the fourth axiom of uncertainty theory. Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, ⋯ Write

    Γ = Γ1 × Γ2 × ⋯,   L = L1 × L2 × ⋯    (1.29)

    M{∏_{k=1}^∞ Λk} = ⋀_{k=1}^∞ Mk{Λk}    (1.30)
Then the product uncertain measure of each event Λ ∈ L is

    M{Λ} =
        sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk},       if sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5
        1 − sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk},  if sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk} > 0.5    (1.31)
        0.5,                                      otherwise.
Note that for each event Λ,

    sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk} ≤ 1.

When

    sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk} = 1,

we have

    M{Λ} = sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk}.    (1.32)
Theorem 1.7 (Peng and Iwamura [173]) The product uncertain measure defined by (1.31) is an uncertain measure.

Proof: In order to prove that the product uncertain measure (1.31) is indeed an uncertain measure, we should verify that it satisfies the normality, duality and subadditivity axioms.

Step 1: The product uncertain measure is clearly normal, i.e., M{Γ} = 1.

Step 2: We prove the duality, i.e., M{Λ} + M{Λᶜ} = 1. The argument breaks down into three cases. Case 1: Assume

    sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then M{Λ} = sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk}, and

    M{Λᶜ} = 1 − sup_{Λ1×Λ2×⋯⊂(Λᶜ)ᶜ} min_{1≤k<∞} Mk{Λk} = 1 − M{Λ}.

Case 2: Assume sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk} > 0.5; this case is proved similarly. Case 3: Assume

    sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

    sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk} ≤ 0.5.

It follows from (1.31) that M{Λ} = M{Λᶜ} = 0.5, which proves the duality.
Step 3: Let us prove that M is an increasing set function. Suppose Λ and Δ are two events in L with Λ ⊂ Δ. The argument breaks down into three cases. Case 1: Assume

    sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then

    M{Λ} = sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ sup_{Λ1×Λ2×⋯⊂Δ} min_{1≤k<∞} Mk{Λk} = M{Δ}.

Case 2: Assume

    sup_{Λ1×Λ2×⋯⊂Δᶜ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then, since Δᶜ ⊂ Λᶜ,

    sup_{Λ1×Λ2×⋯⊂Δᶜ} min_{1≤k<∞} Mk{Λk} ≤ sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk}.

Thus

    M{Λ} = 1 − sup_{Λ1×Λ2×⋯⊂Λᶜ} min_{1≤k<∞} Mk{Λk} ≤ 1 − sup_{Λ1×Λ2×⋯⊂Δᶜ} min_{1≤k<∞} Mk{Λk} = M{Δ}.

Case 3: Assume

    sup_{Λ1×Λ2×⋯⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

    sup_{Λ1×Λ2×⋯⊂Δᶜ} min_{1≤k<∞} Mk{Λk} ≤ 0.5.

Then

    M{Λ} ≤ 0.5 ≤ 1 − M{Δᶜ} = M{Δ}.
Step 4: Finally, we prove the subadditivity of M. For simplicity, we only prove the case of two events Λ and Δ. The argument breaks down into three cases. Case 1: Assume M{Λ} < 0.5 and M{Δ} < 0.5. For any given ε > 0, there are two rectangles

    Λ1 × Λ2 × ⋯ ⊂ Λᶜ,   Δ1 × Δ2 × ⋯ ⊂ Δᶜ

such that

    1 − min_{1≤k<∞} Mk{Λk} ≤ M{Λ} + ε/2,   1 − min_{1≤k<∞} Mk{Δk} ≤ M{Δ} + ε/2.

Note that

    (Λ1 ∩ Δ1) × (Λ2 ∩ Δ2) × ⋯ ⊂ (Λ ∪ Δ)ᶜ.

It follows from the duality and subadditivity axioms that

    Mk{Λk ∩ Δk} = 1 − Mk{(Λk ∩ Δk)ᶜ} = 1 − Mk{Λkᶜ ∪ Δkᶜ}
                ≥ 1 − (Mk{Λkᶜ} + Mk{Δkᶜ})
                = 1 − (1 − Mk{Λk}) − (1 − Mk{Δk})
                = Mk{Λk} + Mk{Δk} − 1

for any k. Thus

    M{Λ ∪ Δ} ≤ 1 − min_{1≤k<∞} Mk{Λk ∩ Δk}
             ≤ 1 − min_{1≤k<∞} Mk{Λk} + 1 − min_{1≤k<∞} Mk{Δk}
             ≤ M{Λ} + M{Δ} + ε.

Letting ε → 0, we obtain

    M{Λ ∪ Δ} ≤ M{Λ} + M{Δ}.

Case 2: Assume M{Λ} ≥ 0.5 and M{Δ} < 0.5. When M{Λ ∪ Δ} = 0.5, the subadditivity is obvious. Now we consider the case M{Λ ∪ Δ} > 0.5, i.e., M{Λᶜ ∩ Δᶜ} < 0.5. By using Λᶜ = (Λᶜ ∩ Δᶜ) ∪ (Λᶜ ∩ Δ) and Case 1, we get

    M{Λᶜ} ≤ M{Λᶜ ∩ Δᶜ} + M{Δ}.

Thus

    M{Λ ∪ Δ} = 1 − M{Λᶜ ∩ Δᶜ} ≤ 1 − M{Λᶜ} + M{Δ} = M{Λ} + M{Δ}.

Case 3: If both M{Λ} ≥ 0.5 and M{Δ} ≥ 0.5, then the subadditivity is obvious because M{Λ} + M{Δ} ≥ 1. The theorem is proved.
Definition 1.5 Assume (Γk, Lk, Mk) are uncertainty spaces for k = 1, 2, ⋯ Let Γ = Γ1 × Γ2 × ⋯, L = L1 × L2 × ⋯ and M = M1 ∧ M2 ∧ ⋯ Then the triplet (Γ, L, M) is called the product uncertainty space.
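The supremum in (1.31) can be evaluated by brute force when both factor spaces are finite. The encoding below is our own sketch (dict-of-frozenset measures, events as sets of pairs), not an API from the book.

```python
from itertools import product

def product_measure(M1, M2, event):
    # Finite-space evaluation of (1.31): sup over rectangles Λ1 × Λ2
    # contained in the event (or its complement) of min(M1{Λ1}, M2{Λ2}).
    def sup_min(target):
        best = 0.0
        for r1, v1 in M1.items():
            for r2, v2 in M2.items():
                if set(product(r1, r2)) <= target:
                    best = max(best, min(v1, v2))
        return best
    gamma1, gamma2 = max(M1, key=len), max(M2, key=len)
    inner = sup_min(set(event))
    outer = sup_min(set(product(gamma1, gamma2)) - set(event))
    if inner > 0.5:
        return inner
    if outer > 0.5:
        return 1.0 - outer
    return 0.5

M1 = {frozenset(): 0.0, frozenset("a"): 0.6, frozenset("b"): 0.4,
      frozenset("ab"): 1.0}
M2 = {frozenset(): 0.0, frozenset("c"): 0.7, frozenset("d"): 0.3,
      frozenset("cd"): 1.0}

# Product axiom (1.30): a rectangle gets the minimum of its side measures.
assert product_measure(M1, M2, {("a", "c")}) == 0.6
# Duality under (1.31): the non-rectangular complement gets 1 - 0.6.
assert abs(product_measure(M1, M2, {("a", "d"), ("b", "c"), ("b", "d")}) - 0.4) < 1e-9
```

The brute-force loop makes visible why (1.31) is well defined: the two suprema can never both exceed 0.5, since a rectangle inside Λ and a rectangle inside Λᶜ are disjoint.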
1.5 Independence

The events Λ1, Λ2, ⋯, Λn are said to be independent if

    M{⋂_{i=1}^n Λi*} = ⋀_{i=1}^n M{Λi*}    (1.33)

where Λi* are arbitrarily chosen from {Λi, Λiᶜ, Γ}, i = 1, 2, ⋯, n, respectively.
Remark 1.10: Note that (1.33) represents 2^n equations. For example, when n = 2, the four equations are

    M{Λ1 ∩ Λ2} = M{Λ1} ∧ M{Λ2},     M{Λ1ᶜ ∩ Λ2} = M{Λ1ᶜ} ∧ M{Λ2},
    M{Λ1 ∩ Λ2ᶜ} = M{Λ1} ∧ M{Λ2ᶜ},   M{Λ1ᶜ ∩ Λ2ᶜ} = M{Λ1ᶜ} ∧ M{Λ2ᶜ}.
Example 1.2: The impossible event ∅ is independent of any event Λ because ∅ᶜ = Γ and

    M{∅ ∩ Λ} = M{∅} = M{∅} ∧ M{Λ},
    M{∅ᶜ ∩ Λ} = M{Λ} = M{∅ᶜ} ∧ M{Λ},
    M{∅ ∩ Λᶜ} = M{∅} = M{∅} ∧ M{Λᶜ},
    M{∅ᶜ ∩ Λᶜ} = M{Λᶜ} = M{∅ᶜ} ∧ M{Λᶜ}.

Example 1.3: The sure event Γ is independent of any event Λ because Γᶜ = ∅ and

    M{Γ ∩ Λ} = M{Λ} = M{Γ} ∧ M{Λ},
    M{Γᶜ ∩ Λ} = M{∅} = M{Γᶜ} ∧ M{Λ},
    M{Γ ∩ Λᶜ} = M{Λᶜ} = M{Γ} ∧ M{Λᶜ},
    M{Γᶜ ∩ Λᶜ} = M{∅} = M{Γᶜ} ∧ M{Λᶜ}.

Example 1.4: Generally speaking, an event Λ is not independent of itself because

    M{Λ ∩ Λᶜ} ≠ M{Λ} ∧ M{Λᶜ}

whenever M{Λ} is neither 1 nor 0.
Theorem 1.8 (Liu [120]) The events Λ1, Λ2, ⋯, Λn are independent if and only if

    M{⋃_{i=1}^n Λi*} = ⋁_{i=1}^n M{Λi*}    (1.34)

where Λi* are arbitrarily chosen from {Λi, Λiᶜ, ∅}, i = 1, 2, ⋯, n, respectively.
The equation (1.34) is proved. Conversely, if the equation (1.34) holds, then

    M{⋂_{i=1}^n Λi*} = 1 − M{⋃_{i=1}^n (Λi*)ᶜ} = 1 − ⋁_{i=1}^n M{(Λi*)ᶜ} = ⋀_{i=1}^n M{Λi*}.
Figure 1.2: (Λ1 × Γ2) ∩ (Γ1 × Λ2) = Λ1 × Λ2
The equation (1.33) is true. The theorem is proved.
Theorem 1.9 (Liu [131]) Let (Γi, Li, Mi) be uncertainty spaces and Λi ∈ Li for i = 1, 2, ⋯, n. Then the events

    Γ1 × ⋯ × Γi−1 × Λi × Γi+1 × ⋯ × Γn,   i = 1, 2, ⋯, n    (1.35)

are always independent in the product uncertainty space. That is, the events

    Λ1, Λ2, ⋯, Λn    (1.36)

are always independent in the product uncertainty space.
1.6 Polyrectangular Theorem

Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces.    (1.37)

A set on the product space Γ1 × Γ2 is called a polyrectangle if it has the form

    Λ = ⋃_{i=1}^m (Λ1i × Λ2i)    (1.38)

where Λ1i ∈ L1 and Λ2i ∈ L2 for i = 1, 2, ⋯, m, and

    Λ11 ⊃ Λ12 ⊃ ⋯ ⊃ Λ1m,    (1.39)
    Λ21 ⊂ Λ22 ⊂ ⋯ ⊂ Λ2m.    (1.40)
    Λ = ⋃_{i=1}^m (Λ1i × Λ2i)    (1.41)
1.7 Conditional Uncertain Measure
    M{A ∩ B}/M{B}.    (1.43)

    M{Aᶜ ∩ B}/M{B}.    (1.44)

    M{A ∩ B}/M{B} + M{Aᶜ ∩ B}/M{B} ≥ 1.    (1.45)

    M{A|B} =
        M{A ∩ B}/M{B},       if M{A ∩ B}/M{B} < 0.5
        1 − M{Aᶜ ∩ B}/M{B},  if M{Aᶜ ∩ B}/M{B} < 0.5    (1.46)
        0.5,                 otherwise

provided that M{B} > 0.
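Definition (1.46) translates directly into code. A minimal sketch (argument names are ours), assuming M{B} > 0:

```python
def conditional_measure(m_ab, m_acb, m_b):
    # (1.46): inputs are M{A ∩ B}, M{Aᶜ ∩ B}, and M{B} > 0.
    if m_ab / m_b < 0.5:
        return m_ab / m_b
    if m_acb / m_b < 0.5:
        return 1.0 - m_acb / m_b
    return 0.5

# M{A ∩ B} = 0.2, M{Aᶜ ∩ B} = 0.5, M{B} = 0.6  →  M{A|B} = 0.2/0.6
assert abs(conditional_measure(0.2, 0.5, 0.6) - 0.2 / 0.6) < 1e-12
# Both ratios at least 0.5: maximum uncertainty gives 0.5.
assert conditional_measure(0.3, 0.3, 0.5) == 0.5
```

The second call shows the maximum uncertainty principle at work: when neither branch of (1.46) applies, the conditional measure settles at 0.5.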
Remark 1.12: It follows immediately from the definition of conditional uncertain measure that

    1 − M{Aᶜ ∩ B}/M{B} ≤ M{A|B} ≤ M{A ∩ B}/M{B}.    (1.47)
Furthermore, the conditional uncertain measure obeys the maximum uncertainty principle, and takes values as close to 0.5 as possible.
Remark 1.13: The conditional uncertain measure M{A|B} yields the posterior uncertain measure of A after the occurrence of event B.
First, M{·|B} is normal since

    M{Γ|B} = 1 − M{Γᶜ ∩ B}/M{B} = 1 − M{∅}/M{B} = 1.

Next we verify the duality. If M{A ∩ B}/M{B} < 0.5 ≤ M{Aᶜ ∩ B}/M{B}, then

    M{A|B} + M{Aᶜ|B} = M{A ∩ B}/M{B} + (1 − M{A ∩ B}/M{B}) = 1.

The symmetric case and the case M{A|B} = M{Aᶜ|B} = 0.5 are verified similarly.
That is, M{·|B} satisfies the duality axiom. Finally, for any countable sequence {Ai} of events, if M{Ai|B} < 0.5 for all i, it follows from (1.47) and the subadditivity axiom that

    M{⋃_{i=1}^∞ Ai | B} ≤ M{⋃_{i=1}^∞ (Ai ∩ B)}/M{B} ≤ Σ_{i=1}^∞ M{Ai ∩ B}/M{B} = Σ_{i=1}^∞ M{Ai|B}.

Suppose there is one term greater than 0.5, say

    M{A1|B} ≥ 0.5,   M{Ai|B} < 0.5 for i = 2, 3, ⋯

If M{⋃i Ai|B} = 0.5, then

    M{⋃_{i=1}^∞ Ai | B} ≤ Σ_{i=1}^∞ M{Ai|B}

holds trivially.
If M{⋃i Ai|B} > 0.5, we may prove the above inequality by the following facts:

    A1ᶜ ∩ B ⊂ ⋃_{i=2}^∞ (Ai ∩ B) ∪ (⋂_{i=1}^∞ Aiᶜ ∩ B),

    M{A1ᶜ ∩ B} ≤ Σ_{i=2}^∞ M{Ai ∩ B} + M{⋂_{i=1}^∞ Aiᶜ ∩ B},

    M{⋃_{i=1}^∞ Ai | B} = 1 − M{⋂_{i=1}^∞ Aiᶜ ∩ B}/M{B},

    Σ_{i=1}^∞ M{Ai|B} ≥ 1 − M{A1ᶜ ∩ B}/M{B} + Σ_{i=2}^∞ M{Ai ∩ B}/M{B}.
If there are at least two terms greater than 0.5, then the subadditivity is clearly true. Thus M{·|B} satisfies the subadditivity axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.
1.8
Bibliographic Notes
Chapter 2
Uncertain Variable
The uncertain variable is a fundamental concept in uncertainty theory. It is used to represent quantities with uncertainty. The emphasis in this chapter is mainly on uncertain variables, uncertainty distributions, independence, the operational law, expected value, variance, moments, entropy, distance, convergence, and conditional uncertainty distributions.
2.1
Uncertain Variable
Roughly speaking, an uncertain variable is a real-valued function on an uncertainty space. A formal definition is given as follows.

Definition 2.1 (Liu [113]) An uncertain variable is a measurable function ξ from an uncertainty space (Γ, L, M) to the set of real numbers, i.e., {ξ ∈ B} is an event for any Borel set B.
    {ξ ∈ B} = {γ ∈ Γ | ξ(γ) ∈ B}.    (2.1)
Example 2.3: Let ξ1 and ξ2 be two uncertain variables. Then the sum ξ = ξ1 + ξ2 is an uncertain variable defined by

    ξ(γ) = ξ1(γ) + ξ2(γ),   ∀γ ∈ Γ.
2.2 Uncertainty Distribution

This section introduces the concept of uncertainty distribution in order to describe uncertain variables. Note that an uncertainty distribution carries incomplete information about an uncertain variable. However, in many cases, it is sufficient to know the uncertainty distribution rather than the uncertain variable itself.

Definition 2.5 (Liu [113]) The uncertainty distribution Φ of an uncertain variable ξ is defined by

    Φ(x) = M{ξ ≤ x}    (2.2)

for any real number x.
    Φ(x) = 0,    if x < 0
           0.7,  if 0 ≤ x < 1
           1,    if 1 ≤ x.
M{γ2} = 0.3,   M{γ3} = 0.2.

Then the uncertain variable

    ξ(γ) = 1, if γ = γ1
           2, if γ = γ2
           3, if γ = γ3

has an uncertainty distribution

    Φ(x) = 0,    if x < 1
           0.6,  if 1 ≤ x < 2
           0.8,  if 2 ≤ x < 3
           1,    if 3 ≤ x.

Two distinct uncertain variables ξ and η may share the common uncertainty distribution

    Φ(x) = 0,    if x < −1
           0.5,  if −1 ≤ x < 1
           1,    if x ≥ 1.

Thus the two uncertain variables ξ and η are identically distributed but ξ ≠ η.
Sufficient and Necessary Condition

Theorem 2.2 (Peng-Iwamura Theorem [172]) A function Φ(x) : ℜ → [0, 1] is an uncertainty distribution if and only if it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.

Proof: It is obvious that an uncertainty distribution is a monotone increasing function. In addition, both Φ(x) ≢ 0 and Φ(x) ≢ 1 follow from the asymptotic theorem immediately. Conversely, suppose that Φ is a monotone increasing function with Φ(x) ≢ 0 and Φ(x) ≢ 1. We will prove that there is an uncertain variable whose uncertainty distribution is just Φ. Let C be a collection of all intervals of the form (−∞, a], (b, ∞), and ℜ. We define a set function on ℜ as follows,

    M{(−∞, a]} = Φ(a),   M{(b, +∞)} = 1 − Φ(b),   M{∅} = 0,   M{ℜ} = 1.
For an arbitrary Borel set B, there exists a sequence {Ai} in C such that

    B ⊂ ⋃_{i=1}^∞ Ai.

Note that such a sequence is not unique. Thus the set function M{B} is defined by

    M{B} =
        inf_{B ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai},       if inf_{B ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai} < 0.5
        1 − inf_{Bᶜ ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai},  if inf_{Bᶜ ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai} < 0.5
        0.5,                                 otherwise.

We may prove that the set function M is indeed an uncertain measure on ℜ, and the uncertain variable defined by the identity function ξ(γ) = γ from the uncertainty space (ℜ, L, M) to ℜ has the uncertainty distribution Φ.
Example 2.4: Let c be a number with 0 < c < 1. Then Φ(x) ≡ c is an uncertainty distribution. When c ≤ 0.5, we define a set function over ℜ as follows,

    M{Λ} = 0,      if Λ = ∅
           c,      if Λ is upper bounded
           0.5,    if both Λ and Λᶜ are upper unbounded
           1 − c,  if Λᶜ is upper bounded
           1,      if Λ = ℜ.

Then (ℜ, L, M) is an uncertainty space. It is easy to verify that the identity function ξ(γ) = γ is an uncertain variable whose uncertainty distribution is just Φ(x) ≡ c. When c > 0.5, we define

    M{Λ} = 0,      if Λ = ∅
           1 − c,  if Λ is upper bounded
           0.5,    if both Λ and Λᶜ are upper unbounded
           c,      if Λᶜ is upper bounded
           1,      if Λ = ℜ.
    Φ(x) = 0,            if x ≤ 24
           (x − 24)/4,   if 24 ≤ x ≤ 28    (2.6)
           1,            if x ≥ 28.
    Φ(x) = 0,             if x ≤ 180
           (x − 180)/5,   if 180 ≤ x ≤ 185
           1,             if x ≥ 185.
Some Special Uncertainty Distributions

Definition 2.7 An uncertain variable ξ is called linear if it has a linear uncertainty distribution

    Φ(x) = 0,                if x ≤ a
           (x − a)/(b − a),  if a ≤ x ≤ b    (2.8)
           1,                if x ≥ b

denoted by L(a, b) where a and b are real numbers with a < b.

Definition 2.8 An uncertain variable ξ is called zigzag if it has a zigzag uncertainty distribution

    Φ(x) = 0,                      if x ≤ a
           (x − a)/2(b − a),       if a ≤ x ≤ b
           (x + c − 2b)/2(c − b),  if b ≤ x ≤ c    (2.9)
           1,                      if x ≥ c

denoted by Z(a, b, c) where a, b, c are real numbers with a < b < c.
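The piecewise definitions (2.8) and (2.9) translate directly into code; the helpers below are a small illustration of ours, not part of the book.

```python
def linear_cdf(x, a, b):
    # Linear uncertainty distribution (2.8) of L(a, b)
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def zigzag_cdf(x, a, b, c):
    # Zigzag uncertainty distribution (2.9) of Z(a, b, c)
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2 * (b - a))
    if x <= c:
        return (x + c - 2 * b) / (2 * (c - b))
    return 1.0

assert linear_cdf(26, 24, 28) == 0.5
assert zigzag_cdf(2, 1, 2, 4) == 0.5   # Φ(b) = 0.5 for any zigzag variable
```

The last assertion highlights a defining feature of the zigzag distribution: the middle point b always carries distribution value 0.5.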
Definition 2.9 An uncertain variable ξ is called normal if it has a normal uncertainty distribution

    Φ(x) = (1 + exp(π(e − x)/(√3 σ)))⁻¹,   −∞ < x < +∞    (2.10)

denoted by N(e, σ) where e and σ are real numbers with σ > 0.

Definition 2.10 An uncertain variable ξ is called lognormal if ln ξ is a normal uncertain variable N(e, σ). In other words, a lognormal uncertain variable has an uncertainty distribution

    Φ(x) = (1 + exp(π(e − ln x)/(√3 σ)))⁻¹,   x ≥ 0    (2.11)

denoted by LOGN(e, σ), where e and σ are real numbers with σ > 0.
The empirical uncertainty distribution determined by observations x1 < x2 < ⋯ < xn with values Φ(xi) = αi is

    Φ(x) = 0,                                     if x < x1
           αi + (αi+1 − αi)(x − xi)/(xi+1 − xi),  if xi ≤ x ≤ xi+1, 1 ≤ i < n    (2.12)
           1,                                     if x > xn.
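Equation (2.12) is ordinary linear interpolation between the observed points; the sketch below (helper names are ours) evaluates it.

```python
def empirical_cdf(x, xs, alphas):
    # Empirical uncertainty distribution (2.12): linear interpolation
    # through (x1, α1), ..., (xn, αn) with x1 < x2 < ... < xn.
    if x < xs[0]:
        return 0.0
    if x > xs[-1]:
        return 1.0
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return alphas[i] + (alphas[i + 1] - alphas[i]) * t
    return alphas[-1]

assert abs(empirical_cdf(1.5, [1, 2, 3], [0.2, 0.6, 0.9]) - 0.4) < 1e-12
```

Note that within [x1, xn] the distribution never reaches 0 or 1 unless α1 = 0 or αn = 1; outside that interval (2.12) clamps to 0 and 1.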
    Φ(x) =
        ⋁_{xi≤x} ci,        if ⋁_{xi≤x} ci > 0.5 and ⋁_{xi≤x} ci + Σ_{xi>x} ci ≥ 1
        1 − Σ_{xi>x} ci,    if ⋁_{xi≤x} ci > 0.5 and ⋁_{xi≤x} ci + Σ_{xi>x} ci < 1
        1 − ⋁_{xi>x} ci,    if ⋁_{xi>x} ci > 0.5 and Σ_{xi≤x} ci + ⋁_{xi>x} ci ≥ 1
        Σ_{xi≤x} ci,        if ⋁_{xi>x} ci > 0.5 and Σ_{xi≤x} ci + ⋁_{xi>x} ci < 1
        1 − Σ_{xi>x} ci,    if ⋁_{1≤i≤n} ci ≤ 0.5 and Σ_{xi>x} ci < 0.5
        Σ_{xi≤x} ci,        if ⋁_{1≤i≤n} ci ≤ 0.5 and Σ_{xi≤x} ci < 0.5
        0.5,                otherwise.    (2.14)

Especially, if c1, c2, ⋯, cn are nonnegative numbers such that c1 + c2 + ⋯ + cn = 1, then

    Φ(x) = Σ_{xi≤x} ci.    (2.15)
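For the normalized case (2.15), the distribution is just a running sum; a one-line illustration of ours:

```python
def discrete_distribution(points, x):
    # (2.15): Φ(x) = Σ_{xi ≤ x} ci when c1 + ... + cn = 1
    return sum(c for xi, c in points if xi <= x)

pts = [(1, 0.2), (2, 0.3), (3, 0.5)]
assert abs(discrete_distribution(pts, 2.5) - 0.5) < 1e-12   # 0.2 + 0.3
assert discrete_distribution(pts, 0) == 0.0
```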
    M{ξ ≥ x} = 1 − Φ(x).    (2.16)

Proof: The equation M{ξ ≤ x} = Φ(x) follows from the definition of uncertainty distribution immediately. By using the duality of uncertain measure and the continuity of the uncertainty distribution, we get

    M{ξ ≥ x} = 1 − M{ξ < x} = 1 − Φ(x).

The theorem is verified.
Theorem 2.4 Let ξ be an uncertain variable with continuous uncertainty distribution Φ. Then for any interval [a, b], we have

    Φ(b) − Φ(a) ≤ M{a ≤ ξ ≤ b} ≤ Φ(b) ∧ (1 − Φ(a)).    (2.17)

Proof: It follows from the subadditivity of uncertain measure and the measure inversion theorem that

    M{a ≤ ξ ≤ b} + M{ξ ≤ a} ≥ M{ξ ≤ b}.

That is,

    M{a ≤ ξ ≤ b} + Φ(a) ≥ Φ(b).

Thus the inequality on the left-hand side is verified. It follows from the monotonicity of uncertain measure and the measure inversion theorem that

    M{a ≤ ξ ≤ b} ≤ M{ξ ∈ (−∞, b]} = Φ(b).
    M{ξ ∈ A} =
        ⋁_{xi∈A} ci,        if ⋁_{xi∈A} ci > 0.5 and ⋁_{xi∈A} ci + Σ_{xi∉A} ci ≥ 1
        1 − Σ_{xi∉A} ci,    if ⋁_{xi∈A} ci > 0.5 and ⋁_{xi∈A} ci + Σ_{xi∉A} ci < 1
        1 − ⋁_{xi∉A} ci,    if ⋁_{xi∉A} ci > 0.5 and Σ_{xi∈A} ci + ⋁_{xi∉A} ci ≥ 1
        Σ_{xi∈A} ci,        if ⋁_{xi∉A} ci > 0.5 and Σ_{xi∈A} ci + ⋁_{xi∉A} ci < 1
        1 − Σ_{xi∉A} ci,    if ⋁_{1≤i≤n} ci ≤ 0.5 and Σ_{xi∉A} ci < 0.5
        Σ_{xi∈A} ci,        if ⋁_{1≤i≤n} ci ≤ 0.5 and Σ_{xi∈A} ci < 0.5
        0.5,                otherwise.
    lim_{x→+∞} Φ(x) = 1.    (2.21)

For example, the linear uncertainty distribution, zigzag uncertainty distribution, normal uncertainty distribution, and lognormal uncertainty distribution are all regular.

A regular uncertainty distribution Φ(x) has an inverse function on the range of x with 0 < Φ(x) < 1, and the inverse function Φ⁻¹(α) exists on the open interval (0, 1). It is easy to verify that Φ⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1).
For convenience, we stipulate that the uncertainty distribution of a crisp value c is regular. That is, we will say

    Φ(x) = 1, if x ≥ c
           0, if x < c    (2.22)

is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1 even though it is discontinuous at c. We will also stipulate that a crisp value c has an inverse uncertainty distribution

    Φ⁻¹(α) ≡ c.    (2.23)
Example 2.6: The inverse uncertainty distribution of the linear uncertain variable L(a, b) is

    Φ⁻¹(α) = (1 − α)a + αb.    (2.25)
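Inverse distributions such as (2.25) are one-liners; for comparison we also include the normal inverse distribution (2.27), which appears below. Helper names are ours.

```python
import math

def linear_inv(alpha, a, b):
    # (2.25): inverse uncertainty distribution of L(a, b)
    return (1 - alpha) * a + alpha * b

def normal_inv(alpha, e, sigma):
    # (2.27): inverse uncertainty distribution of N(e, sigma)
    return e + sigma * math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))

assert linear_inv(0.5, 24, 28) == 26.0
assert abs(normal_inv(0.5, 10, 2) - 10.0) < 1e-12   # ln(1) = 0 at the median
```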
The inverse uncertainty distribution of the normal uncertain variable N(e, σ) is

    Φ⁻¹(α) = e + (σ√3/π) ln(α/(1 − α)).    (2.27)

Example 2.9: The inverse uncertainty distribution of the lognormal uncertain variable LOGN(e, σ) is

    Φ⁻¹(α) = exp(e + (σ√3/π) ln(α/(1 − α))).    (2.28)
    Φ(x) = 0,  if x ≤ lim_{α↓0} Φ⁻¹(α)
           α,  if x = Φ⁻¹(α)
           1,  if x ≥ lim_{α↑1} Φ⁻¹(α).

It follows from the Peng-Iwamura theorem that Φ(x) is an uncertainty distribution of some uncertain variable ξ. Then for each α ∈ (0, 1), we have

    M{ξ ≤ Φ⁻¹(α)} = Φ(Φ⁻¹(α)) = α.
2.3 Independence

    M{⋃_{i=1}^n (ξi ∈ Bi)} = 1 − M{⋂_{i=1}^n (ξi ∈ Biᶜ)} = 1 − ⋀_{i=1}^n M{ξi ∈ Biᶜ} = ⋁_{i=1}^n M{ξi ∈ Bi}.

    M{⋂_{i=1}^n (fi(ξi) ∈ Bi)} = ⋀_{i=1}^n M{fi(ξi) ∈ Bi}.    (2.31)
2.4 Operational Law

Let ξ1, ξ2, ⋯, ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, ⋯, Φn, respectively, and let f be a continuous and strictly increasing function. Then ξ = f(ξ1, ξ2, ⋯, ξn) has an inverse uncertainty distribution

    Ψ⁻¹(α) = f(Φ1⁻¹(α), Φ2⁻¹(α), ⋯, Φn⁻¹(α)).

Proof: For simplicity, we only prove the case n = 2. At first, we always have

    M{ξ ≤ Ψ⁻¹(α)} = M{f(ξ1, ξ2) ≤ f(Φ1⁻¹(α), Φ2⁻¹(α))}.
Theorem: Let ξ1 and ξ2 be independent linear uncertain variables L(a1, b1) and L(a2, b2), respectively. Then the sum ξ1 + ξ2 is also a linear uncertain variable L(a1 + a2, b1 + b2), i.e.,

    L(a1, b1) + L(a2, b2) = L(a1 + a2, b1 + b2).    (2.44)

The product of a linear uncertain variable L(a, b) and a scalar number k > 0 is also a linear uncertain variable L(ka, kb), i.e.,

    k · L(a, b) = L(ka, kb).    (2.45)

Theorem: Let ξ1 and ξ2 be independent zigzag uncertain variables Z(a1, b1, c1) and Z(a2, b2, c2), respectively. Then the sum ξ1 + ξ2 is also a zigzag uncertain variable Z(a1 + a2, b1 + b2, c1 + c2), i.e.,

    Z(a1, b1, c1) + Z(a2, b2, c2) = Z(a1 + a2, b1 + b2, c1 + c2).    (2.46)

The product of a zigzag uncertain variable Z(a, b, c) and a scalar number k > 0 is also a zigzag uncertain variable Z(ka, kb, kc), i.e.,

    k · Z(a, b, c) = Z(ka, kb, kc).    (2.47)
Proof: The zigzag uncertain variables ξ1 and ξ2 have inverse uncertainty distributions

    Φ1⁻¹(α) = (1 − 2α)a1 + 2αb1,        if α < 0.5
              (2 − 2α)b1 + (2α − 1)c1,  if α ≥ 0.5

and

    Φ2⁻¹(α) = (1 − 2α)a2 + 2αb2,        if α < 0.5
              (2 − 2α)b2 + (2α − 1)c2,  if α ≥ 0.5.
It follows from the operational law that the inverse uncertainty distribution of ξ1 + ξ2 is

    Ψ⁻¹(α) = (1 − 2α)(a1 + a2) + 2α(b1 + b2),        if α < 0.5
             (2 − 2α)(b1 + b2) + (2α − 1)(c1 + c2),  if α ≥ 0.5.

Hence the sum is also a zigzag uncertain variable Z(a1 + a2, b1 + b2, c1 + c2). The first part is verified. Next, suppose that the uncertainty distribution of the zigzag uncertain variable Z(a, b, c) is Φ. It follows from the operational law that when k > 0, the inverse uncertainty distribution of kξ is

    Ψ⁻¹(α) = kΦ⁻¹(α) = (1 − 2α)(ka) + 2α(kb),        if α < 0.5
                       (2 − 2α)(kb) + (2α − 1)(kc),  if α ≥ 0.5.

Hence kξ is just a zigzag uncertain variable Z(ka, kb, kc).
Theorem 2.13 Let ξ₁ and ξ₂ be independent normal uncertain variables N(e₁, σ₁) and N(e₂, σ₂), respectively. Then the sum ξ₁ + ξ₂ is also a normal uncertain variable N(e₁ + e₂, σ₁ + σ₂), i.e.,
    N(e₁, σ₁) + N(e₂, σ₂) = N(e₁ + e₂, σ₁ + σ₂).   (2.48)
The product of a normal uncertain variable N(e, σ) and a scalar number k > 0 is also a normal uncertain variable N(ke, kσ), i.e.,
    k · N(e, σ) = N(ke, kσ).   (2.49)
Proof: Assume that ξ₁ and ξ₂ have inverse uncertainty distributions
    Φ₁^{-1}(α) = e₁ + (σ₁√3/π) ln(α/(1 − α)),
    Φ₂^{-1}(α) = e₂ + (σ₂√3/π) ln(α/(1 − α)).
It follows from the operational law that the inverse uncertainty distribution of ξ₁ + ξ₂ is
    Ψ^{-1}(α) = Φ₁^{-1}(α) + Φ₂^{-1}(α) = (e₁ + e₂) + ((σ₁ + σ₂)√3/π) ln(α/(1 − α)).
Hence the sum is also a normal uncertain variable N(e₁ + e₂, σ₁ + σ₂). The first part is verified. Next, suppose that the uncertainty distribution of the uncertain variable ξ ~ N(e, σ) is Φ. It follows from the operational law that, when k > 0, the inverse uncertainty distribution of kξ is
    Ψ^{-1}(α) = kΦ^{-1}(α) = ke + (kσ√3/π) ln(α/(1 − α)).
Hence kξ is just a normal uncertain variable N(ke, kσ).
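The normal case can be verified the same way; this sketch (the helper name `normal_inv` is ours) checks that the inverse distributions add exactly:

```python
import math

def normal_inv(e, sigma):
    """Inverse uncertainty distribution of the normal uncertain variable N(e, sigma)."""
    return lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))

inv1 = normal_inv(1.0, 2.0)       # N(1, 2)
inv2 = normal_inv(-0.5, 1.5)      # N(-0.5, 1.5)
inv_sum = normal_inv(0.5, 3.5)    # N(e1 + e2, sigma1 + sigma2)

for k in range(1, 100):
    a = k / 100
    assert math.isclose(inv1(a) + inv2(a), inv_sum(a))
```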
Theorem 2.14 Assume that ξ₁ and ξ₂ are independent lognormal uncertain variables LOGN(e₁, σ₁) and LOGN(e₂, σ₂), respectively. Then the product ξ₁ξ₂ is also a lognormal uncertain variable LOGN(e₁ + e₂, σ₁ + σ₂), i.e.,
    LOGN(e₁, σ₁) · LOGN(e₂, σ₂) = LOGN(e₁ + e₂, σ₁ + σ₂).   (2.50)
The product of a lognormal uncertain variable LOGN(e, σ) and a scalar number k > 0 is also a lognormal uncertain variable LOGN(e + ln k, σ), i.e.,
    k · LOGN(e, σ) = LOGN(e + ln k, σ).   (2.51)
Proof: Assume that ξ₁ and ξ₂ have inverse uncertainty distributions
    Φ₁^{-1}(α) = exp(e₁ + (σ₁√3/π) ln(α/(1 − α))),
    Φ₂^{-1}(α) = exp(e₂ + (σ₂√3/π) ln(α/(1 − α))).
It follows from the operational law that the inverse uncertainty distribution of ξ₁ξ₂ is
    Ψ^{-1}(α) = Φ₁^{-1}(α) · Φ₂^{-1}(α) = exp((e₁ + e₂) + ((σ₁ + σ₂)√3/π) ln(α/(1 − α))).
Hence the product is a lognormal uncertain variable LOGN(e₁ + e₂, σ₁ + σ₂). The first part is verified. Next, suppose that the uncertainty distribution of the uncertain variable ξ ~ LOGN(e, σ) is Φ. It follows from the operational law that, when k > 0, the inverse uncertainty distribution of kξ is
    Ψ^{-1}(α) = kΦ^{-1}(α) = exp((e + ln k) + (σ√3/π) ln(α/(1 − α))).
Hence kξ is just a lognormal uncertain variable LOGN(e + ln k, σ).
Theorem 2.15 (Liu [120]) Let ξ₁, ξ₂, ⋯, ξₙ be independent uncertain variables with uncertainty distributions Φ₁, Φ₂, ⋯, Φₙ, respectively. If f is a strictly increasing function, then
    ξ = f(ξ₁, ξ₂, ⋯, ξₙ)   (2.52)
has an uncertainty distribution
    Ψ(x) = sup_{f(x₁,x₂,⋯,xₙ)=x} min_{1≤i≤n} Φᵢ(xᵢ).   (2.53)
    Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{⋃_{f(x₁,x₂)=x} ((ξ₁ ≤ x₁) ∩ (ξ₂ ≤ x₂))}
         ≥ sup_{f(x₁,x₂)=x} M{(ξ₁ ≤ x₁) ∩ (ξ₂ ≤ x₂)}
         = sup_{f(x₁,x₂)=x} M{ξ₁ ≤ x₁} ∧ M{ξ₂ ≤ x₂}
         = sup_{f(x₁,x₂)=x} Φ₁(x₁) ∧ Φ₂(x₂).   (2.55)
Especially, when f(x₁, x₂, ⋯, xₙ) = x₁ + x₂ + ⋯ + xₙ, the uncertainty distribution of the sum is
    Ψ(x) = sup_{x₁+x₂+⋯+xₙ=x} min_{1≤i≤n} Φᵢ(xᵢ),   (2.56)
and when f(x₁, x₂, ⋯, xₙ) = x₁x₂⋯xₙ, the uncertainty distribution of the product is
    Ψ(x) = sup_{x₁x₂⋯xₙ=x} min_{1≤i≤n} Φᵢ(xᵢ).
When f(x₁, x₂, ⋯, xₙ) = x₁ ∨ x₂ ∨ ⋯ ∨ xₙ, we have
    Ψ(x) = sup_{f(x₁,⋯,xₙ)=x} min_{1≤i≤n} Φᵢ(xᵢ) = min_{1≤i≤n} Φᵢ(x),   (2.72)
and when f(x₁, x₂, ⋯, xₙ) = x₁ ∧ x₂ ∧ ⋯ ∧ xₙ, we have
    Ψ(x) = sup_{f(x₁,⋯,xₙ)=x} min_{1≤i≤n} Φᵢ(xᵢ) = max_{1≤i≤n} Φᵢ(x).   (2.73)
whenever xᵢ < yᵢ for i = 1, 2, ⋯, n. If f(x₁, x₂, ⋯, xₙ) is a strictly increasing function, then −f(x₁, x₂, ⋯, xₙ) is a strictly decreasing function. Furthermore, 1/f(x₁, x₂, ⋯, xₙ) is also a strictly decreasing function provided that f is positive. Especially, the following are strictly decreasing functions:
    f(x) = −x,    f(x) = exp(−x),    f(x) = 1/x (x > 0).
Theorem 2.17 (Liu [120]) Let ξ₁, ξ₂, ⋯, ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ⋯, Φₙ, respectively. If f is a strictly decreasing function, then
    ξ = f(ξ₁, ξ₂, ⋯, ξₙ)   (2.74)
has an inverse uncertainty distribution
    Ψ^{-1}(α) = f(Φ₁^{-1}(1 − α), Φ₂^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)).   (2.75)
Proof: For simplicity, we only prove the case n = 2. At first, we always have
    M{ξ ≤ Ψ^{-1}(α)} = M{f(ξ₁, ξ₂) ≤ f(Φ₁^{-1}(1 − α), Φ₂^{-1}(1 − α))}.
    Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{⋃_{f(x₁,x₂)=x} ((ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂))}
         ≥ sup_{f(x₁,x₂)=x} M{(ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂)}
         = sup_{f(x₁,x₂)=x} M{ξ₁ ≥ x₁} ∧ M{ξ₂ ≥ x₂}
         = sup_{f(x₁,x₂)=x} (1 − Φ₁(x₁)) ∧ (1 − Φ₂(x₂)).   (2.80)
Exercise 2.17: Let ξ be a positive uncertain variable with continuous uncertainty distribution Φ. Show that 1/ξ is an uncertain variable with uncertainty distribution
    Ψ(x) = 1 − Φ(1/x),  x > 0.   (2.82)
Exercise 2.18: Let ξ be an uncertain variable with continuous uncertainty distribution Φ. Show that exp(−ξ) is a positive uncertain variable with uncertainty distribution
    Ψ(x) = 1 − Φ(−ln(x)),  x > 0.   (2.83)
On the other hand, since the function f(x₁, x₂) is strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, we obtain
    {ξ ≤ Ψ^{-1}(α)} ⊃ {ξ₁ ≤ Φ₁^{-1}(α)} ∩ {ξ₂ ≥ Φ₂^{-1}(1 − α)}.   (2.87)
Exercise 2.20: Let ξ₁ and ξ₂ be independent and positive uncertain variables with regular uncertainty distributions Φ₁ and Φ₂, respectively. Show that the inverse uncertainty distribution of the quotient ξ₁/ξ₂ is
    Ψ^{-1}(α) = Φ₁^{-1}(α) / Φ₂^{-1}(1 − α).   (2.88)
Exercise 2.21: Assume ξ₁ and ξ₂ are independent and positive uncertain variables with regular uncertainty distributions Φ₁ and Φ₂, respectively. Show that the inverse uncertainty distribution of ξ₁/(ξ₁ + ξ₂) is
    Ψ^{-1}(α) = Φ₁^{-1}(α) / (Φ₁^{-1}(α) + Φ₂^{-1}(1 − α)).   (2.89)
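The formula of Exercise 2.21 can be explored numerically; this sketch (names and the two linear distributions are our illustrative choices) checks that the resulting inverse distribution is indeed a distribution-like function: it stays in (0, 1) and increases in α.

```python
def linear_inv(a, b):
    """Inverse uncertainty distribution of the linear uncertain variable L(a, b)."""
    return lambda alpha: a + alpha * (b - a)

phi1 = linear_inv(1, 2)   # xi1 ~ L(1, 2), positive
phi2 = linear_inv(3, 4)   # xi2 ~ L(3, 4), positive

def ratio_inv(alpha):
    """Inverse uncertainty distribution of xi1 / (xi1 + xi2), per (2.89)."""
    x = phi1(alpha)
    return x / (x + phi2(1 - alpha))

values = [ratio_inv(k / 100) for k in range(1, 100)]
assert all(0 < v < 1 for v in values)                  # the ratio stays in (0, 1)
assert all(u < v for u, v in zip(values, values[1:]))  # and is increasing in alpha
```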
Theorem 2.20 (Liu [120]) Let ξ₁, ξ₂, ⋯, ξₙ be independent uncertain variables with continuous uncertainty distributions Φ₁, Φ₂, ⋯, Φₙ, respectively. If the function f(x₁, x₂, ⋯, xₙ) is strictly increasing with respect to x₁, x₂, ⋯, xₘ and strictly decreasing with respect to x_{m+1}, x_{m+2}, ⋯, xₙ, then
    ξ = f(ξ₁, ξ₂, ⋯, ξₙ)   (2.90)
has an uncertainty distribution
    Ψ(x) = sup_{f(x₁,⋯,xₙ)=x} (min_{1≤i≤m} Φᵢ(xᵢ)) ∧ (min_{m+1≤i≤n} (1 − Φᵢ(xᵢ))).   (2.91)
    Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{⋃_{f(x₁,x₂)=x} ((ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂))}
         ≥ sup_{f(x₁,x₂)=x} M{(ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂)}
         = sup_{f(x₁,x₂)=x} M{ξ₁ ≤ x₁} ∧ M{ξ₂ ≥ x₂}
         = sup_{f(x₁,x₂)=x} Φ₁(x₁) ∧ (1 − Φ₂(x₂)).   (2.92)
Exercise 2.23: Let ξ₁ and ξ₂ be independent and positive uncertain variables with continuous uncertainty distributions Φ₁ and Φ₂, respectively. Show that ξ₁/ξ₂ is an uncertain variable with uncertainty distribution
    Ψ(x) = sup_{y>0} Φ₁(xy) ∧ (1 − Φ₂(y)).   (2.93)
Theorem 2.21 (Liu [119]) Let ξ₁, ξ₂, ⋯, ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ⋯, Φₙ, respectively. If f(ξ₁, ξ₂, ⋯, ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ⋯, ξₘ and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, ⋯, ξₙ, then
    M{f(ξ₁, ξ₂, ⋯, ξₙ) ≤ 0} ≥ α   (2.94)
if and only if
    f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) ≤ 0.   (2.95)
Figure 2.15: f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α))
Figure 2.16: f(Φ₁^{-1}(1 − α), ⋯, Φₘ^{-1}(1 − α), Φ_{m+1}^{-1}(α), ⋯, Φₙ^{-1}(α))
    M{f(ξ₁, ξ₂, ⋯, ξₙ) ≤ 0} ≥ α   (2.102)
if and only if
    f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) ≤ 0.   (2.103)
Proof: It follows from Theorem 2.19 that the inverse uncertainty distribution of f(ξ₁, ξ₂, ⋯, ξₙ) is
    Ψ^{-1}(α) = f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)).
    M{ξ = 1} = sup_{f(x₁,⋯,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ),      if sup_{f(x₁,⋯,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) < 0.5
               1 − sup_{f(x₁,⋯,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ),  if sup_{f(x₁,⋯,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5   (2.108)
where the functions νᵢ are defined by
    νᵢ(xᵢ) = aᵢ,      if xᵢ = 1
             1 − aᵢ,  if xᵢ = 0
for i = 1, 2, ⋯, n, respectively.
Proof: Let B₁, B₂, ⋯, Bₙ be nonempty subsets of {0, 1}. In other words, they take values of {0}, {1} or {0, 1}. Write
    Λ = {ξ = 1},  Λᶜ = {ξ = 0},  Λᵢ = {ξᵢ ∈ Bᵢ}.
    M{ξ = 1} = sup_{f=1} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ},      if sup_{f=1} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} > 0.5
               1 − sup_{f=0} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ},  if sup_{f=0} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} > 0.5
               0.5,  otherwise.   (2.109)
Note that νᵢ(1) = M{ξᵢ = 1} = aᵢ and νᵢ(0) = M{ξᵢ = 0} = 1 − aᵢ. The rest of the proof distinguishes four cases according to whether sup_{f=1} min_{1≤i≤n} νᵢ(xᵢ) and sup_{f=0} min_{1≤i≤n} νᵢ(xᵢ) are below or above 0.5; in each case, (2.109) reduces to (2.108).
    M{ξ = 1} = a₁ ∧ a₂ ∧ ⋯ ∧ aₙ   (2.112)
and
    M{ξ = 0} = (1 − a₁) ∨ (1 − a₂) ∨ ⋯ ∨ (1 − aₙ).   (2.113)
Proof: Since ξ is the minimum of Boolean uncertain variables, the corresponding Boolean function is
    f(x₁, x₂, ⋯, xₙ) = x₁ ∧ x₂ ∧ ⋯ ∧ xₙ.   (2.114)
Without loss of generality, we assume a₁ ≥ a₂ ≥ ⋯ ≥ aₙ. When aₙ < 0.5, we have
    sup_{f=1} min_{1≤i≤n} νᵢ(xᵢ) = a₁ ∧ a₂ ∧ ⋯ ∧ aₙ = aₙ,
and it follows from (2.108) that M{ξ = 1} = aₙ. When aₙ ≥ 0.5, we have
    1 − sup_{f=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − aₙ) = aₙ,
and again M{ξ = 1} = aₙ = a₁ ∧ a₂ ∧ ⋯ ∧ aₙ.
    M{ξ = 1} = a₁ ∨ a₂ ∨ ⋯ ∨ aₙ   (2.117)
and
    M{ξ = 0} = (1 − a₁) ∧ (1 − a₂) ∧ ⋯ ∧ (1 − aₙ).   (2.118)
Proof: Since ξ is the maximum of Boolean uncertain variables, the corresponding Boolean function is
    f(x₁, x₂, ⋯, xₙ) = x₁ ∨ x₂ ∨ ⋯ ∨ xₙ.   (2.119)
Without loss of generality, we assume a₁ ≥ a₂ ≥ ⋯ ≥ aₙ. When a₁ ≥ 0.5, we have
    1 − sup_{f=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − a₁) = a₁,
and it follows from (2.108) that M{ξ = 1} = a₁. When a₁ < 0.5, we have
    sup_{f=1} min_{1≤i≤n} νᵢ(xᵢ) = a₁,
and again M{ξ = 1} = a₁ = a₁ ∨ a₂ ∨ ⋯ ∨ aₙ.
    ξ = 1, if ξ₁ + ξ₂ + ⋯ + ξₙ ≥ k
        0, if ξ₁ + ξ₂ + ⋯ + ξₙ < k.   (2.121)
Then
    M{ξ = 1} = k-max [a₁, a₂, ⋯, aₙ]   (2.122)
and
    M{ξ = 0} = k-min [1 − a₁, 1 − a₂, ⋯, 1 − aₙ]   (2.123)
where k-max represents the kth largest value, and k-min represents the kth smallest value.
Proof: This is the so-called k-out-of-n system. The corresponding Boolean function is
    f(x₁, x₂, ⋯, xₙ) = 1, if x₁ + x₂ + ⋯ + xₙ ≥ k
                       0, if x₁ + x₂ + ⋯ + xₙ < k.   (2.124)
Without loss of generality, we assume a₁ ≥ a₂ ≥ ⋯ ≥ aₙ. When aₖ < 0.5, we have
    sup_{f=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ,
and it follows from (2.108) that M{ξ = 1} = aₖ. When aₖ ≥ 0.5, we have
    1 − sup_{f=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − aₖ) = aₖ.
In either case M{ξ = 1} = aₖ = k-max [a₁, a₂, ⋯, aₙ].
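The k-out-of-n rule reduces to an order statistic of the reliabilities; this sketch (function name is ours) evaluates (2.122) and checks its duality with (2.123):

```python
import math

def k_out_of_n(k, a):
    """M{xi = 1} = k-max [a1, ..., an]: the kth largest reliability, per (2.122)."""
    return sorted(a, reverse=True)[k - 1]

a = [0.9, 0.8, 0.7, 0.6]
assert k_out_of_n(1, a) == 0.9   # 1-out-of-n (parallel system): the maximum
assert k_out_of_n(4, a) == 0.6   # n-out-of-n (series system): the minimum

# Duality with (2.123): the k-min of the complements is the complement of the k-max.
k_min_complement = sorted(1 - x for x in a)[2 - 1]
assert math.isclose(k_out_of_n(2, a) + k_min_complement, 1.0)
```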
2.5 Expected Value

The expected value of an uncertain variable ξ is defined by
    E[ξ] = ∫₀^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^{0} M{ξ ≤ x} dx   (2.129)
provided that at least one of the two integrals is finite.
Proof: It follows from the measure inversion theorem that for almost all numbers x, we have M{ξ ≥ x} = 1 − Φ(x) and M{ξ ≤ x} = Φ(x). By using the definition of expected value operator, we obtain
    E[ξ] = ∫₀^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^{0} M{ξ ≤ x} dx = ∫₀^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^{0} Φ(x) dx.
Proof: It follows from the integration by parts and Theorem 2.29 that the expected value is
    E[ξ] = ∫₀^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^{0} Φ(x) dx = ∫₀^{+∞} x dΦ(x) + ∫_{−∞}^{0} x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).
[Figure: the uncertainty distribution Φ(x)]
    ∫_{−∞}^{+∞} x dΦ(x) = ∫₀¹ Φ^{-1}(α) dα.   (2.134)
Proof: Substituting Φ(x) with α and x with Φ^{-1}(α), it follows from the change of variables of integral and Theorem 2.30 that the expected value is
    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫₀¹ Φ^{-1}(α) dα.
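The formula E[ξ] = ∫₀¹ Φ^{-1}(α) dα lends itself directly to numerical evaluation. A minimal sketch with a midpoint rule (function names are ours), checked against the known means (a + b)/2 of L(a, b) and (a + 2b + c)/4 of Z(a, b, c):

```python
import math

def expected_value(inv, n=100000):
    """E[xi] = integral of the inverse uncertainty distribution over (0, 1), midpoint rule."""
    return sum(inv((k + 0.5) / n) for k in range(n)) / n

lin = lambda a: 2 + a * (5 - 2)    # inverse distribution of L(2, 5)
assert math.isclose(expected_value(lin), 3.5, abs_tol=1e-6)

# inverse distribution of Z(1, 2, 4)
zig = lambda a: (1 - 2*a)*1 + 2*a*2 if a < 0.5 else (2 - 2*a)*2 + (2*a - 1)*4
assert math.isclose(expected_value(zig), (1 + 2*2 + 4) / 4, abs_tol=1e-6)
```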
Exercise 2.27: Show that the lognormal uncertain variable LOGN(e, σ) has an expected value
    E[ξ] = √3 σ exp(e) csc(√3 σ),  if σ < π/√3
           +∞,                      if σ ≥ π/√3.   (2.138)
This formula was first discovered by Dr. Zhongfeng Qin with the help of Maple software, and was verified again by Dr. Kai Yao through a rigorous mathematical derivation.
Exercise 2.28: Let ξ be an uncertain variable with empirical uncertainty distribution
    Φ(x) = 0,  if x < x₁
           αᵢ + (αᵢ₊₁ − αᵢ)(x − xᵢ)/(xᵢ₊₁ − xᵢ),  if xᵢ ≤ x ≤ xᵢ₊₁, 1 ≤ i < n
           1,  if x > xₙ
where x₁ < x₂ < ⋯ < xₙ and 0 ≤ α₁ ≤ α₂ ≤ ⋯ ≤ αₙ ≤ 1. Show that
    E[ξ] = ((α₁ + α₂)/2) x₁ + Σ_{i=2}^{n−1} ((αᵢ₊₁ − αᵢ₋₁)/2) xᵢ + (1 − (αₙ₋₁ + αₙ)/2) xₙ.   (2.139)
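The closed form (2.139) can be cross-checked against the general integral E[ξ] = ∫₀¹ Φ^{-1}(α) dα; this sketch does so on small illustrative data (the data values are ours):

```python
import math

xs = [1.0, 2.0, 4.0]
alphas = [0.2, 0.5, 0.9]
n = len(xs)

# Closed form (2.139)
closed = (alphas[0] + alphas[1]) / 2 * xs[0] \
    + sum((alphas[i + 1] - alphas[i - 1]) / 2 * xs[i] for i in range(1, n - 1)) \
    + (1 - (alphas[-2] + alphas[-1]) / 2) * xs[-1]

def inv(a):
    """Inverse of the empirical uncertainty distribution (piecewise linear)."""
    if a <= alphas[0]:
        return xs[0]
    if a >= alphas[-1]:
        return xs[-1]
    for i in range(n - 1):
        if alphas[i] <= a <= alphas[i + 1]:
            t = (a - alphas[i]) / (alphas[i + 1] - alphas[i])
            return xs[i] + t * (xs[i + 1] - xs[i])

m = 100000
numeric = sum(inv((k + 0.5) / m) for k in range(m)) / m
assert math.isclose(closed, 2.25)
assert math.isclose(numeric, closed, abs_tol=1e-4)
```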
    E[ξ] = Σ_{k=1}^{n} wₖ xₖ   (2.142)
where
    wₖ = qₖ − qₖ₋₁   (2.143)
and
    qₖ = Σ_{xᵢ≤xₖ} cᵢ,      if Σ_{xᵢ≤xₖ} cᵢ < 0.5
         1 − Σ_{xᵢ>xₖ} cᵢ,  if Σ_{xᵢ>xₖ} cᵢ < 0.5
         0.5,               otherwise
for k = 1, 2, ⋯, n. Note that q₀ ≡ 0, qₙ ≡ 1 and w₁, w₂, ⋯, wₙ are nonnegative numbers with w₁ + w₂ + ⋯ + wₙ = 1. Especially, if c₁, c₂, ⋯, cₙ are nonnegative numbers such that c₁ + c₂ + ⋯ + cₙ = 1, then
    wₖ = cₖ,  k = 1, 2, ⋯, n.   (2.144)
    E[ξ] = ∫₀¹ f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) dα.   (2.145)
    E[f(ξ)] = ∫₀¹ f(Φ^{-1}(α)) dα   (2.146)
and
    E[f(ξ)] = ∫_{−∞}^{+∞} f(x) dΦ(x).   (2.147)
    E[ξη] = ∫₀¹ Φ^{-1}(α) Ψ^{-1}(α) dα   (2.148)
and
    E[ξ/η] = ∫₀¹ (Φ^{-1}(α) / Ψ^{-1}(1 − α)) dα.   (2.149)
Exercise 2.33: Assume ξ and η are independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that
    E[ξ/(ξ + η)] = ∫₀¹ (Φ^{-1}(α) / (Φ^{-1}(α) + Ψ^{-1}(1 − α))) dα.   (2.150)
Linearity of Expected Value Operator
Theorem 2.33 (Liu [120]) Let ξ and η be independent uncertain variables with finite expected values. Then for any real numbers a and b, we have
    E[aξ + bη] = aE[ξ] + bE[η].   (2.151)
Proof: Step 1: We first prove E[aξ] = aE[ξ]. If a ≥ 0, the inverse uncertainty distribution of aξ is aΦ^{-1}(α), and
    E[aξ] = ∫₀¹ aΦ^{-1}(α) dα = aE[ξ].
If a < 0, the inverse uncertainty distribution of aξ is aΦ^{-1}(1 − α), and
    E[aξ] = ∫₀¹ aΦ^{-1}(1 − α) dα = aE[ξ].
Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that
    E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.
Example 2.14: Generally speaking, the expected value operator is not necessarily linear if the independence is not assumed. For example, take (Γ, L, M) to be {γ₁, γ₂, γ₃} with M{γ₁} = 0.7, M{γ₂} = 0.3 and M{γ₃} = 0.2. It follows from the extension theorem that M{γ₁, γ₂} = 0.8, M{γ₁, γ₃} = 0.7, M{γ₂, γ₃} = 0.3. Define two uncertain variables as follows,
    ξ(γ) = 1, if γ = γ₁          η(γ) = 0, if γ = γ₁
           0, if γ = γ₂                 2, if γ = γ₂
           2, if γ = γ₃,                3, if γ = γ₃.
Note that ξ and η are not independent, and their sum is
    (ξ + η)(γ) = 1, if γ = γ₁
                 2, if γ = γ₂
                 5, if γ = γ₃.
It is easy to verify that E[ξ] = 0.9, E[η] = 0.8, and E[ξ + η] = 1.9. Thus we have
    E[ξ + η] > E[ξ] + E[η].
Example 2.15: On the uncertainty space of Example 2.14, two uncertain variables are defined by
    ξ(γ) = 0, if γ = γ₁          η(γ) = 0, if γ = γ₁
           1, if γ = γ₂                 3, if γ = γ₂
           2, if γ = γ₃,                1, if γ = γ₃.
Then their sum is
    (ξ + η)(γ) = 0, if γ = γ₁
                 4, if γ = γ₂
                 3, if γ = γ₃.
It is easy to verify that E[ξ] = 0.5, E[η] = 0.9, and E[ξ + η] = 1.2. Thus we have
    E[ξ + η] < E[ξ] + E[η].
Comonotonic Functions of Uncertain Variable
Two real-valued functions f and g are said to be comonotonic if for any
numbers x and y, we always have
    (f(x) − f(y))(g(x) − g(y)) ≥ 0.   (2.152)
It is easy to verify that (i) any function is comonotonic with itself (or positive
constant multiple of the function); (ii) any monotone increasing functions are
comonotonic with each other; and (iii) any monotone decreasing functions are
also comonotonic with each other.
Theorem 2.34 (Yang [216]) Let f and g be comonotonic functions. Then for any uncertain variable ξ, we have
    E[f(ξ) + g(ξ)] = E[f(ξ)] + E[g(ξ)].   (2.153)
Proof: Let f(ξ) and g(ξ) have uncertainty distributions Φ and Ψ, respectively. Since f and g are comonotonic functions, at least one of the following relations is true,
    {f(ξ) ≤ Φ^{-1}(α)} ⊂ {g(ξ) ≤ Ψ^{-1}(α)},
    {f(ξ) ≤ Φ^{-1}(α)} ⊃ {g(ξ) ≤ Ψ^{-1}(α)}.
On the one hand, we have
    M{f(ξ) + g(ξ) ≤ Φ^{-1}(α) + Ψ^{-1}(α)} ≥ M{(f(ξ) ≤ Φ^{-1}(α)) ∩ (g(ξ) ≤ Ψ^{-1}(α))}
        = M{f(ξ) ≤ Φ^{-1}(α)} ∧ M{g(ξ) ≤ Ψ^{-1}(α)} = α ∧ α = α.
Hence the inverse uncertainty distribution of f(ξ) + g(ξ) is Φ^{-1}(α) + Ψ^{-1}(α), and then
    E[f(ξ) + g(ξ)] = ∫₀¹ (Φ^{-1}(α) + Ψ^{-1}(α)) dα = ∫₀¹ Φ^{-1}(α) dα + ∫₀¹ Ψ^{-1}(α) dα = E[f(ξ)] + E[g(ξ)].
Since a ≤ ξ(γ) ≤ b, we have
    ξ(γ) = ((b − ξ(γ))/(b − a)) a + ((ξ(γ) − a)/(b − a)) b.
It follows from the convexity of f that
    f(ξ(γ)) ≤ ((b − ξ(γ))/(b − a)) f(a) + ((ξ(γ) − a)/(b − a)) f(b).
Taking expected values on both sides, we obtain the inequality.
2.6 Variance
Let ξ be an uncertain variable with finite expected value e. Then the variance of ξ is defined by
    V[ξ] = E[(ξ − e)²].   (2.157)
This definition tells us that the variance is just the expected value of (ξ − e)². Since (ξ − e)² is a nonnegative uncertain variable, we also have
    V[ξ] = ∫₀^{+∞} M{(ξ − e)² ≥ r} dr.   (2.158)
    M{(ξ − e)² = 0} = 1.   (2.159)
That is, M{ξ = e} = 1. Conversely, assume M{ξ = e} = 1. Then we immediately have M{(ξ − e)² = 0} = 1 and M{(ξ − e)² ≥ r} = 0 for any r > 0. Thus
    V[ξ] = ∫₀^{+∞} M{(ξ − e)² ≥ r} dr = 0.
    V[ξ] = ∫₀^{+∞} M{(ξ ≥ e + √x) ∪ (ξ ≤ e − √x)} dx
         ≤ ∫₀^{+∞} (M{ξ ≥ e + √x} + M{ξ ≤ e − √x}) dx
         = ∫₀^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.
Thus we stipulate that the variance is
    V[ξ] = ∫₀^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.   (2.162)
Proof: This theorem is based on the stipulation (2.162) that says the variance is
    V[ξ] = ∫₀^{+∞} (1 − Φ(e + y)) dy² + ∫₀^{+∞} Φ(e − y) dy².
The change of variables and integration by parts give
    ∫₀^{+∞} (1 − Φ(e + y)) dy² = ∫ₑ^{+∞} (1 − Φ(x)) d(x − e)² = ∫ₑ^{+∞} (x − e)² dΦ(x)
and
    ∫₀^{+∞} Φ(e − y) dy² = ∫_{−∞}^{e} (x − e)² dΦ(x).
Thus
    V[ξ] = ∫ₑ^{+∞} (x − e)² dΦ(x) + ∫_{−∞}^{e} (x − e)² dΦ(x) = ∫_{−∞}^{+∞} (x − e)² dΦ(x).
Proof: Substituting Φ(x) with α and x with Φ^{-1}(α), it follows from the change of variables of integral and Theorem 2.39 that the variance is
    V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫₀¹ (Φ^{-1}(α) − e)² dα.
    V[ξ] = ∫₀¹ (f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) − e)² dα.
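The single-variable special case V[ξ] = ∫₀¹ (Φ^{-1}(α) − e)² dα can be evaluated numerically; this sketch (names are ours) checks it against the known variances (b − a)²/12 of L(a, b) and σ² of N(e, σ):

```python
import math

def variance(inv, e, n=100000):
    """V[xi] = integral over (0, 1) of (inv(alpha) - e)^2, midpoint rule."""
    return sum((inv((k + 0.5) / n) - e) ** 2 for k in range(n)) / n

# Linear L(1, 5): e = 3, and the integral gives (b - a)^2 / 12
lin = lambda a: 1 + 4 * a
assert math.isclose(variance(lin, 3.0), 16 / 12, abs_tol=1e-5)

# Normal N(2, 1.5): the integral gives sigma^2
e, sigma = 2.0, 1.5
nor = lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))
assert math.isclose(variance(nor, e), sigma ** 2, abs_tol=1e-3)
```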
2.7 Moments
If ξ is a nonnegative uncertain variable, then
    E[ξᵏ] = ∫₀^{+∞} M{ξᵏ ≥ x} dx = ∫₀^{+∞} M{ξ ≥ ᵏ√x} dx = ∫₀^{+∞} (1 − Φ(ᵏ√x)) dx.   (2.167)
If k is an odd number, then
    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√x)) dx − ∫_{−∞}^{0} Φ(ᵏ√x) dx.   (2.168)
When k is an even number,
    E[ξᵏ] = ∫₀^{+∞} M{ξᵏ ≥ x} dx = ∫₀^{+∞} M{(ξ ≥ ᵏ√x) ∪ (ξ ≤ −ᵏ√x)} dx
          ≤ ∫₀^{+∞} (M{ξ ≥ ᵏ√x} + M{ξ ≤ −ᵏ√x}) dx
          = ∫₀^{+∞} (1 − Φ(ᵏ√x) + Φ(−ᵏ√x)) dx.   (2.169)
Thus we stipulate that, for even k, the kth moment is
    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√x) + Φ(−ᵏ√x)) dx.
Proof: When k is an odd number, Theorem 2.43 says that the kth moment is
    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√y)) dy − ∫_{−∞}^{0} Φ(ᵏ√y) dy.
Substituting y = xᵏ, we get
    ∫₀^{+∞} (1 − Φ(ᵏ√y)) dy = ∫₀^{+∞} (1 − Φ(x)) dxᵏ = ∫₀^{+∞} xᵏ dΦ(x)
and
    ∫_{−∞}^{0} Φ(ᵏ√y) dy = ∫_{−∞}^{0} Φ(x) dxᵏ = −∫_{−∞}^{0} xᵏ dΦ(x).
Thus we have
    E[ξᵏ] = ∫₀^{+∞} xᵏ dΦ(x) + ∫_{−∞}^{0} xᵏ dΦ(x) = ∫_{−∞}^{+∞} xᵏ dΦ(x).
When k is an even number, the stipulation says
    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√y) + Φ(−ᵏ√y)) dy.
Substituting y = xᵏ, we get
    ∫₀^{+∞} (1 − Φ(ᵏ√y)) dy = ∫₀^{+∞} (1 − Φ(x)) dxᵏ = ∫₀^{+∞} xᵏ dΦ(x)
and
    ∫₀^{+∞} Φ(−ᵏ√y) dy = ∫_{−∞}^{0} xᵏ dΦ(x).
Thus
    E[ξᵏ] = ∫₀^{+∞} xᵏ dΦ(x) + ∫_{−∞}^{0} xᵏ dΦ(x) = ∫_{−∞}^{+∞} xᵏ dΦ(x).
    E[ξᵏ] = ∫₀¹ (Φ^{-1}(α))ᵏ dα.   (2.171)
Proof: Substituting Φ(x) with α and x with Φ^{-1}(α), it follows from the change of variables of integral and Theorem 2.44 that the kth moment is
    E[ξᵏ] = ∫_{−∞}^{+∞} xᵏ dΦ(x) = ∫₀¹ (Φ^{-1}(α))ᵏ dα.
Exercise 2.38: Show that the second moment of the normal uncertain variable N(e, σ) is
    E[ξ²] = e² + σ².   (2.173)
Theorem 2.46 (Sheng [194]) Assume ξ₁, ξ₂, ⋯, ξₙ are independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ⋯, Φₙ, respectively, and k is a positive integer. If f(x₁, x₂, ⋯, xₙ) is strictly increasing with respect to x₁, x₂, ⋯, xₘ and strictly decreasing with respect to x_{m+1}, x_{m+2}, ⋯, xₙ, then the kth moment of ξ = f(ξ₁, ξ₂, ⋯, ξₙ) is
    E[ξᵏ] = ∫₀¹ fᵏ(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) dα.
2.8 Entropy
    H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx   (2.174)
where S(t) = −t ln t − (1 − t) ln(1 − t). For example, a constant a has entropy
    H[ξ] = −∫_{−∞}^{a} (0 ln 0 + 1 ln 1) dx − ∫_{a}^{+∞} (1 ln 1 + 0 ln 0) dx = 0.
[Figure: the entropy function S(t) = −t ln t − (1 − t) ln(1 − t), which attains its maximum ln 2 at t = 0.5]
The normal uncertain variable N(e, σ) has entropy
    H[ξ] = πσ/√3.   (2.178)
Theorem 2.47 Let ξ be an uncertain variable. Then H[ξ] ≥ 0 and equality holds if ξ is essentially a constant.
Proof: The nonnegativity is clear. In addition, when an uncertain variable tends to a constant, its entropy tends to the minimum 0.
Theorem 2.48 Let ξ be an uncertain variable taking values on the interval [a, b]. Then
    H[ξ] ≤ (b − a) ln 2   (2.179)
and equality holds if ξ has an uncertainty distribution Φ(x) = 0.5 on [a, b].
Proof: The theorem follows from the fact that the function S(t) reaches its maximum ln 2 at t = 0.5.
    H[ξ] = ∫₀¹ Φ^{-1}(α) ln(α/(1 − α)) dα.   (2.181)
Proof: It is clear that S(α) is a derivable function with S′(α) = ln((1 − α)/α). Since
    S(Φ(x)) = ∫₀^{Φ(x)} S′(α) dα = −∫_{Φ(x)}^{1} S′(α) dα,
we have
    H[ξ] = ∫_{−∞}^{0} ∫₀^{Φ(x)} S′(α) dα dx − ∫₀^{+∞} ∫_{Φ(x)}^{1} S′(α) dα dx.
Exchanging the order of integration yields
    H[ξ] = −∫₀^{Φ(0)} Φ^{-1}(α) S′(α) dα − ∫_{Φ(0)}^{1} Φ^{-1}(α) S′(α) dα = −∫₀¹ Φ^{-1}(α) S′(α) dα = ∫₀¹ Φ^{-1}(α) ln(α/(1 − α)) dα.
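The entropy integral (2.181) is also easy to evaluate numerically. A minimal sketch (names are ours), checked against the known entropies (b − a)/2 of L(a, b) and πσ/√3 of N(e, σ):

```python
import math

def entropy(inv, n=100000):
    """H[xi] = integral over (0, 1) of inv(alpha) * ln(alpha / (1 - alpha))."""
    total = 0.0
    for k in range(n):
        a = (k + 0.5) / n
        total += inv(a) * math.log(a / (1 - a))
    return total / n

# Linear L(a, b): entropy (b - a) / 2
lo, hi = 1.0, 4.0
assert math.isclose(entropy(lambda a: lo + a * (hi - lo)), (hi - lo) / 2, abs_tol=1e-3)

# Normal N(e, sigma): entropy pi * sigma / sqrt(3), per (2.178)
e, sigma = 0.7, 1.2
nor = lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))
assert math.isclose(entropy(nor), math.pi * sigma / math.sqrt(3), abs_tol=1e-2)
```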
and strictly decreasing with respect to x_{m+1}, x_{m+2}, ⋯, xₙ, then the uncertain variable ξ = f(ξ₁, ξ₂, ⋯, ξₙ) has an entropy
    H[ξ] = ∫₀¹ f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) ln(α/(1 − α)) dα.
Proof: Since the function f(x₁, x₂, ⋯, xₙ) is strictly increasing with respect to x₁, x₂, ⋯, xₘ and strictly decreasing with respect to x_{m+1}, x_{m+2}, ⋯, xₙ, it follows from Theorem 2.19 that the inverse uncertainty distribution of ξ is
    Ψ^{-1}(α) = f(Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)).
It follows from Theorem 2.50 that
    H[ξ] = ∫₀¹ Ψ^{-1}(α) ln(α/(1 − α)) dα,
which proves the theorem. In particular, for independent and positive ξ and η with regular uncertainty distributions Φ and Ψ,
    H[ξη] = ∫₀¹ Φ^{-1}(α) Ψ^{-1}(α) ln(α/(1 − α)) dα.
Exercise 2.42: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that
    H[ξ/η] = ∫₀¹ (Φ^{-1}(α)/Ψ^{-1}(1 − α)) ln(α/(1 − α)) dα.
Exercise 2.43: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that
    H[ξ/(ξ + η)] = ∫₀¹ (Φ^{-1}(α)/(Φ^{-1}(α) + Ψ^{-1}(1 − α))) ln(α/(1 − α)) dα.
Positive Linearity of Entropy
Theorem 2.52 (Dai and Chen [24]) Let ξ and η be independent uncertain variables. Then for any real numbers a and b, we have
    H[aξ + bη] = |a|H[ξ] + |b|H[η].   (2.182)
Proof: Step 1: We first prove H[aξ] = |a|H[ξ]. If a ≥ 0, the inverse uncertainty distribution of aξ is aΦ^{-1}(α), and
    H[aξ] = ∫₀¹ aΦ^{-1}(α) ln(α/(1 − α)) dα = a ∫₀¹ Φ^{-1}(α) ln(α/(1 − α)) dα = |a|H[ξ].
If a < 0, the inverse uncertainty distribution of aξ is aΦ^{-1}(1 − α), and
    H[aξ] = ∫₀¹ aΦ^{-1}(1 − α) ln(α/(1 − α)) dα = (−a) ∫₀¹ Φ^{-1}(α) ln(α/(1 − α)) dα = |a|H[ξ].
Thus we always have H[aξ] = |a|H[ξ].
Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the inverse uncertainty distribution of ξ + η is
    Υ^{-1}(α) = Φ^{-1}(α) + Ψ^{-1}(α).
It follows from Theorem 2.50 that
    H[ξ + η] = ∫₀¹ (Φ^{-1}(α) + Ψ^{-1}(α)) ln(α/(1 − α)) dα = H[ξ] + H[η].
Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that
    H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].
The theorem is proved.
Maximum Entropy Principle
Given some constraints, for example, expected value and variance, there are
usually multiple compatible uncertainty distributions. Which uncertainty
distribution shall we take? The maximum entropy principle attempts to
select the uncertainty distribution that has maximum entropy and satisfies
the prescribed constraints.
Theorem 2.53 (Chen and Dai [12]) Let ξ be an uncertain variable whose uncertainty distribution is arbitrary but whose expected value is e and variance σ². Then
    H[ξ] ≤ πσ/√3   (2.183)
and the equality holds if ξ is a normal uncertain variable N(e, σ).
Proof: Let Φ(x) be the uncertainty distribution of ξ and write Ψ(x) = Φ(2e − x) for x ≥ e. It follows from the stipulation (2.162) and the change of variable of integral that the variance is
    V[ξ] = 2∫ₑ^{+∞} (x − e)(1 − Φ(x)) dx + 2∫ₑ^{+∞} (x − e)Ψ(x) dx = σ².
Thus there exists a real number α ∈ [0, 1] such that
    2∫ₑ^{+∞} (x − e)(1 − Φ(x)) dx = ασ²,    2∫ₑ^{+∞} (x − e)Ψ(x) dx = (1 − α)σ².
The entropy is maximized when
    Φ(x) = (1 + exp(π(e − x)/(√(6α) σ)))^{-1},
    Ψ(x) = (1 + exp(π(x − e)/(√(6(1 − α)) σ)))^{-1}.
Then the entropy is
    H[ξ] = πσ√α/√6 + πσ√(1 − α)/√6
which achieves the maximum when α = 1/2. Thus the maximum entropy distribution is just the normal uncertainty distribution N(e, σ).
2.9 Distance
Definition 2.20 (Liu [113]) The distance between uncertain variables ξ and η is defined as
    d(ξ, η) = E[|ξ − η|].   (2.184)
That is, the distance between ξ and η is just the expected value of |ξ − η|. Since |ξ − η| is a nonnegative uncertain variable, we always have
    d(ξ, η) = ∫₀^{+∞} M{|ξ − η| ≥ r} dr.   (2.185)
Theorem 2.54 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(τ, η).
Proof: The parts (a), (b) and (c) follow immediately from the definition. Now we prove the part (d). It follows from the subadditivity axiom that
    d(ξ, η) = ∫₀^{+∞} M{|ξ − η| ≥ r} dr ≤ ∫₀^{+∞} M{|ξ − τ| + |τ − η| ≥ r} dr ≤ ∫₀^{+∞} (M{|ξ − τ| ≥ r/2} + M{|τ − η| ≥ r/2}) dr = 2d(ξ, τ) + 2d(τ, η).
For example, take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, γ₃} and define uncertain variables ξ, η and τ with τ(γ) ≡ 0. It is easy to verify that d(ξ, η) = d(η, τ) = 1/2 and d(ξ, τ) = 3/2. Thus
    d(ξ, τ) = (3/2)(d(ξ, η) + d(η, τ)).
A conjecture is d(ξ, τ) ≤ 1.5(d(ξ, η) + d(η, τ)) for arbitrary uncertain variables ξ, η and τ. This is an open problem.
    d(ξ, η) = ∫₀^{+∞} M{(ξ − η ≥ x) ∪ (ξ − η ≤ −x)} dx
            ≤ ∫₀^{+∞} (M{ξ − η ≥ x} + M{ξ − η ≤ −x}) dx
            = ∫₀^{+∞} (1 − Φ(x) + Φ(−x)) dx,
where Φ is the uncertainty distribution of ξ − η. Thus we stipulate
    d(ξ, η) = ∫₀^{+∞} (1 − Φ(x) + Φ(−x)) dx.   (2.186)
Mention that (2.186) is a stipulation rather than a precise formula! Furthermore, substituting Φ(x) with α and x with Φ^{-1}(α), the change of variables and integration by parts produce
    ∫₀^{+∞} (1 − Φ(x)) dx = ∫_{Φ(0)}^{1} Φ^{-1}(α) dα
and
    ∫₀^{+∞} Φ(−x) dx = −∫₀^{Φ(0)} Φ^{-1}(α) dα.
Hence
    d(ξ, η) = ∫_{Φ(0)}^{1} Φ^{-1}(α) dα − ∫₀^{Φ(0)} Φ^{-1}(α) dα = ∫₀¹ |Φ^{-1}(α)| dα
where Φ and Φ^{-1} are the uncertainty distribution and inverse uncertainty distribution of ξ − η, respectively.
2.10 Inequalities
Theorem 2.55 (Liu [113]) Let ξ be an uncertain variable, and f a nonnegative function. If f is even and increasing on [0, ∞), then for any given number t > 0, we have
    M{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).   (2.188)
Proof: It follows from the nonnegativity of f(ξ) that
    E[f(ξ)] = ∫₀^{+∞} M{f(ξ) ≥ r} dr = ∫₀^{+∞} M{|ξ| ≥ f^{-1}(r)} dr ≥ ∫₀^{f(t)} M{|ξ| ≥ f^{-1}(r)} dr ≥ ∫₀^{f(t)} dr · M{|ξ| ≥ f^{-1}(f(t))} = f(t) · M{|ξ| ≥ t}
which proves the inequality.
Theorem 2.56 (Liu [113], Markov Inequality) Let ξ be an uncertain variable. Then for any given numbers t > 0 and p > 0, we have
    M{|ξ| ≥ t} ≤ E[|ξ|ᵖ] / tᵖ.   (2.189)
the function f(x, y) = x^{1/p} y^{1/q} is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that
    f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀),  for all x ≥ 0, y ≥ 0.
Similarly, the function f(x, y) = (x^{1/p} + y^{1/p})ᵖ is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that
    f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀),  for all x ≥ 0, y ≥ 0.
Theorem 2.60 (Liu [113], Jensen's Inequality) Let ξ be an uncertain variable, and f a convex function. If E[ξ] and E[f(ξ)] are finite, then
    f(E[ξ]) ≤ E[f(ξ)].   (2.193)
2.11 Sequence Convergence
This section introduces four convergence concepts of uncertain sequence: convergence almost surely (a.s.), convergence in measure, convergence in mean,
and convergence in distribution.
Table 2.1: Relationship among Convergence Concepts
    Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution
Definition 2.21 (Liu [113]) Suppose that ξ, ξ₁, ξ₂, ⋯ are uncertain variables defined on the uncertainty space (Γ, L, M). The sequence {ξᵢ} is said to be convergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that
    lim_{i→∞} |ξᵢ(γ) − ξ(γ)| = 0   (2.195)
for every γ ∈ Λ.
Definition 2.22 (Liu [113]) The sequence {ξᵢ} is said to converge in measure to ξ if
    lim_{i→∞} M{|ξᵢ − ξ| ≥ ε} = 0   (2.196)
for every ε > 0.
Definition 2.23 (Liu [113]) Suppose that ξ, ξ₁, ξ₂, ⋯ are uncertain variables with finite expected values. We say that the sequence {ξᵢ} converges in mean to ξ if
    lim_{i→∞} E[|ξᵢ − ξ|] = 0.   (2.197)
Definition 2.24 (Liu [113]) Suppose that Φ, Φ₁, Φ₂, ⋯ are the uncertainty distributions of uncertain variables ξ, ξ₁, ξ₂, ⋯, respectively. We say that {ξᵢ} converges in distribution to ξ if
    lim_{i→∞} Φᵢ(x) = Φ(x)   (2.198)
at any continuity point x of Φ.
    M{Λ} = sup_{γᵢ∈Λ} 1/i,       if sup_{γᵢ∈Λ} 1/i < 0.5
           1 − sup_{γᵢ∉Λ} 1/i,  if sup_{γᵢ∉Λ} 1/i < 0.5
           0.5,                  otherwise.
Then, for any small number ε > 0,
    M{|ξᵢ − ξ| ≥ ε} = 1/i → 0.
It follows from (2.198) and (2.199) that Φᵢ(x) → Φ(x). The theorem is proved.
Example 2.23: Convergence in distribution does not imply convergence in measure. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂} with M{γ₁} = M{γ₂} = 1/2. We define an uncertain variable as
    ξ(γ) = −1, if γ = γ₁
            1, if γ = γ₂.
We also define ξᵢ = −ξ for i = 1, 2, ⋯ Then ξᵢ and ξ have the same uncertainty distribution. Thus {ξᵢ} converges in distribution to ξ. However, for some small number ε > 0, we have
    M{|ξᵢ − ξ| ≥ ε} = M{Γ} = 1.
That is, the sequence {ξᵢ} does not converge in measure to ξ.
Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, ⋯} with
    M{Λ} = sup_{γᵢ∈Λ} i/(2i + 1),       if sup_{γᵢ∈Λ} i/(2i + 1) < 0.5
           1 − sup_{γᵢ∉Λ} i/(2i + 1),  if sup_{γᵢ∉Λ} i/(2i + 1) < 0.5
           0.5,                          otherwise.
Then we define uncertain variables as
    ξᵢ(γⱼ) = i, if j = i
             0, otherwise
for i = 1, 2, ⋯ and ξ ≡ 0. The sequence {ξᵢ} converges a.s. to ξ. However, for some small number ε > 0, we have
    M{|ξᵢ − ξ| ≥ ε} = M{γᵢ} = i/(2i + 1) → 1/2.
That is, the sequence {ξᵢ} does not converge in measure to ξ.
    E[|ξᵢ − ξ|] = 1/2ʲ → 0.
That is, the sequence {ξᵢ} converges in mean to ξ. However, for any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2ʲ, (k + 1)/2ʲ] containing γ. Thus ξᵢ(γ) does not converge to 0. In other words, the sequence {ξᵢ} does not converge a.s. to ξ.
Convergence Almost Surely vs. Convergence in Distribution
Example 2.28: Convergence in distribution does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂} with M{γ₁} = M{γ₂} = 1/2. We define an uncertain variable as
    ξ(γ) = −1, if γ = γ₁
            1, if γ = γ₂.
We also define ξᵢ = −ξ for i = 1, 2, ⋯ Then ξᵢ and ξ have the same uncertainty distribution. Thus {ξᵢ} converges in distribution to ξ. However, the sequence {ξᵢ} does not converge a.s. to ξ.
Example 2.29: Convergence a.s. does not imply convergence in distribution. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, ⋯} with
    M{Λ} = sup_{γᵢ∈Λ} i/(2i + 1),       if sup_{γᵢ∈Λ} i/(2i + 1) < 0.5
           1 − sup_{γᵢ∉Λ} i/(2i + 1),  if sup_{γᵢ∉Λ} i/(2i + 1) < 0.5
           0.5,                          otherwise.
The uncertainty distributions of ξᵢ are
    Φᵢ(x) = 0,                  if x < 0
            (i + 1)/(2i + 1),  if 0 ≤ x < i
            1,                  if x ≥ i
for i = 1, 2, ⋯, respectively. The uncertainty distribution of ξ is
    Φ(x) = 0, if x < 0
           1, if x ≥ 0.
It is clear that Φᵢ(x) does not converge to Φ(x) at x > 0. That is, the sequence {ξᵢ} does not converge in distribution to ξ.
2.12 Conditional Uncertainty Distribution

The conditional uncertainty distribution Φ of an uncertain variable ξ given an event A with M{A} > 0 is defined by
    Φ(x | A) = M{ξ ≤ x | A}.   (2.201)
    Φ(x | (t, +∞)) = 0,                             if Φ(x) ≤ Φ(t)
                     (Φ(x)/(1 − Φ(t))) ∧ 0.5,       if Φ(t) < Φ(x) ≤ (1 + Φ(t))/2
                     (Φ(x) − Φ(t))/(1 − Φ(t)),      if (1 + Φ(t))/2 ≤ Φ(x).
Proof: It follows from Φ(x | (t, +∞)) = M{ξ ≤ x | ξ > t} and the definition of conditional uncertainty that
    M{ξ ≤ x | ξ > t} = M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t},      if M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} < 0.5
                       1 − M{(ξ > x) ∩ (ξ > t)}/M{ξ > t},  if M{(ξ > x) ∩ (ξ > t)}/M{ξ > t} < 0.5
                       0.5,                                  otherwise.
When Φ(x) ≤ Φ(t), we have M{(ξ ≤ x) ∩ (ξ > t)} = 0 and hence Φ(x | (t, +∞)) = 0. When Φ(t) < Φ(x) ≤ (1 + Φ(t))/2, we have
    M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} ≤ Φ(x)/(1 − Φ(t))
and hence Φ(x | (t, +∞)) = (Φ(x)/(1 − Φ(t))) ∧ 0.5. When (1 + Φ(t))/2 ≤ Φ(x), we have
    M{(ξ > x) ∩ (ξ > t)}/M{ξ > t} = (1 − Φ(x))/(1 − Φ(t)) ≤ 0.5.
Thus
    Φ(x | (t, +∞)) = 1 − (1 − Φ(x))/(1 − Φ(t)) = (Φ(x) − Φ(t))/(1 − Φ(t)).
For example, for the linear uncertain variable L(a, b) with a ≤ t < b,
    Φ(x | (t, +∞)) = 0,                         if x ≤ t
                     ((x − a)/(b − t)) ∧ 0.5,   if t < x ≤ (b + t)/2
                     ((x − t)/(b − t)) ∧ 1,     if (b + t)/2 ≤ x.
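The three branches of the conditional distribution can be checked numerically; this sketch (function names are ours) verifies continuity at the branch boundary (b + t)/2 for a linear variable:

```python
import math

def linear_cdf(a, b):
    """Uncertainty distribution of the linear uncertain variable L(a, b)."""
    def phi(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return phi

def conditional_given_gt(phi, t):
    """Phi(x | (t, +inf)) per the theorem above."""
    pt = phi(t)
    def cond(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return cond

cond = conditional_given_gt(linear_cdf(0.0, 4.0), 1.0)
assert cond(1.0) == 0.0                   # no conditional mass at or below t
assert math.isclose(cond(2.5), 0.5)       # the branches meet at x = (b + t) / 2
assert math.isclose(cond(4.0), 1.0)       # reaches 1 at x = b
```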
Theorem 2.64 Let ξ be an uncertain variable with uncertainty distribution Φ(x), and t a real number with Φ(t) > 0. Then the conditional uncertainty distribution of ξ given ξ ≤ t is
[Figure: the conditional uncertainty distribution Φ(x | (t, +∞))]
    Φ(x | (−∞, t]) = Φ(x)/Φ(t),                           if Φ(x) ≤ Φ(t)/2
                     ((Φ(x) + Φ(t) − 1)/Φ(t)) ∨ 0.5,      if Φ(t)/2 ≤ Φ(x) < Φ(t)
                     1,                                    if Φ(t) ≤ Φ(x).
Proof: It follows from Φ(x | (−∞, t]) = M{ξ ≤ x | ξ ≤ t} and the definition of conditional uncertainty that
    M{ξ ≤ x | ξ ≤ t} = M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t},      if M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t} < 0.5
                       1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t},  if M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} < 0.5
                       0.5,                                  otherwise.
When Φ(x) ≤ Φ(t)/2, we have
    M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = Φ(x)/Φ(t) ≤ 0.5
and hence
    Φ(x | (−∞, t]) = Φ(x)/Φ(t).
When Φ(t)/2 ≤ Φ(x) < Φ(t), we have
    M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} ≤ (1 − Φ(x))/Φ(t),
i.e.,
    1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} ≥ (Φ(x) + Φ(t) − 1)/Φ(t),
and hence
    Φ(x | (−∞, t]) = ((Φ(x) + Φ(t) − 1)/Φ(t)) ∨ 0.5.
Finally, when Φ(t) ≤ Φ(x), we have Φ(x | (−∞, t]) = 1.
For example, for the linear uncertain variable L(a, b) with a < t ≤ b,
    Φ(x | (−∞, t]) = (x − a)/(t − a),                if x ≤ (a + t)/2
                     (1 − (b − x)/(t − a)) ∨ 0.5,    if (a + t)/2 ≤ x < t
                     1,                               if x ≥ t.
2.13 Uncertain Vector
[Figure: the conditional uncertainty distribution Φ(x | (−∞, t])]
    {ξ ∈ (a₁, b₁) × (a₂, b₂) × ⋯ × (aₖ, bₖ)} = ⋂_{i=1}^{k} {ξᵢ ∈ (aᵢ, bᵢ)}
is an event. Next, the class B is a σ-algebra over ℜᵏ because (i) we have ℜᵏ ∈ B since {ξ ∈ ℜᵏ} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and
    {ξ ∈ Bᶜ} = {ξ ∈ B}ᶜ
is an event. This means that Bᶜ ∈ B; (iii) if Bᵢ ∈ B for i = 1, 2, ⋯, then {ξ ∈ Bᵢ} are events and
    {ξ ∈ ⋃_{i=1}^{∞} Bᵢ} = ⋃_{i=1}^{∞} {ξ ∈ Bᵢ}
is an event. This means that ⋃ᵢ Bᵢ ∈ B. Since the smallest σ-algebra containing all open intervals of ℜᵏ is just the Borel algebra over ℜᵏ, the class B contains all k-dimensional Borel sets. The theorem is proved.
Definition 2.27 (Liu [113]) The joint uncertainty distribution Φ of an uncertain vector (ξ₁, ξ₂, ⋯, ξₖ) is defined by
    Φ(x₁, x₂, ⋯, xₖ) = M{ξ₁ ≤ x₁, ξ₂ ≤ x₂, ⋯, ξₖ ≤ xₖ}   (2.202)
for any real numbers x₁, x₂, ⋯, xₖ.
    M{⋃_{i=1}^{n} (ξᵢ ∈ Bᵢ)} = 1 − M{⋂_{i=1}^{n} (ξᵢ ∈ Bᵢᶜ)} = 1 − ⋀_{i=1}^{n} M{ξᵢ ∈ Bᵢᶜ} = ⋁_{i=1}^{n} M{ξᵢ ∈ Bᵢ}.
    M{⋂_{i=1}^{n} (ξᵢ ∈ fᵢ^{-1}(Bᵢ))} = ⋀_{i=1}^{n} M{ξᵢ ∈ fᵢ^{-1}(Bᵢ)} = ⋀_{i=1}^{n} M{fᵢ(ξᵢ) ∈ Bᵢ}.
    Φ(x₁, x₂, ⋯, xₘ) = (1 + exp(−π(x₁ ∧ x₂ ∧ ⋯ ∧ xₘ)/√3))^{-1}   (2.207)
for any real numbers x₁, x₂, ⋯, xₘ, and
    lim_{(x₁,x₂,⋯,xₘ)→+∞} Φ(x₁, x₂, ⋯, xₘ) = 1.   (2.208)
Definition 2.30 (Liu [132]) Let (η₁, η₂, ⋯, ηₘ) be a standard normal uncertain vector, and let eᵢ, σᵢⱼ, i = 1, 2, ⋯, k, j = 1, 2, ⋯, m be real numbers. Define
    ξᵢ = eᵢ + Σ_{j=1}^{m} σᵢⱼ ηⱼ   (2.211)
for i = 1, 2, ⋯, k. In matrix form,
    ξ = e + ση   (2.212)
for some real vector e and some real matrix σ, where η is a standard normal uncertain vector. Please also note that for every index i, ξᵢ is a normal uncertain variable with expected value eᵢ and standard deviation
    σᵢ = Σ_{j=1}^{m} |σᵢⱼ|.   (2.213)
2.14 Bibliographic Notes
Chapter 3
Uncertain Programming
Uncertain programming is a type of mathematical programming involving
uncertain variables. This chapter will provide a theory of uncertain programming, and present some uncertain programming models for machine
scheduling problem, vehicle routing problem, and project scheduling problem.
3.1 Uncertain Programming
    min_x E[f(x, ξ)]
    subject to:
        M{gⱼ(x, ξ) ≤ 0} ≥ αⱼ,  j = 1, 2, ⋯, p.   (3.3)
Definition 3.1 (Liu [115]) A vector x is called a feasible solution to the uncertain programming model (3.3) if
    M{gⱼ(x, ξ) ≤ 0} ≥ αⱼ   (3.4)
for j = 1, 2, ⋯, p.
Definition 3.2 (Liu [115]) A feasible solution x* is called an optimal solution to the uncertain programming model (3.3) if
    E[f(x*, ξ)] ≤ E[f(x, ξ)]   (3.5)
for any feasible solution x.
    E[f(x, ξ₁, ξ₂, ⋯, ξₙ)] = ∫₀¹ f(x, Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) dα.   (3.6)
    Σ_{i=1}^{n} hᵢ⁺(x) Φᵢ^{-1}(α) − Σ_{i=1}^{n} hᵢ⁻(x) Φᵢ^{-1}(1 − α) ≤ h₀(x)   (3.15)
where
    hᵢ⁺(x) = hᵢ(x) ∨ 0   (3.16)
    hᵢ⁻(x) = (−hᵢ(x)) ∨ 0   (3.17)
for i = 1, 2, ⋯, n.
Theorem 3.3 Assume f(x, ξ₁, ξ₂, ⋯, ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ⋯, ξₘ and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, ⋯, ξₙ, and gⱼ(x, ξ₁, ξ₂, ⋯, ξₙ) are strictly increasing with respect to ξ₁, ξ₂, ⋯, ξₖ and strictly decreasing with respect to ξ_{k+1}, ξ_{k+2}, ⋯, ξₙ for j = 1, 2, ⋯, p. If ξ₁, ξ₂, ⋯, ξₙ are independent uncertain variables with uncertainty distributions Φ₁, Φ₂, ⋯, Φₙ, respectively, then the uncertain programming
    min_x E[f(x, ξ₁, ξ₂, ⋯, ξₙ)]
    subject to:
        M{gⱼ(x, ξ₁, ξ₂, ⋯, ξₙ) ≤ 0} ≥ αⱼ,  j = 1, 2, ⋯, p   (3.18)
is equivalent to the crisp mathematical programming
    min_x ∫₀¹ f(x, Φ₁^{-1}(α), ⋯, Φₘ^{-1}(α), Φ_{m+1}^{-1}(1 − α), ⋯, Φₙ^{-1}(1 − α)) dα
    subject to:
        gⱼ(x, Φ₁^{-1}(αⱼ), ⋯, Φₖ^{-1}(αⱼ), Φ_{k+1}^{-1}(1 − αⱼ), ⋯, Φₙ^{-1}(1 − αⱼ)) ≤ 0,  j = 1, 2, ⋯, p.
Proof: It follows from Theorems 3.1 and 3.2 immediately.
3.2 Numerical Method
When the objective functions and constraint functions are monotone with
respect to the uncertain parameters, the uncertain programming model may
be converted to a crisp mathematical programming.
It is fortunate for us that almost all objective and constraint functions
in practical problems are indeed monotone with respect to the uncertain
parameters (not decision variables).
From the mathematical viewpoint, there is no difference between crisp
mathematical programming and classical mathematical programming except
for an integral. Thus we may solve it by simplex method, branch-and-bound
method, cutting plane method, implicit enumeration method, interior point
method, gradient method, genetic algorithm, particle swarm optimization,
neural networks, tabu search, and so on.
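As a toy illustration of this conversion (the model and all data below are ours, not the book's): choose an order quantity x at unit cost 1 against uncertain demand ξ ~ L(0, 2) with a shortage penalty of 3 per unit. Since max(ξ − x, 0) is increasing in ξ, the expected cost has the crisp form g(x) = x + 3 ∫₀¹ max(Φ^{-1}(α) − x, 0) dα, which any classical method can minimize; a plain grid scan suffices here.

```python
inv = lambda a: 2 * a      # inverse uncertainty distribution of the demand L(0, 2)

def crisp_cost(x, n=500):
    """Midpoint-rule evaluation of the crisp objective g(x)."""
    shortage = sum(max(inv((k + 0.5) / n) - x, 0.0) for k in range(n)) / n
    return x + 3 * shortage

# Grid scan over the decision variable; the analytic minimizer is x = 4/3.
best_x = min((i / 250 for i in range(501)), key=crisp_cost)
assert abs(best_x - 4 / 3) < 0.01
```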
Example 3.1: Assume that x₁, x₂, x₃ are nonnegative decision variables, ξ₁, ξ₂, ξ₃ are independent linear uncertain variables L(1, 2), L(2, 3), L(3, 4), and η₁, η₂, η₃ are independent zigzag uncertain variables Z(1, 2, 3), Z(2, 3, 4), Z(3, 4, 5), respectively. Consider the uncertain programming,
    max_{x₁,x₂,x₃} E[√(x₁ + ξ₁) + √(x₂ + ξ₂) + √(x₃ + ξ₃)]
    subject to:
        M{(x₁ + η₁)² + (x₂ + η₂)² + (x₃ + η₃)² ≤ 100} ≥ 0.9
        x₁, x₂, x₃ ≥ 0.
It is equivalent to the crisp mathematical programming
    max_{x₁,x₂,x₃} ∫₀¹ (√(x₁ + Φ₁^{-1}(α)) + √(x₂ + Φ₂^{-1}(α)) + √(x₃ + Φ₃^{-1}(α))) dα
    subject to:
        (x₁ + Ψ₁^{-1}(0.9))² + (x₂ + Ψ₂^{-1}(0.9))² + (x₃ + Ψ₃^{-1}(0.9))² ≤ 100
        x₁, x₂, x₃ ≥ 0
where Φ₁^{-1}, Φ₂^{-1}, Φ₃^{-1}, Ψ₁^{-1}, Ψ₂^{-1}, Ψ₃^{-1} are inverse uncertainty distributions of uncertain variables ξ₁, ξ₂, ξ₃, η₁, η₂, η₃, respectively. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and obtain an optimal solution.
Example 3.2: Consider the uncertain programming,

min_{x_1,x_2} E[ x_1 sin(x_1 − ξ_1) − x_2 cos(x_2 + ξ_2) ]
subject to:
0 ≤ x_1, 0 ≤ x_2.

It is clear that x_1 sin(x_1 − ξ_1) − x_2 cos(x_2 + ξ_2) is strictly decreasing with respect to ξ_1 and strictly increasing with respect to ξ_2. Thus the uncertain programming is equivalent to the crisp model,

min_{x_1,x_2} ∫_0^1 ( x_1 sin(x_1 − Φ_1^{-1}(1−α)) − x_2 cos(x_2 + Φ_2^{-1}(α)) ) dα
subject to:
0 ≤ x_1, 0 ≤ x_2

where Φ_1^{-1}, Φ_2^{-1} are inverse uncertainty distributions of ξ_1, ξ_2, respectively. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and obtain an optimal solution.
3.3 Machine Scheduling Problem

Figure 3.1: A Machine Schedule (Gantt chart of jobs 1–7 on machines over time, with the makespan marked)

In a schedule represented by the decision vectors x and y, the jobs processed by the machines are as follows:

Machine 1: x_{y_0+1} → x_{y_0+2} → ⋯ → x_{y_1};
Machine 2: x_{y_1+1} → x_{y_1+2} → ⋯ → x_{y_2};
⋯
Machine m: x_{y_{m−1}+1} → x_{y_{m−1}+2} → ⋯ → x_{y_m}.    (3.19)

Figure 3.2: Formulation of Schedule in which Machine M-1 processes Jobs 1, 2, Machine M-2 processes Jobs 3, 4, and Machine M-3 processes Jobs 5, 6, 7
Completion Times

Let C_i(x, y, ξ) be the completion time of job i, i = 1, 2, …, n, respectively. For each k with 1 ≤ k ≤ m, if the machine k is used (i.e., y_k > y_{k−1}), then we have

C_{x_{y_{k−1}+1}}(x, y, ξ) = ξ_{x_{y_{k−1}+1} k}    (3.20)

and

C_{x_{y_{k−1}+j}}(x, y, ξ) = C_{x_{y_{k−1}+j−1}}(x, y, ξ) + ξ_{x_{y_{k−1}+j} k}    (3.21)

for 2 ≤ j ≤ y_k − y_{k−1}.

If the machine k is used, then the completion time C_{x_{y_{k−1}+1}}(x, y, ξ) of job x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is

Ψ_{x_{y_{k−1}+1}}^{-1}(x, y, α) = Φ_{x_{y_{k−1}+1} k}^{-1}(α).    (3.22)

Generally, suppose the completion time C_{x_{y_{k−1}+j−1}}(x, y, ξ) has an inverse uncertainty distribution Ψ_{x_{y_{k−1}+j−1}}^{-1}(x, y, α). Then the completion time C_{x_{y_{k−1}+j}}(x, y, ξ) has an inverse uncertainty distribution

Ψ_{x_{y_{k−1}+j}}^{-1}(x, y, α) = Ψ_{x_{y_{k−1}+j−1}}^{-1}(x, y, α) + Φ_{x_{y_{k−1}+j} k}^{-1}(α).    (3.23)
Makespan

Note that, for each k (1 ≤ k ≤ m), the value C_{x_{y_k}}(x, y, ξ) is just the time that the machine k finishes all jobs assigned to it. Thus the makespan of the schedule (x, y) is determined by

f(x, y, ξ) = max_{1≤k≤m} C_{x_{y_k}}(x, y, ξ)    (3.24)

whose inverse uncertainty distribution is

Ψ^{-1}(x, y, α) = max_{1≤k≤m} Ψ_{x_{y_k}}^{-1}(x, y, α).    (3.25)

In order to minimize the expected makespan, the machine scheduling model is

min_{x,y} E[f(x, y, ξ)]
subject to:
1 ≤ x_i ≤ n, i = 1, 2, …, n
x_i ≠ x_j, i ≠ j, i, j = 1, 2, …, n
0 ≤ y_1 ≤ y_2 ≤ ⋯ ≤ y_{m−1} ≤ n
x_i, y_j, i = 1, 2, …, n, j = 1, 2, …, m − 1, integers.    (3.26)

Since Ψ^{-1}(x, y, α) is the inverse uncertainty distribution of f(x, y, ξ), the machine scheduling model is simplified as follows,

min_{x,y} ∫_0^1 Ψ^{-1}(x, y, α) dα
subject to:
1 ≤ x_i ≤ n, i = 1, 2, …, n
x_i ≠ x_j, i ≠ j, i, j = 1, 2, …, n
0 ≤ y_1 ≤ y_2 ≤ ⋯ ≤ y_{m−1} ≤ n
x_i, y_j, i = 1, 2, …, n, j = 1, 2, …, m − 1, integers.    (3.27)
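For a fixed schedule (x, y), the objective of this crisp model can be computed directly from the recursion for the inverse distributions. The sketch below assumes the linear processing times ξ_{ik} ~ L(i, i + k) used in the numerical experiment that follows; the function names and the midpoint-rule integration are ours, not the toolbox's.

```python
# Sketch of the makespan recursion: the inverse distribution of the makespan
# is the max over machines of the summed processing-time inverse distributions.

def lin_inv(a, b, alpha):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return a + (b - a) * alpha

def makespan_inv(x, y, alpha, m=3):
    """Psi^{-1}(x, y, alpha) for a schedule (x, y) with m machines."""
    cuts = [0] + list(y) + [len(x)]
    worst = 0.0
    for k in range(1, m + 1):                   # machine k
        total = 0.0
        for j in range(cuts[k - 1], cuts[k]):   # jobs x_{y_{k-1}+1}, ..., x_{y_k}
            i = x[j]                            # job index
            total += lin_inv(i, i + k, alpha)   # Phi_{ik}^{-1}(alpha), assumed L(i, i+k)
        worst = max(worst, total)               # equation (3.24)
    return worst

def expected_makespan(x, y, steps=2000):
    """Objective of (3.27): integral of Psi^{-1}(x, y, alpha) over [0, 1]."""
    h = 1.0 / steps
    return sum(makespan_inv(x, y, (i + 0.5) * h) for i in range(steps)) * h

x_star, y_star = (1, 4, 5, 3, 7, 2, 6), (3, 5)
print(expected_makespan(x_star, y_star))   # ≈ 12.0 for this schedule
```

An integer search over permutations x and cut points y around this evaluator then yields the optimal schedule.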
Numerical Experiment

Assume that there are 3 machines and 7 jobs with the following linear uncertain processing times,

ξ_{ik} ∼ L(i, i + k),  i = 1, 2, …, 7, k = 1, 2, 3

where i is the index of jobs and k is the index of machines. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution is

x* = (1, 4, 5, 3, 7, 2, 6),  y* = (3, 5).    (3.28)
3.4 Vehicle Routing Problem

Figure 3.3: A Vehicle Routing Plan with Single Depot and 7 Customers

Due to its wide applicability and economic importance, the vehicle routing problem has been extensively studied. Liu [120] first introduced uncertainty theory into the research area of vehicle routing in 2010. In this section, the vehicle routing problem will be modelled by uncertain programming in which the travel times are assumed to be uncertain variables with known uncertainty distributions.

We assume that (a) a vehicle will be assigned to only one route, on which there may be more than one customer; (b) a customer will be visited by one and only one vehicle; (c) each route begins and ends at the depot; and (d) each customer specifies a time window within which the delivery is permitted or preferred to start.
Let us first introduce the following indices and model parameters:

i = 0: depot;
i = 1, 2, …, n: customers;
k = 1, 2, …, m: vehicles;
D_{ij}: travel distance from customer i to customer j, i, j = 0, 1, 2, …, n;
T_{ij}: uncertain travel time from customer i to customer j, i, j = 0, 1, 2, …, n;
Φ_{ij}: uncertainty distribution of T_{ij}, i, j = 0, 1, 2, …, n;
[a_i, b_i]: time window of customer i, i = 1, 2, …, n.
Operational Plan

Liu [105] suggested that an operational plan should be represented by three decision vectors x, y and t, where

x = (x_1, x_2, …, x_n): integer decision vector representing n customers with 1 ≤ x_i ≤ n and x_i ≠ x_j for all i ≠ j, i, j = 1, 2, …, n. That is, the sequence {x_1, x_2, …, x_n} is a rearrangement of {1, 2, …, n};

y = (y_1, y_2, …, y_{m−1}): integer decision vector with y_0 ≡ 0 ≤ y_1 ≤ y_2 ≤ ⋯ ≤ y_{m−1} ≤ n ≡ y_m;

t = (t_1, t_2, …, t_m): each t_k represents the starting time of vehicle k at the depot, k = 1, 2, …, m.

We note that the operational plan is fully determined by the decision vectors x, y and t in the following way. For each k (1 ≤ k ≤ m), if y_k = y_{k−1}, then vehicle k is not used; if y_k > y_{k−1}, then vehicle k is used and starts from the depot at time t_k, and the tour of vehicle k is 0 → x_{y_{k−1}+1} → x_{y_{k−1}+2} → ⋯ → x_{y_k} → 0. Thus the tours of all vehicles are as follows:

Vehicle 1: 0 → x_{y_0+1} → x_{y_0+2} → ⋯ → x_{y_1} → 0;
Vehicle 2: 0 → x_{y_1+1} → x_{y_1+2} → ⋯ → x_{y_2} → 0;
⋯
Figure 3.4: Formulation of Operational Plan in which Vehicle 1 visits Customers x_1, x_2, Vehicle 2 visits Customers x_3, x_4, and Vehicle 3 visits Customers x_5, x_6, x_7
It is clear that this type of representation is intuitive, and the total number of decision variables is n + 2m − 1. We also note that the above decision variables x, y and t ensure that: (a) each vehicle will be used at most once; (b) all tours begin and end at the depot; (c) each customer will be visited by one and only one vehicle; and (d) there is no subtour.
Arrival Times

Let f_i(x, y, t) be the arrival time of a vehicle at customer i, i = 1, 2, …, n. We remind readers that f_i(x, y, t) are determined by the decision variables x, y and t, i = 1, 2, …, n. Since unloading can start either immediately, or later, when a vehicle arrives at a customer, the calculation of f_i(x, y, t) is heavily dependent on the operational strategy. Here we assume that the customer does not permit a delivery earlier than the time window. That is, the vehicle will wait to unload until the beginning of the time window if it arrives before the time window. If a vehicle arrives at a customer after the beginning of the time window, unloading will start immediately. For each k with 1 ≤ k ≤ m, if vehicle k is used (i.e., y_k > y_{k−1}), then we have

f_{x_{y_{k−1}+1}}(x, y, t) = t_k + T_{0 x_{y_{k−1}+1}}

and

f_{x_{y_{k−1}+j}}(x, y, t) = ( f_{x_{y_{k−1}+j−1}}(x, y, t) ∨ a_{x_{y_{k−1}+j−1}} ) + T_{x_{y_{k−1}+j−1} x_{y_{k−1}+j}}

for 2 ≤ j ≤ y_k − y_{k−1}. If the vehicle k is used, i.e., y_k > y_{k−1}, then the arrival time f_{x_{y_{k−1}+1}}(x, y, t) at the customer x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is

Ψ_{x_{y_{k−1}+1}}^{-1}(x, y, t, α) = t_k + Φ_{0 x_{y_{k−1}+1}}^{-1}(α).

Generally, suppose the arrival time f_{x_{y_{k−1}+j−1}}(x, y, t) has an inverse uncertainty distribution Ψ_{x_{y_{k−1}+j−1}}^{-1}(x, y, t, α). Then f_{x_{y_{k−1}+j}}(x, y, t) has an inverse uncertainty distribution

Ψ_{x_{y_{k−1}+j}}^{-1}(x, y, t, α) = ( Ψ_{x_{y_{k−1}+j−1}}^{-1}(x, y, t, α) ∨ a_{x_{y_{k−1}+j−1}} ) + Φ_{x_{y_{k−1}+j−1} x_{y_{k−1}+j}}^{-1}(α).
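The arrival-time recursion for one vehicle can be sketched directly. In the example below, the route, the time windows, and the travel-time distributions are invented for illustration (every travel time is taken to be L(1, 2)); the maximum with the window start implements the waiting rule above.

```python
# Sketch of the arrival-time (unloading-start) recursion along one route:
# psi_j = max(psi_{j-1}, a_{j-1}) + T^{-1}_{j-1, j}(alpha).

def lin_inv(a, b, alpha):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return a + (b - a) * alpha

def arrival_inv(route, t_k, windows, travel_inv, alpha):
    """Inverse distributions Psi^{-1} of arrival times at each customer
    on one vehicle's route, starting from the depot at time t_k."""
    psi = []
    prev_node, t = 0, 0.0
    for idx, node in enumerate(route):
        if idx == 0:
            t = t_k + travel_inv(0, node, alpha)       # depot -> first customer
        else:
            wait = max(t, windows[prev_node][0])        # wait for window start
            t = wait + travel_inv(prev_node, node, alpha)
        psi.append(t)
        prev_node = node
    return psi

windows = {1: (7, 9), 2: (7, 9)}                        # assumed [a_i, b_i]
travel = lambda i, j, alpha: lin_inv(1, 2, alpha)       # assumed T_ij ~ L(1, 2)
print(arrival_inv([1, 2], 0.0, windows, travel, 0.5))   # [1.5, 8.5]
```

Here the vehicle reaches customer 1 at 1.5, waits until the window opens at 7, and so reaches customer 2 at 8.5.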
The total travel distance of all vehicles is

g(x, y) = Σ_{k=1}^m g_k(x, y)    (3.29)

where

g_k(x, y) = D_{0 x_{y_{k−1}+1}} + Σ_{j=y_{k−1}+1}^{y_k−1} D_{x_j x_{j+1}} + D_{x_{y_k} 0}, if y_k > y_{k−1}, and g_k(x, y) = 0, if y_k = y_{k−1}    (3.30)

for k = 1, 2, …, m.
If we want to minimize the total travel distance of all vehicles subject to the time window constraint, then we have the following vehicle routing model,

min_{x,y,t} g(x, y)
subject to:
M{f_i(x, y, t) ≤ b_i} ≥ α_i, i = 1, 2, …, n
1 ≤ x_i ≤ n, i = 1, 2, …, n
x_i ≠ x_j, i ≠ j, i, j = 1, 2, …, n
0 ≤ y_1 ≤ y_2 ≤ ⋯ ≤ y_{m−1} ≤ n
x_i, y_j, i = 1, 2, …, n, j = 1, 2, …, m − 1, integers    (3.31)

which is equivalent to

min_{x,y,t} g(x, y)
subject to:
Ψ_i^{-1}(x, y, t, α_i) ≤ b_i, i = 1, 2, …, n
1 ≤ x_i ≤ n, i = 1, 2, …, n
x_i ≠ x_j, i ≠ j, i, j = 1, 2, …, n
0 ≤ y_1 ≤ y_2 ≤ ⋯ ≤ y_{m−1} ≤ n
x_i, y_j, i = 1, 2, …, n, j = 1, 2, …, m − 1, integers    (3.32)

where Ψ_i^{-1}(x, y, t, α) are the inverse uncertainty distributions of f_i(x, y, t) for i = 1, 2, …, n, respectively.
Numerical Experiment

Assume that there are 3 vehicles and 7 customers with the following time windows,

Node 1: [7:00, 9:00]
Node 2: [7:00, 9:00]
Node 3: [15:00, 17:00]
Node 4: [15:00, 17:00]
Node 5: [15:00, 17:00]
Node 6: [19:00, 21:00]
Node 7: [19:00, 21:00]

and each customer is visited within its time window with confidence level 0.90. We also assume that the distances are

D_{ij} = |i − j|,  i, j = 0, 1, 2, …, 7.    (3.33)
3.5 Project Scheduling Problem

The project scheduling problem is to determine the schedule of allocating resources so as to balance the total cost and the completion time. The study of project scheduling with uncertain factors was started by Liu [120] in 2010. This section presents an uncertain programming model for project scheduling in which the duration times are assumed to be uncertain variables with known uncertainty distributions.

Project scheduling is usually represented by a directed acyclic network where nodes correspond to milestones, and arcs to activities which are basically characterized by the times and costs consumed.

Let (V, A) be a directed acyclic graph, where V = {1, 2, …, n, n + 1} is the set of nodes, A is the set of arcs, and (i, j) ∈ A is the arc of the graph (V, A) from node i to node j. It is well known that we can rearrange the indexes of the nodes in V such that i < j for all (i, j) ∈ A.

Before we begin to study the project scheduling problem with uncertain activity duration times, we first make some assumptions: (a) all of the costs needed are obtained via loans with some given interest rate; and (b) each activity can be processed only if the loan needed is allocated and all the foregoing activities are finished.

In order to model the project scheduling problem, we introduce the following indices and parameters:

ξ_{ij}: uncertain duration time of activity (i, j) in A;
From the starting time T_1(x, ξ), we deduce that the starting time of activity (2, 5) is

T_2(x, ξ) = x_2 ∨ (x_1 + ξ_{12})    (3.36)

whose inverse uncertainty distribution may be written as

Ψ_2^{-1}(x, α) = x_2 ∨ (x_1 + Φ_{12}^{-1}(α)).    (3.37)

Generally, suppose that the starting time T_k(x, ξ) of all activities (k, i) in A has an inverse uncertainty distribution Ψ_k^{-1}(x, α). Then the starting time T_i(x, ξ) of all activities (i, j) in A should be

T_i(x, ξ) = x_i ∨ max_{(k,i)∈A} (T_k(x, ξ) + ξ_{ki})    (3.38)

whose inverse uncertainty distribution is

Ψ_i^{-1}(x, α) = x_i ∨ max_{(k,i)∈A} ( Ψ_k^{-1}(x, α) + Φ_{ki}^{-1}(α) ).    (3.39)
The completion time of the total project is

T(x, ξ) = max_{(k,n+1)∈A} ( T_k(x, ξ) + ξ_{k,n+1} )    (3.40)

whose inverse uncertainty distribution is

Ψ^{-1}(x, α) = max_{(k,n+1)∈A} ( Ψ_k^{-1}(x, α) + Φ_{k,n+1}^{-1}(α) ).    (3.41)
Total Cost

Based on the completion time T(x, ξ), the total cost of the project can be written as

C(x, ξ) = Σ_{(i,j)∈A} c_{ij} (1 + r)^{⌈T(x,ξ) − x_i⌉}    (3.42)

where ⌈a⌉ represents the minimal integer greater than or equal to a, c_{ij} denotes the cost of activity (i, j), and r is the given interest rate. Note that C(x, ξ) is a discrete uncertain variable whose inverse uncertainty distribution is

Υ^{-1}(x, α) = Σ_{(i,j)∈A} c_{ij} (1 + r)^{⌈Ψ^{-1}(x,α) − x_i⌉}.    (3.43)
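The starting-time recursion and the cost formula can be sketched together on a toy network. Everything in the example below is an assumption for illustration: the three-node graph, the L(a, b) duration parameters, the loans, and the interest rate are invented, and the code is not the toolbox's implementation.

```python
import math

# Hedged sketch of the recursions (3.38)-(3.39) and the cost inverse (3.43)
# on an invented network with nodes 1, 2 and completion node 3.

def lin_inv(a, b, alpha):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return a + (b - a) * alpha

arcs = {(1, 2): (1, 3), (2, 3): (2, 4), (1, 3): (5, 7)}   # xi_ij ~ L(a, b), assumed
loans = {(1, 2): 10, (2, 3): 20, (1, 3): 5}               # c_ij, assumed
x = {1: 0, 2: 2}                                          # allocating times x_i
r = 0.02                                                  # interest rate, assumed

def start_inv(alpha, last_node=3):
    """Psi_i^{-1}(x, alpha) for every node, computed in topological order."""
    psi = {1: x[1]}
    for i in range(2, last_node + 1):
        hi = max(psi[k] + lin_inv(*ab, alpha)
                 for (k, j), ab in arcs.items() if j == i)
        psi[i] = max(x.get(i, 0), hi)       # x_i v max_k (Psi_k + Phi_ki^{-1})
    return psi

def cost_inv(alpha):
    """(3.43): sum over arcs of c_ij (1 + r)^ceil(Psi^{-1}(x, alpha) - x_i)."""
    completion = start_inv(alpha)[3]        # Psi^{-1}(x, alpha) of (3.41)
    return sum(c * (1 + r) ** math.ceil(completion - x[i])
               for (i, j), c in loans.items())

print(start_inv(0.5))   # completion at alpha = 0.5 is 6.0 for this network
print(cost_inv(0.5))
```

Since the nodes are indexed so that i < j on every arc, a single forward pass suffices, mirroring the "rearrange the indexes" remark earlier in the section.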
If we want to minimize the expected total cost under the completion time constraint, then we have the project scheduling model,

min_x E[C(x, ξ)]
subject to:
M{T(x, ξ) ≤ T_0} ≥ α_0
x ≥ 0, integer vector    (3.44)

where T_0 is a due date of the project, α_0 is a predetermined confidence level, T(x, ξ) is the completion time defined by (3.40), and C(x, ξ) is the total cost defined by (3.42). This model is equivalent to

min_x ∫_0^1 Υ^{-1}(x, α) dα
subject to:
Ψ^{-1}(x, α_0) ≤ T_0
x ≥ 0, integer vector.    (3.45)

In addition, we also suppose that the interest rate is r = 0.02, the due date is T_0 = 60, and the confidence level is α_0 = 0.85. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution is

x* = (7, 24, 17, 16, 35, 33, 30).    (3.46)
In other words, the optimal allocating times and loans of the activities are

Date  7   16  17  24
Node  1   4   3   2
Loan  12  11  27  7
3.6 Uncertain Multiobjective Programming

In order to balance multiple conflicting return functions, Liu and Chen [129] proposed the uncertain multiobjective programming,

min_x ( E[f_1(x, ξ)], E[f_2(x, ξ)], …, E[f_m(x, ξ)] )
subject to:
M{g_j(x, ξ) ≤ 0} ≥ α_j, j = 1, 2, …, p    (3.47)

where f_i(x, ξ) are return functions for i = 1, 2, …, m, and g_j(x, ξ) are constraint functions for j = 1, 2, …, p.

One way to solve the multiobjective model is to convert it into the weighted-sum programming,

min_x Σ_{i=1}^m λ_i E[f_i(x, ξ)]
subject to:
M{g_j(x, ξ) ≤ 0} ≥ α_j, j = 1, 2, …, p    (3.49)

where λ_1, λ_2, …, λ_m are nonnegative weights, or into the compromise programming,

min_x Σ_{i=1}^m λ_i ( E[f_i(x, ξ)] − f̄_i )^2
subject to:
M{g_j(x, ξ) ≤ 0} ≥ α_j, j = 1, 2, …, p    (3.51)

where f̄_1, f̄_2, …, f̄_m are target levels.
3.7 Uncertain Goal Programming

The concept of goal programming was presented by Charnes and Cooper [8] in 1961 and subsequently studied by many researchers. Goal programming can be regarded as a special compromise model for multiobjective optimization and has been applied in a wide variety of real-world problems. In multiobjective decision-making problems, we assume that the decision-maker is able to assign a target level to each goal; the key idea is to minimize the deviations (positive, negative, or both) from the target levels. In real-world situations, the goals are usually incompatible and achievable only at the expense of one another. In order to balance multiple conflicting objectives, a decision-maker may establish a hierarchy of importance among the incompatible goals so as to satisfy as many of them as possible in the order specified. For multiobjective decision-making problems with uncertain parameters, Liu and Chen [129] proposed an uncertain goal programming,
min_x Σ_{j=1}^l P_j Σ_{i=1}^m ( u_{ij} d_i^+ + v_{ij} d_i^− )
subject to:
E[f_i(x, ξ)] + d_i^− − d_i^+ = b_i, i = 1, 2, …, m
M{g_j(x, ξ) ≤ 0} ≥ α_j, j = 1, 2, …, p
d_i^+, d_i^− ≥ 0, i = 1, 2, …, m    (3.52)

where P_j is the preemptive priority factor which expresses the relative importance of various goals, P_j ≫ P_{j+1} for all j, u_{ij} is the weighting factor corresponding to positive deviation for goal i with priority j assigned, v_{ij} is the weighting factor corresponding to negative deviation for goal i with priority j assigned, d_i^+ is the positive deviation from the target of goal i, d_i^− is the negative deviation from the target of goal i, f_i is a function in goal constraints, g_j is a function in real constraints, b_i is the target value according to goal i, l is the number of priorities, m is the number of goal constraints, and p is the number of real constraints. Note that the positive and negative deviations are calculated by

d_i^+ = E[f_i(x, ξ)] − b_i, if E[f_i(x, ξ)] > b_i, and 0, otherwise    (3.53)

and

d_i^− = b_i − E[f_i(x, ξ)], if E[f_i(x, ξ)] < b_i, and 0, otherwise    (3.54)

for each i. Sometimes, the objective function in the goal programming model is written as follows,

lexmin { Σ_{i=1}^m (u_{i1} d_i^+ + v_{i1} d_i^−), Σ_{i=1}^m (u_{i2} d_i^+ + v_{i2} d_i^−), …, Σ_{i=1}^m (u_{il} d_i^+ + v_{il} d_i^−) }.
3.8 Uncertain Multilevel Programming

In order to model decentralized decision systems with uncertain factors, Liu and Yao [128] suggested the uncertain multilevel programming,

min_x E[F(x, y_1*, y_2*, …, y_m*, ξ)]
subject to:
M{G(x, ξ) ≤ 0} ≥ α
(y_1*, y_2*, …, y_m*) solves problems (i = 1, 2, …, m)
  min_{y_i} E[f_i(x, y_1, y_2, …, y_m, ξ)]
  subject to:
  M{g_i(x, y_1, y_2, …, y_m, ξ) ≤ 0} ≥ α_i.    (3.57)

Definition 3.4 Let x be a feasible control vector of the leader. A Nash equilibrium of followers is the feasible array (y_1*, y_2*, …, y_m*) with respect to x if

E[f_i(x, y_1*, …, y_{i−1}*, y_i, y_{i+1}*, …, y_m*, ξ)] ≥ E[f_i(x, y_1*, …, y_{i−1}*, y_i*, y_{i+1}*, …, y_m*, ξ)]    (3.58)

for any feasible array (y_1*, …, y_{i−1}*, y_i, y_{i+1}*, …, y_m*) and i = 1, 2, …, m.
3.9 Bibliographic Notes

Uncertain programming was founded by Liu [115] in 2009 and was applied to the machine scheduling problem, vehicle routing problem and project scheduling problem by Liu [120] in 2010.

As extensions of uncertain programming theory, Liu and Chen [129] developed uncertain multiobjective programming and uncertain goal programming. In addition, Liu and Yao [128] suggested uncertain multilevel programming for modeling decentralized decision systems with uncertain factors.

Since then, uncertain programming has produced fruitful results in both theory and practice. For more books and papers, the interested reader may visit the website at http://orsc.edu.cn/online.
Chapter 4
Uncertain Statistics

Uncertain statistics is a methodology for collecting and interpreting expert's experimental data by uncertainty theory. This chapter will design a questionnaire survey for collecting expert's experimental data, and introduce the empirical uncertainty distribution (i.e., the linear interpolation method), the principle of least squares, the method of moments, and the Delphi method for determining uncertainty distributions from expert's experimental data.
4.1 Expert's Experimental Data

Uncertain statistics is based on expert's experimental data rather than historical data. How do we obtain expert's experimental data? Liu [120] proposed a questionnaire survey for collecting expert's experimental data. The starting point is to invite one or more domain experts who are asked to complete a questionnaire about the meaning of an uncertain variable ξ like "how far from Beijing to Tianjin".

We first ask the domain expert to choose a possible value x (say 110km) that the uncertain variable ξ may take, and then quiz him,

"How likely is ξ less than or equal to x?"    (4.1)

Denote the expert's belief degree by α (say 0.6). Note that the expert's belief degree of ξ greater than x must be 1 − α due to the self-duality of uncertain measure. An expert's experimental data

(x, α) = (110, 0.6)    (4.2)

is thus acquired. Repeating this process, we may acquire the expert's experimental data

(x_1, α_1), (x_2, α_2), …, (x_n, α_n).    (4.3)
Figure 4.1: Expert's belief degrees M{ξ ≤ x} and M{ξ ≥ x} of an uncertain variable ξ
4.2 Questionnaire Survey

Beijing is the capital of China, and Tianjin is a coastal city. Assume that the real distance between them is not exactly known to us. It is more acceptable to regard such an unknown quantity as an uncertain variable than as a random variable or a fuzzy variable. Chen and Ralescu [15] employed uncertain statistics to estimate the travel distance between Beijing and Tianjin. The consultation process is as follows:

Q1: May I ask you how far it is from Beijing to Tianjin? What do you think is the minimum distance?
A1: 100km. (an expert's experimental data (100, 0) is acquired)
Q2: What do you think is the maximum distance?
A2: 150km. (an expert's experimental data (150, 1) is acquired)
Q3: What do you think is a likely distance?
A3: 130km.
Q4: What is the belief degree that the real distance is less than 130km?
A4: 0.6. (an expert's experimental data (130, 0.6) is acquired)
Q5: Is there another number this distance may be?
A5: 140km.
Q6: What is the belief degree that the real distance is less than 140km?
A6: 0.9. (an expert's experimental data (140, 0.9) is acquired)
Q7: Is there another number this distance may be?
A7: 120km.
Q8: What is the belief degree that the real distance is less than 120km?
A8: 0.3. (an expert's experimental data (120, 0.3) is acquired)
Q9: Is there another number this distance may be?
A9: No idea.

By using the questionnaire survey, five expert's experimental data of the travel distance between Beijing and Tianjin are acquired from the domain expert,

(100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1).    (4.4)
4.3 Empirical Uncertainty Distribution

Assume that the expert's experimental data

(x_1, α_1), (x_2, α_2), …, (x_n, α_n)    (4.5)

have been acquired with x_1 < x_2 < ⋯ < x_n and

0 ≤ α_1 ≤ α_2 ≤ ⋯ ≤ α_n ≤ 1.    (4.6)

Liu [120] suggested the empirical uncertainty distribution,

Φ(x) =
  0, if x < x_1
  α_i + (α_{i+1} − α_i)(x − x_i)/(x_{i+1} − x_i), if x_i ≤ x ≤ x_{i+1}, 1 ≤ i < n
  1, if x > x_n.    (4.7)

Essentially, it is a type of linear interpolation method.

The empirical uncertainty distribution determined by (4.7) has an expected value

E[ξ] = (α_1 + α_2)/2 · x_1 + Σ_{i=2}^{n−1} (α_{i+1} − α_{i−1})/2 · x_i + ( 1 − (α_{n−1} + α_n)/2 ) · x_n.    (4.8)

If all x_i's are nonnegative, then the k-th empirical moments are

E[ξ^k] = α_1 x_1^k + 1/(k+1) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (α_{i+1} − α_i) x_i^j x_{i+1}^{k−j} + (1 − α_n) x_n^k.    (4.9)

Example 4.1: Recall that five expert's experimental data (100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1) of the travel distance between Beijing and Tianjin have been acquired in Section 4.2. Based on those expert's experimental data, an empirical uncertainty distribution of travel distance is shown in Figure 4.3.
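The empirical distribution (4.7) and the expected value (4.8) are straightforward to compute. The sketch below applies them to the five expert's experimental data above; the helper names are ours.

```python
import bisect

# Empirical uncertainty distribution (4.7) and expected value (4.8) for the
# Beijing-Tianjin expert's experimental data from Section 4.2.

xs = [100, 120, 130, 140, 150]
als = [0.0, 0.3, 0.6, 0.9, 1.0]

def empirical(x):
    """Linear interpolation between consecutive data points, as in (4.7)."""
    if x < xs[0]:
        return 0.0
    if x >= xs[-1]:
        return 1.0
    i = bisect.bisect_right(xs, x) - 1
    return als[i] + (als[i + 1] - als[i]) * (x - xs[i]) / (xs[i + 1] - xs[i])

def expected(xs, als):
    """Formula (4.8) for the expected value of the empirical distribution."""
    n = len(xs)
    e = (als[0] + als[1]) / 2 * xs[0]
    for i in range(1, n - 1):
        e += (als[i + 1] - als[i - 1]) / 2 * xs[i]
    return e + (1 - (als[n - 2] + als[n - 1]) / 2) * xs[n - 1]

print(empirical(125))     # ≈ 0.45, midway between (120, 0.3) and (130, 0.6)
print(expected(xs, als))  # ≈ 125.5 km
```

The value 125.5 also equals the area under 1 − Φ(x), which is a useful cross-check for nonnegative variables.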
Figure 4.3: Empirical Uncertainty Distribution Φ(x) of the travel distance, with the expert's experimental data (x_1, α_1), (x_2, α_2), (x_3, α_3), (x_4, α_4), (x_5, α_5) marked
4.4 Principle of Least Squares

Assume that an uncertainty distribution to be determined has a known functional form Φ(x|θ) with an unknown parameter θ. In order to estimate the parameter θ, Liu [120] employed the principle of least squares that minimizes the sum of the squares of the distances of the expert's experimental data to the uncertainty distribution. This minimization can be performed in either the vertical or horizontal direction. If the expert's experimental data

(x_1, α_1), (x_2, α_2), …, (x_n, α_n)    (4.10)

are obtained, then the vertical least squares estimate of θ solves

min_θ Σ_{i=1}^n ( Φ(x_i|θ) − α_i )^2.    (4.11)

The optimal solution θ̂ of (4.11) is called the least squares estimate of θ, and then the least squares uncertainty distribution is Φ(x|θ̂).

Example 4.2: Assume that an uncertainty distribution has a linear form with two unknown parameters a and b, i.e.,

Φ(x|a, b) =
  0, if x ≤ a
  (x − a)/(b − a), if a ≤ x ≤ b
  1, if x ≥ b.    (4.13)
Figure 4.4: Least Squares Uncertainty Distribution fitted to expert's experimental data (marked points include (100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1))

The resulting least squares uncertainty distribution takes the form

Φ(x) =
  0, if x ≤ 0.2273
  (x − 0.2273)/(4.7727 − 0.2273), if 0.2273 ≤ x ≤ 4.7727
  1, if x ≥ 4.7727

with least squares estimates â = 0.2273 and b̂ = 4.7727.
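A crude way to carry out the minimization (4.11) is a grid search over candidate parameters. The sketch below fits the linear family (4.13) to the Beijing-Tianjin data from Section 4.2; the integer search ranges are arbitrary assumptions, and this is not the toolbox's optimizer.

```python
# Coarse grid-search sketch of the least squares principle (4.11) for the
# linear family (4.13).

data = [(100, 0.0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1.0)]

def phi_linear(x, a, b):
    """Linear uncertainty distribution L(a, b)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def sse(a, b):
    """Sum of squared vertical distances in (4.11)."""
    return sum((phi_linear(x, a, b) - al) ** 2 for x, al in data)

# search integer candidates a in [90, 120] and b in [a + 10, 170]
best = min(((sse(a, b), a, b)
            for a in range(90, 121)
            for b in range(a + 10, 171)),
           key=lambda t: t[0])
print(best)   # (minimal squared error, a-hat, b-hat)
```

In practice one would refine the grid or hand the objective `sse` to a continuous optimizer, but the structure of the estimate is the same.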
Example 4.3: Assume that an uncertainty distribution has a lognormal form with two unknown parameters e and σ, i.e.,

Φ(x|e, σ) = ( 1 + exp( π(e − ln x) / (√3 σ) ) )^{-1}.    (4.15)
4.5 Method of Moments

Wang and Peng [207] proposed a method of moments to estimate the unknown parameters of an uncertainty distribution. At first, the k-th empirical moments of the expert's experimental data (x_1, α_1), (x_2, α_2), …, (x_n, α_n) with x_1 < x_2 < ⋯ < x_n and

0 ≤ α_1 ≤ α_2 ≤ ⋯ ≤ α_n ≤ 1    (4.20)

are defined as those of the corresponding empirical uncertainty distribution, i.e.,

ξ̄_k = α_1 x_1^k + 1/(k+1) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (α_{i+1} − α_i) x_i^j x_{i+1}^{k−j} + (1 − α_n) x_n^k.    (4.21)

Then the unknown parameters θ_1, θ_2, …, θ_p of the uncertainty distribution Φ(x|θ_1, θ_2, …, θ_p) are chosen to solve the system of equations

∫_0^{+∞} ( 1 − Φ( x^{1/k} | θ_1, θ_2, …, θ_p ) ) dx = ξ̄_k,  k = 1, 2, …, p.    (4.22)
Then the first three empirical moments are 2.5100, 7.7226 and 29.4936. We also assume that the uncertainty distribution to be determined has a zigzag form with three unknown parameters a, b and c, i.e.,

Φ(x|a, b, c) =
  0, if x ≤ a
  (x − a)/(2(b − a)), if a ≤ x ≤ b
  (x + c − 2b)/(2(c − b)), if b ≤ x ≤ c
  1, if x ≥ c.    (4.24)

From the expert's experimental data, we may believe that the unknown parameters must be positive numbers. Thus the first three moments of the zigzag uncertainty distribution Φ(x|a, b, c) are

(a + 2b + c)/4,
(a² + ab + 2b² + bc + c²)/6,
(a³ + a²b + ab² + 2b³ + b²c + bc² + c³)/8.

It follows from the method of moments that the unknown parameters a, b, c should solve the system of equations,

a + 2b + c = 4 × 2.5100
a² + ab + 2b² + bc + c² = 6 × 7.7226
a³ + a²b + ab² + 2b³ + b²c + bc² + c³ = 8 × 29.4936.    (4.25)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the moment estimates are (a, b, c) = (0.9804, 2.0303, 4.9991) and the corresponding uncertainty distribution is the zigzag distribution Z(0.9804, 2.0303, 4.9991), i.e.,

Φ(x) =
  0, if x ≤ 0.9804
  (x − 0.9804)/2.0998, if 0.9804 ≤ x ≤ 2.0303
  (x + 0.9385)/5.9376, if 2.0303 ≤ x ≤ 4.9991
  1, if x ≥ 4.9991.
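The closed-form moment expressions above can be cross-checked numerically: for a nonnegative variable with regular distribution, the k-th moment equals the integral over α in [0, 1] of (Φ^{-1}(α))^k. The sketch below performs this check for the zigzag family at the moment estimates just obtained; the midpoint-rule integrator is our own helper.

```python
# Numerical cross-check of the zigzag moment formulas used in (4.25).

a, b, c = 0.9804, 2.0303, 4.9991

def zigzag_inv(alpha):
    """Inverse uncertainty distribution of Z(a, b, c)."""
    if alpha < 0.5:
        return a + 2 * (b - a) * alpha
    return 2 * b - c + 2 * (c - b) * alpha

def moment(k, steps=100_000):
    """Midpoint-rule approximation of the k-th moment."""
    h = 1.0 / steps
    return sum(zigzag_inv((i + 0.5) * h) ** k for i in range(steps)) * h

closed = [(a + 2 * b + c) / 4,
          (a * a + a * b + 2 * b * b + b * c + c * c) / 6,
          (a ** 3 + a ** 2 * b + a * b ** 2 + 2 * b ** 3
           + b ** 2 * c + b * c ** 2 + c ** 3) / 8]
for k in (1, 2, 3):
    print(k, moment(k), closed[k - 1])   # numerical and closed forms agree
```

Such a check is a quick way to guard against transcription errors in the polynomial moment formulas.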
4.6 Multiple Domain Experts

Assume there are m domain experts and each produces an uncertainty distribution. Then we may get m uncertainty distributions Φ_1(x), Φ_2(x), …, Φ_m(x). It was suggested by Liu [120] that the m uncertainty distributions should be aggregated into an uncertainty distribution

Φ(x) = w_1 Φ_1(x) + w_2 Φ_2(x) + ⋯ + w_m Φ_m(x)    (4.27)

where w_1, w_2, …, w_m are convex combination coefficients (i.e., they are nonnegative numbers and w_1 + w_2 + ⋯ + w_m = 1) representing the weights of the domain experts. For example, we may set

w_i = 1/m,  i = 1, 2, …, m.    (4.28)

Since Φ_1(x), Φ_2(x), …, Φ_m(x) are uncertainty distributions, they are increasing functions taking values in [0, 1] and are not identical to either 0 or 1. It is easy to verify that their convex combination Φ(x) is also an increasing function taking values in [0, 1] with Φ(x) ≢ 0 and Φ(x) ≢ 1. Hence Φ(x) is also an uncertainty distribution by the Peng–Iwamura theorem.
4.7 Delphi Method

The Delphi method was originally developed in the 1950s by the RAND Corporation, based on the assumption that group experience is more valid than individual experience. This method asks the domain experts to answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the answers from the previous round as well as the reasons the domain experts provided for their opinions. The domain experts are then encouraged to revise their earlier answers in light of the summary. It is believed that during this process the opinions of the domain experts will converge to an appropriate answer. Wang, Gao and Guo [205] recast the Delphi method as a process for determining uncertainty distributions. The main steps are listed as follows:

Step 1. The m domain experts provide their expert's experimental data,

(x_{ij}, α_{ij}),  j = 1, 2, …, n_i, i = 1, 2, …, m.    (4.29)
4.8 Bibliographic Notes

The study of uncertain statistics was started by Liu [120] in 2010, in which a questionnaire survey for collecting expert's experimental data was designed. It was shown, among others, by Chen and Ralescu [15] that the questionnaire survey may successfully acquire expert's experimental data.

Parametric uncertain statistics assumes that the uncertainty distribution to be determined has a known functional form but with unknown parameters. In order to estimate the unknown parameters, Liu [120] suggested the principle of least squares, and Wang and Peng [207] proposed the method of moments.

Nonparametric uncertain statistics does not assume that the expert's experimental data belong to any particular family of uncertainty distributions. In order to determine the uncertainty distributions, Liu [120] introduced the linear interpolation method (i.e., the empirical uncertainty distribution), and Chen and Ralescu [15] proposed a series of spline interpolation methods.

When multiple domain experts are available, Wang, Gao and Guo [205] recast the Delphi method as a process to determine the uncertainty distributions.
Chapter 5
Uncertain Risk Analysis

5.1 Loss Function

A system usually contains some factors ξ_1, ξ_2, …, ξ_n that may be understood as lifetime, strength, demand, production rate, cost, profit, and resource. Generally speaking, some specified loss is dependent on those factors. Although loss is a problem-dependent concept, usually such a loss may be represented by a loss function.
Definition 5.1 Consider a system with factors ξ_1, ξ_2, …, ξ_n. A function f is called a loss function if some specified loss occurs if and only if

f(ξ_1, ξ_2, …, ξ_n) > 0.    (5.1)

Example 5.1: Consider a series system in which there are n elements whose lifetimes are uncertain variables ξ_1, ξ_2, …, ξ_n. Such a system works whenever all elements work. Thus the system lifetime is

ξ = ξ_1 ∧ ξ_2 ∧ ⋯ ∧ ξ_n.    (5.2)

If the loss is understood as the case that the system fails before the time T, then we have a loss function

f(ξ_1, ξ_2, …, ξ_n) = T − ξ_1 ∧ ξ_2 ∧ ⋯ ∧ ξ_n.    (5.3)
Figure 5.1: A Series System

Example 5.2: Consider a parallel system in which there are n elements whose lifetimes are uncertain variables ξ_1, ξ_2, …, ξ_n. Such a system works whenever at least one element works. Thus the system lifetime is

ξ = ξ_1 ∨ ξ_2 ∨ ⋯ ∨ ξ_n.    (5.4)

If the loss is understood as the case that the system fails before the time T, then the loss function is

f(ξ_1, ξ_2, …, ξ_n) = T − ξ_1 ∨ ξ_2 ∨ ⋯ ∨ ξ_n.    (5.5)

Figure 5.2: A Parallel System

Example 5.3: Consider a k-out-of-n system in which there are n elements whose lifetimes are uncertain variables ξ_1, ξ_2, …, ξ_n. Such a system works whenever at least k of the n elements work. Thus the system lifetime is

ξ = k-max [ξ_1, ξ_2, …, ξ_n].    (5.6)

If the loss is understood as the case that the system fails before the time T, then the loss function is

f(ξ_1, ξ_2, …, ξ_n) = T − k-max [ξ_1, ξ_2, …, ξ_n].    (5.7)

Hence the system fails if and only if f(ξ_1, ξ_2, …, ξ_n) > 0. Note that a series system is an n-out-of-n system, and a parallel system is a 1-out-of-n system.
Example 5.4: Consider a standby system in which there are n redundant elements whose lifetimes are ξ_1, ξ_2, …, ξ_n. For this system, only one element is active, and one of the redundant elements begins to work only when the active element fails. Thus the system lifetime is

ξ = ξ_1 + ξ_2 + ⋯ + ξ_n.    (5.8)

If the loss is understood as the case that the system fails before the time T, then the loss function is

f(ξ_1, ξ_2, …, ξ_n) = T − (ξ_1 + ξ_2 + ⋯ + ξ_n).    (5.9)

Figure 5.3: A Standby System
5.2
Risk Index
In practice, the factors ξ1, ξ2, …, ξn of a system are usually uncertain variables rather than known constants. Thus the risk index is defined as the uncertain measure that some specified loss occurs.
Definition 5.2 (Liu [119]) Assume that a system contains uncertain factors ξ1, ξ2, …, ξn and has a loss function f. Then the risk index is the uncertain measure that the system is loss-positive, i.e.,
Risk = M{f(ξ1, ξ2, …, ξn) > 0}.   (5.10)
Theorem 5.1 (Liu [119], Risk Index Theorem) Assume a system contains independent uncertain variables ξ1, ξ2, …, ξn with regular uncertainty distributions Φ1, Φ2, …, Φn, respectively. If the loss function f(ξ1, ξ2, …, ξn) is strictly increasing with respect to ξ1, ξ2, …, ξm and strictly decreasing with respect to ξm+1, ξm+2, …, ξn, then the risk index is just the root α of the equation
f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α)) = 0.   (5.11)
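Since the left-hand side of equation (5.11) is monotone in α, the root can be found numerically by bisection. The sketch below is illustrative only (the function names and the linear-distribution test case are my own, not from the book):

```python
def risk_index(f, inc_invs, dec_invs):
    """Solve f(Phi_1^{-1}(1-a), ..., Phi_m^{-1}(1-a),
               Phi_{m+1}^{-1}(a), ..., Phi_n^{-1}(a)) = 0
    for the risk index a by bisection on (0, 1)."""
    def g(a):
        args = [inv(1 - a) for inv in inc_invs] + [inv(a) for inv in dec_invs]
        return f(*args)
    lo, hi = 1e-12, 1 - 1e-12
    if g(lo) * g(hi) > 0:
        # the case of Remark 5.2: equation (5.11) has no root in (0, 1)
        raise ValueError("no root in (0, 1): risk index is 0 or 1")
    for _ in range(100):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# series system with two linear lifetimes L(0, 10), L(0, 20) and T = 4;
# the loss function is decreasing in both lifetimes
T = 4
f = lambda x1, x2: T - min(x1, x2)
inv1 = lambda a: 10 * a   # inverse distribution of L(0, 10)
inv2 = lambda a: 20 * a   # inverse distribution of L(0, 20)
print(risk_index(f, [], [inv1, inv2]))  # close to 0.4 = Phi1(4) v Phi2(4)
```

Here the analytic answer is Φ1(4) ∨ Φ2(4) = 0.4, which the bisection reproduces.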
Remark 5.2: Keep in mind that sometimes the equation (5.11) may not have a root. In this case, if
f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α)) < 0   (5.12)
for all α, then we set Risk = 0; and if
f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α)) > 0   (5.13)
for all α, then we set Risk = 1.
5.3
Series System
Consider a series system in which there are n elements whose lifetimes are independent uncertain variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is
f(ξ1, ξ2, …, ξn) = T − ξ1 ∧ ξ2 ∧ … ∧ ξn.   (5.14)
Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, it follows from the risk index theorem that the risk index is just the root α of the equation
T − Φ1⁻¹(α) ∧ Φ2⁻¹(α) ∧ … ∧ Φn⁻¹(α) = 0,   (5.15)
i.e.,
Risk = Φ1(T) ∨ Φ2(T) ∨ … ∨ Φn(T).   (5.16)
5.4
Parallel System
Consider a parallel system in which there are n elements whose lifetimes are independent uncertain variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is
f(ξ1, ξ2, …, ξn) = T − ξ1 ∨ ξ2 ∨ … ∨ ξn.   (5.18)
Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, it follows from the risk index theorem that the risk index is just the root α of the equation
T − Φ1⁻¹(α) ∨ Φ2⁻¹(α) ∨ … ∨ Φn⁻¹(α) = 0,   (5.19)
i.e.,
Risk = Φ1(T) ∧ Φ2(T) ∧ … ∧ Φn(T).   (5.20)
5.5
k-out-of-n System
Consider a k-out-of-n system in which there are n elements whose lifetimes are independent uncertain variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is
f(ξ1, ξ2, …, ξn) = T − k-max [ξ1, ξ2, …, ξn].   (5.22)
Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, it follows from the risk index theorem that the risk index is just the root α of the equation
T − k-max [Φ1⁻¹(α), Φ2⁻¹(α), …, Φn⁻¹(α)] = 0.   (5.23)
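The k-max operator (the k-th largest value) and the root of equation (5.23) can be computed in a few lines. This is an illustrative sketch (names and the three hypothetical linear lifetimes L(0,10), L(0,20), L(0,30) are my own):

```python
def k_max(values, k):
    """k-th largest value: the lifetime of a k-out-of-n system."""
    return sorted(values, reverse=True)[k - 1]

def k_out_of_n_risk(invs, k, T, iters=100):
    """Bisection on T - k-max[Phi_1^{-1}(a), ..., Phi_n^{-1}(a)] = 0."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if T - k_max([inv(mid) for inv in invs], k) > 0:
            lo = mid   # k-max of the inverses is still below T: increase a
        else:
            hi = mid
    return (lo + hi) / 2

invs = [lambda a: 10 * a, lambda a: 20 * a, lambda a: 30 * a]
print(k_out_of_n_risk(invs, k=2, T=6))  # close to 0.3
```

In this test case the 2nd largest inverse is 20α, so the root is α = 6/20 = 0.3.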
5.6
Standby System
Consider a standby system in which there are n elements whose lifetimes are independent uncertain variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is
f(ξ1, ξ2, …, ξn) = T − (ξ1 + ξ2 + … + ξn).   (5.26)
Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, it follows from the risk index theorem that the risk index is just the root α of the equation
T − (Φ1⁻¹(α) + Φ2⁻¹(α) + … + Φn⁻¹(α)) = 0.   (5.27)
5.7
Hazard Distribution
Definition 5.3 If an element with lifetime ξ and uncertainty distribution Φ is working at time t, then its hazard distribution is the conditional uncertainty distribution of ξ given ξ > t, i.e.,
Φ(x|t) =
  0,   if Φ(x) ≤ Φ(t)
  (Φ(x)/(1 − Φ(t))) ∧ 0.5,   if Φ(t) < Φ(x) ≤ (1 + Φ(t))/2
  (Φ(x) − Φ(t))/(1 − Φ(t)),   if (1 + Φ(t))/2 ≤ Φ(x).
(5.29)
For example, assume the lifetime ξ is a linear uncertain variable L(a, b) and the element is working at time t with a < t < b. Then the hazard distribution is
Φ(x|t) =
  0,   if x ≤ t
  ((x − a)/(b − t)) ∧ 0.5,   if t < x ≤ (b + t)/2
  (x − t)/(b − t),   if (b + t)/2 ≤ x ≤ b
  1,   if b ≤ x.
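The linear case above can be transcribed directly into code; a small sketch (the function name and parameters are mine):

```python
def linear_hazard(x, t, a, b):
    """Hazard distribution of a linear lifetime L(a, b),
    given the element is still working at time t (a < t < b)."""
    if x <= t:
        return 0.0
    if x <= (b + t) / 2:
        return min((x - a) / (b - t), 0.5)
    if x < b:
        return (x - t) / (b - t)
    return 1.0

# L(0, 10) observed working at t = 2
print(linear_hazard(5, 2, 0, 10))   # capped branch: min(5/8, 0.5) = 0.5
print(linear_hazard(8, 2, 0, 10))   # (8 - 2)/(10 - 2) = 0.75
```

Note how the middle branch is capped at 0.5, reflecting the conditional uncertain measure formula.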
Theorem 5.2 (Liu [119], Conditional Risk Index Theorem) Assume that a system contains uncertain factors ξ1, ξ2, …, ξn and has a loss function f. Suppose ξ1, ξ2, …, ξn are independent uncertain variables with uncertainty distributions Φ1, Φ2, …, Φn, respectively, and f(ξ1, ξ2, …, ξn) is strictly increasing with respect to ξ1, ξ2, …, ξm and strictly decreasing with respect to ξm+1, ξm+2, …, ξn. If it is observed that all elements are working at some time t, then the risk index is just the root α of the equation
f(Φ1⁻¹(1 − α|t), …, Φm⁻¹(1 − α|t), Φm+1⁻¹(α|t), …, Φn⁻¹(α|t)) = 0   (5.30)
where the Φi(x|t) are the hazard distributions
Φi(x|t) =
  0,   if Φi(x) ≤ Φi(t)
  (Φi(x)/(1 − Φi(t))) ∧ 0.5,   if Φi(t) < Φi(x) ≤ (1 + Φi(t))/2
  (Φi(x) − Φi(t))/(1 − Φi(t)),   if (1 + Φi(t))/2 ≤ Φi(x)
(5.31)
for i = 1, 2, …, n.
Proof: It follows from Definition 5.3 that the hazard distribution of each element is determined by (5.31). Thus the conditional risk index is obtained from Theorem 5.1 immediately.
5.8
Structural Risk Analysis
Consider a structural system in which the strengths and loads are assumed to be uncertain variables. We will suppose that a structural system fails whenever, for at least one rod, the load variable exceeds its strength variable. If the structural risk index is defined as the uncertain measure that the structural system fails, then
Risk = M{ ⋃i=1..n (ξi < ηi) }   (5.32)
where ξ1, ξ2, …, ξn are strength variables, and η1, η2, …, ηn are load variables of the n rods.
Example 5.5: (The Simplest Case) Assume there is only a single strength variable ξ and a single load variable η with continuous uncertainty distributions Φ and Ψ, respectively. In this case, the structural risk index is
Risk = M{ξ < η}.
It follows from the risk index theorem that the risk index is just the root α of the equation
Φ⁻¹(α) = Ψ⁻¹(1 − α).   (5.33)
Especially, if the strength variable ξ has a normal uncertainty distribution N(es, σs) and the load variable η has a normal uncertainty distribution N(el, σl), then the structural risk index is
Risk = ( 1 + exp( π(es − el) / (√3 (σs + σl)) ) )⁻¹.   (5.34)
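Formula (5.34) is a closed form and easy to evaluate; the sketch below (function name is mine) also exercises the balanced case es = el, where the risk is exactly 0.5:

```python
import math

def structural_risk_normal(es, ss, el, sl):
    """Risk index (5.34): normal strength N(es, ss) vs. normal load N(el, sl)."""
    return 1.0 / (1.0 + math.exp(math.pi * (es - el)
                                 / (math.sqrt(3.0) * (ss + sl))))

print(structural_risk_normal(100, 5, 80, 5))  # small risk: strength well above load
print(structural_risk_normal(80, 5, 80, 5))   # 0.5: strength and load balanced
```

As expected, the risk decreases as the expected strength es grows relative to the expected load el.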
Example 5.6: Assume the load variables degenerate to crisp values c1, c2, …, cn. Since the strength variables ξ1, ξ2, …, ξn are independent, the structural risk index is
Risk = M{ ⋃i=1..n (ξi < ci) }.
That is,
Risk = Φ1(c1) ∨ Φ2(c2) ∨ … ∨ Φn(cn).   (5.35)
Example 5.7: Assume the strength variables ξ1, ξ2, …, ξn and the load variables η1, η2, …, ηn are independent uncertain variables with uncertainty distributions Φ1, …, Φn and Ψ1, …, Ψn, respectively. Then the structural risk index is
Risk = M{ ⋃i=1..n (ξi < ηi) }.
That is,
Risk = α1 ∨ α2 ∨ … ∨ αn   (5.36)
where each αi is the root of the equation
Φi⁻¹(α) = Ψi⁻¹(1 − α)   (5.37)
for i = 1, 2, …, n, respectively.
However, generally speaking, the load variables η1, η2, …, ηn are neither constants nor independent. For example, the load variables η1, η2, …, ηn may all be functions of a few common independent uncertain variables. In this case, the formula (5.36) is no longer valid. Thus we have to deal with such structural systems case by case.
Example 5.8: (Series System) Consider a structural system shown in Figure 5.4 that consists of n rods in series and an object. Assume that the strength variables of the n rods are uncertain variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn, respectively. We also assume that the gravity of the object is an uncertain variable η with uncertainty distribution Ψ. For each i (1 ≤ i ≤ n), the load variable of the rod i is just the gravity η of the object. Thus the structural system fails whenever the load variable η exceeds at least one of the strength variables ξ1, ξ2, …, ξn. Hence the structural risk index is
Risk = M{ ⋃i=1..n (ξi < η) } = M{ξ1 ∧ ξ2 ∧ … ∧ ξn < η}.
Define the loss function
f(ξ1, ξ2, …, ξn, η) = η − ξ1 ∧ ξ2 ∧ … ∧ ξn.
Since the loss function f is strictly increasing with respect to η and strictly decreasing with respect to ξ1, ξ2, …, ξn, it follows from the risk index theorem that the risk index is just the root α of the equation
Ψ⁻¹(1 − α) − Φ1⁻¹(α) ∧ Φ2⁻¹(α) ∧ … ∧ Φn⁻¹(α) = 0.   (5.38)
[Figure 5.4: A structural system consisting of n rods in series and an object]
Example 5.9: Consider a structural system shown in Figure 5.5 that consists of 2 rods and an object. Assume that the strength variables of the left and right rods are uncertain variables ξ1 and ξ2 with uncertainty distributions Φ1 and Φ2, respectively. We also assume that the gravity of the object is an uncertain variable η with uncertainty distribution Ψ. In this case, the load variables of the left and right rods are respectively equal to
η sin θ2 / sin(θ1 + θ2),   η sin θ1 / sin(θ1 + θ2)
where θ1 and θ2 are the angles of the left and right rods shown in Figure 5.5. Thus the structural system fails whenever, for any one rod, the load variable exceeds its strength variable. Hence the structural risk index is
Risk = M{ (ξ1 < η sin θ2 / sin(θ1 + θ2)) ∪ (ξ2 < η sin θ1 / sin(θ1 + θ2)) }
     = M{ ξ1/sin θ2 ∧ ξ2/sin θ1 < η/sin(θ1 + θ2) }.
Define the loss function as
f(ξ1, ξ2, η) = η/sin(θ1 + θ2) − ξ1/sin θ2 ∧ ξ2/sin θ1.
Then
Risk = M{f(ξ1, ξ2, η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly decreasing with respect to ξ1, ξ2, it follows from the risk index theorem that the risk index is just the root α of the equation
Ψ⁻¹(1 − α)/sin(θ1 + θ2) − Φ1⁻¹(α)/sin θ2 ∧ Φ2⁻¹(α)/sin θ1 = 0.   (5.41)
5.9
Investment Risk Analysis
Assume that an investor has n projects whose returns are uncertain variables ξ1, ξ2, …, ξn. If the loss is understood as the case that the total return ξ1 + ξ2 + … + ξn is below a predetermined value c (e.g., the interest rate), then the investment risk index is
Risk = M{ξ1 + ξ2 + … + ξn < c}.   (5.45)
If ξ1, ξ2, …, ξn are independent uncertain variables with uncertainty distributions Φ1, Φ2, …, Φn, respectively, then the investment risk index is just the root α of the equation
Φ1⁻¹(α) + Φ2⁻¹(α) + … + Φn⁻¹(α) = c.   (5.46)
[Figure 5.5: A structural system consisting of 2 rods and an object]
5.10
Bibliographic Notes
Uncertain risk analysis was proposed by Liu [119] in 2010, in which a risk index was defined and a risk index theorem was proved.
As a substitute for the risk index, Peng [171] suggested a concept of value-at-risk, which is the maximum possible loss when the right tail of the distribution is ignored.
Chapter 6
Uncertain Reliability Analysis
Uncertain reliability analysis is a tool to deal with system reliability via uncertainty theory. This chapter will introduce a definition of reliability index and provide some useful formulas for calculating the reliability index.
6.1
Structure Function
Roughly speaking, the structure function of a Boolean system is the Boolean function that takes value 1 whenever the system works and value 0 otherwise.
Example 6.1: For a series system, the structure function is a mapping from {0, 1}^n to {0, 1}, i.e.,
f(x1, x2, …, xn) = x1 ∧ x2 ∧ … ∧ xn.
[Figures: series and parallel system diagrams]
Example 6.2: For a parallel system, the structure function is
f(x1, x2, …, xn) = x1 ∨ x2 ∨ … ∨ xn.
Example 6.3: For a k-out-of-n system, the structure function is
f(x1, x2, …, xn) = k-max [x1, x2, …, xn].
6.2
Reliability Index
The element in a Boolean system is usually represented by a Boolean uncertain variable, i.e.,
ξ = 1 with uncertain measure a; 0 with uncertain measure 1 − a.   (6.7)
In this case, we say ξ is an uncertain element with reliability a. The reliability index is defined as the uncertain measure that the system is working.
Definition 6.2 (Liu [119]) Assume a Boolean system has uncertain elements ξ1, ξ2, …, ξn and a structure function f. Then the reliability index is the uncertain measure that the system is working, i.e.,
Reliability = M{f(ξ1, ξ2, …, ξn) = 1}.   (6.8)
Theorem 6.1 (Liu [119], Reliability Index Theorem) Assume that a system contains uncertain elements ξ1, ξ2, …, ξn and has a structure function f. If ξ1, ξ2, …, ξn are independent uncertain elements with reliabilities a1, a2, …, an, respectively, then the reliability index is
Reliability =
  sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi),   if sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
  1 − sup_{f(x1,…,xn)=0} min_{1≤i≤n} νi(xi),   if sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5
(6.9)
where
νi(xi) = ai, if xi = 1; 1 − ai, if xi = 0   (6.10)
for i = 1, 2, …, n, respectively.
Proof: Since ξ1, ξ2, …, ξn are independent Boolean uncertain variables and f is a Boolean function, the equation (6.9) follows from Definition 6.2 and Theorem 2.24 immediately.
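For small n, formula (6.9) can be evaluated by brute force over all 2^n Boolean vectors. The sketch below is mine (it is not the book's Matlab Uncertainty Toolbox):

```python
from itertools import product

def reliability_index(f, a):
    """Reliability index of Theorem 6.1 by exhaustive enumeration."""
    n = len(a)
    def nu(i, xi):
        return a[i] if xi == 1 else 1 - a[i]
    def sup(value):
        # sup over {x : f(x) = value} of min_i nu_i(x_i)
        return max((min(nu(i, x[i]) for i in range(n))
                    for x in product((0, 1), repeat=n) if f(*x) == value),
                   default=0.0)
    s1 = sup(1)
    return s1 if s1 < 0.5 else 1 - sup(0)

series = lambda *x: min(x)
parallel = lambda *x: max(x)
print(reliability_index(series, [0.9, 0.8]))    # ~0.8 = a1 ^ a2
print(reliability_index(parallel, [0.9, 0.8]))  # ~0.9 = a1 v a2
```

The two test cases agree with the series and parallel formulas of the next sections.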
6.3
Series System
Consider a series system having independent uncertain elements ξ1, ξ2, …, ξn with reliabilities a1, a2, …, an, respectively, and structure function f(x1, x2, …, xn) = x1 ∧ x2 ∧ … ∧ xn. It follows from the reliability index theorem that the reliability index is
Reliability = M{ξ1 ∧ ξ2 ∧ … ∧ ξn = 1} = a1 ∧ a2 ∧ … ∧ an.   (6.12)
6.4
Parallel System
Consider a parallel system having independent uncertain elements ξ1, ξ2, …, ξn with reliabilities a1, a2, …, an, respectively, and structure function f(x1, x2, …, xn) = x1 ∨ x2 ∨ … ∨ xn. It follows from the reliability index theorem that the reliability index is
Reliability = M{ξ1 ∨ ξ2 ∨ … ∨ ξn = 1} = a1 ∨ a2 ∨ … ∨ an.   (6.14)
6.5
k-out-of-n System
Consider a k-out-of-n system having independent uncertain elements ξ1, ξ2, …, ξn with reliabilities a1, a2, …, an, respectively. It follows from the reliability index theorem that the reliability index is
Reliability = k-max [a1, a2, …, an].   (6.16)
6.6
General System
[Figure: a general system with 5 uncertain elements]
The Boolean System Calculator, a function in the Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm), may yield the reliability index. Assume the 5 independent uncertain elements have reliabilities
0.91, 0.92, 0.93, 0.94, 0.95
in uncertain measure. A run of the Boolean System Calculator shows that the reliability index is
Reliability = M{f(ξ1, ξ2, …, ξ5) = 1} = 0.92
in uncertain measure.
6.7
Bibliographic Notes
Uncertain reliability analysis was proposed by Liu [119] in 2010, in which a reliability index was defined (Definition 6.2) and a reliability index theorem was proved (Theorem 6.1).
Chapter 7
Uncertain Propositional
Logic
Propositional logic, which originated in the work of Aristotle (384-322 BC), is a branch of logic that studies the properties of complex propositions composed of simpler propositions and logical connectives. Note that the propositions considered in propositional logic are not arbitrary statements but only those that are either true or false and not both.
Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. This chapter will deal with uncertain propositional logic, including
uncertain proposition, truth value definition, and truth value theorem.
7.1
Uncertain Proposition
Example 7.2: "John is young with truth value 0.8" is an uncertain proposition, where "John is young" is a statement, and its truth value is 0.8 in uncertain measure.
Example 7.3: "Beijing is a big city with truth value 0.9" is an uncertain proposition, where "Beijing is a big city" is a statement, and its truth value is 0.9 in uncertain measure.
Connective Symbols
In addition to the proposition symbols X and Y, we also need the negation symbol ¬, conjunction symbol ∧, disjunction symbol ∨, conditional symbol →, and biconditional symbol ↔. Note that
¬X means "not X";   (7.2)
X ∧ Y means "X and Y";   (7.3)
X ∨ Y means "X or Y";   (7.4)
X → Y means "if X then Y";   (7.5)
X ↔ Y means "X if and only if Y".   (7.6)
In particular, the conditional proposition X1 → X2 is defined via negation and disjunction, i.e.,
Z = (¬X1) ∨ X2,   Z = X1 → X2   (7.8)
represent the same uncertain proposition.
7.2
Truth Value
The truth value of an uncertain proposition X is defined as the uncertain measure that X is true, i.e.,
T(X) = M{X = 1}.   (7.10)
If X and Y are independent uncertain propositions with truth values α and β, respectively, then
T(¬X) = 1 − α,   (7.11)
T(X ∨ Y) = α ∨ β,   (7.12)
T(X ∧ Y) = α ∧ β,   (7.13)
T(X → Y) = T((¬X) ∨ Y) = (1 − α) ∨ β.   (7.14)
Theorem 7.1 (Law of Excluded Middle) Let X be an uncertain proposition. Then X ∨ ¬X is a tautology, i.e.,
T(X ∨ ¬X) = 1.   (7.15)
Proof: It follows from the definition of truth value and the property of uncertain measure that
T(X ∨ ¬X) = M{X ∨ ¬X = 1} = M{(X = 1) ∪ (X = 0)} = M{Γ} = 1.
The theorem is proved.
Theorem 7.2 (Law of Contradiction) Let X be an uncertain proposition. Then X ∧ ¬X is a contradiction, i.e.,
T(X ∧ ¬X) = 0.   (7.16)
Proof: It follows from the definition of truth value and the property of uncertain measure that
T(X ∧ ¬X) = M{X ∧ ¬X = 1} = M{(X = 1) ∩ (X = 0)} = M{∅} = 0.
The theorem is proved.
Theorem 7.3 (Law of Truth Conservation) Let X be an uncertain proposition. Then we have
T(X) + T(¬X) = 1.   (7.17)
Proof: It follows from the duality axiom of uncertain measure that
T(¬X) = M{X ≠ 1} = M{X = 0} = 1 − M{X = 1} = 1 − T(X).
The theorem is proved.
Theorem 7.4 Let X be an uncertain proposition. Then X → X is a tautology, i.e.,
T(X → X) = 1.   (7.18)
Proof: It follows from the definition of the conditional symbol and the law of excluded middle that
T(X → X) = T((¬X) ∨ X) = 1.
The theorem is proved.
Theorem 7.5 Let X be an uncertain proposition. Then we have
T(X → ¬X) = 1 − T(X).   (7.19)
Proof: It follows from the definition of the conditional symbol and the law of truth conservation that
T(X → ¬X) = T((¬X) ∨ (¬X)) = T(¬X) = 1 − T(X).
The theorem is proved.
Theorem 7.6 (De Morgan's Law) For any uncertain propositions X and Y, we have
T(¬(X ∨ Y)) = T((¬X) ∧ (¬Y)),   (7.20)
T(¬(X ∧ Y)) = T((¬X) ∨ (¬Y)).   (7.21)
7.3
Chen-Ralescu Theorem
An important contribution to uncertain propositional logic is the Chen-Ralescu theorem that provides a numerical method for calculating the truth values of uncertain propositions.
Theorem 7.8 (Chen-Ralescu Theorem [11]) Assume that X1, X2, …, Xn are independent uncertain propositions with truth values α1, α2, …, αn, respectively. Then for a Boolean function f, the uncertain proposition
Z = f(X1, X2, …, Xn)   (7.23)
has a truth value
T(Z) =
  sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi),   if sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
  1 − sup_{f(x1,…,xn)=0} min_{1≤i≤n} νi(xi),   if sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5
(7.24)
where
νi(xi) = αi, if xi = 1; 1 − αi, if xi = 0   (7.25)
for i = 1, 2, …, n, respectively.
Proof: Since Z = 1 if and only if f(X1, X2, …, Xn) = 1, we immediately have
T(Z) = M{f(X1, X2, …, Xn) = 1}.
Thus the equation (7.24) follows from Theorem 2.24 immediately.
Exercise 7.1: Let X1, X2, …, Xn be independent uncertain propositions with truth values α1, α2, …, αn, respectively. Then
Z = X1 ∧ X2 ∧ … ∧ Xn   (7.26)
has a truth value
T(Z) = α1 ∧ α2 ∧ … ∧ αn.   (7.27)
Exercise 7.2: Let X1, X2, …, Xn be independent uncertain propositions with truth values α1, α2, …, αn, respectively. Then
Z = X1 ∨ X2 ∨ … ∨ Xn   (7.28)
has a truth value
T(Z) = α1 ∨ α2 ∨ … ∨ αn.
For example, let X1 and X2 be independent uncertain propositions with truth values α1 and α2, respectively, and consider the biconditional proposition Z = X1 ↔ X2, whose Boolean function satisfies
f(1, 1) = 1,   f(1, 0) = 0,   f(0, 1) = 0,   f(0, 0) = 1.
At first, we have
sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (α1 ∧ α2) ∨ ((1 − α1) ∧ (1 − α2)),
sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = (α1 ∧ (1 − α2)) ∨ ((1 − α1) ∧ α2).
Thus it follows from the Chen-Ralescu theorem that
T(Z) =
  α1 ∧ α2,   if α1 ≥ 0.5 and α2 ≥ 0.5
  (1 − α1) ∨ α2,   if α1 ≥ 0.5 and α2 < 0.5
  α1 ∨ (1 − α2),   if α1 < 0.5 and α2 ≥ 0.5
  (1 − α1) ∧ (1 − α2),   if α1 < 0.5 and α2 < 0.5.
(7.31)
7.4
Boolean System Calculator
The Boolean System Calculator is a software that may compute the truth value of an uncertain formula. This software may be downloaded from the website at http://orsc.edu.cn/liu/resources.htm. For example, assume X1, X2, X3, X4, X5 are independent uncertain propositions with truth values 0.1, 0.3, 0.5, 0.7, 0.9, respectively. Consider an uncertain formula
X = (X1 ∧ X2) ∨ (X2 ∧ X3) ∨ (X3 ∧ X4) ∨ (X4 ∧ X5)   (7.32)
whose Boolean function is
f(x1, x2, x3, x4, x5) =
  1, if x1 + x2 = 2
  1, if x2 + x3 = 2
  1, if x3 + x4 = 2
  1, if x4 + x5 = 2
  0, otherwise.
A run of the Boolean System Calculator shows that the truth value of X is 0.7 in uncertain measure.
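The same truth value can be reproduced in a few lines by enumerating all 32 Boolean vectors with the Chen-Ralescu theorem. This sketch is my own and is independent of the Matlab toolbox:

```python
from itertools import product

def truth_value(f, alpha):
    """Truth value of Z = f(X1, ..., Xn) by the Chen-Ralescu theorem."""
    n = len(alpha)
    def nu(i, xi):
        return alpha[i] if xi == 1 else 1 - alpha[i]
    def sup(value):
        # sup over {x : f(x) = value} of min_i nu_i(x_i)
        return max((min(nu(i, x[i]) for i in range(n))
                    for x in product((0, 1), repeat=n) if f(*x) == value),
                   default=0.0)
    s1 = sup(1)
    return s1 if s1 < 0.5 else 1 - sup(0)

f = lambda x1, x2, x3, x4, x5: int((x1 and x2) or (x2 and x3)
                                   or (x3 and x4) or (x4 and x5))
print(truth_value(f, [0.1, 0.3, 0.5, 0.7, 0.9]))  # ~0.7, as reported above
```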
7.5
Bibliographic Notes
Uncertain propositional logic was designed by Li and Liu [91], in which every proposition is abstracted into a Boolean uncertain variable and the truth value is defined as the uncertain measure that the proposition is true. An important contribution is the Chen-Ralescu theorem [11] that provides a numerical method for calculating the truth values of uncertain propositions.
Chapter 8
Uncertain Entailment
Uncertain entailment is a methodology for calculating the truth value of an
uncertain formula via the maximum uncertainty principle when the truth
values of other uncertain formulas are given. In some sense, uncertain propositional logic and uncertain entailment are mutually inverse: the former attempts to compose a complex proposition from simpler ones, while the latter attempts to decompose a complex proposition into simpler ones.
This chapter will present an uncertain entailment model. In addition,
uncertain modus ponens, uncertain modus tollens and uncertain hypothetical
syllogism are deduced from the uncertain entailment model.
8.1
Uncertain Entailment Model
Assume X1, X2, …, Xn are independent uncertain propositions with unknown truth values α1, α2, …, αn, respectively. Also assume that
Yj = fj(X1, X2, …, Xn)   (8.1)
are uncertain propositions with known truth values cj, j = 1, 2, …, m, respectively. Now let
Z = f(X1, X2, …, Xn)   (8.2)
be an additional uncertain proposition. What is the truth value of Z? This is just the uncertain entailment problem. In order to solve it, let us consider what values α1, α2, …, αn may take. The first constraint is
0 ≤ αi ≤ 1,   i = 1, 2, …, n.   (8.3)
The second type of constraint is
T(Yj) = cj,   j = 1, 2, …, m.   (8.4)
By the Chen-Ralescu theorem,
T(Yj) =
  sup_{fj(x1,…,xn)=1} min_{1≤i≤n} νi(xi),   if sup_{fj(x1,…,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
  1 − sup_{fj(x1,…,xn)=0} min_{1≤i≤n} νi(xi),   if sup_{fj(x1,…,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5
(8.5)
for j = 1, 2, …, m, where
νi(xi) = αi, if xi = 1; 1 − αi, if xi = 0   (8.6)
for i = 1, 2, …, n. In addition,
T(Z) =
  sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi),   if sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
  1 − sup_{f(x1,…,xn)=0} min_{1≤i≤n} νi(xi),   if sup_{f(x1,…,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5.
(8.7)
Since the truth values α1, α2, …, αn are not uniquely determined, the truth value T(Z) is not unique either. In this case, we have to use the maximum uncertainty principle to determine the truth value T(Z). That is, T(Z) should be assigned a value as close to 0.5 as possible. In other words, we should minimize the value |T(Z) − 0.5| by choosing appropriate values of α1, α2, …, αn. The uncertain entailment model is thus written by Liu [117] as follows,
min |T(Z) − 0.5|
subject to:
  0 ≤ αi ≤ 1, i = 1, 2, …, n
  T(Yj) = cj, j = 1, 2, …, m
(8.8)
where T(Z) and T(Yj), j = 1, 2, …, m are functions of the unknown truth values α1, α2, …, αn.
Example 8.1: Let A and B be independent uncertain propositions. It is known that
T(A ∨ B) = a,   T(A ∧ B) = b.   (8.9)
What is the truth value of A → B? Denote the truth values of A and B by α1 and α2, respectively, and write
Y1 = A ∨ B,   Y2 = A ∧ B,   Z = A → B.
It is clear that
T(Y1) = α1 ∨ α2 = a,   T(Y2) = α1 ∧ α2 = b,   T(Z) = (1 − α1) ∨ α2.
In this case, the uncertain entailment model (8.8) becomes
min |(1 − α1) ∨ α2 − 0.5|
subject to:
  0 ≤ α1 ≤ 1
  0 ≤ α2 ≤ 1
  α1 ∨ α2 = a
  α1 ∧ α2 = b.
(8.10)
Solving this model yields
T(A → B) =
  1 − a,   if a ≥ b and a + b < 1
  a or b,   if a ≥ b and a + b = 1
  b,   if a ≥ b and a + b > 1
  illness,   if a < b.
(8.11)
8.2
Uncertain Modus Ponens
Uncertain modus ponens was presented by Liu [117]. Let A and B be independent uncertain propositions. Assume A and A → B have truth values a and b, respectively. What is the truth value of B? Denote the truth values of A and B by α1 and α2, respectively, and write
Y1 = A,   Y2 = A → B,   Z = B.
It is clear that
T(Y1) = α1 = a,   T(Y2) = (1 − α1) ∨ α2 = b,   T(Z) = α2.
In this case, the uncertain entailment model (8.8) becomes
min |α2 − 0.5|
subject to:
  0 ≤ α1 ≤ 1
  0 ≤ α2 ≤ 1
  α1 = a
  (1 − α1) ∨ α2 = b.
(8.12)
When a + b > 1, there is a unique feasible solution and then the optimal solution is
α1 = a,   α2 = b.
Thus T(B) = α2 = b. When a + b = 1, the feasible set is {a} × [0, b] and the optimal solution is
α1 = a,   α2 = 0.5 ∧ b.
Thus T(B) = α2 = 0.5 ∧ b. When a + b < 1, there is no feasible solution and the truth values are ill-assigned. In summary, from
T(A) = a,   T(A → B) = b   (8.13)
we entail
T(B) =
  b,   if a + b > 1
  0.5 ∧ b,   if a + b = 1
  illness,   if a + b < 1.
(8.14)
This result coincides with the classical modus ponens that if both A and A → B are true, then B is true.
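The modus ponens solution can be cross-checked by a direct grid search over α2 in model (8.12). The function below is an illustrative sketch (its name, the grid resolution, and the feasibility tolerance are my own choices):

```python
def entail_modus_ponens(a, b, grid=1000):
    """Solve model (8.12): alpha1 = a is fixed, and alpha2 must satisfy
    (1 - alpha1) v alpha2 = b.  Among feasible alpha2, pick the one
    closest to 0.5 (maximum uncertainty principle).
    Returns None when the truth values are ill-assigned."""
    feasible = [k / grid for k in range(grid + 1)
                if abs(max(1 - a, k / grid) - b) < 1e-9]
    if not feasible:
        return None
    return min(feasible, key=lambda t: abs(t - 0.5))

print(entail_modus_ponens(0.9, 0.8))  # 0.8   (a + b > 1: T(B) = b)
print(entail_modus_ponens(0.6, 0.4))  # 0.4   (a + b = 1: T(B) = 0.5 ^ b)
print(entail_modus_ponens(0.3, 0.4))  # None  (a + b < 1: ill-assigned)
```

All three branches of (8.14) are reproduced by the search.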
8.3
Uncertain Modus Tollens
Uncertain modus tollens was presented by Liu [117]. Let A and B be independent uncertain propositions. Assume A → B and B have truth values a and b, respectively. What is the truth value of A? Denote the truth values of A and B by α1 and α2, respectively, and write
Y1 = A → B,   Y2 = B,   Z = A.
It is clear that
T(Y1) = (1 − α1) ∨ α2 = a,   T(Y2) = α2 = b,   T(Z) = α1.
In this case, the uncertain entailment model (8.8) becomes
min |α1 − 0.5|
subject to:
  0 ≤ α1 ≤ 1
  0 ≤ α2 ≤ 1
  (1 − α1) ∨ α2 = a
  α2 = b.
(8.15)
When a > b, there is a unique feasible solution and then the optimal solution is
α1 = 1 − a,   α2 = b.
Thus T(A) = α1 = 1 − a. When a = b, the feasible set is [1 − a, 1] × {b} and the optimal solution is
α1 = (1 − a) ∨ 0.5,   α2 = b.
When a < b, there is no feasible solution and the truth values are ill-assigned. In summary, from
T(A → B) = a,   T(B) = b   (8.16)
we entail
T(A) =
  1 − a,   if a > b
  (1 − a) ∨ 0.5,   if a = b
  illness,   if a < b.
(8.17)
This result coincides with the classical modus tollens that if A → B is true and B is false, then A is false.
8.4
Uncertain Hypothetical Syllogism
Uncertain hypothetical syllogism was presented by Liu [117]. Let A, B, C be independent uncertain propositions. Assume A → B and B → C have truth values a and b, respectively. What is the truth value of A → C? Denote the truth values of A, B, C by α1, α2, α3, respectively, and write
Y1 = A → B,   Y2 = B → C,   Z = A → C.
It is clear that
T(Y1) = (1 − α1) ∨ α2 = a,   T(Y2) = (1 − α2) ∨ α3 = b,   T(Z) = (1 − α1) ∨ α3.
In this case, the uncertain entailment model (8.8) becomes
min |(1 − α1) ∨ α3 − 0.5|
subject to:
  0 ≤ α1 ≤ 1
  0 ≤ α2 ≤ 1
  0 ≤ α3 ≤ 1
  (1 − α1) ∨ α2 = a
  (1 − α2) ∨ α3 = b.
(8.18)
In summary, from
T(A → B) = a,   T(B → C) = b   (8.19)
we entail
T(A → C) =
  a ∧ b,   if a + b ≥ 1 and a ∧ b ≥ 0.5
  0.5,   if a + b ≥ 1 and a ∧ b < 0.5
  illness,   if a + b < 1.
(8.20)
This result coincides with the classical hypothetical syllogism that if both A → B and B → C are true, then A → C is true.
8.5
Bibliographic Notes
Uncertain entailment was proposed by Liu [117] for determining the truth value of an uncertain proposition via the maximum uncertainty principle when the truth values of other uncertain propositions are given. From the uncertain entailment model, Liu [117] also deduced uncertain modus ponens, uncertain modus tollens, and uncertain hypothetical syllogism.
Chapter 9
Uncertain Set
An uncertain set is a set-valued function on an uncertainty space; it attempts to model unsharp concepts that are essentially sets but whose boundaries are not sharply described (because of the ambiguity of human language). Some typical examples include "young", "tall", "warm", and "most".
This chapter will introduce the concepts of uncertain set, membership
function, independence, expected value, variance, entropy, and distance. This
chapter will also introduce the operational law for uncertain sets via membership functions or inverse membership functions, and uncertain statistics
for determining membership functions.
9.1
Uncertain Set
Roughly speaking, an uncertain set is a measurable function from an uncertainty space to a collection of sets. A formal definition is given as follows.
Definition 9.1 (Liu [118]) An uncertain set is a measurable function ξ from an uncertainty space (Γ, L, M) to a collection of sets, i.e., both {B ⊂ ξ} and {ξ ⊂ B} are events for any Borel set B.
Remark 9.1: It is clear that uncertain set (Liu [118]) is very different from
random set (Robbins [184] and Matheron [158]) and fuzzy set (Zadeh [234]).
The essential difference among them is that different measures are used, i.e.,
random set uses probability measure, fuzzy set uses possibility measure and
uncertain set uses uncertain measure.
Example 9.1: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3} with power set L. Then the set-valued function
ξ(γ) = [1, 3], if γ = γ1; [2, 4], if γ = γ2; [3, 5], if γ = γ3   (9.1)
is an uncertain set.
[Figure 9.1: The uncertain set of Example 9.1 over γ1, γ2, γ3]
The union, intersection, and complement of uncertain sets are defined pointwise on Γ. For example, consider the uncertain sets
ξ(γ) = [1, 2], if γ = γ1; [1, 3], if γ = γ2; [1, 4], if γ = γ3,
η(γ) = (2, 3), if γ = γ1; (2, 4), if γ = γ2; (2, 5), if γ = γ3.
Then their union is
(ξ ∪ η)(γ) = [1, 3), if γ = γ1; [1, 4), if γ = γ2; [1, 5), if γ = γ3,
their intersection is
(ξ ∩ η)(γ) = ∅, if γ = γ1; (2, 3], if γ = γ2; (2, 4], if γ = γ3,
and their complements are
ξᶜ(γ) = (−∞, 1) ∪ (2, +∞), if γ = γ1; (−∞, 1) ∪ (3, +∞), if γ = γ2; (−∞, 1) ∪ (4, +∞), if γ = γ3,
ηᶜ(γ) = (−∞, 2] ∪ [3, +∞), if γ = γ1; (−∞, 2] ∪ [4, +∞), if γ = γ2; (−∞, 2] ∪ [5, +∞), if γ = γ3.
Theorem 9.3 Let ξ be an uncertain set and let ℜ be the set of real numbers. Then
ξ ∪ ℜ = ℜ,   ξ ∩ ℜ = ξ.   (9.8)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is
(ξ ∪ ℜ)(γ) = ξ(γ) ∪ ℜ = ℜ.
Thus we have ξ ∪ ℜ = ℜ. In addition, the intersection is
(ξ ∩ ℜ)(γ) = ξ(γ) ∩ ℜ = ξ(γ).
Thus we have ξ ∩ ℜ = ξ.
Theorem 9.4 Let ξ be an uncertain set and let ∅ be the empty set. Then
ξ ∪ ∅ = ξ,   ξ ∩ ∅ = ∅.   (9.9)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is
(ξ ∪ ∅)(γ) = ξ(γ) ∪ ∅ = ξ(γ).
Thus we have ξ ∪ ∅ = ξ. In addition, the intersection is
(ξ ∩ ∅)(γ) = ξ(γ) ∩ ∅ = ∅.
Thus we have ξ ∩ ∅ = ∅.
Theorem 9.5 (Idempotent Law) Let ξ be an uncertain set. Then we have
ξ ∪ ξ = ξ,   ξ ∩ ξ = ξ.   (9.10)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is
(ξ ∪ ξ)(γ) = ξ(γ) ∪ ξ(γ) = ξ(γ).
Thus we have ξ ∪ ξ = ξ. In addition, the intersection is
(ξ ∩ ξ)(γ) = ξ(γ) ∩ ξ(γ) = ξ(γ).
Thus we have ξ ∩ ξ = ξ.
Theorem 9.6 (Double-Negation Law) Let ξ be an uncertain set. Then we have
(ξᶜ)ᶜ = ξ.   (9.11)
Proof: For each γ ∈ Γ, it follows from the definition of complement that
((ξᶜ)ᶜ)(γ) = (ξᶜ(γ))ᶜ = ((ξ(γ))ᶜ)ᶜ = ξ(γ).
Thus we have (ξᶜ)ᶜ = ξ.
Theorem 9.7 (Law of Excluded Middle and Law of Contradiction) Let ξ be an uncertain set and let ξᶜ be its complement. Then
ξ ∪ ξᶜ = ℜ,   ξ ∩ ξᶜ = ∅.   (9.12)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is
(ξ ∪ ξᶜ)(γ) = ξ(γ) ∪ ξᶜ(γ) = ξ(γ) ∪ (ξ(γ))ᶜ = ℜ.
Thus we have ξ ∪ ξᶜ = ℜ. In addition, the intersection is
(ξ ∩ ξᶜ)(γ) = ξ(γ) ∩ ξᶜ(γ) = ξ(γ) ∩ (ξ(γ))ᶜ = ∅.
Thus we have ξ ∩ ξᶜ = ∅.
Theorem 9.9 (Associative Law) Let ξ, η, τ be uncertain sets. Then we have
(ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ),   (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ).   (9.14)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
((ξ ∪ η) ∪ τ)(γ) = (ξ(γ) ∪ η(γ)) ∪ τ(γ) = ξ(γ) ∪ (η(γ) ∪ τ(γ)) = (ξ ∪ (η ∪ τ))(γ).
Thus we have (ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ). In addition, it follows that
((ξ ∩ η) ∩ τ)(γ) = (ξ(γ) ∩ η(γ)) ∩ τ(γ) = ξ(γ) ∩ (η(γ) ∩ τ(γ)) = (ξ ∩ (η ∩ τ))(γ).
Thus we have (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ).
Theorem 9.10 (Distributive Law) Let ξ, η, τ be uncertain sets. Then we have
ξ ∪ (η ∩ τ) = (ξ ∪ η) ∩ (ξ ∪ τ),   ξ ∩ (η ∪ τ) = (ξ ∩ η) ∪ (ξ ∩ τ).   (9.15)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
(ξ ∪ (η ∩ τ))(γ) = ξ(γ) ∪ (η(γ) ∩ τ(γ)) = (ξ(γ) ∪ η(γ)) ∩ (ξ(γ) ∪ τ(γ)) = ((ξ ∪ η) ∩ (ξ ∪ τ))(γ).
Thus we have ξ ∪ (η ∩ τ) = (ξ ∪ η) ∩ (ξ ∪ τ). In addition, it follows that
(ξ ∩ (η ∪ τ))(γ) = ξ(γ) ∩ (η(γ) ∪ τ(γ)) = (ξ(γ) ∩ η(γ)) ∪ (ξ(γ) ∩ τ(γ)) = ((ξ ∩ η) ∪ (ξ ∩ τ))(γ).
Thus we have ξ ∩ (η ∪ τ) = (ξ ∩ η) ∪ (ξ ∩ τ).
Theorem 9.11 (De Morgan's Law) Let ξ and η be uncertain sets. Then we have
(ξ ∪ η)ᶜ = ξᶜ ∩ ηᶜ,   (9.17)
(ξ ∩ η)ᶜ = ξᶜ ∪ ηᶜ.   (9.18)
The arithmetic operations on uncertain sets are also defined pointwise. For example, consider again the uncertain sets
ξ(γ) = [1, 2], if γ = γ1; [1, 3], if γ = γ2; [1, 4], if γ = γ3,
η(γ) = (2, 3), if γ = γ1; (2, 4), if γ = γ2; (2, 5), if γ = γ3.
Then their sum is
(ξ + η)(γ) = (3, 5), if γ = γ1; (3, 7), if γ = γ2; (3, 9), if γ = γ3,
and their product is
(ξ × η)(γ) = (2, 6), if γ = γ1; (2, 12), if γ = γ2; (2, 20), if γ = γ3.
9.2
Membership Function
An uncertain set ξ is said to have a membership function μ if for any Borel set B of real numbers, we have
M{B ⊂ ξ} = inf_{x∈B} μ(x),   (9.20)
M{ξ ⊂ B} = 1 − sup_{x∈Bᶜ} μ(x).   (9.21)
[Figures: illustration of M{B ⊂ ξ} = inf_{x∈B} μ(x) and M{ξ ⊂ B} = 1 − sup_{x∈Bᶜ} μ(x)]
Remark 9.2: It is not true that every uncertain set has a membership function. For example, the uncertain set
ξ = [1, 3] with uncertain measure 0.6; [0, 2] with uncertain measure 0.4   (9.22)
has no membership function.
Remark 9.4: The value of μ(x) represents the membership degree that x belongs to the uncertain set ξ. If μ(x) = 1, then x completely belongs to ξ; if μ(x) = 0, then x does not belong to ξ at all. Thus the larger the value of μ(x) is, the more truly x belongs to ξ.
Remark 9.5: If an element x belongs to an uncertain set with membership degree μ, then x does not belong to the uncertain set with membership degree 1 − μ. This fact follows from the duality property of uncertain measure. In other words, if the uncertain set ξ has a membership function μ, then for any real number x, we have M{x ∉ ξ} = 1 − M{x ∈ ξ} = 1 − μ(x). That is,
M{x ∉ ξ} = 1 − μ(x).   (9.24)
Remark 9.6: Note that membership functions may be defined not only for uncertain sets but also for fuzzy sets and random sets. If the membership function is denoted by μ(x), then the membership degree of x belonging to an uncertain set is μ(x) in uncertain measure; the membership degree of x belonging to a fuzzy set is μ(x) in possibility measure; and the membership degree of x belonging to a random set is μ(x) in probability measure.
Example 9.6: Let us take an uncertainty space (Γ, L, M) to be [0, 1] with M{[0, γ]} = γ for each γ ∈ [0, 1]. Then the uncertain set
ξ(γ) = [ −√(1 − γ), √(1 − γ) ]   (9.25)
has a membership function
μ(x) = 1 − x², if x ∈ [−1, 1]; 0, otherwise.   (9.26)
Example 9.7: The set ℜ of real numbers is a special uncertain set ξ(γ) ≡ ℜ. Such an uncertain set has a membership function
μ(x) ≡ 1,   x ∈ ℜ.   (9.27)
Example 9.8: The empty set ∅ is a special uncertain set ξ(γ) ≡ ∅. Such an uncertain set has a membership function
μ(x) ≡ 0,   x ∈ ℜ.   (9.28)
There also exist uncertain sets that take values either a singleton {c} or the empty set ∅. This fact states that uncertainty may exist even when there is a single element in the universe.
Example 9.11: By a rectangular uncertain set we mean the uncertain set fully determined by the pair (a, b) of crisp numbers with a < b, whose membership function is
μ(x) = 1, if a ≤ x ≤ b; 0, otherwise.
Example 9.12: By a triangular uncertain set we mean the uncertain set fully determined by the triplet (a, b, c) of crisp numbers with a < b < c, whose membership function is
μ(x) = (x − a)/(b − a), if a ≤ x ≤ b; (x − c)/(b − c), if b ≤ x ≤ c.
Example 9.13: By a trapezoidal uncertain set we mean the uncertain set fully determined by the quadruplet (a, b, c, d) of crisp numbers with a < b < c < d, whose membership function is
μ(x) = (x − a)/(b − a), if a ≤ x ≤ b; 1, if b ≤ x ≤ c; (x − d)/(c − d), if c ≤ x ≤ d.
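These families are piecewise linear and easy to code. A sketch for the trapezoidal case (the function name is mine); the triangular set is obtained as the special case b = c:

```python
def trapezoidal(x, a, b, c, d):
    """Membership function of the trapezoidal uncertain set (a, b, c, d).
    The triangular uncertain set (a, b, c) is the special case b == c."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)   # same as (x - d)/(c - d)

print(trapezoidal(2.5, 1, 2, 3, 4))  # 1.0 on the plateau
print(trapezoidal(1.5, 1, 2, 3, 4))  # 0.5 on the rising edge
print(trapezoidal(3.5, 1, 2, 3, 4))  # 0.5 on the falling edge
```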
[Figures: rectangular, triangular, and trapezoidal membership functions over a, b, c, d]
What is "young"?
Sometimes we say those students are "young". What ages can be considered "young"? In this case, "young" may be regarded as an uncertain set whose membership function is
$$\mu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/5, & \text{if } 15 \le x \le 20 \\ 1, & \text{if } 20 \le x \le 35 \\ (45-x)/10, & \text{if } 35 \le x \le 45 \\ 0, & \text{if } x \ge 45. \end{cases} \tag{9.32}$$
Note that we do not say young if the age is below 15.
[Figure: membership function μ(x) of young, with breakpoints 15yr, 20yr, 35yr, 45yr.]
What is tall?
Sometimes we say those sportsmen are tall. What heights (centimeters)
can be considered tall? In this case, tall may be regarded as an uncertain set whose membership function is

$$\mu(x) = \begin{cases} 0, & \text{if } x \le 180 \\ (x-180)/5, & \text{if } 180 \le x \le 185 \\ 1, & \text{if } 185 \le x \le 195 \\ (200-x)/5, & \text{if } 195 \le x \le 200 \\ 0, & \text{if } x \ge 200. \end{cases} \tag{9.33}$$

[Figure: membership function μ(x) of tall, with breakpoints 180cm, 185cm, 195cm, 200cm.]
What is warm?
Sometimes we say those days are warm. What temperatures can be considered warm? In this case, warm may be regarded as an uncertain set
whose membership function is
$$\mu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/3, & \text{if } 15 \le x \le 18 \\ 1, & \text{if } 18 \le x \le 24 \\ (28-x)/4, & \text{if } 24 \le x \le 28 \\ 0, & \text{if } x \ge 28. \end{cases} \tag{9.34}$$
What is most?
Sometimes we say most students are boys. What percentages can be considered most? In this case, most may be regarded as an uncertain set whose membership function is

$$\mu(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.7 \\ 20(x-0.7), & \text{if } 0.7 \le x \le 0.75 \\ 1, & \text{if } 0.75 \le x \le 0.85 \\ 20(0.9-x), & \text{if } 0.85 \le x \le 0.9 \\ 0, & \text{if } 0.9 \le x \le 1. \end{cases} \tag{9.35}$$
[Figure: membership function μ(x) of most, with breakpoints 70%, 75%, 85%, 90%.]
Proof: Since the membership function μ exists, it follows from the measure inversion formula that

$$M\{\xi = \emptyset\} = 1 - \sup_{x} \mu(x).$$
[Figure: a membership function μ(x) with a level α and the corresponding set μ⁻¹(α).]
$$M\{\mu^{-1}(\alpha) \subset \xi\} \ge \alpha, \tag{9.49}$$

$$M\{\xi \subset \mu^{-1}(\alpha)\} \ge 1 - \alpha. \tag{9.50}$$

Proof: For each x ∈ μ⁻¹(α), we have μ(x) ≥ α. It follows from the measure inversion formula that

$$M\{\mu^{-1}(\alpha) \subset \xi\} = \inf_{x \in \mu^{-1}(\alpha)} \mu(x) \ge \alpha.$$

For each x ∉ μ⁻¹(α), we have μ(x) < α. It follows from the measure inversion formula that

$$M\{\xi \subset \mu^{-1}(\alpha)\} = 1 - \sup_{x \notin \mu^{-1}(\alpha)} \mu(x) \ge 1 - \alpha.$$

The theorem is thus proved.
The function

$$\mu_l^{-1}(\alpha) = \inf\{x \mid \mu(x) \ge \alpha\} \tag{9.51}$$

is called the left inverse membership function, and the function

$$\mu_r^{-1}(\alpha) = \sup\{x \mid \mu(x) \ge \alpha\} \tag{9.52}$$
is called the right inverse membership function. It is clear that the left inverse membership function μ_l⁻¹(α) is increasing, and the right inverse membership function μ_r⁻¹(α) is decreasing with respect to α.

Conversely, suppose an uncertain set ξ has a left inverse membership function μ_l⁻¹(α) and a right inverse membership function μ_r⁻¹(α). Then the membership function μ is determined by

$$\mu(x) = \begin{cases} 0, & \text{if } x \le \mu_l^{-1}(0) \\ \alpha, & \text{if } \mu_l^{-1}(0) \le x \le \mu_l^{-1}(1) \text{ and } \mu_l^{-1}(\alpha) = x \\ 1, & \text{if } \mu_l^{-1}(1) \le x \le \mu_r^{-1}(1) \\ \alpha, & \text{if } \mu_r^{-1}(1) \le x \le \mu_r^{-1}(0) \text{ and } \mu_r^{-1}(\alpha) = x \\ 0, & \text{if } x \ge \mu_r^{-1}(0). \end{cases} \tag{9.53}$$

Note that the values of α may not be unique. In this case, we will take the maximum values.
9.3
Independence
$$M\left\{\bigcap_{i=1}^n (\xi_i^* \subset B_i)\right\} = \bigwedge_{i=1}^n M\{\xi_i^* \subset B_i\} \tag{9.54}$$

and

$$M\left\{\bigcup_{i=1}^n (\xi_i^* \subset B_i)\right\} = \bigvee_{i=1}^n M\{\xi_i^* \subset B_i\} \tag{9.55}$$

where ξᵢ* are arbitrarily chosen uncertain sets from {ξᵢ, ξᵢᶜ}, i = 1, 2, ..., n, respectively. Note that (9.54) represents 2ⁿ equations. For example, when n = 2, the four equations are

$$M\{(\xi_1 \subset B_1) \cap (\xi_2 \subset B_2)\} = M\{\xi_1 \subset B_1\} \wedge M\{\xi_2 \subset B_2\},$$
$$M\{(\xi_1^c \subset B_1) \cap (\xi_2 \subset B_2)\} = M\{\xi_1^c \subset B_1\} \wedge M\{\xi_2 \subset B_2\},$$
$$M\{(\xi_1 \subset B_1) \cap (\xi_2^c \subset B_2)\} = M\{\xi_1 \subset B_1\} \wedge M\{\xi_2^c \subset B_2\},$$
$$M\{(\xi_1^c \subset B_1) \cap (\xi_2^c \subset B_2)\} = M\{\xi_1^c \subset B_1\} \wedge M\{\xi_2^c \subset B_2\}.$$
Also note that (9.55) represents another 2ⁿ equations. For example, when n = 2, the four equations are

$$M\{(\xi_1 \subset B_1) \cup (\xi_2 \subset B_2)\} = M\{\xi_1 \subset B_1\} \vee M\{\xi_2 \subset B_2\},$$
$$M\{(\xi_1^c \subset B_1) \cup (\xi_2 \subset B_2)\} = M\{\xi_1^c \subset B_1\} \vee M\{\xi_2 \subset B_2\},$$
$$M\{(\xi_1 \subset B_1) \cup (\xi_2^c \subset B_2)\} = M\{\xi_1 \subset B_1\} \vee M\{\xi_2^c \subset B_2\},$$
$$M\{(\xi_1^c \subset B_1) \cup (\xi_2^c \subset B_2)\} = M\{\xi_1^c \subset B_1\} \vee M\{\xi_2^c \subset B_2\}.$$
Theorem 9.18 Let ξ₁, ξ₂, ..., ξₙ be uncertain sets, and let ξᵢ* be arbitrarily chosen uncertain sets from {ξᵢ, ξᵢᶜ}, i = 1, 2, ..., n, respectively. Then ξ₁, ξ₂, ..., ξₙ are independent if and only if ξ₁*, ξ₂*, ..., ξₙ* are independent.

Proof: Since ξᵢ* are arbitrarily chosen uncertain sets from {ξᵢ, ξᵢᶜ}, i = 1, 2, ..., n, the collections ξ₁, ξ₂, ..., ξₙ and ξ₁*, ξ₂*, ..., ξₙ* represent the same 2ⁿ combinations of events. This fact implies that (9.54) and (9.55) are equivalent to

$$M\left\{\bigcap_{i=1}^n (\xi_i^* \subset B_i)\right\} = \bigwedge_{i=1}^n M\{\xi_i^* \subset B_i\}, \tag{9.56}$$

$$M\left\{\bigcup_{i=1}^n (\xi_i^* \subset B_i)\right\} = \bigvee_{i=1}^n M\{\xi_i^* \subset B_i\}. \tag{9.57}$$

Hence ξ₁, ξ₂, ..., ξₙ are independent if and only if ξ₁*, ξ₂*, ..., ξₙ* are independent. The theorem is thus proved.
Theorem 9.19 The uncertain sets ξ₁, ξ₂, ..., ξₙ are independent if and only if for any Borel sets B₁, B₂, ..., Bₙ, we have

$$M\left\{\bigcap_{i=1}^n (\xi_i \not\subset B_i)\right\} = \bigwedge_{i=1}^n M\{\xi_i \not\subset B_i\} \tag{9.58}$$

and

$$M\left\{\bigcup_{i=1}^n (\xi_i \not\subset B_i)\right\} = \bigvee_{i=1}^n M\{\xi_i \not\subset B_i\}. \tag{9.59}$$

Proof: It follows from the duality of uncertain measure that

$$M\left\{\bigcap_{i=1}^n (\xi_i \not\subset B_i)\right\} = 1 - M\left\{\bigcup_{i=1}^n (\xi_i \subset B_i)\right\}, \tag{9.60}$$

$$\bigwedge_{i=1}^n M\{\xi_i \not\subset B_i\} = 1 - \bigvee_{i=1}^n M\{\xi_i \subset B_i\}, \tag{9.61}$$

$$M\left\{\bigcup_{i=1}^n (\xi_i \not\subset B_i)\right\} = 1 - M\left\{\bigcap_{i=1}^n (\xi_i \subset B_i)\right\}, \tag{9.62}$$

$$\bigvee_{i=1}^n M\{\xi_i \not\subset B_i\} = 1 - \bigwedge_{i=1}^n M\{\xi_i \subset B_i\}. \tag{9.63}$$

It follows from (9.60), (9.61), (9.62) and (9.63) that (9.58) and (9.59) are valid if and only if

$$M\left\{\bigcap_{i=1}^n (\xi_i \subset B_i)\right\} = \bigwedge_{i=1}^n M\{\xi_i \subset B_i\}, \tag{9.64}$$

$$M\left\{\bigcup_{i=1}^n (\xi_i \subset B_i)\right\} = \bigvee_{i=1}^n M\{\xi_i \subset B_i\}. \tag{9.65}$$

The above two equations are also equivalent to the independence of the uncertain sets ξ₁, ξ₂, ..., ξₙ. The theorem is thus proved.
Theorem 9.20 The uncertain sets ξ₁, ξ₂, ..., ξₙ are independent if and only if for any Borel sets B₁, B₂, ..., Bₙ, we have

$$M\left\{\bigcap_{i=1}^n (B_i \subset \xi_i)\right\} = \bigwedge_{i=1}^n M\{B_i \subset \xi_i\} \tag{9.66}$$

and

$$M\left\{\bigcup_{i=1}^n (B_i \subset \xi_i)\right\} = \bigvee_{i=1}^n M\{B_i \subset \xi_i\}. \tag{9.67}$$

Proof: Since the event (Bᵢ ⊂ ξᵢ) is identical to the event (ξᵢᶜ ⊂ Bᵢᶜ), we have

$$M\left\{\bigcap_{i=1}^n (B_i \subset \xi_i)\right\} = M\left\{\bigcap_{i=1}^n (\xi_i^c \subset B_i^c)\right\}, \tag{9.68}$$

$$\bigwedge_{i=1}^n M\{B_i \subset \xi_i\} = \bigwedge_{i=1}^n M\{\xi_i^c \subset B_i^c\}, \tag{9.69}$$

$$M\left\{\bigcup_{i=1}^n (B_i \subset \xi_i)\right\} = M\left\{\bigcup_{i=1}^n (\xi_i^c \subset B_i^c)\right\}, \tag{9.70}$$

$$\bigvee_{i=1}^n M\{B_i \subset \xi_i\} = \bigvee_{i=1}^n M\{\xi_i^c \subset B_i^c\}. \tag{9.71}$$

It follows from (9.68), (9.69), (9.70) and (9.71) that (9.66) and (9.67) are valid if and only if

$$M\left\{\bigcap_{i=1}^n (\xi_i^c \subset B_i^c)\right\} = \bigwedge_{i=1}^n M\{\xi_i^c \subset B_i^c\}, \tag{9.72}$$

$$M\left\{\bigcup_{i=1}^n (\xi_i^c \subset B_i^c)\right\} = \bigvee_{i=1}^n M\{\xi_i^c \subset B_i^c\}. \tag{9.73}$$

The above two equations are also equivalent to the independence of the uncertain sets ξ₁, ξ₂, ..., ξₙ. The theorem is thus proved.
Theorem 9.21 The uncertain sets ξ₁, ξ₂, ..., ξₙ are independent if and only if for any Borel sets B₁, B₂, ..., Bₙ, we have

$$M\left\{\bigcap_{i=1}^n (B_i \not\subset \xi_i)\right\} = \bigwedge_{i=1}^n M\{B_i \not\subset \xi_i\} \tag{9.74}$$

and

$$M\left\{\bigcup_{i=1}^n (B_i \not\subset \xi_i)\right\} = \bigvee_{i=1}^n M\{B_i \not\subset \xi_i\}. \tag{9.75}$$

Proof: It follows from the duality of uncertain measure that

$$M\left\{\bigcap_{i=1}^n (B_i \not\subset \xi_i)\right\} = 1 - M\left\{\bigcup_{i=1}^n (B_i \subset \xi_i)\right\}, \tag{9.76}$$

$$\bigwedge_{i=1}^n M\{B_i \not\subset \xi_i\} = 1 - \bigvee_{i=1}^n M\{B_i \subset \xi_i\}, \tag{9.77}$$

$$M\left\{\bigcup_{i=1}^n (B_i \not\subset \xi_i)\right\} = 1 - M\left\{\bigcap_{i=1}^n (B_i \subset \xi_i)\right\}, \tag{9.78}$$

$$\bigvee_{i=1}^n M\{B_i \not\subset \xi_i\} = 1 - \bigwedge_{i=1}^n M\{B_i \subset \xi_i\}. \tag{9.79}$$

It follows from (9.76), (9.77), (9.78) and (9.79) that (9.74) and (9.75) are valid if and only if

$$M\left\{\bigcap_{i=1}^n (B_i \subset \xi_i)\right\} = \bigwedge_{i=1}^n M\{B_i \subset \xi_i\}, \tag{9.80}$$

$$M\left\{\bigcup_{i=1}^n (B_i \subset \xi_i)\right\} = \bigvee_{i=1}^n M\{B_i \subset \xi_i\}. \tag{9.81}$$

The above two equations are also equivalent to the independence of the uncertain sets ξ₁, ξ₂, ..., ξₙ. The theorem is thus proved.
9.4
Set Operational Law
This section will discuss the union, intersection and complement of independent uncertain sets via membership functions.
Union of Uncertain Sets
Theorem 9.22 (Liu [124]) Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. Then their union ξ ∪ η has a membership function

$$\lambda(x) = \mu(x) \vee \nu(x). \tag{9.82}$$

Proof: In order to prove that λ is a membership function of ξ ∪ η, we must verify the two measure inversion formulas. Let B be any Borel set, and write

$$\alpha = \inf_{x \in B} \mu(x) \vee \nu(x).$$

Then it can be verified from the independence of ξ and η that

$$M\{B \subset \xi \cup \eta\} = \inf_{x \in B} \mu(x) \vee \nu(x).$$

The first measure inversion formula is verified. Next we prove the second measure inversion formula. By the independence of ξ and η, we have

$$M\{(\xi \cup \eta) \subset B\} = M\{(\xi \subset B) \cap (\eta \subset B)\} = M\{\xi \subset B\} \wedge M\{\eta \subset B\}$$
$$= \left(1 - \sup_{x \in B^c} \mu(x)\right) \wedge \left(1 - \sup_{x \in B^c} \nu(x)\right).$$

That is,

$$M\{(\xi \cup \eta) \subset B\} = 1 - \sup_{x \in B^c} \mu(x) \vee \nu(x). \tag{9.86}$$
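The pointwise-maximum law (9.82) is easy to exercise numerically. A minimal sketch, with two illustrative triangular membership functions of my own choosing (not from the text):

```python
def mu(x):
    # triangular membership function (0, 1, 2)
    if 0 <= x <= 1:
        return x
    if 1 <= x <= 2:
        return 2 - x
    return 0.0

def nu(x):
    # triangular membership function (1, 2, 3)
    if 1 <= x <= 2:
        return x - 1
    if 2 <= x <= 3:
        return 3 - x
    return 0.0

def union_membership(x):
    # by (9.82), the union of independent uncertain sets has
    # membership equal to the pointwise maximum
    return max(mu(x), nu(x))
```

At x = 1 the union inherits mu's peak value 1, while at x = 2.5 it follows nu's falling branch.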
[Figure: membership functions μ(x) and ν(x) together with the membership function μ ∨ ν of the union ξ ∪ η.]
Theorem 9.23 (Liu [124]) Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. Then their intersection ξ ∩ η has a membership function

$$\lambda(x) = \mu(x) \wedge \nu(x). \tag{9.87}$$

Proof: Let B be any Borel set. By the independence of ξ and η, we have

$$M\{B \subset (\xi \cap \eta)\} = M\{(B \subset \xi) \cap (B \subset \eta)\} = M\{B \subset \xi\} \wedge M\{B \subset \eta\} = \inf_{x \in B} \mu(x) \wedge \inf_{x \in B} \nu(x).$$

That is,

$$M\{B \subset (\xi \cap \eta)\} = \inf_{x \in B} \mu(x) \wedge \nu(x). \tag{9.88}$$

The first measure inversion formula is verified. In order to prove the second measure inversion formula, we write

$$\beta = \sup_{x \in B^c} \mu(x) \wedge \nu(x).$$

Then it can be verified from the independence of ξ and η that

$$M\{(\xi \cap \eta) \subset B\} = 1 - \sup_{x \in B^c} \mu(x) \wedge \nu(x). \tag{9.91}$$

The second measure inversion formula is verified, and the theorem is proved.

[Figure: membership functions μ(x) and ν(x) together with the membership function μ ∧ ν of the intersection ξ ∩ η.]

Theorem 9.24 (Liu [124]) Let ξ be an uncertain set with membership function μ. Then its complement ξᶜ has a membership function

$$\lambda(x) = 1 - \mu(x). \tag{9.92}$$

Proof: Let B be any Borel set. It follows from the definition of complement and the measure inversion formulas that

$$M\{B \subset \xi^c\} = M\{\xi \subset B^c\} = 1 - \sup_{x \in (B^c)^c} \mu(x) = \inf_{x \in B}\, (1 - \mu(x)),$$

$$M\{\xi^c \subset B\} = M\{B^c \subset \xi\} = \inf_{x \in B^c} \mu(x) = 1 - \sup_{x \in B^c}\, (1 - \mu(x)).$$

Hence 1 − μ is the membership function of ξᶜ. The theorem is thus proved.

[Figure: a membership function μ(x) and the membership function 1 − μ(x) of the complement ξᶜ.]
9.5
Arithmetic Operational Law
This section will present an arithmetic operational law of independent uncertain sets via inverse membership functions, including addition, subtraction, multiplication and division.
Theorem 9.25 (Liu [124]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain sets with inverse membership functions μ₁⁻¹, μ₂⁻¹, ..., μₙ⁻¹, respectively. If f is a measurable function, then

$$\xi = f(\xi_1, \xi_2, \ldots, \xi_n) \tag{9.93}$$

has an inverse membership function

$$\mu^{-1}(\alpha) = f(\mu_1^{-1}(\alpha), \mu_2^{-1}(\alpha), \ldots, \mu_n^{-1}(\alpha)). \tag{9.94}$$
Proof: For simplicity, we only prove the case n = 2. Let B be any Borel set, and write

$$\alpha = \inf_{x \in B} \mu(x).$$

For any given number ε > 0, note that μ⁻¹(α − ε) = f(μ₁⁻¹(α − ε), μ₂⁻¹(α − ε)). By the independence of ξ₁ and ξ₂,

$$M\{B \subset \xi\} \ge M\{\mu_1^{-1}(\alpha-\varepsilon) \subset \xi_1\} \wedge M\{\mu_2^{-1}(\alpha-\varepsilon) \subset \xi_2\} \ge \alpha - \varepsilon.$$

Letting ε → 0, we get

$$M\{B \subset \xi\} \ge \inf_{x \in B} \mu(x). \tag{9.95}$$

A symmetric argument based on the duality of uncertain measure yields M{B ⊂ ξ} ≤ α + ε for any ε > 0, so letting ε → 0, we get

$$M\{B \subset \xi\} = \inf_{x \in B} \mu(x). \tag{9.97}$$

The first measure inversion formula is verified. In order to prove the second measure inversion formula, we write

$$\beta = \sup_{x \in B^c} \mu(x).$$

Then for any given number ε > 0, we have μ⁻¹(β + ε) ⊂ B. Please note that μ⁻¹(β + ε) = f(μ₁⁻¹(β + ε), μ₂⁻¹(β + ε)). By the independence of ξ₁ and ξ₂, we obtain

$$M\{\xi \subset B\} \ge M\{\xi \subset \mu^{-1}(\beta+\varepsilon)\} \ge M\{\xi_1 \subset \mu_1^{-1}(\beta+\varepsilon)\} \wedge M\{\xi_2 \subset \mu_2^{-1}(\beta+\varepsilon)\} \ge (1-\beta-\varepsilon) \wedge (1-\beta-\varepsilon) = 1-\beta-\varepsilon.$$

Letting ε → 0, we get

$$M\{\xi \subset B\} \ge 1 - \sup_{x \in B^c} \mu(x). \tag{9.98}$$

On the other hand, the duality of uncertain measure gives M{ξ ⊂ B} = 1 − M{ξ ⊄ B} ≤ 1 − β + ε, so letting ε → 0, we get

$$M\{\xi \subset B\} = 1 - \sup_{x \in B^c} \mu(x). \tag{9.100}$$

The theorem is thus proved.

Example 9.17: Let ξ = (a₁, a₂, a₃) and η = (b₁, b₂, b₃) be two independent triangular uncertain sets.
It follows from the operational law that the sum ξ + η has an inverse membership function

$$\lambda^{-1}(\alpha) = [(1-\alpha)(a_1+b_1) + \alpha(a_2+b_2),\ \alpha(a_2+b_2) + (1-\alpha)(a_3+b_3)]. \tag{9.103}$$

In other words, the sum ξ + η is also a triangular uncertain set, and

$$\xi + \eta = (a_1+b_1,\ a_2+b_2,\ a_3+b_3). \tag{9.104}$$
Example 9.18: Let ξ = (a₁, a₂, a₃) and η = (b₁, b₂, b₃) be two independent triangular uncertain sets. It follows from the operational law that the difference ξ − η has an inverse membership function

$$\lambda^{-1}(\alpha) = [(1-\alpha)(a_1-b_3) + \alpha(a_2-b_2),\ \alpha(a_2-b_2) + (1-\alpha)(a_3-b_1)]. \tag{9.105}$$

In other words, the difference ξ − η is also a triangular uncertain set, and

$$\xi - \eta = (a_1-b_3,\ a_2-b_2,\ a_3-b_1). \tag{9.106}$$
Example 9.19: Let ξ = (a₁, a₂, a₃) be a triangular uncertain set, and let k be a real number. When k ≥ 0, the product kξ has an inverse membership function

$$\lambda^{-1}(\alpha) = [(1-\alpha)(ka_1) + \alpha(ka_2),\ \alpha(ka_2) + (1-\alpha)(ka_3)]. \tag{9.107}$$

That is, the product kξ is a triangular uncertain set (ka₁, ka₂, ka₃). When k < 0, the product kξ has an inverse membership function

$$\lambda^{-1}(\alpha) = [(1-\alpha)(ka_3) + \alpha(ka_2),\ \alpha(ka_2) + (1-\alpha)(ka_1)]. \tag{9.108}$$

That is, the product kξ is a triangular uncertain set (ka₃, ka₂, ka₁). In summary, we have

$$k\xi = \begin{cases} (ka_1, ka_2, ka_3), & \text{if } k \ge 0 \\ (ka_3, ka_2, ka_1), & \text{if } k < 0. \end{cases} \tag{9.109}$$
Exercise 9.2: Let ξ = (a₁, a₂, a₃, a₄) and η = (b₁, b₂, b₃, b₄) be two independent trapezoidal uncertain sets, and let k be a real number. Show that

$$\xi + \eta = (a_1+b_1,\ a_2+b_2,\ a_3+b_3,\ a_4+b_4), \tag{9.110}$$

$$\xi - \eta = (a_1-b_4,\ a_2-b_3,\ a_3-b_2,\ a_4-b_1), \tag{9.111}$$

$$k\xi = \begin{cases} (ka_1, ka_2, ka_3, ka_4), & \text{if } k \ge 0 \\ (ka_4, ka_3, ka_2, ka_1), & \text{if } k < 0. \end{cases} \tag{9.112}$$
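Formulas (9.103) and (9.104) can be checked at a single level α: the interval sum of the two inverse membership functions must coincide with the inverse membership function of the triangular set (a₁+b₁, a₂+b₂, a₃+b₃). A minimal sketch; the helper names and the concrete triplets are my own:

```python
def tri_inverse(a1, a2, a3, alpha):
    """Inverse membership function of a triangular uncertain set (a1, a2, a3):
    the interval [(1-alpha)*a1 + alpha*a2, alpha*a2 + (1-alpha)*a3]."""
    return ((1 - alpha) * a1 + alpha * a2, alpha * a2 + (1 - alpha) * a3)

def interval_add(u, v):
    # interval arithmetic: endpoint-wise addition
    return (u[0] + v[0], u[1] + v[1])

xi, eta = (1, 2, 4), (0, 1, 3)
alpha = 0.5
# operational law: the inverse membership function of xi + eta ...
lhs = interval_add(tri_inverse(*xi, alpha), tri_inverse(*eta, alpha))
# ... equals that of the triangular set with summed parameters
rhs = tri_inverse(xi[0] + eta[0], xi[1] + eta[1], xi[2] + eta[2], alpha)
```

Both sides give the interval (2.0, 5.0) at α = 0.5.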
Theorem 9.26 Let ξ₁, ξ₂, ..., ξₙ be independent uncertain sets with regular membership functions μ₁, μ₂, ..., μₙ, respectively. If f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then ξ = f(ξ₁, ξ₂, ..., ξₙ) has left and right inverse membership functions

$$\lambda_l^{-1}(\alpha) = f(\mu_{1l}^{-1}(\alpha), \ldots, \mu_{ml}^{-1}(\alpha), \mu_{m+1,r}^{-1}(\alpha), \ldots, \mu_{nr}^{-1}(\alpha)), \tag{9.114}$$

$$\lambda_r^{-1}(\alpha) = f(\mu_{1r}^{-1}(\alpha), \ldots, \mu_{mr}^{-1}(\alpha), \mu_{m+1,l}^{-1}(\alpha), \ldots, \mu_{nl}^{-1}(\alpha)), \tag{9.115}$$

where λ_l⁻¹, μ_{1l}⁻¹, μ_{2l}⁻¹, ..., μ_{nl}⁻¹ are left inverse membership functions, and λ_r⁻¹, μ_{1r}⁻¹, μ_{2r}⁻¹, ..., μ_{nr}⁻¹ are right inverse membership functions of ξ, ξ₁, ξ₂, ..., ξₙ, respectively.

Proof: Note that μ₁⁻¹(α), μ₂⁻¹(α), ..., μₙ⁻¹(α) are intervals for each α. Since f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, the value

$$\mu^{-1}(\alpha) = f(\mu_1^{-1}(\alpha), \ldots, \mu_m^{-1}(\alpha), \mu_{m+1}^{-1}(\alpha), \ldots, \mu_n^{-1}(\alpha))$$

is also an interval. Thus ξ has a regular membership function, and its left and right inverse membership functions are determined by (9.114) and (9.115), respectively.
Exercise 9.3: Let ξ and η be independent uncertain sets with left inverse membership functions μ_l⁻¹ and ν_l⁻¹ and right inverse membership functions μ_r⁻¹ and ν_r⁻¹, respectively. Show that the sum ξ + η is an uncertain set with left and right inverse membership functions

$$\lambda_l^{-1}(\alpha) = \mu_l^{-1}(\alpha) + \nu_l^{-1}(\alpha), \tag{9.116}$$

$$\lambda_r^{-1}(\alpha) = \mu_r^{-1}(\alpha) + \nu_r^{-1}(\alpha). \tag{9.117}$$
Exercise 9.4: Let ξ and η be independent uncertain sets with left inverse membership functions μ_l⁻¹ and ν_l⁻¹ and right inverse membership functions μ_r⁻¹ and ν_r⁻¹, respectively. Show that the difference ξ − η is an uncertain set with left and right inverse membership functions

$$\lambda_l^{-1}(\alpha) = \mu_l^{-1}(\alpha) - \nu_r^{-1}(\alpha), \tag{9.118}$$

$$\lambda_r^{-1}(\alpha) = \mu_r^{-1}(\alpha) - \nu_l^{-1}(\alpha). \tag{9.119}$$
Exercise 9.5: Let ξ and η be independent and positive uncertain sets with left inverse membership functions μ_l⁻¹ and ν_l⁻¹ and right inverse membership functions μ_r⁻¹ and ν_r⁻¹, respectively. Show that the quotient ξ/η is an uncertain set with left and right inverse membership functions

$$\lambda_l^{-1}(\alpha) = \frac{\mu_l^{-1}(\alpha)}{\nu_r^{-1}(\alpha)}, \qquad \lambda_r^{-1}(\alpha) = \frac{\mu_r^{-1}(\alpha)}{\nu_l^{-1}(\alpha)}. \tag{9.120}$$

Exercise 9.6: Let ξ and η be independent and positive uncertain sets with left inverse membership functions μ_l⁻¹ and ν_l⁻¹ and right inverse membership functions μ_r⁻¹ and ν_r⁻¹, respectively. Show that ξ/(ξ + η) is an uncertain set with left and right inverse membership functions

$$\lambda_l^{-1}(\alpha) = \frac{\mu_l^{-1}(\alpha)}{\mu_l^{-1}(\alpha) + \nu_r^{-1}(\alpha)}, \tag{9.121}$$

$$\lambda_r^{-1}(\alpha) = \frac{\mu_r^{-1}(\alpha)}{\mu_r^{-1}(\alpha) + \nu_l^{-1}(\alpha)}. \tag{9.122}$$

9.6
Expected Value
Definition 9.8 (Liu [118]) Let ξ be an uncertain set. Then the expected value of ξ is defined by

$$E[\xi] = \int_0^{+\infty} M\{\xi \ge r\}\, dr - \int_{-\infty}^0 M\{\xi \le r\}\, dr \tag{9.123}$$

provided that at least one of the two integrals is finite.
Suppose

$$M\{\xi \ge r\} = \begin{cases} 1, & \text{if } r \le 1 \\ 0.7, & \text{if } 1 < r \le 2 \\ 0.3, & \text{if } 2 < r \le 3 \\ 0.1, & \text{if } 3 < r \le 4 \\ 0, & \text{if } r > 4 \end{cases} \tag{9.125}$$

and M{ξ ≤ r} ≡ 0 for r ≤ 0. Thus

$$E[\xi] = \int_0^1 1\, dr + \int_1^2 0.7\, dr + \int_2^3 0.3\, dr + \int_3^4 0.1\, dr = 2.1.$$
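The value E[ξ] = 2.1 can be reproduced by discretizing the integral in the definition of expected value (M{ξ ≤ r} vanishes for r ≤ 0, so only the first integral contributes). A minimal sketch restating the example's step function:

```python
def measure_ge(r):
    # the step function M{xi >= r} from the example
    if r <= 1:
        return 1.0
    if r <= 2:
        return 0.7
    if r <= 3:
        return 0.3
    if r <= 4:
        return 0.1
    return 0.0

n = 4000
h = 4.0 / n  # integrate r over [0, 4]; the integrand vanishes beyond 4
expected = sum(measure_ge((i + 0.5) * h) for i in range(n)) * h
# expected is 2.1 up to discretization error
```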
Proof: Since the uncertain set ξ has a membership function μ, the second measure inversion formula tells us that

$$M\{\xi \ge x\} = 1 - \sup_{y < x} \mu(y).$$

Thus (9.126) follows from (9.124) immediately. We may also prove (9.127) similarly.
Theorem 9.28 (Liu [120]) Let ξ be an uncertain set with regular membership function μ. If the expected value exists, then

$$E[\xi] = x_0 + \frac{1}{2}\int_{x_0}^{+\infty} \mu(x)\, dx - \frac{1}{2}\int_{-\infty}^{x_0} \mu(x)\, dx \tag{9.128}$$

where x₀ is a point such that μ(x₀) = 1.

Proof: Since μ is increasing on (−∞, x₀] and decreasing on [x₀, +∞), it follows from Theorem 9.27 that for almost all x, we have

$$M\{\xi \ge x\} = \begin{cases} 1 - \mu(x)/2, & \text{if } x \le x_0 \\ \mu(x)/2, & \text{if } x \ge x_0 \end{cases} \tag{9.129}$$

and

$$M\{\xi \le x\} = \begin{cases} \mu(x)/2, & \text{if } x \le x_0 \\ 1 - \mu(x)/2, & \text{if } x \ge x_0. \end{cases} \tag{9.130}$$

If x₀ ≥ 0, then

$$E[\xi] = \int_0^{+\infty} M\{\xi \ge x\}\, dx - \int_{-\infty}^0 M\{\xi \le x\}\, dx$$
$$= \int_0^{x_0} \left(1 - \frac{\mu(x)}{2}\right) dx + \int_{x_0}^{+\infty} \frac{\mu(x)}{2}\, dx - \int_{-\infty}^0 \frac{\mu(x)}{2}\, dx$$
$$= x_0 + \frac{1}{2}\int_{x_0}^{+\infty} \mu(x)\, dx - \frac{1}{2}\int_{-\infty}^{x_0} \mu(x)\, dx.$$

If x₀ < 0, then

$$E[\xi] = \int_0^{+\infty} M\{\xi \ge x\}\, dx - \int_{-\infty}^0 M\{\xi \le x\}\, dx$$
$$= \int_0^{+\infty} \frac{\mu(x)}{2}\, dx - \int_{x_0}^0 \left(1 - \frac{\mu(x)}{2}\right) dx - \int_{-\infty}^{x_0} \frac{\mu(x)}{2}\, dx$$
$$= x_0 + \frac{1}{2}\int_{x_0}^{+\infty} \mu(x)\, dx - \frac{1}{2}\int_{-\infty}^{x_0} \mu(x)\, dx.$$

The theorem is thus proved.
Theorem 9.29 Let ξ₁, ξ₂, ..., ξₙ be independent uncertain sets with regular membership functions μ₁, μ₂, ..., μₙ, respectively, and let f(x₁, x₂, ..., xₙ) be strictly increasing with respect to x₁, ..., xₘ and strictly decreasing with respect to xₘ₊₁, ..., xₙ. Then the uncertain set ξ = f(ξ₁, ξ₂, ..., ξₙ) has expected value

$$E[\xi] = \frac{1}{2}\int_0^1 \left(\lambda_l^{-1}(\alpha) + \lambda_r^{-1}(\alpha)\right) d\alpha \tag{9.136}$$

where λ_l⁻¹(α) and λ_r⁻¹(α) are determined by

$$\lambda_l^{-1}(\alpha) = f(\mu_{1l}^{-1}(\alpha), \ldots, \mu_{ml}^{-1}(\alpha), \mu_{m+1,r}^{-1}(\alpha), \ldots, \mu_{nr}^{-1}(\alpha)), \tag{9.137}$$

$$\lambda_r^{-1}(\alpha) = f(\mu_{1r}^{-1}(\alpha), \ldots, \mu_{mr}^{-1}(\alpha), \mu_{m+1,l}^{-1}(\alpha), \ldots, \mu_{nl}^{-1}(\alpha)). \tag{9.138}$$

For example, if ξ and η are independent and positive uncertain sets with regular membership functions μ and ν, respectively, then

$$E\left[\frac{\xi}{\eta}\right] = \frac{1}{2}\int_0^1 \left(\frac{\mu_l^{-1}(\alpha)}{\nu_r^{-1}(\alpha)} + \frac{\mu_r^{-1}(\alpha)}{\nu_l^{-1}(\alpha)}\right) d\alpha. \tag{9.140}$$
Exercise 9.11: Let ξ and η be independent and positive uncertain sets with regular membership functions μ and ν, respectively. Show that

$$E\left[\frac{\xi}{\xi+\eta}\right] = \frac{1}{2}\int_0^1 \left(\frac{\mu_l^{-1}(\alpha)}{\mu_l^{-1}(\alpha) + \nu_r^{-1}(\alpha)} + \frac{\mu_r^{-1}(\alpha)}{\mu_r^{-1}(\alpha) + \nu_l^{-1}(\alpha)}\right) d\alpha. \tag{9.141}$$
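Assuming the inverse-membership representation E[ξ] = ½∫₀¹(μ_l⁻¹(α) + μ_r⁻¹(α)) dα for a single uncertain set with regular membership function, a numerical check for a triangular uncertain set (a, b, c); the closed form (a + 2b + c)/4 below is derived from that formula rather than quoted from the text:

```python
a, b, c = 1.0, 2.0, 5.0

def left(al):
    # left inverse membership function of the triangular set (a, b, c)
    return a + al * (b - a)

def right(al):
    # right inverse membership function of the triangular set (a, b, c)
    return c - al * (c - b)

n = 10000
h = 1.0 / n
# midpoint rule for (1/2) * integral_0^1 (left + right) d(alpha)
E = 0.5 * sum(left((i + 0.5) * h) + right((i + 0.5) * h) for i in range(n)) * h
# E approximates (a + 2*b + c) / 4 = 2.5
```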
9.7
Variance
The variance of an uncertain set provides a degree of the spread of the membership function around its expected value.
Definition 9.9 (Liu [121]) Let ξ be an uncertain set with finite expected value e. Then the variance of ξ is defined by

$$V[\xi] = E[(\xi - e)^2]. \tag{9.143}$$

This definition says that the variance is just the expected value of (ξ − e)². Since (ξ − e)² is a nonnegative uncertain set, we also have

$$V[\xi] = \int_0^{+\infty} M\{(\xi - e)^2 \ge r\}\, dr \tag{9.144}$$

where the measure of the event {(ξ − e)² ≥ r} is taken as

$$\frac{1}{2}\left(M\{(\xi - e)^2 \ge r\} + 1 - M\{(\xi - e)^2 < r\}\right). \tag{9.145}$$
In particular, if ξ has a membership function μ, then the variance can be computed by

$$V[\xi] = \frac{1}{2}\int_0^{+\infty} \mu(e + \sqrt{x}) \vee \mu(e - \sqrt{x})\, dx. \tag{9.149}$$
Exercise 9.12: Let ξ be a rectangular uncertain set (a, b). Show that its variance is

$$V[\xi] = \frac{(b-a)^2}{8}. \tag{9.150}$$

Exercise 9.13: Let ξ be a symmetric triangular uncertain set (a, b, c). Show that its variance is

$$V[\xi] = \frac{(c-a)^2}{24}. \tag{9.151}$$
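The variance values in Exercises 9.12 and 9.13 can be checked numerically, assuming the representation V[ξ] = ½∫₀^∞ max(μ(e + √x), μ(e − √x)) dx with e the expected value (the midpoint of the symmetric sets used here). A sketch with my own function names:

```python
import math

def variance(mu, e, upper, n=200000):
    """Midpoint-rule approximation of
    0.5 * integral_0^upper max(mu(e + sqrt(x)), mu(e - sqrt(x))) dx,
    where mu vanishes far enough from e that [0, upper] suffices."""
    h = upper / n
    total = 0.0
    for i in range(n):
        s = math.sqrt((i + 0.5) * h)
        total += max(mu(e + s), mu(e - s))
    return 0.5 * total * h

rect = lambda x: 1.0 if 2 <= x <= 6 else 0.0   # rectangular set (2, 6)
v_rect = variance(rect, 4.0, 4.0)              # expect (b-a)^2/8 = 2

tri = lambda x: max(0.0, 1 - abs(x - 4) / 2)   # symmetric triangular (2, 4, 6)
v_tri = variance(tri, 4.0, 4.0)                # expect (c-a)^2/24 = 2/3
```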
9.8
Entropy
Definition 9.10 (Liu [121]) Suppose that ξ is an uncertain set with membership function μ. Then its entropy is defined by

$$H[\xi] = \int_{-\infty}^{+\infty} S(\mu(x))\, dx \tag{9.152}$$

where S(t) = −t ln t − (1 − t) ln(1 − t). If ξ takes values in a discrete set {x₁, x₂, ...}, then its entropy is

$$H[\xi] = \sum_{i=1}^{\infty} S(\mu(x_i)). \tag{9.153}$$

For example, a crisp set (whose membership function takes only the values 0 and 1) has entropy

$$H[\xi] = \int_{-\infty}^{+\infty} S(\mu(x))\, dx = \int_{-\infty}^{+\infty} 0\, dx = 0.$$
Theorem 9.36 Let ξ be an uncertain set on the interval [a, b]. Then

$$H[\xi] \le (b - a)\ln 2 \tag{9.156}$$

and equality holds if ξ has a membership function μ(x) ≡ 0.5 on [a, b].
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum value ln 2 at t = 0.5.
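The equality case of Theorem 9.36 can be confirmed numerically: with μ(x) ≡ 0.5 on [a, b], the entropy integral evaluates to (b − a) ln 2. A minimal sketch:

```python
import math

def S(t):
    # S(t) = -t ln t - (1-t) ln(1-t), with S(0) = S(1) = 0
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

a, b = 0.0, 3.0
mu = lambda x: 0.5 if a <= x <= b else 0.0

n = 100000
h = (b - a) / n
# integrate S(mu(x)) over [a, b]; S(mu) = 0 outside
H = sum(S(mu(a + (i + 0.5) * h)) for i in range(n)) * h
# H equals (b - a) * ln 2
```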
Theorem 9.37 Let ξ be an uncertain set, and let ξᶜ be its complement. Then

$$H[\xi^c] = H[\xi]. \tag{9.157}$$
Theorem 9.38 Let ξ be an uncertain set with regular membership function μ. Then

$$H[\xi] = \int_0^1 \left(\mu_r^{-1}(\alpha) - \mu_l^{-1}(\alpha)\right) \ln\frac{1-\alpha}{\alpha}\, d\alpha. \tag{9.158}$$

Proof: It is clear that S(α) = −α ln α − (1 − α) ln(1 − α) is a derivable function whose derivative is

$$S'(\alpha) = \ln\frac{1-\alpha}{\alpha}.$$

Let x₀ be a point such that μ(x₀) = 1. Then we have

$$H[\xi] = \int_{-\infty}^{+\infty} S(\mu(x))\, dx = \int_{-\infty}^{x_0} S(\mu(x))\, dx + \int_{x_0}^{+\infty} S(\mu(x))\, dx$$
$$= \int_{-\infty}^{x_0} \int_0^{\mu(x)} S'(\alpha)\, d\alpha\, dx + \int_{x_0}^{+\infty} \int_0^{\mu(x)} S'(\alpha)\, d\alpha\, dx$$
$$= \int_0^1 \left(x_0 - \mu_l^{-1}(\alpha)\right) S'(\alpha)\, d\alpha + \int_0^1 \left(\mu_r^{-1}(\alpha) - x_0\right) S'(\alpha)\, d\alpha$$
$$= \int_0^1 \left(\mu_r^{-1}(\alpha) - \mu_l^{-1}(\alpha)\right) \ln\frac{1-\alpha}{\alpha}\, d\alpha.$$

The theorem is thus proved.
$$H[a\xi + b\eta] = |a|H[\xi] + |b|H[\eta]. \tag{9.159}$$

Proof: Assume the uncertain sets ξ and η have membership functions μ and ν, respectively.

Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the left and right inverse membership functions of aξ are

$$\lambda_l^{-1}(\alpha) = a\mu_l^{-1}(\alpha), \qquad \lambda_r^{-1}(\alpha) = a\mu_r^{-1}(\alpha).$$

It follows from Theorem 9.38 that

$$H[a\xi] = \int_0^1 a\left(\mu_r^{-1}(\alpha) - \mu_l^{-1}(\alpha)\right)\ln\frac{1-\alpha}{\alpha}\, d\alpha = aH[\xi] = |a|H[\xi].$$

If a = 0, then H[aξ] = 0 = |a|H[ξ]. If a < 0, then the left and right inverse membership functions of aξ are λ_l⁻¹(α) = aμ_r⁻¹(α) and λ_r⁻¹(α) = aμ_l⁻¹(α), so

$$H[a\xi] = \int_0^1 a\left(\mu_l^{-1}(\alpha) - \mu_r^{-1}(\alpha)\right)\ln\frac{1-\alpha}{\alpha}\, d\alpha = (-a)H[\xi] = |a|H[\xi].$$

Step 2: We prove H[ξ + η] = H[ξ] + H[η]. The left and right inverse membership functions of ξ + η are

$$\lambda_l^{-1}(\alpha) = \mu_l^{-1}(\alpha) + \nu_l^{-1}(\alpha), \qquad \lambda_r^{-1}(\alpha) = \mu_r^{-1}(\alpha) + \nu_r^{-1}(\alpha).$$

It follows from Theorem 9.38 that

$$H[\xi + \eta] = \int_0^1 \left(\mu_r^{-1}(\alpha) + \nu_r^{-1}(\alpha) - \mu_l^{-1}(\alpha) - \nu_l^{-1}(\alpha)\right)\ln\frac{1-\alpha}{\alpha}\, d\alpha = H[\xi] + H[\eta].$$

Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that

$$H[a\xi + b\eta] = H[a\xi] + H[b\eta] = |a|H[\xi] + |b|H[\eta].$$

The theorem is thus proved.
9.9
Distance
Definition 9.11 (Liu [121]) The distance between uncertain sets ξ and η is defined as

$$d(\xi, \eta) = E[|\xi - \eta|]. \tag{9.161}$$

That is, the distance between ξ and η is just the expected value of |ξ − η|. Since |ξ − η| is a nonnegative uncertain set, we have

$$d(\xi, \eta) = \int_0^{+\infty} M\{|\xi - \eta| \ge r\}\, dr \tag{9.162}$$

where the measure of the event {|ξ − η| ≥ r} is taken as

$$\frac{1}{2}\left(M\{|\xi - \eta| \ge r\} + 1 - M\{|\xi - \eta| < r\}\right). \tag{9.163}$$
9.10
Conditional Membership Function
Following the definition of conditional uncertain measure, for any Borel set B and any event A with M{A} > 0 we have

$$M\{B \subset \xi \mid A\} = \begin{cases} \dfrac{M\{(B \subset \xi) \cap A\}}{M\{A\}}, & \text{if } \dfrac{M\{(B \subset \xi) \cap A\}}{M\{A\}} < 0.5 \\[3mm] 1 - \dfrac{M\{(B \not\subset \xi) \cap A\}}{M\{A\}}, & \text{if } \dfrac{M\{(B \not\subset \xi) \cap A\}}{M\{A\}} < 0.5 \\[3mm] 0.5, & \text{otherwise,} \end{cases}$$

and a similar formula holds for M{ξ ⊂ B | A}.
Definition 9.12 Let ξ be an uncertain set, and let A be an event with M{A} > 0. Then the conditional uncertain set ξ given A is said to have a membership function μ(x|A) if for any Borel set B, we have

$$M\{B \subset \xi \mid A\} = \inf_{x \in B} \mu(x|A), \tag{9.166}$$

$$M\{\xi \subset B \mid A\} = 1 - \sup_{x \in B^c} \mu(x|A). \tag{9.167}$$
9.11
Uncertain Statistics
Assume the expert's belief degree that x belongs to ξ is α in uncertain measure. Note that the expert's belief degree of x not belonging to ξ must be 1 − α due to the duality of uncertain measure. An expert's experimental data (x, α) is thus acquired from the domain expert. Repeating the above process, the following expert's experimental data are obtained by the questionnaire:

$$(x_1, \alpha_1), (x_2, \alpha_2), \ldots, (x_n, \alpha_n). \tag{9.169}$$
Based on the expert's experimental data, an empirical membership function may be determined by the linear interpolation method:

$$\mu(x) = \begin{cases} \alpha_i + \dfrac{(\alpha_{i+1} - \alpha_i)(x - x_i)}{x_{i+1} - x_i}, & \text{if } x_i \le x \le x_{i+1},\ 1 \le i < n \\[2mm] 0, & \text{otherwise.} \end{cases} \tag{9.170}$$
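The linear interpolation method can be sketched in a few lines; the data points below are the six expert's experimental data shown in the figure later in this section:

```python
# Expert's experimental data (x_i, alpha_i), sorted by x
data = [(80, 0.0), (90, 0.5), (95, 1.0), (105, 1.0), (110, 0.5), (120, 0.0)]

def empirical_mu(x):
    """Piecewise-linear membership function through the expert's data,
    zero outside the range of the data."""
    for (x1, a1), (x2, a2) in zip(data, data[1:]):
        if x1 <= x <= x2:
            return a1 + (a2 - a1) * (x - x1) / (x2 - x1)
    return 0.0
```

For instance, `empirical_mu(85)` interpolates halfway between the first two data points.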
[Figure: an empirical membership function μ(x) obtained by linear interpolation of expert's experimental data.]
When the membership function is assumed to have a known functional form μ(x|θ) with an unknown parameter θ, the principle of least squares determines θ by minimizing

$$\sum_{i=1}^n \left(\mu(x_i \mid \theta) - \alpha_i\right)^2. \tag{9.172}$$
[Figure: a membership function determined from the expert's experimental data (80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0).]
9.12
Bibliographic Notes
In order to model unsharp concepts like young, tall and most, the concept of uncertain set was first proposed by Liu [118] in 2010. As a key concept
in uncertain set theory, the independence of uncertain sets was defined by
Liu [127]. In addition, Liu [124] presented the concepts of membership function and inverse membership function so that a rigorous uncertain set theory
was successfully founded. Liu [124] also provided a set operational law of
uncertain sets via membership functions, and an arithmetic operational law
via inverse membership functions.
214
The expected value of uncertain set was defined by Liu [118]. Then Liu [120] gave a formula for calculating the expected value by membership function, and Liu [124] provided a formula by inverse membership function. Based on the expected value operator, Liu [121] presented the concepts of variance of and distance between uncertain sets.
The concept of entropy was given by Liu [121] and the positive linearity
of entropy was proved by Yao [226]. As an extension of entropy, Yao [226]
proposed the concept of cross entropy for comparing a membership function
against a reference membership function.
In order to determine membership functions, a questionnaire survey for collecting expert's experimental data was designed by Liu [121]. Based on expert's experimental data, Liu [121] also suggested the linear interpolation method and the principle of least squares to determine membership functions. When multiple domain experts are available, the Delphi method was introduced to uncertain statistics by Wang and Wang [208].
Chapter 10
Uncertain Logic
Uncertain logic is a methodology for calculating the truth values of uncertain
propositions via uncertain set theory. This chapter will introduce individual
feature data, uncertain quantifier, uncertain subject, uncertain predicate,
uncertain proposition, and truth value. Uncertain logic may provide a flexible
means for extracting linguistic summary from a collection of raw data.
10.1
Individual Feature Data
When we talk about those people are young, we should know the individual feature data of all the people under consideration, for example, a set A of ages
whose elements are ages in years. When we talk about those sportsmen
are tall, we should know the individual feature data of all sportsmen, for
example,
$$A = \{175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196\} \tag{10.4}$$
whose elements are heights in centimeters.
When both ages and heights are needed, the individual feature data may be, for example,

$$A = \{(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188), (28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188), (38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)\} \tag{10.5}$$
whose elements are ages and heights in years and centimeters, respectively.
10.2
Uncertain Quantifier
Example 10.1: The universal quantifier all is a special uncertain quantifier Q ≡ {n} whose membership function is

$$\lambda(x) = \begin{cases} 1, & \text{if } x = n \\ 0, & \text{otherwise.} \end{cases} \tag{10.9}$$

Example 10.2: The existential quantifier there exists one is a special uncertain quantifier Q ≡ {1, 2, ..., n} whose membership function is

$$\lambda(x) = \begin{cases} 0, & \text{if } x = 0 \\ 1, & \text{otherwise.} \end{cases} \tag{10.11}$$
Example 10.3: The quantifier there does not exist one on the universe A is a special uncertain quantifier

$$Q \equiv \{0\} \tag{10.12}$$

whose membership function is

$$\lambda(x) = \begin{cases} 1, & \text{if } x = 0 \\ 0, & \text{otherwise.} \end{cases} \tag{10.13}$$

Example 10.4: The quantifier exactly m on the universe A is a special uncertain quantifier whose membership function is

$$\lambda(x) = \begin{cases} 1, & \text{if } x = m \\ 0, & \text{otherwise.} \end{cases} \tag{10.15}$$

Example 10.5: The quantifier at least m on the universe A is a special uncertain quantifier whose membership function is

$$\lambda(x) = \begin{cases} 1, & \text{if } m \le x \le n \\ 0, & \text{if } 0 \le x < m. \end{cases} \tag{10.17}$$

Example 10.6: The quantifier at most m on the universe A is a special uncertain quantifier whose membership function is

$$\lambda(x) = \begin{cases} 1, & \text{if } 0 \le x \le m \\ 0, & \text{if } m < x \le n. \end{cases} \tag{10.19}$$

Example 10.7: The quantifier almost all on the universe A may be regarded as an uncertain quantifier whose membership function is

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le n-5 \\ (x-n+5)/3, & \text{if } n-5 \le x \le n-2 \\ 1, & \text{if } n-2 \le x \le n. \end{cases} \tag{10.20}$$
[Figure: membership function λ(x) of almost all, with breakpoints n−5, n−2, n.]
Example 10.8: The quantifier almost none on the universe A may be regarded as an uncertain quantifier whose membership function is

$$\lambda(x) = \begin{cases} 1, & \text{if } 0 \le x \le 2 \\ (5-x)/3, & \text{if } 2 \le x \le 5 \\ 0, & \text{if } 5 \le x \le n. \end{cases} \tag{10.21}$$
Example 10.9: The quantifier about 10 on the universe A may be regarded as an uncertain quantifier whose membership function is

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le 7 \\ (x-7)/2, & \text{if } 7 \le x \le 9 \\ 1, & \text{if } 9 \le x \le 11 \\ (13-x)/2, & \text{if } 11 \le x \le 13 \\ 0, & \text{if } 13 \le x \le n. \end{cases} \tag{10.22}$$
Example 10.10: In many cases, it is more convenient for us to use a percentage than an absolute quantity. For example, we may use the uncertain quantifier about 70% whose membership function is

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.6 \\ 20(x-0.6), & \text{if } 0.6 \le x \le 0.65 \\ 1, & \text{if } 0.65 \le x \le 0.75 \\ 20(0.8-x), & \text{if } 0.75 \le x \le 0.8 \\ 0, & \text{if } 0.8 \le x \le 1. \end{cases} \tag{10.23}$$
[Figure: membership function λ(x) of about 70%, with breakpoints 60%, 65%, 75%, 80%.]
The uncertain quantifiers almost all and almost none are monotone,
but about 10 and about 70% are not monotone. Note that both increasing uncertain quantifiers and decreasing uncertain quantifiers are monotone.
In addition, any monotone uncertain quantifier is unimodal.
Negated Quantifier
What is the negation of an uncertain quantifier? The following definition
gives a formal answer.
Definition 10.4 Let Q be an uncertain quantifier. Then the negated quantifier ¬Q is the complement of Q in the sense of uncertain set, i.e.,

$$\neg Q = Q^c. \tag{10.24}$$

Example 10.12: Let ∀ = {n} be the universal quantifier. Then its negated quantifier is

$$\neg\forall = \{0, 1, 2, \ldots, n-1\}. \tag{10.25}$$

Example 10.13: Let ∃ = {1, 2, ..., n} be the existential quantifier. Then its negated quantifier is

$$\neg\exists = \{0\}. \tag{10.26}$$

Theorem 10.1 Let Q be an uncertain quantifier whose membership function is λ. Then the negated quantifier ¬Q has a membership function

$$\nu(x) = 1 - \lambda(x). \tag{10.27}$$
Proof: This theorem follows from the operational law of uncertain set immediately.
Example 10.14: Let Q be the uncertain quantifier almost all defined by (10.20). Then its negated quantifier ¬Q has a membership function

$$\nu(x) = \begin{cases} 1, & \text{if } 0 \le x \le n-5 \\ (n-2-x)/3, & \text{if } n-5 \le x \le n-2 \\ 0, & \text{if } n-2 \le x \le n. \end{cases} \tag{10.28}$$

Example 10.15: Let Q be the uncertain quantifier about 70% defined by (10.23). Then its negated quantifier ¬Q has a membership function

$$\nu(x) = \begin{cases} 1, & \text{if } 0 \le x \le 0.6 \\ 1 - 20(x-0.6), & \text{if } 0.6 \le x \le 0.65 \\ 0, & \text{if } 0.65 \le x \le 0.75 \\ 1 - 20(0.8-x), & \text{if } 0.75 \le x \le 0.8 \\ 1, & \text{if } 0.8 \le x \le 1. \end{cases}$$
[Figures: membership functions of the negated quantifiers ¬(almost all) and ¬(about 70%).]
Example 10.20: Let Q be the uncertain quantifier almost all defined by (10.20). Then its dual quantifier Q* has a membership function

$$\lambda^*(x) = \begin{cases} 1, & \text{if } 0 \le x \le 2 \\ (5-x)/3, & \text{if } 2 \le x \le 5 \\ 0, & \text{if } 5 \le x \le n. \end{cases} \tag{10.37}$$

Example 10.21: Let Q be the uncertain quantifier about 70% defined by (10.23). Then its dual quantifier Q* has a membership function

$$\lambda^*(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.2 \\ 20(x-0.2), & \text{if } 0.2 \le x \le 0.25 \\ 1, & \text{if } 0.25 \le x \le 0.35 \\ 20(0.4-x), & \text{if } 0.35 \le x \le 0.4 \\ 0, & \text{if } 0.4 \le x \le 1. \end{cases} \tag{10.38}$$
[Figures: membership functions of the dual quantifiers of almost all and about 70%.]
10.3
Uncertain Subject
An uncertain subject is an uncertain set on a universe of individuals. For example, an uncertain subject may have the membership function

$$\mu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/3, & \text{if } 15 \le x \le 18 \\ 1, & \text{if } 18 \le x \le 24 \\ (28-x)/4, & \text{if } 24 \le x \le 28 \\ 0, & \text{if } 28 \le x. \end{cases} \tag{10.39}$$
[Figure: membership function μ(x) of the uncertain subject, with breakpoints 15°C, 18°C, 24°C, 28°C.]
$$\mu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/5, & \text{if } 15 \le x \le 20 \\ 1, & \text{if } 20 \le x \le 35 \\ (45-x)/10, & \text{if } 35 \le x \le 45 \\ 0, & \text{if } 45 \le x. \end{cases} \tag{10.40}$$
Example 10.24: "Tall students are heavy" is a statement in which "tall students" is an uncertain subject, i.e., an uncertain set on the universe of all students, whose membership function may be defined by

$$\mu(x) = \begin{cases} 0, & \text{if } x \le 180 \\ (x-180)/5, & \text{if } 180 \le x \le 185 \\ 1, & \text{if } 185 \le x \le 195 \\ (200-x)/5, & \text{if } 195 \le x \le 200 \\ 0, & \text{if } 200 \le x. \end{cases} \tag{10.41}$$
Let S be an uncertain subject with membership function on the universe
A = {a1 , a2 , , an } of individuals. Then S is an uncertain set of A such
[Figures: membership functions of "young students" (breakpoints 15yr, 20yr, 35yr, 45yr) and "tall students" (breakpoints 180cm, 185cm, 195cm, 200cm)]
that

$$\mathcal{M}\{a_i \in S\} = \mu(a_i), \quad i = 1, 2, \ldots, n. \tag{10.42}$$

For any confidence level α, the subuniverse of S is

$$S_\alpha = \{a_i \in A \mid \mu(a_i) \ge \alpha\} \tag{10.43}$$

that will play the role of a new universe of individuals we are talking about, and the individuals out of S_α will be ignored at the confidence level α.
Theorem 10.7 Let α1 and α2 be confidence levels with α1 > α2, and let S_{α1} and S_{α2} be subuniverses with confidence levels α1 and α2, respectively. Then

$$S_{\alpha_1} \subset S_{\alpha_2}. \tag{10.44}$$
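Theorem 10.7 says that subuniverses shrink as the confidence level grows. A quick sketch (the function name, the ages and the inline "young" membership are illustrative):

```python
def subuniverse(A, mu, alpha):
    """S_alpha = {a in A | mu(a) >= alpha}."""
    return {a for a in A if mu(a) >= alpha}

ages = [18, 22, 30, 38, 42]
# "young" from (10.40), written as a clipped min of its linear pieces
young = lambda x: max(0.0, min(1.0, (x - 15) / 5, (45 - x) / 10))

S_low  = subuniverse(ages, young, 0.4)
S_high = subuniverse(ages, young, 0.8)
assert S_high <= S_low   # S_{alpha1} subset of S_{alpha2} when alpha1 > alpha2
```

Here `S_high` keeps only the ages with membership at least 0.8, a subset of those passing 0.4.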
10.4 Uncertain Predicate
For example, "warm" may be regarded as an uncertain predicate whose membership function is

$$\nu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/3, & \text{if } 15 \le x \le 18 \\ 1, & \text{if } 18 \le x \le 24 \\ (28-x)/4, & \text{if } 24 \le x \le 28 \\ 0, & \text{if } 28 \le x. \end{cases} \tag{10.45}$$
[Figure: membership function ν(x) of the uncertain predicate "warm" with breakpoints at 15°C, 18°C, 24°C and 28°C]
Similarly, "young" may be regarded as an uncertain predicate whose membership function is

$$\nu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/5, & \text{if } 15 \le x \le 20 \\ 1, & \text{if } 20 \le x \le 35 \\ (45-x)/10, & \text{if } 35 \le x \le 45 \\ 0, & \text{if } 45 \le x. \end{cases} \tag{10.46}$$
[Figure: membership function ν(x) of the uncertain predicate "young" with breakpoints at 15yr, 20yr, 35yr and 45yr]
And "tall" may be regarded as an uncertain predicate whose membership function is

$$\nu(x) = \begin{cases} 0, & \text{if } x \le 180 \\ (x-180)/5, & \text{if } 180 \le x \le 185 \\ 1, & \text{if } 185 \le x \le 195 \\ (200-x)/5, & \text{if } 195 \le x \le 200 \\ 0, & \text{if } 200 \le x. \end{cases} \tag{10.47}$$
[Figure: membership function ν(x) of the uncertain predicate "tall" with breakpoints at 180cm, 185cm, 195cm and 200cm]
Negated Predicate

Definition 10.8 Let P be an uncertain predicate. Then its negated predicate ¬P is the complement of P in the sense of uncertain set, i.e.,

$$\neg P = P^c. \tag{10.48}$$

Theorem 10.9 Let P be an uncertain predicate with membership function ν(x). Then its negated predicate ¬P has a membership function

$$\nu^{\neg}(x) = 1 - \nu(x). \tag{10.49}$$
Proof: The theorem follows from the definition of negated predicate and the
operational law of uncertain set immediately.
Example 10.28: Let P be the uncertain predicate "warm" defined by (10.45). Then its negated predicate ¬P has a membership function

$$\nu^{\neg}(x) = \begin{cases} 1, & \text{if } x \le 15 \\ (18-x)/3, & \text{if } 15 \le x \le 18 \\ 0, & \text{if } 18 \le x \le 24 \\ (x-24)/4, & \text{if } 24 \le x \le 28 \\ 1, & \text{if } 28 \le x. \end{cases} \tag{10.50}$$
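Theorem 10.9 reduces negation to complementing the membership value, so (10.50) can be obtained mechanically from (10.45). A sketch (helper names are ours):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

warm = lambda x: trapezoid(x, 15, 18, 24, 28)   # nu(x) from (10.45)
not_warm = lambda x: 1.0 - warm(x)              # negated predicate, per (10.49)
```

For example, `not_warm(16.5)` gives `(18 - 16.5)/3 = 0.5` and `not_warm(30)` gives 1, agreeing branch by branch with (10.50).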
[Figure: membership functions ν(x) of "warm" and ν¬(x) of its negation, with breakpoints at 15°C, 18°C, 24°C and 28°C]
Example 10.29: Let P be the uncertain predicate "young" defined by (10.46). Then its negated predicate ¬P has a membership function

$$\nu^{\neg}(x) = \begin{cases} 1, & \text{if } x \le 15 \\ (20-x)/5, & \text{if } 15 \le x \le 20 \\ 0, & \text{if } 20 \le x \le 35 \\ (x-35)/10, & \text{if } 35 \le x \le 45 \\ 1, & \text{if } 45 \le x. \end{cases} \tag{10.51}$$
Example 10.30: Let P be the uncertain predicate "tall" defined by (10.47). Then its negated predicate ¬P has a membership function

$$\nu^{\neg}(x) = \begin{cases} 1, & \text{if } x \le 180 \\ (185-x)/5, & \text{if } 180 \le x \le 185 \\ 0, & \text{if } 185 \le x \le 195 \\ (x-195)/5, & \text{if } 195 \le x \le 200 \\ 1, & \text{if } 200 \le x. \end{cases} \tag{10.52}$$
229
(x)
(x)
(x)
15yr 20yr
35yr
45yr
(x)
(x)
(x)
180cm 185cm
195cm 200cm
10.5 Uncertain Proposition
Example 10.31: "Almost all students are young" is an uncertain proposition in which the uncertain quantifier Q is "almost all", the uncertain subject S is "students" (the universe itself) and the uncertain predicate P is "young".

Example 10.32: "Most young students are tall" is an uncertain proposition in which the uncertain quantifier Q is "most", the uncertain subject S is "young students" and the uncertain predicate P is "tall".
Theorem 10.10 (Liu [121], Logical Equivalence Theorem) Let (Q, S, P) be an uncertain proposition. Then

$$(Q^{\neg}, S, \neg P) = (Q, S, P) \tag{10.54}$$

and the two propositions have the same truth value. (10.55)
10.6 Truth Value

The truth value of an uncertain proposition (Q, S, P) is defined by

$$T(Q, S, P) = \sup_{0 \le \alpha \le 1} \left( \alpha \wedge \sup_{K \in \mathcal{K}_\alpha} \inf_{a \in K} \nu(a) \wedge \sup_{K \in \mathcal{K}^*_\alpha} \inf_{a \in K} \nu^{\neg}(a) \right) \tag{10.58}$$

where

$$\mathcal{K}_\alpha = \{K \subset S_\alpha \mid \lambda(|K|) \ge \alpha\}, \tag{10.59}$$

$$\mathcal{K}^*_\alpha = \{K \subset S_\alpha \mid \lambda(|S_\alpha| - |K|) \ge \alpha\}, \tag{10.60}$$

$$S_\alpha = \{a \in A \mid \mu(a) \ge \alpha\}. \tag{10.61}$$
Remark 10.5: Keep in mind that the truth value formula (10.58) is vacuous if the individual feature data of the universe A are not available.

Remark 10.6: The symbol |K| represents the cardinality of the set K. For example, |∅| = 0 and |{2, 5, 6}| = 3.

Remark 10.7: Note that ν¬ is the membership function of the negated predicate ¬P, and

$$\nu^{\neg}(a) = 1 - \nu(a). \tag{10.62}$$

Remark 10.8: When the subset K of individuals becomes an empty set ∅, we will define

$$\inf_{a \in \emptyset} \nu(a) = \inf_{a \in \emptyset} \nu^{\neg}(a) = 1. \tag{10.63}$$
When the uncertain subject S is the universe A itself, the truth value formula becomes

$$T(Q, A, P) = \sup_{0 \le \alpha \le 1} \left( \alpha \wedge \sup_{K \in \mathcal{K}_\alpha} \inf_{a \in K} \nu(a) \wedge \sup_{K \in \mathcal{K}^*_\alpha} \inf_{a \in K} \nu^{\neg}(a) \right) \tag{10.66}$$

where

$$\mathcal{K}_\alpha = \{K \subset A \mid \lambda(|K|) \ge \alpha\}, \tag{10.67}$$

$$\mathcal{K}^*_\alpha = \{K \subset A \mid \lambda(|A| - |K|) \ge \alpha\}. \tag{10.68}$$
Exercise: Suppose the uncertain quantifier Q means "for all". Show that 𝒦*_α = {∅} (10.69) and that

$$T(Q, A, P) = \inf_{a \in A} \nu(a). \tag{10.70}$$

Exercise: Suppose Q means "there exists at least one". Show that

$$T(Q, A, P) = \sup_{a \in A} \nu(a). \tag{10.73}$$

Exercise: Suppose Q means "not all". Show that

$$T(Q, A, P) = 1 - \inf_{a \in A} \nu(a). \tag{10.76}$$

Exercise: Suppose Q means "none". Show that 𝒦*_α = {A} (10.77) and that

$$T(Q, A, P) = 1 - \sup_{a \in A} \nu(a). \tag{10.78}$$
Theorem 10.11 (Liu [121], Truth Value Theorem) Let (Q, S, P) be an uncertain proposition in which Q is a unimodal uncertain quantifier with membership function λ, S is an uncertain subject with membership function μ, and P is an uncertain predicate with membership function ν. Then the truth value of (Q, S, P) is

$$T(Q, S, P) = \sup_{0 \le \alpha \le 1} \left( \alpha \wedge \nu(k_\alpha) \wedge \nu^{\neg}(k^*_\alpha) \right) \tag{10.79}$$

where

$$k_\alpha = \min\{x \mid \lambda(x) \ge \alpha\}, \tag{10.80}$$

$$k^*_\alpha = |S_\alpha| - \max\{x \mid \lambda(x) \ge \alpha\}, \tag{10.81}$$

$$\nu(k_\alpha) = \text{the } k_\alpha\text{-th largest value of } \{\nu(a_i) \mid a_i \in S_\alpha\}, \tag{10.82}$$

$$\nu^{\neg}(k^*_\alpha) = \text{the } k^*_\alpha\text{-th largest value of } \{1 - \nu(a_i) \mid a_i \in S_\alpha\}. \tag{10.83}$$
Proof: Since the supremum is achieved at the subset with minimum cardinality, we have

$$\sup_{K \in \mathcal{K}_\alpha} \inf_{a \in K} \nu(a) = \sup_{K \subset S_\alpha, |K| = k_\alpha} \inf_{a \in K} \nu(a) = \nu(k_\alpha),$$

$$\sup_{K \in \mathcal{K}^*_\alpha} \inf_{a \in K} \nu^{\neg}(a) = \sup_{K \subset S_\alpha, |K| = k^*_\alpha} \inf_{a \in K} \nu^{\neg}(a) = \nu^{\neg}(k^*_\alpha)$$

where

$$k^*_\alpha = |S_\alpha| - \max\{x \mid \lambda(x) \ge \alpha,\ x \le |S_\alpha|\}.$$

The theorem then follows from the truth value formula (10.58) immediately.
An analogous result holds when the uncertain subject S is the universe A itself: the truth value of (Q, A, P) is obtained from (10.79) with

$$k_\alpha = \min\{x \mid \lambda(x) \ge \alpha\}, \tag{10.87}$$

with k*_α = |A| − max{x | λ(x) ≥ α}, and with ν(k_α) and ν¬(k*_α) taken as the k_α-th and k*_α-th largest values of {ν(a_i)} and {1 − ν(a_i)} over the whole universe A.
10.7 Algorithm
Note that the uncertain quantifier is Q = {2, 3}. We also suppose the uncertain predicate P = warm has a membership function
$$\nu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/3, & \text{if } 15 \le x \le 18 \\ 1, & \text{if } 18 \le x \le 24 \\ (28-x)/4, & \text{if } 24 \le x \le 28 \\ 0, & \text{if } 28 \le x. \end{cases} \tag{10.94}$$
It is clear that Monday and Tuesday are warm with truth value 1, and Wednesday is warm with truth value 0.75, but Thursday to Sunday are not warm at all (in fact, they are hot). Intuitively, the uncertain proposition "two or three days are warm" should be completely true. The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields the truth value

$$T(\text{two or three days are warm}) = 1. \tag{10.95}$$

This is an intuitively expected result. In addition, we also have

$$T(\text{two days are warm}) = 0.25. \tag{10.96}$$
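The computation behind (10.95) and (10.96) can be sketched directly from Theorem 10.11 with S = A (so every individual counts and μ ≡ 1). Here λ is given as a dict on cardinalities, the predicate memberships ν(a_i) are supplied directly, and the empty-set convention of Remark 10.8 is used; the function name is ours, and the α-sweep below is exact only for quantifiers taking finitely many λ values, as in this example:

```python
def truth_value(lam, nu):
    """Truth value of (Q, A, P) per Theorem 10.11 with S = A.

    lam: dict mapping a cardinality x to lambda(x) (0 elsewhere)
    nu:  list of membership values nu(a_i) of the predicate
    """
    n = len(nu)
    desc = sorted(nu, reverse=True)                       # nu, descending
    desc_neg = sorted((1 - v for v in nu), reverse=True)  # 1 - nu, descending
    best = 0.0
    for alpha in sorted({v for v in lam.values() if v > 0} | {1.0}):
        xs = [x for x, lx in lam.items() if lx >= alpha]
        if not xs:
            continue
        k = min(xs)                 # k_alpha, per (10.80)
        k_star = n - max(xs)        # k*_alpha, per (10.81) with S_alpha = A
        if k > n or k_star > n:
            continue
        nu_k = desc[k - 1] if k >= 1 else 1.0             # (10.82)
        nu_neg_k = desc_neg[k_star - 1] if k_star >= 1 else 1.0  # (10.83)
        best = max(best, min(alpha, nu_k, nu_neg_k))
    return best

# seven days, warm with truth values 1, 1, 0.75, 0, 0, 0, 0
nu = [1.0, 1.0, 0.75, 0.0, 0.0, 0.0, 0.0]
print(truth_value({2: 1.0, 3: 1.0}, nu))   # "two or three days are warm"
print(truth_value({2: 1.0}, nu))           # "two days are warm"
```

With Q = {2, 3} this reproduces the truth value 1 of (10.95), and with Q = {2} the value 0.25 of (10.96).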
Example 10.36: Assume that in a class there are 15 students whose ages are

21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40 (10.98)

in years. Consider an uncertain proposition

(Q, A, P) = "almost all students are young". (10.99)

Suppose the uncertain quantifier Q = "almost all" has a membership function

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le 10 \\ (x-10)/3, & \text{if } 10 \le x \le 13 \\ 1, & \text{if } 13 \le x \le 15, \end{cases} \tag{10.100}$$
and the uncertain predicate P = "young" has a membership function

$$\nu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x-15)/5, & \text{if } 15 \le x \le 20 \\ 1, & \text{if } 20 \le x \le 35 \\ (45-x)/10, & \text{if } 35 \le x \le 45 \\ 0, & \text{if } 45 \le x. \end{cases} \tag{10.101}$$
Example 10.37: Consider the uncertain proposition "about 70% of sportsmen are tall", where the uncertain quantifier Q = "about 70%" has a membership function

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.6 \\ 20(x-0.6), & \text{if } 0.6 \le x \le 0.65 \\ 1, & \text{if } 0.65 \le x \le 0.75 \\ 20(0.8-x), & \text{if } 0.75 \le x \le 0.8 \\ 0, & \text{if } 0.8 \le x \le 1 \end{cases} \tag{10.104}$$

and the uncertain predicate P = "tall" has a membership function

$$\nu(x) = \begin{cases} 0, & \text{if } x \le 180 \\ (x-180)/5, & \text{if } 180 \le x \le 185 \\ 1, & \text{if } 185 \le x \le 195 \\ (200-x)/5, & \text{if } 195 \le x \le 200 \\ 0, & \text{if } 200 \le x. \end{cases}$$
The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that
the uncertain proposition has a truth value
T (about 70% of sportsmen are tall) = 0.8.
(10.107)
Example 10.38: Assume that in a class there are 18 students whose ages
and heights are
(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)
(10.108)

Consider the uncertain proposition

(Q, S, P) = "most young students are tall". (10.109)

Suppose the uncertain quantifier Q = "most" has a membership function

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.7 \\ 20(x-0.7), & \text{if } 0.7 \le x \le 0.75 \\ 1, & \text{if } 0.75 \le x \le 0.85 \\ 20(0.9-x), & \text{if } 0.85 \le x \le 0.9 \\ 0, & \text{if } 0.9 \le x \le 1. \end{cases} \tag{10.110}$$
Note that each individual a is described by a feature data (y, z), where y represents age and z represents height. For this case, the uncertain subject S = "young students" has a membership function

$$\mu(a) = \mu(y, z) = \begin{cases} 0, & \text{if } y \le 15 \\ (y-15)/5, & \text{if } 15 \le y \le 20 \\ 1, & \text{if } 20 \le y \le 35 \\ (45-y)/10, & \text{if } 35 \le y \le 45 \\ 0, & \text{if } 45 \le y \end{cases} \tag{10.111}$$
and the uncertain predicate P = "tall" has a membership function

$$\nu(a) = \nu(y, z) = \begin{cases} 0, & \text{if } z \le 180 \\ (z-180)/5, & \text{if } 180 \le z \le 185 \\ 1, & \text{if } 185 \le z \le 195 \\ (200-z)/5, & \text{if } 195 \le z \le 200 \\ 0, & \text{if } 200 \le z. \end{cases} \tag{10.112}$$
The truth value algorithm yields that the uncertain proposition has a truth
value
T (most young students are tall) = 0.8.
(10.113)
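In Example 10.38 each individual carries a feature vector (y, z), and the subject and the predicate read off different coordinates of it. A small sketch evaluating (10.111) and (10.112) on a few of the listed students (helper names are ours):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

mu_young = lambda a: trapezoid(a[0], 15, 20, 35, 45)      # subject: uses age y
nu_tall  = lambda a: trapezoid(a[1], 180, 185, 195, 200)  # predicate: uses height z

students = [(24, 185), (26, 170), (36, 188), (44, 170)]   # (age, height) pairs
for a in students:
    print(a, mu_young(a), nu_tall(a))
```

For instance, the student (36, 188) is "young" with degree 0.9 and "tall" with degree 1, which is the kind of feature-data table the truth value algorithm consumes.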
10.8 Linguistic Summarizer

A linguistic summary is a human language statement that is concise and easy to understand. For example, "most young students are tall" is a linguistic summary of students' ages and heights. Thus a linguistic summary is a special uncertain proposition whose uncertain quantifier, uncertain subject and uncertain predicate are linguistic terms. Uncertain logic provides a flexible means for extracting a linguistic summary from a collection of raw data.
What inputs does the uncertain logic need? First, we should have some raw data (i.e., the individual feature data),

A = {a1, a2, …, an}. (10.114)

Next, we should have some linguistic terms to represent quantifiers, for example, "most" and "all". Denote them by a collection of uncertain quantifiers,

𝒬 = {Q1, Q2, …, Qm}. (10.115)

Then, we should have some linguistic terms to represent subjects, for example, "young students" and "old students". Denote them by a collection of uncertain subjects,

𝒮 = {S1, S2, …, Sn}. (10.116)

Last, we should have some linguistic terms to represent predicates, for example, "short" and "tall". Denote them by a collection of uncertain predicates,

𝒫 = {P1, P2, …, Pk}. (10.117)
The linguistic summarizer searches for triples whose truth value clears a prescribed confidence level β:

$$\begin{cases} \text{Find } Q, S \text{ and } P \\ \text{subject to:} \\ \quad Q \in \mathcal{Q} \\ \quad S \in \mathcal{S} \\ \quad P \in \mathcal{P} \\ \quad T(Q, S, P) \ge \beta. \end{cases} \tag{10.119}$$

Each solution (Q, S, P) of the linguistic summarizer (10.119) produces a linguistic summary "Q of S are P".
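Structurally the summarizer is an enumeration over the three collections, keeping every triple whose truth value clears the threshold β. A sketch of that loop; the truth values below are hypothetical placeholders standing in for T(Q, S, P) computed from the raw data:

```python
# Hypothetical precomputed truth values T(Q, S, P); in practice these come
# from the truth value algorithm applied to the individual feature data.
T = {
    ("most", "young students", "tall"):  0.8,
    ("most", "young students", "short"): 0.1,
    ("all",  "old students",   "tall"):  0.0,
}

def summarize(T, beta):
    """Return every triple (Q, S, P) with T(Q, S, P) >= beta."""
    return [qsp for qsp, t in T.items() if t >= beta]

summaries = summarize(T, beta=0.8)
```

With these placeholder values the loop keeps exactly one triple, i.e. the summary "most young students are tall".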
Example 10.39: Assume that in a class there are 18 students whose ages
and heights are
(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)
(10.120)
Suppose the linguistic terms serving as uncertain quantifiers have membership functions

$$\lambda(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.4 \\ \quad\cdots \\ 0, & \text{if } 0.6 \le x \le 1, \end{cases} \tag{10.122}$$

$$\lambda_{most}(x) = \begin{cases} 0, & \text{if } 0 \le x \le 0.7 \\ 20(x-0.7), & \text{if } 0.7 \le x \le 0.75 \\ 1, & \text{if } 0.75 \le x \le 0.85 \\ 20(0.9-x), & \text{if } 0.85 \le x \le 0.9 \\ 0, & \text{if } 0.9 \le x \le 1, \end{cases} \tag{10.123}$$

$$\lambda_{all}(x) = \begin{cases} 1, & \text{if } x = 1 \\ 0, & \text{if } 0 \le x < 1, \end{cases} \tag{10.124}$$
$$\mu_{young}(a) = \mu_{young}(y, z) = \begin{cases} 0, & \text{if } y \le 15 \\ (y-15)/5, & \text{if } 15 \le y \le 20 \\ 1, & \text{if } 20 \le y \le 35 \\ (45-y)/10, & \text{if } 35 \le y \le 45 \\ 0, & \text{if } 45 \le y, \end{cases} \tag{10.125}$$

$$\mu_{middle}(a) = \mu_{middle}(y, z) = \begin{cases} 0, & \text{if } y \le 40 \\ (y-40)/5, & \text{if } 40 \le y \le 45 \\ 1, & \text{if } 45 \le y \le 55 \\ (60-y)/5, & \text{if } 55 \le y \le 60 \\ 0, & \text{if } 60 \le y, \end{cases} \tag{10.126}$$

$$\mu_{old}(a) = \mu_{old}(y, z) = \begin{cases} 0, & \text{if } y \le 55 \\ (y-55)/5, & \text{if } 55 \le y \le 60 \\ 1, & \text{if } 60 \le y \le 80 \\ (85-y)/5, & \text{if } 80 \le y \le 85 \\ 0, & \text{if } 85 \le y, \end{cases} \tag{10.127}$$

respectively. Denote the collection of uncertain subjects by

𝒮 = {young students, middle-aged students, old students}. (10.128)
Finally, we suppose that there are two linguistic terms "short" and "tall" as uncertain predicates whose membership functions are

$$\nu_{short}(a) = \nu_{short}(y, z) = \begin{cases} 0, & \text{if } z \le 145 \\ \quad\cdots \\ 1, & \text{if } 150 \le z \le 155 \\ \quad\cdots \\ 0, & \text{if } 200 \le z, \end{cases} \tag{10.129}$$

$$\nu_{tall}(a) = \nu_{tall}(y, z) = \begin{cases} 0, & \text{if } z \le 180 \\ (z-180)/5, & \text{if } 180 \le z \le 185 \\ 1, & \text{if } 185 \le z \le 195 \\ (200-z)/5, & \text{if } 195 \le z \le 200 \\ 0, & \text{if } 200 \le z, \end{cases} \tag{10.130}$$

respectively. Denote the collection of uncertain predicates by

𝒫 = {short, tall}. (10.131)
Then the linguistic summarizer (10.119) yields the solution

Q = most, S = young students, P = tall (10.132)

and then extracts a linguistic summary "most young students are tall".
10.9
Bibliographic Notes
Based on uncertain set theory, uncertain logic was designed by Liu [121]
in 2011 for dealing with human language by using the truth value formula
for uncertain propositions. As an application of uncertain logic, Liu [121]
also proposed a linguistic summarizer that provides a means for extracting
linguistic summary from a collection of raw data.
Chapter 11
Uncertain Inference
Uncertain inference is a process of deriving consequences from human knowledge via uncertain set theory. This chapter will introduce a family of uncertain inference rules, uncertain system, and uncertain control with application
to an inverted pendulum system.
11.1 Uncertain Inference Rule

Let X and Y be two concepts. It is assumed that we only have a single if-then rule,

"if X is ξ then Y is η" (11.1)

where ξ and η are two uncertain sets. We first introduce the following inference rule.
Inference Rule 11.1 (Liu [118]) Let X and Y be two concepts. Assume a rule "if X is an uncertain set ξ then Y is an uncertain set η". From X is a constant a we infer that Y is an uncertain set

$$\eta^* = \eta|_{a \in \xi} \tag{11.2}$$

which is the conditional uncertain set of η given a ∈ ξ.
Theorem 11.1 Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. If ξ* is a constant a, then the inference rule 11.1 yields that η* has a membership function

$$\nu^*(y) = \begin{cases} \dfrac{\nu(y)}{\mu(a)}, & \text{if } \nu(y) < \mu(a)/2 \\[2mm] \dfrac{\nu(y) + \mu(a) - 1}{\mu(a)}, & \text{if } \nu(y) > 1 - \mu(a)/2 \\[2mm] 0.5, & \text{otherwise.} \end{cases} \tag{11.4}$$
Proof: It follows from the inference rule 11.1 that η* has a membership function

$$\nu^*(y) = \mathcal{M}\{y \in \eta \mid a \in \xi\}.$$

By using the definition of conditional uncertainty, we have

$$\mathcal{M}\{y \in \eta \mid a \in \xi\} = \begin{cases} \dfrac{\mathcal{M}\{y \in \eta\}}{\mathcal{M}\{a \in \xi\}}, & \text{if } \dfrac{\mathcal{M}\{y \in \eta\}}{\mathcal{M}\{a \in \xi\}} < 0.5 \\[2mm] 1 - \dfrac{\mathcal{M}\{y \notin \eta\}}{\mathcal{M}\{a \in \xi\}}, & \text{if } \dfrac{\mathcal{M}\{y \notin \eta\}}{\mathcal{M}\{a \in \xi\}} < 0.5 \\[2mm] 0.5, & \text{otherwise.} \end{cases}$$

The equation (11.4) follows from M{y ∈ η} = ν(y), M{y ∉ η} = 1 − ν(y) and M{a ∈ ξ} = μ(a) immediately. The theorem is proved.
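Formula (11.4) is easy to evaluate once ν(y) and μ(a) are known. A sketch (the function name is ours):

```python
def conditional(nu_y, mu_a):
    """Membership of eta* = eta | (a in xi), per (11.4).

    nu_y: membership nu(y) of the consequent set eta
    mu_a: membership mu(a) of the constant a in the antecedent set xi
    """
    if nu_y < mu_a / 2:
        return nu_y / mu_a                 # low band, rescaled up
    if nu_y > 1 - mu_a / 2:
        return (nu_y + mu_a - 1) / mu_a    # high band, rescaled up
    return 0.5                             # middle band pinned at 0.5

# with mu(a) = 0.8:
print(conditional(0.3, 0.8))   # 0.3 / 0.8
print(conditional(0.9, 0.8))   # (0.9 + 0.8 - 1) / 0.8
print(conditional(0.5, 0.8))   # middle band
```

Note how values below μ(a)/2 and above 1 − μ(a)/2 are stretched toward 0 and 1, while everything in between collapses to 0.5.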
[Figure: membership functions μ, ν and ν* of the inference rule, with the middle band at 0.5]

Inference Rule 11.2 Let X and Y be two concepts. Assume a rule "if X is an uncertain set ξ and Y is an uncertain set η then Z is an uncertain set τ". From X is a constant a and Y is a constant b we infer that Z is an uncertain set

$$\tau^* = \tau|_{(a \in \xi) \cap (b \in \eta)}. \tag{11.6}$$

Theorem 11.2 Let ξ, η and τ be independent uncertain sets with membership functions μ, ν and λ, respectively. If ξ* and η* are constants a and b respectively, then the inference rule 11.2 yields that τ* has a membership function
$$\lambda^*(z) = \begin{cases} \dfrac{\lambda(z)}{\mu(a) \wedge \nu(b)}, & \text{if } \lambda(z) < \dfrac{\mu(a) \wedge \nu(b)}{2} \\[2mm] \dfrac{\lambda(z) + \mu(a) \wedge \nu(b) - 1}{\mu(a) \wedge \nu(b)}, & \text{if } \lambda(z) > 1 - \dfrac{\mu(a) \wedge \nu(b)}{2} \\[2mm] 0.5, & \text{otherwise.} \end{cases} \tag{11.7}$$
Proof: It follows from the inference rule 11.2 that τ* has a membership function

$$\lambda^*(z) = \mathcal{M}\{z \in \tau \mid (a \in \xi) \cap (b \in \eta)\}.$$

By using the definition of conditional uncertainty, M{z ∈ τ | (a ∈ ξ) ∩ (b ∈ η)} is

$$\begin{cases} \dfrac{\mathcal{M}\{z \in \tau\}}{\mathcal{M}\{(a \in \xi) \cap (b \in \eta)\}}, & \text{if } \dfrac{\mathcal{M}\{z \in \tau\}}{\mathcal{M}\{(a \in \xi) \cap (b \in \eta)\}} < 0.5 \\[2mm] 1 - \dfrac{\mathcal{M}\{z \notin \tau\}}{\mathcal{M}\{(a \in \xi) \cap (b \in \eta)\}}, & \text{if } \dfrac{\mathcal{M}\{z \notin \tau\}}{\mathcal{M}\{(a \in \xi) \cap (b \in \eta)\}} < 0.5 \\[2mm] 0.5, & \text{otherwise.} \end{cases}$$

The theorem follows from M{z ∈ τ} = λ(z), M{z ∉ τ} = 1 − λ(z) and M{(a ∈ ξ) ∩ (b ∈ η)} = μ(a) ∧ ν(b) immediately.
Inference Rule 11.3 (Gao, Gao and Ralescu [43]) Let X and Y be two concepts. Assume two rules "if X is an uncertain set ξ1 then Y is an uncertain set η1" and "if X is an uncertain set ξ2 then Y is an uncertain set η2". From X is a constant a we infer that Y is an uncertain set

$$\eta^* = \frac{\mathcal{M}\{a \in \xi_1\}}{\mathcal{M}\{a \in \xi_1\} + \mathcal{M}\{a \in \xi_2\}} \cdot \eta_1|_{a \in \xi_1} + \frac{\mathcal{M}\{a \in \xi_2\}}{\mathcal{M}\{a \in \xi_1\} + \mathcal{M}\{a \in \xi_2\}} \cdot \eta_2|_{a \in \xi_2}. \tag{11.8}$$
Theorem 11.3 Let ξ1, ξ2, η1, η2 be independent uncertain sets with membership functions μ1, μ2, ν1, ν2, respectively. If ξ* is a constant a, then the inference rule 11.3 yields

$$\eta^* = \frac{\mu_1(a)}{\mu_1(a) + \mu_2(a)} \cdot \eta_1^* + \frac{\mu_2(a)}{\mu_1(a) + \mu_2(a)} \cdot \eta_2^* \tag{11.10}$$

where η1* and η2* are uncertain sets whose membership functions are respectively given by

$$\nu_1^*(y) = \begin{cases} \dfrac{\nu_1(y)}{\mu_1(a)}, & \text{if } \nu_1(y) < \mu_1(a)/2 \\[2mm] \dfrac{\nu_1(y) + \mu_1(a) - 1}{\mu_1(a)}, & \text{if } \nu_1(y) > 1 - \mu_1(a)/2 \\[2mm] 0.5, & \text{otherwise,} \end{cases} \tag{11.11}$$

$$\nu_2^*(y) = \begin{cases} \dfrac{\nu_2(y)}{\mu_2(a)}, & \text{if } \nu_2(y) < \mu_2(a)/2 \\[2mm] \dfrac{\nu_2(y) + \mu_2(a) - 1}{\mu_2(a)}, & \text{if } \nu_2(y) > 1 - \mu_2(a)/2 \\[2mm] 0.5, & \text{otherwise.} \end{cases} \tag{11.12}$$
Proof: It follows from the inference rule 11.3 that the uncertain set η* is just

$$\eta^* = \frac{\mathcal{M}\{a \in \xi_1\}}{\mathcal{M}\{a \in \xi_1\} + \mathcal{M}\{a \in \xi_2\}} \cdot \eta_1|_{a \in \xi_1} + \frac{\mathcal{M}\{a \in \xi_2\}}{\mathcal{M}\{a \in \xi_1\} + \mathcal{M}\{a \in \xi_2\}} \cdot \eta_2|_{a \in \xi_2}.$$

The theorem follows from M{a ∈ ξ1} = μ1(a) and M{a ∈ ξ2} = μ2(a) immediately.
Inference Rule 11.4 Let X1, X2, …, Xm be concepts. Assume rules "if X1 is ξi1 and ⋯ and Xm is ξim then Y is ηi" for i = 1, 2, …, k. From X1 is a1 and ⋯ and Xm is am we infer that Y is an uncertain set

$$\eta^* = \sum_{i=1}^{k} \frac{c_i \cdot \eta_i|_{(a_1 \in \xi_{i1}) \cap (a_2 \in \xi_{i2}) \cap \cdots \cap (a_m \in \xi_{im})}}{c_1 + c_2 + \cdots + c_k} \tag{11.13}$$

where the coefficients are determined by

$$c_i = \mathcal{M}\{(a_1 \in \xi_{i1}) \cap (a_2 \in \xi_{i2}) \cap \cdots \cap (a_m \in \xi_{im})\} \tag{11.14}$$

for i = 1, 2, …, k.
Theorem 11.4 Let ξi1, …, ξim, ηi be independent uncertain sets with membership functions μi1, …, μim, νi, i = 1, 2, …, k, respectively. If ξ1*, ξ2*, …, ξm* are constants a1, a2, …, am, respectively, then the inference rule 11.4 yields

$$\eta^* = \sum_{i=1}^{k} \frac{c_i \eta_i^*}{c_1 + c_2 + \cdots + c_k} \tag{11.16}$$

where ηi* are uncertain sets whose membership functions are given by

$$\nu_i^*(y) = \begin{cases} \dfrac{\nu_i(y)}{c_i}, & \text{if } \nu_i(y) < c_i/2 \\[2mm] \dfrac{\nu_i(y) + c_i - 1}{c_i}, & \text{if } \nu_i(y) > 1 - c_i/2 \\[2mm] 0.5, & \text{otherwise} \end{cases} \tag{11.17}$$

and the coefficients are

$$c_i = \mu_{i1}(a_1) \wedge \mu_{i2}(a_2) \wedge \cdots \wedge \mu_{im}(a_m) \tag{11.18}$$

for i = 1, 2, …, k, respectively.
Proof: For each i, since a1 ∈ ξi1, a2 ∈ ξi2, …, am ∈ ξim are independent events, we immediately have

$$\mathcal{M}\left\{ \bigcap_{j=1}^{m} (a_j \in \xi_{ij}) \right\} = \min_{1 \le j \le m} \mathcal{M}\{a_j \in \xi_{ij}\} = \min_{1 \le j \le m} \mu_{ij}(a_j).$$

The theorem follows immediately.
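The proof shows each coefficient collapses to a minimum of membership values, ci = min_j μij(aj). A sketch computing the coefficients for two hypothetical rules over two inputs (all breakpoints and inputs are illustrative):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

# mu[i][j]: membership function of xi_{ij} (rule i, input j); illustrative
mu = [
    [lambda x: trapezoid(x, 0, 1, 2, 3), lambda x: trapezoid(x, 0, 2, 4, 6)],
    [lambda x: trapezoid(x, 2, 3, 4, 5), lambda x: trapezoid(x, 4, 6, 8, 10)],
]

def coefficients(mu, inputs):
    """c_i = min_j mu_{ij}(a_j), by independence of the events a_j in xi_{ij}."""
    return [min(m(a) for m, a in zip(row, inputs)) for row in mu]

print(coefficients(mu, inputs=(1.5, 3.0)))
```

With these inputs the first rule fires fully (c1 = 1) and the second not at all (c2 = 0), so only the first rule contributes to (11.16).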
11.2 Uncertain System
Now let us consider an uncertain system in which there are m crisp inputs α1, α2, …, αm and n crisp outputs β1, β2, …, βn. At first, we infer n uncertain sets η1*, η2*, …, ηn* from the m crisp inputs by the rule-base (i.e., a set of if-then rules)

"if ξ11 and ξ12 and ⋯ and ξ1m then η11 and η12 and ⋯ and η1n"
"if ξ21 and ξ22 and ⋯ and ξ2m then η21 and η22 and ⋯ and η2n"
⋯
"if ξk1 and ξk2 and ⋯ and ξkm then ηk1 and ηk2 and ⋯ and ηkn" (11.19)

and the inference rule 11.4, i.e.,

$$\eta_j^* = \sum_{i=1}^{k} \frac{c_i \cdot \eta_{ij}|_{(\alpha_1 \in \xi_{i1}) \cap (\alpha_2 \in \xi_{i2}) \cap \cdots \cap (\alpha_m \in \xi_{im})}}{c_1 + c_2 + \cdots + c_k} \tag{11.20}$$

for j = 1, 2, …, n. Then the crisp outputs are obtained by the expected values

$$\beta_j = E[\eta_j^*], \quad j = 1, 2, \ldots, n. \tag{11.21}$$
[Figure: an uncertain system with crisp inputs α1, α2, …, αm, inferred uncertain sets η1*, η2*, …, ηn*, and crisp outputs βj = E[ηj*]]
Theorem 11.5 Assume ξi1, ξi2, …, ξim, ηi1, ηi2, …, ηin are independent uncertain sets with membership functions μi1, μi2, …, μim, νi1, νi2, …, νin, i = 1, 2, …, k, respectively. Then the uncertain system from (α1, α2, …, αm) to (β1, β2, …, βn) is

$$\beta_j = \sum_{i=1}^{k} \frac{c_i E[\eta_{ij}^*]}{c_1 + c_2 + \cdots + c_k} \tag{11.24}$$

for j = 1, 2, …, n, where ηij* are uncertain sets whose membership functions are given by

$$\nu_{ij}^*(y) = \begin{cases} \dfrac{\nu_{ij}(y)}{c_i}, & \text{if } \nu_{ij}(y) < c_i/2 \\[2mm] \dfrac{\nu_{ij}(y) + c_i - 1}{c_i}, & \text{if } \nu_{ij}(y) > 1 - c_i/2 \\[2mm] 0.5, & \text{otherwise} \end{cases} \tag{11.25}$$

and

$$c_i = \mu_{i1}(\alpha_1) \wedge \mu_{i2}(\alpha_2) \wedge \cdots \wedge \mu_{im}(\alpha_m) \tag{11.26}$$

for i = 1, 2, …, k, j = 1, 2, …, n, respectively.
Proof: It follows from the inference rule 11.4 that the uncertain sets ηj* are

$$\eta_j^* = \sum_{i=1}^{k} \frac{c_i \eta_{ij}^*}{c_1 + c_2 + \cdots + c_k}$$

for j = 1, 2, …, n. Since ηij*, i = 1, 2, …, k, j = 1, 2, …, n are independent uncertain sets, we get the theorem immediately by the linearity of the expected value operator.

Remark 11.1: The uncertain system allows the uncertain sets ηij in the rule-base (11.19) to become constants bij, i.e.,

$$\eta_{ij} = b_{ij}. \tag{11.27}$$
Uncertain systems can approximate any continuous function g on a compact set D: for any ε > 0 there is an uncertain system f such that

$$|f(\alpha_1, \alpha_2, \ldots, \alpha_m) - g(\alpha_1, \alpha_2, \ldots, \alpha_m)| < \varepsilon \tag{11.31}$$

for any (α1, α2, …, αm) ∈ D.

Proof: Without loss of generality, we assume that the function g is a real-valued function with only two variables α1 and α2, and the compact set is a unit rectangle D = [0, 1] × [0, 1]. Since g is continuous on D, it is uniformly continuous, so for any given number ε > 0, there is a number δ > 0 such that

$$|g(\alpha_1, \alpha_2) - g(\alpha_1', \alpha_2')| < \varepsilon \tag{11.32}$$

whenever ‖(α1, α2) − (α1′, α2′)‖ < δ. Let k be an integer larger than √2/δ, and write

$$D_{ij} = \left\{ (\alpha_1, \alpha_2) \ \middle|\ \frac{i-1}{k} < \alpha_1 \le \frac{i}{k},\ \frac{j-1}{k} < \alpha_2 \le \frac{j}{k} \right\} \tag{11.33}$$

for i, j = 1, 2, …, k. Note that {Dij} is a sequence of disjoint rectangles whose diameter is less than δ. Define rectangular uncertain sets

$$\xi_i = \left[\frac{i-1}{k}, \frac{i}{k}\right], \quad i = 1, 2, \ldots, k, \tag{11.34}$$

$$\eta_j = \left[\frac{j-1}{k}, \frac{j}{k}\right], \quad j = 1, 2, \ldots, k. \tag{11.35}$$

Then we assume a rule-base with k × k if-then rules,

Rule ij: "if ξi and ηj then g(i/k, j/k)",  i, j = 1, 2, …, k. (11.36)

The resulting uncertain system is

$$f(\alpha_1, \alpha_2) = g(i/k, j/k) \quad \text{if } (\alpha_1, \alpha_2) \in D_{ij},\ i, j = 1, 2, \ldots, k. \tag{11.37}$$

It then follows from (11.32) that |f(α1, α2) − g(α1, α2)| < ε for any (α1, α2) ∈ D. The theorem is thus verified. Hence uncertain systems are universal approximators!
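The construction in the proof is concrete enough to run: partition the unit square into k × k cells and answer with g evaluated at each cell's corner. A sketch with an illustrative g (the choice of g and of k are ours, not the book's):

```python
import math

def make_system(g, k):
    """Piecewise-constant rule-base output: f = g(i/k, j/k) on cell D_ij."""
    def f(a1, a2):
        i = max(1, math.ceil(a1 * k))   # cell index with (i-1)/k < a1 <= i/k
        j = max(1, math.ceil(a2 * k))
        return g(i / k, j / k)
    return f

g = lambda a1, a2: math.sin(a1) + a2 ** 2
f = make_system(g, k=200)

# the uniform error shrinks as k grows, as the proof promises
err = max(abs(g(x / 100, y / 100) - f(x / 100, y / 100))
          for x in range(101) for y in range(101))
print(err)
```

With k = 200 each cell has side 1/200, so for this Lipschitz g the error stays around 0.015, well below a target ε of, say, 0.03.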
11.3 Uncertain Control
[Figure: an uncertain controller: the outputs α1(t), α2(t), …, αm(t) of the process are the inputs of the controller; the inference rule with a rule base produces β1(t) = E[η1*(t)], β2(t) = E[η2*(t)], …, which are fed back as the inputs of the process]
11.4 Inverted Pendulum
[Figures: membership functions NL, NS, Z, PS, PL of the angle A(t) with breakpoints at ±π/4 and ±π/2 (rad); of the angular velocity with breakpoints at ±π/8 and ±π/4 (rad/sec); and of the force F(t) with breakpoints at ±20, ±40 and ±60 (N)]

The rule base of the uncertain controller may be represented by the following table, where the two inputs index the rows and columns (NL, NS, Z, PS, PL) and the entries give the force F(t):

        NL   NS   Z    PS   PL
  NL    PL   PL   PL   PS   Z
  NS    PL   PL   PS   Z    NS
  Z     PL   PS   Z    NS   NL
  PS    PS   Z    NS   NL   NL
  PL    Z    NS   NL   NL   NL
A lot of simulation results show that the uncertain controller may balance
the inverted pendulum successfully.
11.5
Bibliographic Notes
The basic uncertain inference rule was initialized by Liu [118] in 2010 by
the tool of conditional uncertain set. After that, Gao, Gao and Ralescu [43]
extended the uncertain inference rule to the case with multiple antecedents
and multiple if-then rules.
Based on the uncertain inference rules, Liu [118] suggested the concept of
uncertain system, and then presented the tool of uncertain controller. As an
important contribution, Peng and Chen [174] proved that uncertain systems
are universal approximators and then demonstrated that the uncertain controller is a reasonable tool. As a successful application, Gao [46] balanced an
inverted pendulum by using the uncertain controller.
Chapter 12
Uncertain Process
An uncertain process is essentially a collection of uncertain variables. This
chapter will give the concept of uncertain process, and introduce sample
path, uncertainty distribution, independent increment, stationary increment,
extreme value, first hitting time, and time integral of uncertain process.
12.1
Uncertain Process
Sample Path
Definition 12.2 (Liu [114]) Let Xt be an uncertain process. Then for each γ ∈ Γ, the function Xt(γ) is called a sample path of Xt.
Note that each sample path is a real-valued function of time t. In addition,
an uncertain process may also be regarded as a function from an uncertainty
space to a collection of sample paths.
[Figure: a sample path of an uncertain process]
12.2 Uncertainty Distribution
Example 12.3: The linear uncertain process Xt ∼ L(at, bt) has an uncertainty distribution

$$\Phi_t(x) = \begin{cases} 0, & \text{if } x \le at \\[1mm] \dfrac{x - at}{(b-a)t}, & \text{if } at \le x \le bt \\[1mm] 1, & \text{if } x \ge bt. \end{cases} \tag{12.3}$$
Example 12.4: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an uncertainty distribution

$$\Phi_t(x) = \begin{cases} 0, & \text{if } x \le at \\[1mm] \dfrac{x - at}{2(b-a)t}, & \text{if } at \le x \le bt \\[1mm] \dfrac{x + ct - 2bt}{2(c-b)t}, & \text{if } bt \le x \le ct \\[1mm] 1, & \text{if } x \ge ct. \end{cases} \tag{12.4}$$
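Both (12.3) and (12.4) scale a fixed distribution shape by t. A sketch evaluating the linear case (the function name is ours):

```python
def linear_dist(x, a, b, t):
    """Phi_t(x) for X_t ~ L(at, bt), per (12.3)."""
    lo, hi = a * t, b * t
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / ((b - a) * t)   # linear ramp on [at, bt]

# X_t ~ L(t, 3t): at t = 2 the distribution ramps linearly over [2, 6]
print(linear_dist(4, 1, 3, 2))   # midpoint of the ramp
```

At the midpoint x = 4 of [2, 6] the value is 0.5, as the symmetry of the ramp suggests.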
Example 12.5: The normal uncertain process Xt ∼ N(et, σt) has an uncertainty distribution

$$\Phi_t(x) = \left(1 + \exp\left(\frac{\pi(et - x)}{\sqrt{3}\,\sigma t}\right)\right)^{-1}. \tag{12.5}$$
Example 12.6: The lognormal uncertain process Xt ∼ LOGN(et, σt) has an uncertainty distribution

$$\Phi_t(x) = \left(1 + \exp\left(\frac{\pi(et - \ln x)}{\sqrt{3}\,\sigma t}\right)\right)^{-1}. \tag{12.6}$$
Theorem 12.1 (Liu [133], Sufficient and Necessary Condition) A function Φt(x) : T × ℜ → [0, 1] is an uncertainty distribution of an uncertain process if and only if at each time t, it is a monotone increasing function except Φt(x) ≡ 0 and Φt(x) ≡ 1.

Proof: If Φt(x) is an uncertainty distribution of some uncertain process Xt, then at each time t, Φt(x) is the uncertainty distribution of the uncertain variable Xt. It follows from the Peng-Iwamura theorem that Φt(x) is a monotone increasing function with respect to x and Φt(x) ≢ 0, Φt(x) ≢ 1. Conversely, if at each time t, Φt(x) is a monotone increasing function except Φt(x) ≡ 0 and Φt(x) ≡ 1, it follows from the Peng-Iwamura theorem that there exists an uncertain variable ξt whose uncertainty distribution is just Φt(x). Define

$$X_t = \xi_t, \quad \forall t \in T.$$

Then Xt is an uncertain process whose uncertainty distribution is Φt(x). The theorem is proved.

Theorem 12.2 Let Xt be an uncertain process with uncertainty distribution Φt(x). Then (i) if f(x) is a strictly increasing function, then f(Xt) has an uncertainty distribution

$$\Psi_t(x) = \Phi_t(f^{-1}(x)); \tag{12.7}$$
and (ii) if f(x) is a strictly decreasing function and Φt(x) is continuous with respect to x, then f(Xt) has an uncertainty distribution

$$\Psi_t(x) = 1 - \Phi_t(f^{-1}(x)). \tag{12.8}$$
An uncertainty distribution Φt(x) is said to be regular if at each time t, it is a continuous and strictly increasing function with respect to x at which 0 < Φt(x) < 1, and

$$\lim_{x \to -\infty} \Phi_t(x) = 0, \quad \lim_{x \to +\infty} \Phi_t(x) = 1. \tag{12.10}$$

It is clear that linear uncertainty distribution, zigzag uncertainty distribution, normal uncertainty distribution and lognormal uncertainty distribution of uncertain processes are all regular.

Note that we have stipulated that a crisp initial value X0 has a regular uncertainty distribution. That is, we allow the initial value of a regular uncertain process to be a constant whose uncertainty distribution is

$$\Phi_0(x) = \begin{cases} 1, & \text{if } x \ge X_0 \\ 0, & \text{if } x < X_0 \end{cases} \tag{12.11}$$

and we say Φ0(x) is a continuous and strictly increasing function with respect to x at which 0 < Φ0(x) < 1 even though it is discontinuous at X0.
When Φt(x) is a regular uncertainty distribution, the inverse function Φt⁻¹(α) exists on (0, 1) at each time t and is called the inverse uncertainty distribution of Xt. For convenience, we also stipulate

$$\Phi_t^{-1}(0) = \lim_{\alpha \downarrow 0} \Phi_t^{-1}(\alpha), \quad \Phi_t^{-1}(1) = \lim_{\alpha \uparrow 1} \Phi_t^{-1}(\alpha). \tag{12.13}$$
[Figure: inverse uncertainty distributions Φt⁻¹(α) of an uncertain process as functions of time t, for α = 0.1, 0.2, …, 0.9]

Example 12.8: The linear uncertain process Xt ∼ L(at, bt) has an inverse uncertainty distribution

$$\Phi_t^{-1}(\alpha) = (1-\alpha)at + \alpha bt. \tag{12.14}$$
Example 12.9: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an
inverse uncertainty distribution

    Φt⁻¹(α) = ⎧ (1 − 2α)at + 2αbt,        if α < 0.5
              ⎩ (2 − 2α)bt + (2α − 1)ct,  if α ≥ 0.5.    (12.15)
Example 12.10: The normal uncertain process Xt ∼ N(et, σt) has an inverse
uncertainty distribution

    Φt⁻¹(α) = et + (σt√3/π) ln(α/(1 − α)).    (12.16)
Example 12.11: The lognormal uncertain process Xt ∼ LOGN(et, σt) has
an inverse uncertainty distribution

    Φt⁻¹(α) = exp( et + (σt√3/π) ln(α/(1 − α)) ).    (12.17)
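These inverse uncertainty distributions are closed-form expressions, so they are straightforward to evaluate numerically. The following Python sketch does so for the linear, zigzag (12.15), normal (12.16), and lognormal (12.17) processes; the function names are ours, not part of the text.

```python
import math

def inv_linear(alpha, a, b, t):
    # Linear process Xt ~ L(at, bt)
    return (1 - alpha) * a * t + alpha * b * t

def inv_zigzag(alpha, a, b, c, t):
    # Zigzag process Xt ~ Z(at, bt, ct), eq. (12.15)
    if alpha < 0.5:
        return (1 - 2 * alpha) * a * t + 2 * alpha * b * t
    return (2 - 2 * alpha) * b * t + (2 * alpha - 1) * c * t

def inv_normal(alpha, e, sigma, t):
    # Normal process Xt ~ N(et, sigma*t), eq. (12.16)
    return e * t + (sigma * t * math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))

def inv_lognormal(alpha, e, sigma, t):
    # Lognormal process Xt ~ LOGN(et, sigma*t), eq. (12.17)
    return math.exp(inv_normal(alpha, e, sigma, t))
```

At α = 0.5 the logarithm vanishes, so the normal process returns exactly et; each function is continuous and strictly increasing in α at fixed t, as Theorem 12.3 requires.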
Theorem 12.3 (Liu [133], Sufficient and Necessary Condition) A function
Φt⁻¹(α) : T × (0, 1) → ℜ is an inverse uncertainty distribution of uncertain
process if and only if at each time t, it is a continuous and strictly increasing
function with respect to α.

Proof: Suppose Φt⁻¹(α) is an inverse uncertainty distribution of uncertain
process Xt. Then at each time t, Φt⁻¹(α) is an inverse uncertainty
distribution of uncertain variable Xt. It follows from Theorem 2.6 that Φt⁻¹(α)
is a continuous and strictly increasing function with respect to α ∈ (0, 1).
Conversely, if Φt⁻¹(α) is a continuous and strictly increasing function with
respect to α ∈ (0, 1), it follows from Theorem 2.6 that there exists an
uncertain variable ξt whose inverse uncertainty distribution is just Φt⁻¹(α).
Define

    Xt = ξt,    t ∈ T.

Then Xt is an uncertain process with inverse uncertainty distribution Φt⁻¹(α).
The theorem is verified.
12.3 Independence
Definition 12.8 (Liu [133]) Uncertain processes X1t, X2t, …, Xnt are said
to be independent if for any positive integer k and any times t1, t2, …, tk,
the uncertain vectors

    ξi = (X_{i,t1}, X_{i,t2}, …, X_{i,tk}),    i = 1, 2, …, n    (12.18)

are independent, i.e., for any k-dimensional Borel sets B1, B2, …, Bn, we have

    M{ ⋂_{i=1}^{n} (ξi ∈ Bi) } = ⋀_{i=1}^{n} M{ξi ∈ Bi}.    (12.19)

For any independent uncertain processes X1t, X2t, …, Xnt and any times
t1, t2, …, tn, it is clear that

    X_{1,t1}, X_{2,t2}, …, X_{n,tn}    (12.20)

are independent uncertain variables.
Theorem 12.4 (Liu [133]) Uncertain processes X1t, X2t, …, Xnt are
independent if and only if for any positive integer k, any times t1, t2, …, tk,
and any k-dimensional Borel sets B1, B2, …, Bn, we have

    M{ ⋃_{i=1}^{n} (ξi ∈ Bi) } = ⋁_{i=1}^{n} M{ξi ∈ Bi}    (12.21)

where ξi are the uncertain vectors defined by (12.18).
Theorem 12.5 (Liu [133]) Let X1t, X2t, …, Xnt be independent uncertain
processes with regular uncertainty distributions Φ1t, Φ2t, …, Φnt, respectively.
If f is a strictly increasing function, then the uncertain process

    Xt = f(X1t, X2t, …, Xnt)

has an inverse uncertainty distribution

    Φt⁻¹(α) = f( Φ1t⁻¹(α), Φ2t⁻¹(α), …, Φnt⁻¹(α) ).    (12.22)

Proof: At any time t, it is clear that X1t, X2t, …, Xnt are independent
uncertain variables with inverse uncertainty distributions Φ1t⁻¹(α), Φ2t⁻¹(α),
…, Φnt⁻¹(α), respectively. The theorem follows from the operational law of
uncertain variables immediately.
12.4 Independent Increment Process

An independent increment process is an uncertain process that has independent
increments. A formal definition is given below.

Definition 12.9 (Liu [114]) An uncertain process Xt is said to have
independent increments if

    X_{t0}, X_{t1} − X_{t0}, X_{t2} − X_{t1}, …, X_{tk} − X_{tk−1}    (12.24)

are independent uncertain variables for any times t0 < t1 < … < tk, where t0
is the initial time.
Furthermore, the uncertain process Xt may have a linear uncertainty
distribution, i.e.,

    Xt ∼ L(at, bt).    (12.27)
Theorem 12.6 Let Xt be an independent increment process. Then for any
real numbers a and b, the uncertain process

    Yt = aXt + b    (12.28)

is also an independent increment process.
That is,

    Φt⁻¹(β) − Φs⁻¹(β) > Φt⁻¹(α) − Φs⁻¹(α).

Hence Φt⁻¹(α) − Φs⁻¹(α) is a strictly increasing function with respect to α.

Conversely, let us prove that there exists an independent increment process
whose inverse uncertainty distribution is just Φt⁻¹(α). Without loss of
generality, we only consider the range of t ∈ [0, 1]. For each positive
integer n, since Φt⁻¹(α) and Φt⁻¹(α) − Φs⁻¹(α) are continuous and strictly
increasing functions with respect to α, there exist independent uncertain
variables ξ0n, ξ1n, …, ξnn such that ξ0n has an inverse uncertainty
distribution

    Υ0n⁻¹(α) = Φ0⁻¹(α)

and ξkn have inverse uncertainty distributions

    Υkn⁻¹(α) = Φ_{k/n}⁻¹(α) − Φ_{(k−1)/n}⁻¹(α)    (k = 1, 2, …, n).

Define an uncertain process

    Xtⁿ = ⎧ Σ_{i=0}^{k} ξin,  if t = k/n  (k = 0, 1, …, n)
          ⎩ linear,           otherwise.

It may be proved that Xtⁿ converges in distribution as n → ∞. Furthermore, we
may verify that the limit is indeed an independent increment process and has
the inverse uncertainty distribution Φt⁻¹(α). The theorem is verified.
Remark 12.3: It follows from Theorem 12.8 that the uncertainty distribution
of independent increment process has a horn-like shape, i.e., for any
s < t and α < β, we have

    Φs⁻¹(β) − Φs⁻¹(α) < Φt⁻¹(β) − Φt⁻¹(α).    (12.29)
Figure 12.3: Inverse Uncertainty Distribution of Independent Increment Process: A Horn-like Family of Functions of t indexed by α
12.5 Stationary Independent Increment Process

An uncertain process Xt is said to have stationary increments if, for any
given t > 0, the increments X_{s+t} − X_s are identically distributed uncertain
variables for all times s > 0. A stationary independent increment process is
an uncertain process that has both stationary and independent increments.
If Xt is a stationary independent increment process with initial value X0 = 0,
then Xt and tX1 are identically distributed uncertain variables, so

    Φt(x) = Φ(x/t)    (12.35)

for any time t > 0. Note that Φ is just the uncertainty distribution of X1.
Theorem 12.11 (Liu [133], Sufficient and Necessary Condition) A function
Φt⁻¹(α) = X0 + ψ(α)t is an inverse uncertainty distribution of stationary
independent increment process with crisp initial value X0 if and only if ψ(α)
is a continuous and strictly increasing function with respect to α.    (12.36)

To prove the converse, note that for each positive integer n there exist iid
uncertain variables ξ1n, ξ2n, …, ξnn whose common inverse uncertainty
distribution is ψ(α). Define the uncertain process

    Xtⁿ = ⎧ X0 + (1/n) Σ_{i=1}^{k} ξin,  if t = k/n  (k = 1, 2, …, n)
          ⎩ linear,                       otherwise.

It may be proved that Xtⁿ converges in distribution as n → ∞. Furthermore, we
may verify that the limit is a stationary independent increment process and
has the inverse uncertainty distribution Φt⁻¹(α). The theorem is verified.
Example 12.17: The linear uncertain process Xt ∼ L(at, bt) has an inverse
uncertainty distribution Φt⁻¹(α) = ψ(α)t where

    ψ(α) = (1 − α)a + αb.    (12.37)

It follows from Theorem 12.11 that there exists a stationary independent
increment process with linear uncertainty distribution because ψ(α) is a
strictly increasing function of α.
Example 12.18: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an
inverse uncertainty distribution Φt⁻¹(α) = ψ(α)t where

    ψ(α) = ⎧ (1 − 2α)a + 2αb,        if α < 0.5
           ⎩ (2 − 2α)b + (2α − 1)c,  if α ≥ 0.5.    (12.38)
Figure 12.4: Inverse Uncertainty Distribution of Stationary Independent Increment Process: A Family of Linear Functions of t indexed by α
It follows from Theorem 12.11 that there exists a stationary independent
increment process with zigzag uncertainty distribution because ψ(α) is a
strictly increasing function of α.
Example 12.19: The normal uncertain process Xt ∼ N(et, σt) has an inverse
uncertainty distribution Φt⁻¹(α) = ψ(α)t where

    ψ(α) = e + (σ√3/π) ln(α/(1 − α)).    (12.39)

It follows from Theorem 12.11 that there exists a stationary independent
increment process with normal uncertainty distribution because ψ(α) is a
strictly increasing function of α.
Example 12.20: The lognormal uncertain process Xt ∼ LOGN(et, σt) has an
inverse uncertainty distribution

    Φt⁻¹(α) = exp( et + (σt√3/π) ln(α/(1 − α)) ).    (12.40)

It is clear that Φt⁻¹(α) is not a family of linear functions of t indexed by
α. Hence there does not exist any stationary independent increment process
whose uncertainty distribution is lognormal.
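This existence criterion can be checked numerically: Φt⁻¹(α) = ψ(α)t exactly when Φt⁻¹(α)/t does not depend on t. A minimal Python sketch (helper names are ours):

```python
import math

def inv_normal_proc(alpha, e, sigma, t):
    # Normal process, eq. (12.16)
    return e * t + (sigma * t * math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))

def inv_lognormal_proc(alpha, e, sigma, t):
    # Lognormal process, eq. (12.40)
    return math.exp(inv_normal_proc(alpha, e, sigma, t))

def linear_in_t(inv, alpha, ts=(1.0, 2.0, 4.0), tol=1e-9):
    # Phi_t^{-1}(alpha) = psi(alpha) * t  iff the ratio to t is constant in t
    ratios = [inv(alpha, t) / t for t in ts]
    return max(ratios) - min(ratios) < tol

print(linear_in_t(lambda a, t: inv_normal_proc(a, 1.0, 2.0, t), 0.3))     # normal: True
print(linear_in_t(lambda a, t: inv_lognormal_proc(a, 1.0, 2.0, t), 0.3))  # lognormal: False
```

The ratio test confirms that the normal process admits a stationary independent increment process while the lognormal one does not.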
Theorem 12.12 (Liu [120]) Let Xt be a stationary independent increment
process. Then there exist two real numbers a and b such that

    E[Xt] = a + bt    (12.41)

for any time t ≥ 0.
Theorem 12.13 Let Xt be a stationary independent increment process with
a crisp initial value X0 = 0. Then for any times s and t, we have

    E[X_{s+t}] = E[Xs] + E[Xt].    (12.42)

Proof: It follows from Theorem 12.12 that there exists a real number b such
that E[Xt] = bt for any time t ≥ 0. Hence

    E[X_{s+t}] = b(s + t) = bs + bt = E[Xs] + E[Xt].
Theorem 12.14 (Chen [14]) Let Xt be a stationary independent increment
process with a crisp initial value X0. Then there exists a real number b such
that

    V[Xt] = bt²    (12.43)

for any time t ≥ 0.

Proof: It follows from Theorem 12.10 that Xt and (1 − t)X0 + tX1 are
identically distributed uncertain variables. Since X0 is a constant, we have

    V[Xt] = V[(1 − t)X0 + tX1] = t²V[X1].

Hence (12.43) holds for b = V[X1].
Theorem 12.15 (Chen [14]) Let Xt be a stationary independent increment
process with a crisp initial value X0. Then for any times s and t, we have

    √V[X_{s+t}] = √V[Xs] + √V[Xt].    (12.44)

Proof: It follows from Theorem 12.14 that there exists a real number b such
that V[Xt] = bt² for any time t ≥ 0. Hence

    √V[X_{s+t}] = √b (s + t) = √b s + √b t = √V[Xs] + √V[Xt].
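Equation (12.44) reduces to the arithmetic identity √(b(s+t)²) = √(bs²) + √(bt²) for b ≥ 0, which a one-line numerical check confirms (names ours):

```python
import math

def var_stationary(b, t):
    # V[Xt] = b * t^2 for a stationary independent increment process
    # with crisp initial value, eq. (12.43)
    return b * t * t

b, s, t = 2.5, 1.2, 3.4
lhs = math.sqrt(var_stationary(b, s + t))
rhs = math.sqrt(var_stationary(b, s)) + math.sqrt(var_stationary(b, t))
assert abs(lhs - rhs) < 1e-9
```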
12.6 Extreme Value Theorem
This section will present a series of extreme value theorems for sample-continuous independent increment processes. Note that a discrete-time uncertain process will be considered sample-continuous in this section.
Theorem 12.16 (Liu [126], Extreme Value Theorem) Let Xt be a
sample-continuous independent increment process with uncertainty distribution
Φt(x). Then the supremum

    sup_{0≤t≤s} Xt    (12.45)

has an uncertainty distribution

    Ψ(x) = inf_{0≤t≤s} Φt(x);    (12.46)

and the infimum

    inf_{0≤t≤s} Xt    (12.47)

has an uncertainty distribution

    Ψ(x) = sup_{0≤t≤s} Φt(x).    (12.48)

Proof: The supremum and infimum are approximated by maxima and minima
over finite partitions 0 ≤ t1 < t2 < … < tn ≤ s. Since Xt is an independent
increment process, the maximum max_{1≤i≤n} X_{ti} has the uncertainty
distribution min_{1≤i≤n} Φ_{ti}(x), and

    min_{1≤i≤n} Φ_{ti}(x) → inf_{0≤t≤s} Φt(x)

as the partition refines; similarly, the minimum min_{1≤i≤n} X_{ti} has the
uncertainty distribution max_{1≤i≤n} Φ_{ti}(x), and

    max_{1≤i≤n} Φ_{ti}(x) → sup_{0≤t≤s} Φt(x).

Sample-continuity of Xt then yields (12.46) and (12.48).
(12.51)
(12.52)
0ts
=M
sup Xt f 1 (x)
0ts
= inf t (f 1 (x)).
0ts
269
Similarly, we have
(x) = M
=M
inf f (Xt ) x
0ts
inf Xt f
0ts
(x)
= sup t (f 1 (x)).
0ts
Exercise 12.3: Let Xt be a sample-continuous and nonnegative independent
increment process with uncertainty distribution Φt(x). Show that the
supremum

    sup_{0≤t≤s} Xt²    (12.61)

has an uncertainty distribution

    Ψ(x) = inf_{0≤t≤s} Φt(√x),    (12.62)

and the infimum

    inf_{0≤t≤s} Xt²    (12.63)

has an uncertainty distribution

    Ψ(x) = sup_{0≤t≤s} Φt(√x).    (12.64)
Theorem 12.18 (Liu [126]) Let Xt be a sample-continuous independent
increment process with continuous uncertainty distribution Φt(x), and let f
be a strictly decreasing function. Then the supremum

    sup_{0≤t≤s} f(Xt)    (12.65)

has an uncertainty distribution

    Ψ(x) = 1 − sup_{0≤t≤s} Φt(f⁻¹(x));    (12.66)

and the infimum

    inf_{0≤t≤s} f(Xt)    (12.67)

has an uncertainty distribution

    Ψ(x) = 1 − inf_{0≤t≤s} Φt(f⁻¹(x)).    (12.68)

Proof: Since f is a strictly decreasing function, the extreme value theorem
gives

    Ψ(x) = M{ sup_{0≤t≤s} f(Xt) ≤ x } = M{ inf_{0≤t≤s} Xt ≥ f⁻¹(x) }
         = 1 − M{ inf_{0≤t≤s} Xt < f⁻¹(x) } = 1 − sup_{0≤t≤s} Φt(f⁻¹(x)).

Similarly, we have

    Ψ(x) = M{ inf_{0≤t≤s} f(Xt) ≤ x } = M{ sup_{0≤t≤s} Xt ≥ f⁻¹(x) }
         = 1 − M{ sup_{0≤t≤s} Xt < f⁻¹(x) } = 1 − inf_{0≤t≤s} Φt(f⁻¹(x)).
12.7 First Hitting Time
Given a level z, the first hitting time that the uncertain process Xt reaches
z is

    τz = inf{ t ≥ 0 | Xt = z }.

[Figure: a sample path of Xt reaching the level z (figure omitted)]
Theorem 12.19 (Liu [126]) Let Xt be a sample-continuous independent
increment process and let z be a given level. Then the first hitting time τz
has an uncertainty distribution

    Υ(s) = ⎧ M{ sup_{0≤t≤s} Xt ≥ z },  if X0 < z
           ⎩ M{ inf_{0≤t≤s} Xt ≤ z },  if X0 > z.    (12.78)

Proof: When X0 < z, it follows from the definition of first hitting time that

    τz ≤ s  if and only if  sup_{0≤t≤s} Xt ≥ z.

Hence

    Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} Xt ≥ z }.

When X0 > z, it follows from the definition of first hitting time that

    τz ≤ s  if and only if  inf_{0≤t≤s} Xt ≤ z.

Hence

    Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} Xt ≤ z }.

The theorem is verified.
Theorem 12.20 Let Xt be a sample-continuous independent increment
process with continuous uncertainty distribution Φt(x) and a crisp initial
value X0. If f is a strictly increasing function and z is a given level, then
the first hitting time τz that f(Xt) reaches the level z has an uncertainty
distribution

    Υ(s) = ⎧ 1 − inf_{0≤t≤s} Φt(f⁻¹(z)),  if z > f(X0)
           ⎩ sup_{0≤t≤s} Φt(f⁻¹(z)),      if z < f(X0).    (12.79)

Proof: When z > f(X0), it follows from the extreme value theorem that

    Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} f(Xt) ≥ z } = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)).

When z < f(X0), it follows from the extreme value theorem that

    Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} f(Xt) ≤ z } = sup_{0≤t≤s} Φt(f⁻¹(z)).

The theorem is verified.

Theorem 12.21 Let Xt be a sample-continuous independent increment
process with continuous uncertainty distribution Φt(x) and a crisp initial
value X0. If f is a strictly decreasing function and z is a given level, then
the first hitting time τz that f(Xt) reaches the level z has an uncertainty
distribution

    Υ(s) = ⎧ sup_{0≤t≤s} Φt(f⁻¹(z)),      if z > f(X0)
           ⎩ 1 − inf_{0≤t≤s} Φt(f⁻¹(z)),  if z < f(X0).    (12.80)

Proof: When z > f(X0), it follows from the extreme value theorem that

    Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} f(Xt) ≥ z } = sup_{0≤t≤s} Φt(f⁻¹(z)).

When z < f(X0), it follows from the extreme value theorem that

    Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} f(Xt) ≤ z } = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)).

The theorem is verified.
12.8 Time Integral

This section will give a definition of time integral that is an integral of
uncertain process with respect to time.

Definition 12.12 (Liu [114]) Let Xt be an uncertain process. For any
partition of closed interval [a, b] with a = t1 < t2 < … < tk+1 = b, the mesh
is written as

    Δ = max_{1≤i≤k} |ti+1 − ti|.    (12.81)

Then the time integral of Xt with respect to t is

    ∫_a^b Xt dt = lim_{Δ→0} Σ_{i=1}^{k} X_{ti}(ti+1 − ti)    (12.82)

provided that the limit exists almost surely and is finite. In this case, the
uncertain process Xt is said to be time integrable.
Since Xt is an uncertain variable at each time t, the limit in (12.82) is
also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is time integrable if and only if the
limit in (12.82) is an uncertain variable.
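For a single sample path, the limit in (12.82) is an ordinary Riemann integral, so the finite sum itself serves as an approximation. A Python sketch (the helper name is ours):

```python
def time_integral(path, a, b, k):
    # Left Riemann sum  sum_i X_{t_i} (t_{i+1} - t_i)  over a uniform
    # partition of [a, b] with k subintervals, as in eq. (12.82),
    # applied to one sample path of the process.
    h = (b - a) / k
    return sum(path(a + i * h) * h for i in range(k))

# For the sample path X_t = t^2 the sums approach 1/3 as the mesh shrinks.
approx = time_integral(lambda t: t * t, 0.0, 1.0, 100000)
assert abs(approx - 1.0 / 3.0) < 1e-4
```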
Theorem 12.22 If Xt is a sample-continuous uncertain process on [a, b],
then it is time integrable on [a, b].
Proof: Let a = t1 < t2 < … < tk+1 = b be a partition of the closed interval
[a, b]. Since the uncertain process Xt is sample-continuous, almost all sample
paths are continuous functions with respect to t. Hence the limit

    lim_{Δ→0} Σ_{i=1}^{k} X_{ti}(ti+1 − ti)

exists almost surely and is finite. On the other hand, since Xt is an uncertain
variable at each time t, the above limit is also a measurable function. Hence
the limit is an uncertain variable and then Xt is time integrable.
Theorem 12.23 If Xt is a time integrable uncertain process on [a, b], then
it is time integrable on each subinterval of [a, b]. Moreover, if c ∈ [a, b],
then

    ∫_a^b Xt dt = ∫_a^c Xt dt + ∫_c^b Xt dt.    (12.83)
Proof: Let [a′, b′] be a subinterval of [a, b], and take partitions of [a, b]
that contain a′ and b′ as division points, say a′ = tm < … < tn = b′. Since Xt
is time integrable on [a, b], the limit

    lim_{Δ→0} Σ_{i=m}^{n−1} X_{ti}(ti+1 − ti)

exists almost surely and is finite. Hence Xt is time integrable on the
subinterval [a′, b′]. Next, for the partition

    a = t1 < … < tm = c < tm+1 < … < tk+1 = b,

we have

    Σ_{i=1}^{k} X_{ti}(ti+1 − ti) = Σ_{i=1}^{m−1} X_{ti}(ti+1 − ti) + Σ_{i=m}^{k} X_{ti}(ti+1 − ti).

Note that

    ∫_a^b Xt dt = lim_{Δ→0} Σ_{i=1}^{k} X_{ti}(ti+1 − ti),

    ∫_a^c Xt dt = lim_{Δ→0} Σ_{i=1}^{m−1} X_{ti}(ti+1 − ti),

    ∫_c^b Xt dt = lim_{Δ→0} Σ_{i=m}^{k} X_{ti}(ti+1 − ti).

Hence the equation (12.83) is proved.
Theorem 12.24 Let Xt and Yt be time integrable uncertain processes on
[a, b]. Then Xt + Yt is time integrable on [a, b] and

    ∫_a^b (Xt + Yt) dt = ∫_a^b Xt dt + ∫_a^b Yt dt.

Proof: Let a = t1 < t2 < … < tk+1 = b be a partition of the closed interval
[a, b]. It follows from the definition of time integral that

    ∫_a^b (Xt + Yt) dt = lim_{Δ→0} Σ_{i=1}^{k} (X_{ti} + Y_{ti})(ti+1 − ti)
        = lim_{Δ→0} Σ_{i=1}^{k} X_{ti}(ti+1 − ti) + lim_{Δ→0} Σ_{i=1}^{k} Y_{ti}(ti+1 − ti)
        = ∫_a^b Xt dt + ∫_a^b Yt dt.

The theorem is proved.
12.9
Bibliographic Notes
The study of uncertain process was started by Liu [114] in 2008 for modeling
the evolution of uncertain phenomena. Uncertainty distribution is an important concept for describing uncertain process, and a sufficient and necessary
condition for it was proved by Liu [133]. In addition, independence concept
of uncertain processes was also discussed by Liu [133].
Independent increment process was initialized by Liu [114], and a sufficient and necessary condition was proved by Liu [133] for its inverse uncertainty distribution. In addition, Liu [126] presented an extreme value
theorem and obtained the uncertainty distribution of first hitting time, and
Yao [230] provided a formula for calculating the uncertainty distribution of
time integral of independent increment process.
Stationary independent increment process was initialized by Liu [114], and
a sufficient and necessary condition was proved by Liu [133] for its inverse
uncertainty distribution. Furthermore, Liu [120] showed that the expected
value is a linear function of time, and Chen [14] verified that the variance is
proportional to the square of time.
Chapter 13
Uncertain Renewal
Process
As an important type of uncertain process, an uncertain renewal process is an
uncertain process in which events occur continuously and independently of
one another in uncertain times. This chapter will introduce uncertain renewal
process, delayed renewal process, renewal reward process, and alternating
renewal process. This chapter will also provide an uncertain insurance model.
13.1 Uncertain Renewal Process

Definition 13.1 (Liu [114]) Let ξ1, ξ2, … be iid positive uncertain
interarrival times. Define S0 = 0 and Sn = ξ1 + ξ2 + … + ξn for n ≥ 1. Then
the uncertain process

    Nt = max_{n≥0} { n | Sn ≤ t }    (13.1)

is called an uncertain renewal process.
[Figure 13.1: A Sample Path of Renewal Process Nt, with interarrival times
ξ1, ξ2, ξ3, ξ4 and renewal times S0, S1, S2, S3, S4 (figure omitted)]
Theorem 13.1 Let Nt be a renewal process with iid uncertain interarrival
times ξ1, ξ2, … If those interarrival times have a common uncertainty
distribution Φ, then Nt has an uncertainty distribution

    Υt(x) = 1 − Φ( t/(⌊x⌋ + 1) ),    x ≥ 0    (13.4)

where ⌊x⌋ represents the maximal integer less than or equal to x.
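The defining relation (13.1) can be read off directly from a realization of the interarrival times. A minimal Python sketch (names ours):

```python
def renewal_count(interarrivals, t):
    # N_t = max{ n >= 0 : S_n <= t } with S_n = xi_1 + ... + xi_n, eq. (13.1)
    s, n = 0.0, 0
    for xi in interarrivals:
        s += xi
        if s > t:
            break
        n += 1
    return n

# Interarrival times 1, 2, 1, 3 give renewal times S = 1, 3, 4, 7.
assert renewal_count([1.0, 2.0, 1.0, 3.0], 4.5) == 3
assert renewal_count([1.0, 2.0, 1.0, 3.0], 0.5) == 0
```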
[Figure 13.2: Uncertainty Distribution Υt(x) of Renewal Process Nt (figure
omitted)]
Theorem 13.4 (Liu [120], Elementary Renewal Theorem) Let Nt be a renewal
process with iid uncertain interarrival times ξ1, ξ2, … Then

    lim_{t→∞} E[Nt]/t = E[1/ξ1] = ∫_0^{+∞} Φ(1/x) dx.    (13.9)

If the uncertainty distribution Φ of the interarrival times is regular, then

    lim_{t→∞} E[Nt]/t = ∫_0^1 1/Φ⁻¹(α) dα.    (13.10)

Proof: Write the uncertainty distribution of Nt/t by Υt(x). Then Nt/t
converges in distribution to 1/ξ1, whose uncertainty distribution is

    G(x) = 1 − Φ(1/x).

Note that Υt(x) → G(x) and Υt(x) ≥ G(x). It follows from Lebesgue dominated
convergence theorem and the existence of E[1/ξ1] that

    lim_{t→∞} E[Nt]/t = lim_{t→∞} ∫_0^{+∞} (1 − Υt(x)) dx
        = ∫_0^{+∞} (1 − G(x)) dx = E[1/ξ1].

Furthermore, since 1/ξ1 has an inverse uncertainty distribution 1/Φ⁻¹(1 − α),
we get

    E[1/ξ1] = ∫_0^1 1/Φ⁻¹(1 − α) dα = ∫_0^1 1/Φ⁻¹(α) dα.

The theorem is proved.
Exercise 13.1: A renewal process Nt is called linear if ξ1, ξ2, … are iid
linear uncertain variables L(a, b) with a > 0. Show that

    lim_{t→∞} E[Nt]/t = (ln b − ln a)/(b − a).    (13.11)
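The limit in (13.10) can be approximated by numerically integrating 1/Φ⁻¹(α) over (0, 1); for linear interarrival times it reproduces (ln b − ln a)/(b − a) from (13.11). A sketch using the midpoint rule (names ours):

```python
import math

def avg_renewal_rate(inv_dist, n=10000):
    # Midpoint-rule approximation of eq. (13.10):
    #   lim_{t->inf} E[N_t]/t = integral_0^1 dalpha / Phi^{-1}(alpha)
    h = 1.0 / n
    return sum(h / inv_dist((i + 0.5) * h) for i in range(n))

a, b = 1.0, 3.0
rate = avg_renewal_rate(lambda al: (1 - al) * a + al * b)
assert abs(rate - (math.log(b) - math.log(a)) / (b - a)) < 1e-6
```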
Exercise 13.2: A renewal process Nt is called lognormal if ξ1, ξ2, … are iid
lognormal uncertain variables LOGN(e, σ). Show that

    lim_{t→∞} E[Nt]/t = ⎧ √3 σ exp(−e) csc(√3 σ),  if σ < π/√3
                        ⎩ +∞,                      if σ ≥ π/√3.    (13.13)
Example 13.1: (Yao [225]) Block replacement policy means that an element
is always replaced at failure or periodically with time s. Assume that the
lifetimes of the elements are iid uncertain variables 1 , 2 , with a common
uncertainty distribution . Then replacement times before the given time s
form an uncertain renewal process Nt . Let a denote the failure replacement
cost of replacing an element when it fails earlier than s, and b the planned
replacement cost of replacing an element at planned time s. It is clear that
the cost of one period is aNs + b and the average cost is
    (aNs + b)/s.    (13.14)

Since M{Ns ≥ n} = Φ(s/n) for each positive integer n, the expected number of
replacements before time s is

    E[Ns] = Σ_{n=1}^{∞} Φ(s/n),

and then

    E[ (aNs + b)/s ] = (1/s) ( a Σ_{n=1}^{∞} Φ(s/n) + b ).    (13.15)

A decision maker chooses the planned time s that minimizes the expected
average cost:

    min_s (1/s) ( a Σ_{n=1}^{∞} Φ(s/n) + b ).    (13.16)
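The objective in (13.16) is cheap to evaluate for a given Φ, because Φ(s/n) vanishes for all large n whenever the lifetimes are bounded away from zero. A Python sketch (names ours):

```python
def expected_block_cost(phi, a, b, s, n_max=1000):
    # Average cost per period, eq. (13.15): (a * sum_{n>=1} Phi(s/n) + b) / s,
    # where phi is the common lifetime uncertainty distribution.
    total = sum(phi(s / n) for n in range(1, n_max + 1))
    return (a * total + b) / s

# Lifetimes L(1, 2): Phi(x) = 0 for x <= 1, x - 1 on [1, 2], 1 for x >= 2.
phi = lambda x: min(max(x - 1.0, 0.0), 1.0)
# At s = 1.5 only n = 1 contributes: cost = (1 * 0.5 + 2) / 1.5.
assert abs(expected_block_cost(phi, 1.0, 2.0, 1.5) - 2.5 / 1.5) < 1e-12
```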
13.2 Delayed Renewal Process
Theorem 13.5 (Zhang, Ning and Meng [240]) Let Dt be a delayed renewal
process with uncertain interarrival times ξ1, ξ2, … If ξ1 has an uncertainty
distribution Ψ and ξ2, ξ3, … have a common uncertainty distribution Φ, then
Dt has an uncertainty distribution

    Υt(x) = 1 − sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊x⌋ ),    x ≥ 0    (13.18)

where ⌊x⌋ represents the maximal integer less than or equal to x. Here we
set (t − s)/⌊x⌋ = +∞ and Φ((t − s)/⌊x⌋) = 1 when ⌊x⌋ = 0.

Proof: It follows from the definition of uncertain delayed renewal process
that the uncertainty distribution of Dt meets

    Υt(n) = M{Dt ≤ n} = 1 − M{Sn+1 ≤ t} = 1 − M{ ξ1 + (ξ2 + … + ξn+1) ≤ t }

for any nonnegative integer n. By using the independence of uncertain
interarrival times, we have

    Υt(n) = 1 − sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/n ).

Since an uncertain delayed renewal process can only take integer values, we
have

    Υt(x) = Υt(⌊x⌋) = 1 − sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊x⌋ ).

The theorem is verified.
Theorem 13.6 (Zhang, Ning and Meng [240]) Let Dt be a delayed renewal
process with uncertain interarrival times ξ1, ξ2, … Then the average renewal
number

    Dt/t → 1/ξ2    (13.19)

in the sense of convergence in distribution as t → ∞.

Proof: It follows from the equation (13.18) that Dt/t has an uncertainty
distribution

    Ft(x) = M{Dt ≤ tx} = 1 − sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊tx⌋ ).

It is easy to verify that

    lim_{t→∞} Ft(x) = 1 − Φ(1/x).

On the other hand, the uncertain variable 1/ξ2 has an uncertainty distribution

    G(x) = M{1/ξ2 ≤ x} = M{ξ2 ≥ 1/x} = 1 − Φ(1/x).

Hence Dt/t converges in distribution to 1/ξ2 as t → ∞.

Theorem 13.7 (Zhang, Ning and Meng [240]) Let Dt be a delayed renewal
process with uncertain interarrival times ξ1, ξ2, … Then

    lim_{t→∞} E[Dt]/t = E[1/ξ2].    (13.20)

Proof: For any time t ≥ 1, it is easy to verify that

    sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊tx⌋ ) ≤ ⎧ 1,       if 0 ≤ x ≤ 1
                                           ⎩ Φ(2/x),  if 1 ≤ x < ∞.

That is, the above sequence of functions indexed by t is dominated by an
integrable function of x. Note that

    sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊tx⌋ ) → Φ(1/x)

as t → ∞. It follows from Lebesgue dominated convergence theorem that

    lim_{t→∞} E[Dt]/t = lim_{t→∞} ∫_0^{+∞} sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊tx⌋ ) dx
        = ∫_0^{+∞} lim_{t→∞} sup_{0≤s≤t} Ψ(s) ∧ Φ( (t − s)/⌊tx⌋ ) dx
        = ∫_0^{+∞} Φ(1/x) dx = E[1/ξ2].

The theorem is proved.
13.3 Renewal Reward Process

Let η1, η2, … be iid uncertain rewards associated with the renewal times.
Then the uncertain process

    Rt = Σ_{i=1}^{Nt} ηi    (13.21)
is called a renewal reward process, where Nt is the renewal process with
uncertain interarrival times ξ1, ξ2, …

A renewal reward process Rt denotes the total reward earned by time t.
In addition, if ηi ≡ 1, then Rt degenerates to a renewal process Nt. Please
also note that Rt = 0 whenever Nt = 0.
Theorem 13.8 (Liu [120]) Let Rt be a renewal reward process with uncertain
interarrival times ξ1, ξ2, … and uncertain rewards η1, η2, … Assume those
interarrival times and rewards have uncertainty distributions Φ and Ψ,
respectively. Then Rt has an uncertainty distribution

    Υt(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ( x/k ).    (13.22)

Here we set x/k = +∞ and Ψ(x/k) = 1 when k = 0.

Proof: It follows from the definition of renewal reward process that the
renewal process Nt is independent of uncertain rewards η1, η2, …, and Rt has
an uncertainty distribution

    Υt(x) = M{ Σ_{i=1}^{Nt} ηi ≤ x }
          = M{ ⋃_{k≥0} (Nt = k) ∩ ( Σ_{i=1}^{k} ηi ≤ x ) }
          = M{ ⋃_{k≥0} (Nt = k) ∩ ( η1 ≤ x/k ) }    (this is a polyrectangle)
          = max_{k≥0} M{ (Nt ≤ k) ∩ (η1 ≤ x/k) }    (polyrectangular theorem)
          = max_{k≥0} M{Nt ≤ k} ∧ M{η1 ≤ x/k}    (independence)
          = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ( x/k ).

The theorem is proved.
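Formula (13.22) can be evaluated by truncating the maximum at a large k; for positive rewards the tail terms vanish since Ψ(x/k) → Ψ(0) = 0 as k → ∞. A sketch (names ours):

```python
def reward_dist(phi, psi, t, x, k_max=1000):
    # Uncertainty distribution of R_t, eq. (13.22):
    #   Upsilon_t(x) = max_{k>=0} (1 - Phi(t/(k+1))) ^ Psi(x/k),
    # with Psi(x/k) read as 1 when k = 0.
    best = 0.0
    for k in range(k_max + 1):
        psi_term = 1.0 if k == 0 else psi(x / k)
        best = max(best, min(1.0 - phi(t / (k + 1)), psi_term))
    return best

phi = lambda v: min(max(v - 1.0, 0.0), 1.0)   # interarrival times L(1, 2)
psi = lambda v: min(max(v, 0.0), 1.0)         # rewards L(0, 1)
assert abs(reward_dist(phi, psi, t=3.0, x=0.25) - 0.25) < 1e-12
```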
Theorem 13.9 (Liu [120]) Assume that Rt is a renewal reward process with
uncertain interarrival times ξ1, ξ2, … and uncertain rewards η1, η2, … Then
the reward rate

    Rt/t → η1/ξ1    (13.23)

in the sense of convergence in distribution as t → ∞.

Proof: It follows from Theorem 13.8 that the uncertainty distribution of Rt
is

    Υt(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ( x/k ).
[Figure 13.3: Uncertainty Distribution Υt(x) of Renewal Reward Process Rt
(figure omitted)]

Then Rt/t has an uncertainty distribution

    Ft(x) = Υt(tx) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ( tx/k ).

When t → ∞, we have

    Ft(x) → sup_{y≥0} ( 1 − Φ(y) ) ∧ Ψ(xy),

which is just the uncertainty distribution of η1/ξ1. Hence Rt/t converges in
distribution to η1/ξ1 as t → ∞. The theorem is proved.
Theorem 13.10 (Liu [120]) Let Rt be a renewal reward process with uncertain
interarrival times ξ1, ξ2, … and uncertain rewards η1, η2, … Assume those
interarrival times and rewards have regular uncertainty distributions Φ and
Ψ, respectively. Then

    lim_{t→∞} E[Rt]/t = E[η1/ξ1] = ∫_0^1 Ψ⁻¹(α)/Φ⁻¹(1 − α) dα.    (13.25)

Proof: Write the uncertainty distribution of Rt/t by Ft(x) and that of η1/ξ1
by G(x). Note that Ft(x) → G(x) and Ft(x) ≥ G(x). It follows from Lebesgue
dominated convergence theorem and the existence of E[η1/ξ1] that

    lim_{t→∞} E[Rt]/t = lim_{t→∞} ∫_0^{+∞} (1 − Ft(x)) dx
        = ∫_0^{+∞} (1 − G(x)) dx = E[η1/ξ1].

Finally, since η1/ξ1 has an inverse uncertainty distribution Ψ⁻¹(α)/Φ⁻¹(1 − α),
the equation (13.25) is verified.
13.4 Alternating Renewal Process

An alternating renewal process models a system that is alternately on for an
uncertain time and then off for an uncertain time. Let ξ1, ξ2, … denote the
on-times and η1, η2, … the off-times, and let Nt be the renewal process with
interarrival times ξ1 + η1, ξ2 + η2, … The total on-time of the system up to
time t is

    At = ⎧ t − Σ_{i=1}^{Nt} ηi,  if Σ_{i=1}^{Nt} (ξi + ηi) ≤ t < Σ_{i=1}^{Nt} (ξi + ηi) + ξ_{Nt+1}
         ⎩ Σ_{i=1}^{Nt+1} ξi,    if Σ_{i=1}^{Nt} (ξi + ηi) + ξ_{Nt+1} ≤ t < Σ_{i=1}^{Nt+1} (ξi + ηi).
    (13.26)

It is clear that

    Σ_{i=1}^{Nt} ξi ≤ At ≤ Σ_{i=1}^{Nt+1} ξi    (13.27)

for each time t. We are interested in the limit property of the rate at which
the system is on, i.e., At/t.
Theorem 13.11 (Yao and Li [218]) Assume that At is an alternating renewal
process with uncertain on-times ξ1, ξ2, … and uncertain off-times η1, η2, …
Then the availability rate

    At/t → ξ1/(ξ1 + η1)    (13.28)

in the sense of convergence in distribution as t → ∞.
Proof: Write the uncertainty distributions of ξ1 and η1 by Φ and Ψ,
respectively. Then the uncertainty distribution of ξ1/(ξ1 + η1) is

    Υ(x) = sup_{y≥0} Φ(xy) ∧ ( 1 − Ψ(y − xy) ).
13.5 Uncertain Insurance Model

Assume that an insurance company has an initial capital a and receives
premiums at a constant rate b. Let the claim process be a renewal reward
process

    Rt = Σ_{i=1}^{Nt} ηi    (13.31)

with iid uncertain interarrival times ξ1, ξ2, … and iid uncertain claim
amounts η1, η2, … Then the capital of the insurance company at time t is

    Zt = a + bt − Rt    (13.32)

which is called an insurance risk process.
[Figure 13.4: A Sample Path of Insurance Risk Process Zt, with claims arriving
after interarrival times ξ1, ξ2, ξ3, ξ4 (figure omitted)]
Ruin Index

Ruin index is the uncertain measure that the capital of the insurance company
becomes negative.

Definition 13.5 (Liu [126]) Let Zt be an insurance risk process. Then the
ruin index is defined as the uncertain measure that Zt eventually becomes
negative, i.e.,

    Ruin = M{ inf_{t≥0} Zt < 0 }.    (13.33)

It is clear that the ruin index is a special case of the risk index in the
sense of Liu [119].
Theorem 13.13 (Liu [126], Ruin Index Theorem) Let Zt = a + bt − Rt be
an insurance risk process where a and b are positive numbers, and Rt is a
renewal reward process with iid uncertain interarrival times ξ1, ξ2, … and iid
uncertain claim amounts η1, η2, … If ξ1 and η1 have continuous uncertainty
distributions Φ and Ψ, respectively, then the ruin index is

    Ruin = max_{k≥1} sup_{x≥0} Φ( x/(kb) ) ∧ ( 1 − Ψ( (x + a)/k ) ).    (13.34)
Proof: For each positive integer k, it is clear that the arrival time of the
kth claim is

    Sk = ξ1 + ξ2 + … + ξk

whose uncertainty distribution is Φ(s/k). Define an uncertain process indexed
by k as follows,

    Yk = a + bSk − (η1 + η2 + … + ηk).

It is easy to verify that Yk is an independent increment process with respect
to k. In addition, Yk is just the capital at the arrival time Sk and has an
uncertainty distribution

    Fk(z) = sup_{x≥0} Φ( x/(kb) ) ∧ ( 1 − Ψ( (x + a − z)/k ) ).

Since a ruin occurs only at the arrival times, we have

    Ruin = M{ inf_{t≥0} Zt < 0 } = M{ min_{k≥1} Yk < 0 },

and (13.34) follows from the extreme value theorem.
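Formula (13.34) can be evaluated numerically by scanning k and a grid of x values. This is a rough sketch (names, truncation, and discretization are ours), useful mainly for sanity checks such as the ruin index shrinking as the initial capital a grows:

```python
def ruin_index(phi, psi, a, b, k_max=50, x_max=100.0, grid=2000):
    # Numerical evaluation of eq. (13.34):
    #   Ruin = max_{k>=1} sup_{x>=0} Phi(x/(k b)) ^ (1 - Psi((x + a)/k))
    best = 0.0
    for k in range(1, k_max + 1):
        for i in range(grid + 1):
            x = x_max * i / grid
            best = max(best, min(phi(x / (k * b)), 1.0 - psi((x + a) / k)))
    return best

phi = lambda v: min(max(v - 1.0, 0.0), 1.0)   # interarrival times L(1, 2)
psi = lambda v: min(max(v / 4.0, 0.0), 1.0)   # claim amounts L(0, 4)
r_small = ruin_index(phi, psi, a=1.0, b=1.0)
r_large = ruin_index(phi, psi, a=5.0, b=1.0)
assert 0.0 <= r_large <= r_small <= 1.0       # more capital, less ruin risk
```

Since the integrand is pointwise nonincreasing in a, the monotonicity check holds regardless of the grid resolution.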
13.6 Age Replacement Policy
Under the age replacement policy, an element is replaced at failure or upon
reaching a planned age s, whichever comes first. If f denotes the cost of one
replacement cycle, then the average replacement cost before time t is

    (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s)    (13.39)

where Nt is the renewal process generated by the cycle lengths
ξ1 ∧ s, ξ2 ∧ s, …

Theorem 13.14 (Yao and Ralescu [221]) Assume ξ1, ξ2, … are iid uncertain
variables and s is a positive number. Then

    (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) → f(ξ1 ∧ s)/(ξ1 ∧ s)    (13.40)

in the sense of convergence in distribution as t → ∞.
13.7
Bibliographic Notes
The concept of uncertain renewal process was first proposed by Liu [114] in
2008. Two years later, Liu [120] proved an uncertain elementary renewal theorem for determining the average renewal number. Liu [120] also provided
the concept of uncertain renewal reward process and verified an uncertain renewal reward theorem for determining the long-run reward rate. In addition,
Zhang, Ning and Meng [240] introduced the concept of uncertain delayed renewal process and showed an uncertain elementary delayed renewal theorem.
Furthermore, Yao and Li [218] presented the concept of uncertain alternating renewal process and proved an uncertain alternating renewal theorem for
determining the availability rate.
Based on the theory of uncertain renewal process, Liu [126] presented an
uncertain insurance model by assuming the claim is an uncertain renewal
reward process, and proved a formula for calculating the ruin index. In addition,
Yao [225] discussed the uncertain block replacement policy, and Yao and
Ralescu [221] investigated the uncertain age replacement policy and obtained
the long-run average replacement cost.
Chapter 14
Uncertain Calculus
Uncertain calculus is a branch of mathematics that deals with differentiation
and integration of uncertain processes. This chapter will introduce Liu process, Liu integral, fundamental theorem, chain rule, change of variables, and
integration by parts.
14.1
Liu Process
Every increment of a canonical Liu process over a duration t is a normal uncertain variable N(0, t) whose uncertainty distribution is

Φ_t(x) = (1 + exp(−πx/(√3 t)))^{−1}    (14.1)

and inverse uncertainty distribution is

Φ_t^{−1}(α) = (√3 t/π) ln(α/(1 − α))    (14.2)

that are homogeneous linear functions of time t for any given α. See Figure 14.1.
[Figure 14.1: the inverse uncertainty distributions Φ_t^{−1}(α) of a canonical Liu process, plotted against t for α = 0.1, 0.2, …, 0.9.]
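The pair (14.1)–(14.2) is easy to check numerically; a minimal sketch (the function names are mine):

```python
import math

def liu_cdf(x, t):
    # Uncertainty distribution (14.1) of C_t: (1 + exp(-pi x / (sqrt(3) t)))^(-1)
    return 1.0 / (1.0 + math.exp(-math.pi * x / (math.sqrt(3.0) * t)))

def liu_inv(alpha, t):
    # Inverse uncertainty distribution (14.2): (sqrt(3) t / pi) ln(alpha / (1 - alpha))
    return (math.sqrt(3.0) * t / math.pi) * math.log(alpha / (1.0 - alpha))

# Round trip: Phi_t(Phi_t^{-1}(alpha)) recovers alpha
t = 2.0
for alpha in (0.1, 0.5, 0.9):
    print(round(liu_cdf(liu_inv(alpha, t), t), 6))
```

For fixed α the inverse distribution is a linear function of t, which is exactly what Figure 14.1 plots.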
A canonical Liu process may be constructed as a limit. For each positive integer n, define an uncertain process

X_t^n = (1/(n+1)) Σ_{i=0}^{k} ξ_i,  if t = k/n (k = 0, 1, …, n);  linear,  otherwise,

where ξ_0, ξ_1, ξ_2, … are iid uncertain variables. We may prove that X_t^n converges in distribution as n → ∞ and the limit meets the conditions of canonical Liu process. Hence there exists a canonical Liu process.
Theorem 14.2 Let C_t be a canonical Liu process. Then for each time t > 0, the ratio C_t/t is a normal uncertain variable with expected value 0 and variance 1. That is,

C_t/t ~ N(0, 1)    (14.3)

for any t > 0.

Proof: Since C_t is a normal uncertain variable N(0, t), the operational law tells us that C_t/t has an uncertainty distribution

Υ(x) = Φ_t(tx) = (1 + exp(−πx/√3))^{−1}.

Hence C_t/t is a normal uncertain variable with expected value 0 and variance 1. The theorem is verified.
Theorem 14.3 (Liu [120]) Let C_t be a canonical Liu process. Then for each time t, we have

t²/2 ≤ E[C_t²] ≤ t².    (14.4)

Proof: Note that C_t is a normal uncertain variable and has an uncertainty distribution Φ_t(x) in (14.1). It follows from the definition of expected value that

E[C_t²] = ∫_0^{+∞} M{C_t² ≥ x} dx = ∫_0^{+∞} M{(C_t ≥ √x) ∪ (C_t ≤ −√x)} dx.

On the one hand, subadditivity gives

E[C_t²] ≤ ∫_0^{+∞} (1 − Φ_t(√x) + Φ_t(−√x)) dx = t².

On the other hand, monotonicity gives

E[C_t²] ≥ ∫_0^{+∞} M{C_t ≥ √x} dx = ∫_0^{+∞} (1 − Φ_t(√x)) dx = t²/2.

Hence (14.4) is verified.

Theorem 14.4 (Iwamura and Xu [63]) bounds the variance of the square of a canonical Liu process as in (14.5). For the proof of (14.5), please consult Iwamura and Xu [63]. An open problem is to improve the bounds of the variance of the square of canonical Liu process.
Theorem 14.5 (Yao, Gao and Gao [219]) Let C_t be a canonical Liu process. Then there exists a nonnegative uncertain variable K such that K(γ) is a Lipschitz constant of the sample path C_t(γ) for each γ, and

lim_{x→+∞} M{K ≤ x} = 1.    (14.6)
Definition 14.2 Let C_t be a canonical Liu process. Then for any real numbers e and σ > 0, the uncertain process

A_t = et + σC_t    (14.7)

is called an arithmetic Liu process, where e is called the drift and σ is called the diffusion.

It is clear that the arithmetic Liu process A_t is a type of stationary independent increment process. In addition, the arithmetic Liu process A_t has a normal uncertainty distribution with expected value et and variance σ²t², i.e.,

A_t ~ N(et, σt)    (14.8)

whose uncertainty distribution is

Φ_t(x) = (1 + exp(π(et − x)/(√3 σt)))^{−1}    (14.9)

and inverse uncertainty distribution is

Φ_t^{−1}(α) = et + (√3 σt/π) ln(α/(1 − α)).    (14.10)
Definition 14.3 Let C_t be a canonical Liu process. Then for any real numbers e and σ > 0, the uncertain process

G_t = exp(et + σC_t)    (14.11)

is called a geometric Liu process, where e is called the log-drift and σ is called the log-diffusion.

Note that the geometric Liu process G_t has a lognormal uncertainty distribution, i.e.,

G_t ~ LOGN(et, σt)    (14.12)

whose uncertainty distribution is

Φ_t(x) = (1 + exp(π(et − ln x)/(√3 σt)))^{−1}    (14.13)

and inverse uncertainty distribution is

Φ_t^{−1}(α) = exp( et + (√3 σt/π) ln(α/(1 − α)) ).    (14.14)

In addition, the geometric Liu process has an expected value

E[G_t] = √3 σt exp(et) csc(√3 σt),  if σt < π/√3;  +∞,  if σt ≥ π/√3.    (14.15)
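The mean formula (14.15) can be cross-checked against the inverse uncertainty distribution (14.14), since E[G_t] = ∫_0^1 Φ_t^{−1}(α) dα. A small sketch (the parameter values and grid size are arbitrary assumptions):

```python
import math

def geometric_liu_mean(e, sigma, t):
    # Closed form (14.15); finite only when sigma * t < pi / sqrt(3), i.e. k < pi
    k = math.sqrt(3.0) * sigma * t
    if k >= math.pi:
        return math.inf
    return k * math.exp(e * t) / math.sin(k)  # sqrt(3) sigma t exp(et) csc(sqrt(3) sigma t)

def geometric_liu_mean_numeric(e, sigma, t, n=200000):
    # E[G_t] as the integral over alpha of the inverse distribution (14.14), midpoint rule
    coef = math.sqrt(3.0) * sigma * t / math.pi
    s = 0.0
    for i in range(n):
        a = (i + 0.5) / n
        s += math.exp(e * t + coef * math.log(a / (1.0 - a)))
    return s / n

print(round(geometric_liu_mean(0.1, 0.5, 1.0), 4))
print(round(geometric_liu_mean_numeric(0.1, 0.5, 1.0), 4))
```

The two numbers agree to within the quadrature error, and the closed form correctly blows up once σt reaches π/√3.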
14.2
Liu Integral
Definition 14.4 Let X_t be an uncertain process and let C_t be a canonical Liu process. For any partition of a closed interval [a, b] with a = t_1 < t_2 < … < t_{k+1} = b, the mesh is written as

Δ = max_{1≤i≤k} |t_{i+1} − t_i|.    (14.16)

Then the Liu integral of X_t with respect to C_t is

∫_a^b X_t dC_t = lim_{Δ→0} Σ_{i=1}^{k} X_{t_i}(C_{t_{i+1}} − C_{t_i})    (14.17)

provided that the limit exists almost surely and is finite. In this case, the uncertain process X_t is said to be integrable.
Since Xt and Ct are uncertain variables at each time t, the limit in (14.17)
is also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is integrable with respect to Ct if
and only if the limit in (14.17) is an uncertain variable.
Example 14.1: For any partition 0 = t_1 < t_2 < … < t_{k+1} = s, it follows from (14.17) that

∫_0^s dC_t = lim_{Δ→0} Σ_{i=1}^{k} (C_{t_{i+1}} − C_{t_i}) ≡ C_s − C_0 = C_s.

That is,

∫_0^s dC_t = C_s.    (14.18)
Example 14.2: For any partition 0 = t_1 < t_2 < … < t_{k+1} = s, it follows from (14.17) that

C_s² = Σ_{i=1}^{k} (C_{t_{i+1}}² − C_{t_i}²)
     = Σ_{i=1}^{k} (C_{t_{i+1}} − C_{t_i})² + 2 Σ_{i=1}^{k} C_{t_i}(C_{t_{i+1}} − C_{t_i})
     → 0 + 2 ∫_0^s C_t dC_t

as Δ → 0. That is,

∫_0^s C_t dC_t = C_s²/2.    (14.19)
Example 14.3: For any partition 0 = t_1 < t_2 < … < t_{k+1} = s, it follows from (14.17) that

sC_s = Σ_{i=1}^{k} (t_{i+1}C_{t_{i+1}} − t_i C_{t_i})
     = Σ_{i=1}^{k} C_{t_{i+1}}(t_{i+1} − t_i) + Σ_{i=1}^{k} t_i(C_{t_{i+1}} − C_{t_i})
     → ∫_0^s C_t dt + ∫_0^s t dC_t

as Δ → 0. That is,

∫_0^s C_t dt + ∫_0^s t dC_t = sC_s.    (14.20)
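Because almost every sample path of C_t is Lipschitz continuous, the identities (14.18)–(14.20) hold path by path. The sketch below checks (14.19) and (14.20) on an arbitrary smooth test path; the particular path c(t) = sin t + 0.3t is an assumption for illustration, not a genuine Liu-process sample path.

```python
import math

def stieltjes(X, C, ts):
    # Left-endpoint sum  sum_i X(t_i) (C(t_{i+1}) - C(t_i))  as in (14.17)
    return sum(X(ts[i]) * (C(ts[i + 1]) - C(ts[i])) for i in range(len(ts) - 1))

def riemann(X, ts):
    # Ordinary left-endpoint Riemann sum  sum_i X(t_i) (t_{i+1} - t_i)
    return sum(X(ts[i]) * (ts[i + 1] - ts[i]) for i in range(len(ts) - 1))

s, n = 2.0, 100000
ts = [s * i / n for i in range(n + 1)]
C = lambda t: math.sin(t) + 0.3 * t  # smooth Lipschitz test path with C(0) = 0

# (14.19): integral of C_t dC_t over [0, s] equals C_s^2 / 2
print(abs(stieltjes(C, C, ts) - C(s) ** 2 / 2) < 1e-3)

# (14.20): integral of C_t dt plus integral of t dC_t equals s C_s
print(abs(riemann(C, ts) + stieltjes(lambda t: t, C, ts) - s * C(s)) < 1e-3)
```

Both checks pass with an error that shrinks linearly in the mesh, which is exactly the behavior the telescoping arguments above predict for Lipschitz paths.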
Theorem 14.6 If X_t is a sample-continuous uncertain process on [a, b], then it is integrable with respect to C_t.

Proof sketch: Since almost every sample path of X_t is continuous and almost every sample path of C_t is Lipschitz continuous, the limit

lim_{Δ→0} Σ_{i=1}^{k} X_{t_i}(γ)(C_{t_{i+1}}(γ) − C_{t_i}(γ))

exists almost surely and is finite. On the other hand, since X_t and C_t are uncertain variables at each time t, the above limit is also a measurable function. Hence the limit is an uncertain variable and then X_t is integrable with respect to C_t.
Theorem 14.7 If X_t is an integrable uncertain process on [a, b], then it is integrable on each subinterval of [a, b]. Moreover, if c ∈ [a, b], then

∫_a^b X_t dC_t = ∫_a^c X_t dC_t + ∫_c^b X_t dC_t.    (14.21)
Proof: Let [a′, b′] be a subinterval of [a, b]. Since X_t is an integrable uncertain process on [a, b], for any partition

a = t_1 < … < t_m = a′ < t_{m+1} < … < t_n = b′ < t_{n+1} < … < t_{k+1} = b,

the limit

lim_{Δ→0} Σ_{i=1}^{k} X_{t_i}(C_{t_{i+1}} − C_{t_i})

exists almost surely and is finite, and hence so does the partial limit over i = m, …, n − 1. That is, X_t is integrable on [a′, b′]. Note that, with t_m = c,

∫_a^b X_t dC_t = lim_{Δ→0} Σ_{i=1}^{k} X_{t_i}(C_{t_{i+1}} − C_{t_i}),
∫_a^c X_t dC_t = lim_{Δ→0} Σ_{i=1}^{m−1} X_{t_i}(C_{t_{i+1}} − C_{t_i}),
∫_c^b X_t dC_t = lim_{Δ→0} Σ_{i=m}^{k} X_{t_i}(C_{t_{i+1}} − C_{t_i}).

Adding the last two limits yields (14.21).
Theorem 14.8 Suppose that X_t and Y_t are integrable uncertain processes on [a, b]. Then X_t + Y_t is integrable on [a, b] and

∫_a^b (X_t + Y_t) dC_t = ∫_a^b X_t dC_t + ∫_a^b Y_t dC_t.    (14.24)

Proof: Let a = t_1 < t_2 < … < t_{k+1} = b be a partition of the closed interval [a, b]. It follows from the definition of Liu integral that

∫_a^b (X_t + Y_t) dC_t = lim_{Δ→0} Σ_{i=1}^{k} (X_{t_i} + Y_{t_i})(C_{t_{i+1}} − C_{t_i})
= lim_{Δ→0} Σ_{i=1}^{k} X_{t_i}(C_{t_{i+1}} − C_{t_i}) + lim_{Δ→0} Σ_{i=1}^{k} Y_{t_i}(C_{t_{i+1}} − C_{t_i})
= ∫_a^b X_t dC_t + ∫_a^b Y_t dC_t.

The theorem is proved.
Since each increment of C_t is a normal uncertain variable and the increments are independent, the partial sum

Σ_{i=1}^{k} f(t_i)(C_{t_{i+1}} − C_{t_i}) ~ N( 0, Σ_{i=1}^{k} |f(t_i)|(t_{i+1} − t_i) ).

That is, the sum is also a normal uncertain variable. Since f is an integrable function, we have

Σ_{i=1}^{k} |f(t_i)|(t_{i+1} − t_i) → ∫_a^b |f(t)| dt

as Δ → 0. Hence the Liu integral of a deterministic integrable function f is a normal uncertain variable,

∫_a^b f(t) dC_t ~ N( 0, ∫_a^b |f(t)| dt ),

whose uncertainty distribution is

Φ(x) = ( 1 + exp( −πx / (√3 ∫_a^b |f(t)| dt) ) )^{−1}.    (14.29)
Definition 14.5 (Chen and Ralescu [17]) Let C_t be a canonical Liu process and let Z_t be an uncertain process. If there exist uncertain processes μ_t and σ_t such that

Z_t = Z_0 + ∫_0^t μ_s ds + ∫_0^t σ_s dC_s    (14.30)

for any t ≥ 0, then Z_t is called a Liu process with drift μ_t and diffusion σ_t. Furthermore, Z_t has an uncertain differential

dZ_t = μ_t dt + σ_t dC_t.    (14.31)
Example 14.4: It follows from the equation (14.18) that the canonical Liu process C_t can be written as

C_t = ∫_0^t dC_s.

Thus C_t is a Liu process with drift 0 and diffusion 1, and has an uncertain differential dC_t.
Example 14.5: It follows from the equation (14.19) that C_t² can be written as

C_t² = 2 ∫_0^t C_s dC_s.

Thus C_t² is a Liu process with drift 0 and diffusion 2C_t, and has an uncertain differential

d(C_t²) = 2C_t dC_t.
Example 14.6: It follows from the equation (14.20) that tC_t can be written as

tC_t = ∫_0^t C_s ds + ∫_0^t s dC_s.

Thus tC_t is a Liu process with drift C_t and diffusion t, and has an uncertain differential

d(tC_t) = C_t dt + t dC_t.
Theorem 14.10 (Chen and Ralescu [17]) A Liu process is a sample-continuous uncertain process.

Proof sketch: Let Z_t be a Liu process. Then there exist two uncertain processes μ_t and σ_t such that

Z_t = Z_0 + ∫_0^t μ_s ds + ∫_0^t σ_s dC_s.

Since almost every sample path of C_s is Lipschitz continuous, both integrals are continuous in t for almost every γ, and hence Z_t is sample-continuous.
14.3
Fundamental Theorem
Let C_t be a canonical Liu process and let h(t, c) be a continuously differentiable function. The fundamental theorem of uncertain calculus (Liu [116]) states that Z_t = h(t, C_t) is a Liu process with uncertain differential

dh(t, C_t) = ∂h/∂t (t, C_t) dt + ∂h/∂c (t, C_t) dC_t.    (14.32)

Proof sketch: Since h is continuously differentiable, the increment of Z_t has a first-order approximation

Δh(t, C_t) = ∂h/∂t (t, C_t) Δt + ∂h/∂c (t, C_t) ΔC_t,    (14.33)

and taking limits yields (14.32).

Example 14.7: Taking h(t, c) = tc gives ∂h/∂t (t, c) = c and

∂h/∂c (t, c) = t,    (14.34)

so d(tC_t) = C_t dt + t dC_t.

Example 14.8: Taking h(t, c) = et + σc gives

∂h/∂c (t, c) = σ,    (14.35)

so the arithmetic Liu process A_t = et + σC_t has an uncertain differential dA_t = e dt + σ dC_t.

Example 14.9: Taking h(t, c) = exp(et + σc) gives

∂h/∂c (t, c) = σ h(t, c),    (14.36)

so the geometric Liu process G_t = exp(et + σC_t) has an uncertain differential dG_t = e G_t dt + σ G_t dC_t.
14.4
Chain Rule
Chain rule (cf. Liu [116]): if f(c) is a continuously differentiable function, then the uncertain process f(C_t) has an uncertain differential

df(C_t) = f′(C_t) dC_t.    (14.37)

Proof: Regard f(C_t) as h(t, C_t) with h(t, c) = f(c). Then

∂h/∂t (t, c) = 0,  ∂h/∂c (t, c) = f′(c).

It follows from the fundamental theorem of uncertain calculus that the equation (14.37) holds.
Example 14.10: Let us calculate the uncertain differential of C_t². In this case, we have f(c) = c² and f′(c) = 2c. It follows from the chain rule that

d(C_t²) = 2C_t dC_t.    (14.38)
14.5
Change of Variables
That is,

∫_0^s f(C_t) dC_t = ∫_{C_0}^{C_s} f(c) dc.    (14.42)
14.6
Integration by Parts
Integration by parts: if X_t and Y_t are Liu processes, then d(X_t Y_t) = Y_t dX_t + X_t dY_t.

Example: Let X_t = exp(t) and Y_t = ∫_0^t s dC_s. Then

dX_t = exp(t) dt,  dY_t = t dC_t,

so d(X_t Y_t) = Y_t exp(t) dt + t exp(t) dC_t.

Example: Let X_t = sin(t + 1) and Y_t = ∫_0^t s dC_s. Then

dX_t = cos(t + 1) dt,  dY_t = t dC_t,

so d(X_t Y_t) = Y_t cos(t + 1) dt + t sin(t + 1) dC_t.
14.7
Bibliographic Notes
The concept of uncertain integral was first proposed by Liu [114] in 2008 in
order to integrate uncertain processes with respect to Liu process. One year
later, Liu [116] recast his work via the fundamental theorem of uncertain
calculus from which the techniques of chain rule, change of variables, and
integration by parts were derived.
Note that uncertain integral may also be defined with respect to other
integrators. For example, Liu and Yao [123] suggested an uncertain integral
with respect to multiple Liu processes. In addition, Chen and Ralescu [17]
presented an uncertain integral with respect to general Liu process. In order
to deal with uncertain process with jumps, Yao integral [217] was defined
as a type of uncertain integral with respect to uncertain renewal process.
Since then, the theory of uncertain calculus was well developed. For further
explorations on the development of uncertain calculus, the interested reader
may consult Chen's book [19].
Chapter 15
Uncertain Differential
Equation
Uncertain differential equation is a type of differential equation involving
uncertain processes. This chapter will discuss the existence, uniqueness and
stability of solutions of uncertain differential equations, and introduce the Yao-Chen formula that represents the solution of an uncertain differential equation
by a family of solutions of ordinary differential equations. On the basis of
this formula, a numerical method for solving uncertain differential equations
is designed. In addition, extreme value, first hitting time and time integral
of solutions are provided.
15.1  Uncertain Differential Equation

Definition 15.1 (Liu [116]) Suppose C_t is a canonical Liu process, and f and g are two functions. Then

dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.1)

is called an uncertain differential equation. A solution is a Liu process X_t that satisfies (15.1) identically in t.

Example 15.1: Let u_t and v_t be integrable uncertain processes. Then the uncertain differential equation

dX_t = u_t dt + v_t dC_t    (15.3)

has a solution

X_t = X_0 + ∫_0^t u_s ds + ∫_0^t v_s dC_s.    (15.4)
In particular, the uncertain differential equation dX_t = a dt + b dC_t has a solution

X_t = X_0 + at + bC_t.    (15.6)
Example 15.2: Let u_t and v_t be integrable uncertain processes. Then the uncertain differential equation

dX_t = u_t X_t dt + v_t X_t dC_t    (15.7)

has a solution

X_t = X_0 exp( ∫_0^t u_s ds + ∫_0^t v_s dC_s ).    (15.8)

In particular, the uncertain differential equation

dX_t = aX_t dt + bX_t dC_t    (15.9)

has a solution

X_t = X_0 exp(at + bC_t).    (15.10)
Theorem 15.3 Let u_{1t}, u_{2t}, v_{1t}, v_{2t} be integrable uncertain processes. Then the linear uncertain differential equation

dX_t = (u_{1t} X_t + u_{2t}) dt + (v_{1t} X_t + v_{2t}) dC_t    (15.11)

has a solution

X_t = U_t ( X_0 + ∫_0^t (u_{2s}/U_s) ds + ∫_0^t (v_{2s}/U_s) dC_s )    (15.12)

where

U_t = exp( ∫_0^t u_{1s} ds + ∫_0^t v_{1s} dC_s ).    (15.13)

Proof sketch: Define V_t = X_t/U_t. Then

dV_t = (u_{2t}/U_t) dt + (v_{2t}/U_t) dC_t,

and hence

V_t = V_0 + ∫_0^t (u_{2s}/U_s) ds + ∫_0^t (v_{2s}/U_s) dC_s.

Since V_0 = X_0, the solution (15.12) follows.

Example 15.3: Let m, a, σ be real numbers with a ≠ 0. For the uncertain differential equation

dX_t = (m − aX_t) dt + σ dC_t,

we have u_{1t} = −a, u_{2t} = m, v_{1t} = 0, v_{2t} = σ, and

U_t = exp( ∫_0^t (−a) ds + ∫_0^t 0 dC_s ) = exp(−at).    (15.14)
That is,

X_t = m/a + exp(−at)(X_0 − m/a) + σ exp(−at) ∫_0^t exp(as) dC_s.    (15.15)
Example 15.4: Let m and σ be real numbers. For the uncertain differential equation dX_t = m dt + σX_t dC_t, we have

U_t = exp( ∫_0^t 0 ds + ∫_0^t σ dC_s ) = exp(σC_t).

That is,

X_t = exp(σC_t) ( X_0 + m ∫_0^t exp(−σC_s) ds ).    (15.18)
Theorem 15.4 (Liu [139]) Let f be a function of two variables and let σ_t be an integrable uncertain process. Then the uncertain differential equation

dX_t = f(t, X_t) dt + σ_t X_t dC_t    (15.21)

has a solution

X_t = Y_t^{−1} Z_t    (15.22)

where

Y_t = exp( − ∫_0^t σ_s dC_s )    (15.23)

and Z_t is the solution of the uncertain differential equation

dZ_t = Y_t f(t, Y_t^{−1} Z_t) dt    (15.24)

with initial value Z_0 = X_0.

Proof: At first, by using the chain rule, the uncertain process Y_t has an uncertain differential

dY_t = −exp( − ∫_0^t σ_s dC_s ) σ_t dC_t = −Y_t σ_t dC_t.    (15.25)
Example 15.5: Let α ≠ 1 and σ be real numbers, and consider the uncertain differential equation

dX_t = X_t^α dt + σ X_t dC_t.

Here Y_t = exp(−σC_t), and Z_t solves dZ_t = Y_t^{1−α} Z_t^α dt, so that

Z_t = ( X_0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σC_s) ds )^{1/(1−α)}.

Theorem 15.4 says this uncertain differential equation has a solution X_t = Y_t^{−1} Z_t, i.e.,

X_t = exp(σC_t) ( X_0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σC_s) ds )^{1/(1−α)}.
Theorem 15.5 (Liu [139]) Let g be a function of two variables and let α_t be an integrable uncertain process. Then the uncertain differential equation

dX_t = α_t X_t dt + g(t, X_t) dC_t    (15.26)

has a solution

X_t = Y_t^{−1} Z_t    (15.27)

where

Y_t = exp( − ∫_0^t α_s ds )    (15.28)

and Z_t is the solution of the uncertain differential equation

dZ_t = Y_t g(t, Y_t^{−1} Z_t) dC_t    (15.29)

with initial value Z_0 = X_0.
Example 15.6: Let α and β ≠ 1 be real numbers, and consider the uncertain differential equation

dX_t = αX_t dt + X_t^β dC_t.    (15.30)

Here Y_t = exp(−αt), and Z_t solves dZ_t = Y_t^{1−β} Z_t^β dC_t, so that

Z_t = ( X_0^{1−β} + (1 − β) ∫_0^t exp((β − 1)αs) dC_s )^{1/(1−β)}.

Theorem 15.5 says the uncertain differential equation (15.30) has a solution X_t = Y_t^{−1} Z_t, i.e.,

X_t = exp(αt) ( X_0^{1−β} + (1 − β) ∫_0^t exp((β − 1)αs) dC_s )^{1/(1−β)}.    (15.31)
Theorem 15.6 (Yao [223]) Let f be a function of two variables and let σ_t be an integrable uncertain process. Then the uncertain differential equation

dX_t = f(t, X_t) dt + σ_t dC_t    (15.33)

has a solution

X_t = Y_t + Z_t    (15.34)

where

Y_t = ∫_0^t σ_s dC_s    (15.35)

and Z_t is the solution of the uncertain differential equation

dZ_t = f(t, Y_t + Z_t) dt    (15.36)

with initial value Z_0 = X_0.
That is,

d(X_t − Y_t) = f(t, X_t) dt.

Defining Z_t = X_t − Y_t, we obtain X_t = Y_t + Z_t and dZ_t = f(t, Y_t + Z_t) dt. Furthermore, since Y_0 = 0, the initial value Z_0 is just X_0. The theorem is proved.
Example 15.7: Let α and σ be real numbers with α ≠ 0. Consider the uncertain differential equation

dX_t = α exp(X_t) dt + σ dC_t.    (15.37)

Here Y_t = σC_t, and Z_t solves dZ_t = α exp(σC_t + Z_t) dt, which gives

exp(−Z_t) = exp(−X_0) − α ∫_0^t exp(σC_s) ds.

Hence

X_t = X_0 + σC_t − ln( 1 − α ∫_0^t exp(X_0 + σC_s) ds ).
Theorem 15.7 (Yao [223]) Let g be a function of two variables and let α_t be an integrable uncertain process. Then the uncertain differential equation

dX_t = α_t dt + g(t, X_t) dC_t    (15.38)

has a solution

X_t = Y_t + Z_t    (15.39)

where

Y_t = ∫_0^t α_s ds    (15.40)

and Z_t is the solution of the uncertain differential equation

dZ_t = g(t, Y_t + Z_t) dC_t    (15.41)

with initial value Z_0 = X_0.
Example 15.8: Let α and σ be real numbers with σ ≠ 0. Consider the uncertain differential equation

dX_t = α dt + σ exp(X_t) dC_t.    (15.42)

Here Y_t = αt, and Z_t solves dZ_t = σ exp(αt + Z_t) dC_t. Hence

X_t = X_0 + αt − ln( 1 − σ ∫_0^t exp(X_0 + αs) dC_s ).
15.2  Existence and Uniqueness
Theorem 15.8 (Chen and Liu [9], Existence and Uniqueness Theorem) The
uncertain differential equation
dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.43)
has a unique solution if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition

|f(t, x)| + |g(t, x)| ≤ L(1 + |x|),  ∀x ∈ ℜ, t ≥ 0    (15.44)

and the Lipschitz condition

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L|x − y|,  ∀x, y ∈ ℜ, t ≥ 0    (15.45)

for some constant L.
Proof: We first prove existence by successive approximation. Define X_t^{(0)} = X_0 and, for n = 0, 1, 2, …,

X_t^{(n+1)} = X_0 + ∫_0^t f(s, X_s^{(n)}) ds + ∫_0^t g(s, X_s^{(n)}) dC_s,

and write

D_t^{(n)}(γ) = max_{0≤s≤t} |X_s^{(n+1)}(γ) − X_s^{(n)}(γ)|

for each γ. It follows from the linear growth condition and Lipschitz condition that

D_t^{(0)}(γ) = max_{0≤s≤t} | ∫_0^s f(v, X_0) dv + ∫_0^s g(v, X_0) dC_v(γ) |
≤ ∫_0^t |f(v, X_0)| dv + K_γ ∫_0^t |g(v, X_0)| dv
≤ (1 + |X_0|) L (1 + K_γ) t

where K_γ is the Lipschitz constant to the sample path C_t(γ) in Theorem 14.5. In fact, by using the induction method, we may verify

D_t^{(n)}(γ) ≤ (1 + |X_0|) ((1 + K_γ) L t)^{n+1} / (n + 1)!

for each n. This means that, for each γ, the sample paths X_t^{(n)}(γ) converge uniformly on any given time interval. Write the limit by X_t(γ) that is just a solution of the uncertain differential equation because

X_t = X_0 + ∫_0^t f(s, X_s) ds + ∫_0^t g(s, X_s) dC_s.
Next we prove that the solution is unique. Assume that both X_t and X_t* are solutions of the uncertain differential equation. Then for each γ, it follows from the linear growth condition and Lipschitz condition that

|X_t(γ) − X_t*(γ)| ≤ L(1 + K_γ) ∫_0^t |X_v(γ) − X_v*(γ)| dv.

The Grönwall inequality then yields X_t(γ) = X_t*(γ), so the solution is unique. The theorem is proved.
15.3
Stability
Definition 15.2 An uncertain differential equation is said to be stable if for any two solutions X_t and Y_t with different initial values X_0 and Y_0, we have

lim_{|X_0−Y_0|→0} M{|X_t − Y_t| > ε} = 0,  ∀t > 0    (15.46)

for any given number ε > 0.

Example 15.9: The uncertain differential equation dX_t = a dt + b dC_t is stable, since its two solutions satisfy X_t − Y_t = X_0 − Y_0 and hence

lim_{|X_0−Y_0|→0} M{|X_t − Y_t| > ε} = lim_{|X_0−Y_0|→0} M{|X_0 − Y_0| > ε} = 0.    (15.47)

Example 15.10: The two solutions of the uncertain differential equation dX_t = aX_t dt + b dC_t with initial values X_0 and Y_0 are

X_t = exp(at) X_0 + b exp(at) ∫_0^t exp(−as) dC_s,
Y_t = exp(at) Y_0 + b exp(at) ∫_0^t exp(−as) dC_s.    (15.48)
Theorem 15.9 (Yao, Gao and Gao [219], Stability Theorem) The uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.49)

is stable if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition

|f(t, x)| + |g(t, x)| ≤ K(1 + |x|),  ∀x ∈ ℜ, t ≥ 0    (15.50)

for some constant K and the strong Lipschitz condition

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L(t)|x − y|,  ∀x, y ∈ ℜ, t ≥ 0    (15.51)

for some positive function L(t) satisfying

∫_0^{+∞} L(t) dt < +∞.    (15.52)

Proof sketch: For each γ, the two solutions satisfy

|X_t(γ) − Y_t(γ)| ≤ |X_0 − Y_0| exp( (1 + K(γ)) ∫_0^{+∞} L(s) ds ).
Thus |X_t(γ) − Y_t(γ)| > ε is possible only if

K(γ) > ln( ε/|X_0 − Y_0| ) / ∫_0^{+∞} L(s) ds − 1.

Since this threshold tends to +∞ as |X_0 − Y_0| → 0, Theorem 14.5 gives

M{ K(γ) > ln( ε/|X_0 − Y_0| ) / ∫_0^{+∞} L(s) ds − 1 } → 0,

and we obtain

lim_{|X_0−Y_0|→0} M{|X_t − Y_t| > ε} = 0.
Example 15.11: The uncertain differential equation (15.54) is stable since its coefficients satisfy the linear growth condition and the strong Lipschitz condition.
15.4
Yao-Chen Formula
Yao-Chen formula relates uncertain differential equations and ordinary differential equations, just as the Feynman-Kac formula relates stochastic differential equations and partial differential equations.
Definition 15.3 (Yao and Chen [222]) Let α be a number with 0 < α < 1. An uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.55)

is said to have an α-path X_t^α if it solves the corresponding ordinary differential equation

dX_t^α = f(t, X_t^α) dt + |g(t, X_t^α)| Φ^{−1}(α) dt    (15.56)

where Φ^{−1}(α) is the inverse standard normal uncertainty distribution, i.e.,

Φ^{−1}(α) = (√3/π) ln( α/(1 − α) ).    (15.57)
[Figure: the α-paths X_t^α of an uncertain differential equation for α = 0.1, 0.2, …, 0.9.]
Theorem 15.10 (Yao and Chen [222], Yao-Chen Formula) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t,    (15.59)

respectively. Then

M{X_t ≤ X_t^α, ∀t} = α,    (15.60)

M{X_t > X_t^α, ∀t} = 1 − α.    (15.61)
Proof: At first, for each α-path X_t^α, we divide the time interval into two parts,

T⁺ = { t | g(t, X_t^α) ≥ 0 },  T⁻ = { t | g(t, X_t^α) < 0 }.

It is obvious that T⁺ ∩ T⁻ = ∅ and T⁺ ∪ T⁻ = [0, +∞). Write

Λ₁⁺ = { γ | dC_t(γ)/dt ≤ Φ^{−1}(α) for any t ∈ T⁺ },
Λ₁⁻ = { γ | dC_t(γ)/dt ≥ Φ^{−1}(1 − α) for any t ∈ T⁻ }

where Φ^{−1} is the inverse standard normal uncertainty distribution. Since T⁺ and T⁻ are disjoint sets and C_t has independent increments, we get

M{Λ₁⁺} = α,  M{Λ₁⁻} = α,  M{Λ₁⁺ ∩ Λ₁⁻} = α.

For any γ ∈ Λ₁⁺ ∩ Λ₁⁻, we always have

g(t, X_t^α) dC_t(γ)/dt ≤ |g(t, X_t^α)| Φ^{−1}(α),  ∀t.

Hence X_t(γ) ≤ X_t^α for all t and

M{X_t ≤ X_t^α, ∀t} ≥ M{Λ₁⁺ ∩ Λ₁⁻} = α.    (15.62)

Similarly, write

Λ₂⁺ = { γ | dC_t(γ)/dt > Φ^{−1}(α) for any t ∈ T⁺ },
Λ₂⁻ = { γ | dC_t(γ)/dt < Φ^{−1}(1 − α) for any t ∈ T⁻ }.

Since T⁺ and T⁻ are disjoint sets and C_t has independent increments, we obtain

M{Λ₂⁺} = 1 − α,  M{Λ₂⁻} = 1 − α,  M{Λ₂⁺ ∩ Λ₂⁻} = 1 − α.

For any γ ∈ Λ₂⁺ ∩ Λ₂⁻, we always have

g(t, X_t^α) dC_t(γ)/dt > |g(t, X_t^α)| Φ^{−1}(α),  ∀t.

Hence X_t(γ) > X_t^α for all t and

M{X_t > X_t^α, ∀t} ≥ M{Λ₂⁺ ∩ Λ₂⁻} = 1 − α.    (15.63)

In addition, {X_t ≤ X_t^α, ∀t} and {X_t > X_t^α, ∀t} are disjoint events, so

M{X_t ≤ X_t^α, ∀t} + M{X_t > X_t^α, ∀t} ≤ 1.    (15.64)

Thus (15.60) and (15.61) follow from (15.62), (15.63) and (15.64) immediately.
Remark 15.3: It is also shown that the Yao-Chen formula may be written as

M{X_t < X_t^α, ∀t} = α,    (15.65)

M{X_t ≥ X_t^α, ∀t} = 1 − α.    (15.66)

Note that {X_t < X_t^α, ∀t} and {X_t ≥ X_t^α, ∀t} are disjoint events but not opposite. Generally speaking, their union is not the universal set, and it is possible that

M{(X_t < X_t^α, ∀t) ∪ (X_t ≥ X_t^α, ∀t)} < 1.    (15.67)
15.5  Numerical Method
Theorem 15.11 (Yao and Chen [222]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t,    (15.69)

respectively. Then the solution X_t has an inverse uncertainty distribution

Ψ_t^{−1}(α) = X_t^α.    (15.70)

Proof: It follows from the Yao-Chen formula that

M{X_t ≤ X_t^α} ≥ M{X_t ≤ X_t^α, ∀t} = α,    (15.71)

M{X_t > X_t^α} ≥ M{X_t > X_t^α, ∀t} = 1 − α.    (15.72)
In addition, since {X_t ≤ X_t^α} and {X_t > X_t^α} are opposite events, the duality axiom makes

M{X_t ≤ X_t^α} + M{X_t > X_t^α} = 1.    (15.73)

It follows from (15.71), (15.72) and (15.73) that M{X_t ≤ X_t^α} = α. The theorem is thus verified.
Example 15.12: The uncertain differential equation dX_t = aX_t dt + bX_t dC_t with X_0 = 1 has an α-path X_t^α = exp(at + |b|Φ^{−1}(α)t). Thus its solution X_t has an inverse uncertainty distribution

Ψ_t^{−1}(α) = exp( at + |b|Φ^{−1}(α)t )    (15.74)

where Φ^{−1} is the inverse standard normal uncertainty distribution.
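The closed-form inverse distribution (15.74) gives a convenient check on the Euler recursion (15.81) below: running the recursion on dX_t = aX_t dt + bX_t dC_t should reproduce exp(at + |b|Φ^{-1}(α)t). A minimal sketch (step length and parameter values are arbitrary):

```python
import math

def inv_std_normal(alpha):
    # Inverse standard normal uncertainty distribution (15.57)
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def alpha_path_euler(f, g, x0, alpha, t_end, h=1e-4):
    # Euler recursion: X_{i+1} = X_i + f(t_i, X_i) h + |g(t_i, X_i)| Phi^{-1}(alpha) h
    c = inv_std_normal(alpha)
    t, x = 0.0, x0
    for _ in range(round(t_end / h)):
        x += f(t, x) * h + abs(g(t, x)) * c * h
        t += h
    return x

a, b, alpha, t1 = 0.5, 0.3, 0.8, 1.0
num = alpha_path_euler(lambda t, x: a * x, lambda t, x: b * x, 1.0, alpha, t1)
exact = math.exp(a * t1 + abs(b) * inv_std_normal(alpha) * t1)  # (15.74) with X_0 = 1
print(round(num, 3), round(exact, 3))
```

The Euler value converges to the closed form as the step length shrinks, and the α-paths are increasing in α, reflecting that Ψ_t^{-1} is an inverse distribution.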
Theorem 15.12 (Yao and Chen [222]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t,    (15.75)

respectively. Then for any time t > 0 and monotone function J, we have

E[J(X_t)] = ∫_0^1 J(X_t^α) dα.

Proof: It follows from Theorem 15.11 that the solution X_t has an inverse uncertainty distribution Ψ_t^{−1}(α) = X_t^α. Next, we may have a monotone function become a strictly monotone function by a small perturbation. When J is a strictly increasing function, it follows from Theorem 2.10 that J(X_t) has an inverse uncertainty distribution

Υ_t^{−1}(α) = J(X_t^α).
Thus we have

E[J(X_t)] = ∫_0^1 Υ_t^{−1}(α) dα = ∫_0^1 J(X_t^α) dα.

When J is a strictly decreasing function, J(X_t) has an inverse uncertainty distribution Υ_t^{−1}(α) = J(X_t^{1−α}), and thus

E[J(X_t)] = ∫_0^1 Υ_t^{−1}(α) dα = ∫_0^1 J(X_t^{1−α}) dα = ∫_0^1 J(X_t^α) dα.

The theorem is proved.
In particular, we have

E[X_t] = ∫_0^1 X_t^α dα,    (15.77)

E[(X_t − K)⁺] = ∫_0^1 (X_t^α − K)⁺ dα,    (15.78)

E[(K − X_t)⁺] = ∫_0^1 (K − X_t^α)⁺ dα.    (15.79)
The Yao-Chen formula suggests a numerical method for solving the uncertain differential equation dX_t = f(t, X_t) dt + g(t, X_t) dC_t:

Step 1. Fix a number α in (0, 1) and a small step length h.

Step 2. Solve dX_t^α = f(t, X_t^α) dt + |g(t, X_t^α)| Φ^{−1}(α) dt by any method of ordinary differential equation and obtain the α-path X_t^α, for example, by using the recursion formula

X_{i+1}^α = X_i^α + f(t_i, X_i^α) h + |g(t_i, X_i^α)| Φ^{−1}(α) h.    (15.81)

Step 3. The inverse uncertainty distribution of the solution X_t is obtained from

Ψ_t^{−1}(α) = X_t^α.    (15.82)
Example 15.13: In order to illustrate the numerical method, let us consider an uncertain differential equation

dX_t = (t − X_t) dt + √(1 + X_t) dC_t,  X_0 = 1.    (15.83)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this equation successfully and obtain an inverse uncertainty distribution of the solution X_t. Furthermore, we may get

E[X_1] ≈ 0.868.    (15.84)
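For readers without the toolbox, the same computation can be sketched in a few lines of Python: solve the α-path ODE via the recursion (15.81) on a grid of α values and average per (15.77). The grid sizes are arbitrary assumptions, and the result should land near the reported value 0.868 only up to discretization error.

```python
import math

def inv_std_normal(alpha):
    # Inverse standard normal uncertainty distribution (15.57)
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def alpha_path_1583(alpha, t_end=1.0, h=1e-3):
    # Euler recursion (15.81) for dX = (t - X) dt + sqrt(1 + X) dC with X_0 = 1;
    # the max(..., 0) guards the radicand against tiny numerical undershoot
    c = inv_std_normal(alpha)
    t, x = 0.0, 1.0
    for _ in range(round(t_end / h)):
        x += (t - x) * h + math.sqrt(max(1.0 + x, 0.0)) * c * h
        t += h
    return x

# Expected value via (15.77): average the alpha-paths over a midpoint grid in alpha
m = 999
e_x1 = sum(alpha_path_1583((i + 0.5) / m) for i in range(m)) / m
print(round(e_x1, 3))
```

The α-paths are increasing in α, and the average approximates E[X_1]; refining both grids pushes the value toward (15.84).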
15.6  Extreme Value
Theorem 15.13 (Yao [220]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t,    (15.87)

respectively. Then for any time s > 0 and strictly increasing function J(x), the supremum

sup_{0≤t≤s} J(X_t)    (15.88)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = sup_{0≤t≤s} J(X_t^α);    (15.89)

and the infimum

inf_{0≤t≤s} J(X_t)    (15.90)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = inf_{0≤t≤s} J(X_t^α).    (15.91)
Proof: Since J is a strictly increasing function, it follows from the Yao-Chen formula that

M{ sup_{0≤t≤s} J(X_t) ≤ sup_{0≤t≤s} J(X_t^α) } ≥ M{X_t ≤ X_t^α, ∀t} = α.    (15.92)

Similarly, we have

M{ sup_{0≤t≤s} J(X_t) > sup_{0≤t≤s} J(X_t^α) } ≥ M{X_t > X_t^α, ∀t} = 1 − α.    (15.93)

Since the two events are opposite, the duality axiom yields

M{ sup_{0≤t≤s} J(X_t) ≤ sup_{0≤t≤s} J(X_t^α) } = α,    (15.94)

which proves (15.89). The infimum case (15.91) follows in the same way from

M{ inf_{0≤t≤s} J(X_t) ≤ inf_{0≤t≤s} J(X_t^α) } ≥ α    (15.95)

and

M{ inf_{0≤t≤s} J(X_t) > inf_{0≤t≤s} J(X_t^α) } ≥ 1 − α.    (15.96)
Exercise 15.3: Let r and K be real numbers. Show that the supremum

sup_{0≤t≤s} exp(−rt)(X_t − K)    (15.98)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = sup_{0≤t≤s} exp(−rt)(X_t^α − K).
Theorem 15.14 (Yao [220]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation (15.87), respectively. Then for any time s > 0 and strictly decreasing function J(x), the supremum

sup_{0≤t≤s} J(X_t)    (15.99)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = sup_{0≤t≤s} J(X_t^{1−α});    (15.100)

and the infimum

inf_{0≤t≤s} J(X_t)    (15.101)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = inf_{0≤t≤s} J(X_t^{1−α}).    (15.102)

Proof: Since J is a strictly decreasing function, it follows from the Yao-Chen formula that

M{ sup_{0≤t≤s} J(X_t) ≤ sup_{0≤t≤s} J(X_t^{1−α}) } ≥ M{X_t ≥ X_t^{1−α}, ∀t} = α.    (15.103)

Similarly, we have

M{ sup_{0≤t≤s} J(X_t) > sup_{0≤t≤s} J(X_t^{1−α}) } ≥ M{X_t < X_t^{1−α}, ∀t} = 1 − α.    (15.104)

Hence the duality axiom proves (15.100). The infimum case (15.102) follows from

M{ inf_{0≤t≤s} J(X_t) ≤ inf_{0≤t≤s} J(X_t^{1−α}) } ≥ α    (15.105)

and

M{ inf_{0≤t≤s} J(X_t) > inf_{0≤t≤s} J(X_t^{1−α}) } ≥ M{X_t < X_t^{1−α}, ∀t} = 1 − α.    (15.107)
15.7  First Hitting Time
Theorem 15.15 (Yao [220]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.109)

with an initial value X_0, respectively. Then for any given level z and strictly increasing function J(x), the first hitting time τ_z that J(X_t) reaches z has an uncertainty distribution

Υ(s) = 1 − inf{ α | sup_{0≤t≤s} J(X_t^α) ≥ z },  if z > J(X_0);
Υ(s) = sup{ α | inf_{0≤t≤s} J(X_t^α) ≤ z },  if z < J(X_0).    (15.110)

Proof: Assume z > J(X_0) and write

α₀ = inf{ α | sup_{0≤t≤s} J(X_t^α) ≥ z }.

Then we have

sup_{0≤t≤s} J(X_t^{α₀}) = z,

{τ_z ≤ s} = { sup_{0≤t≤s} J(X_t) ≥ z } ⊇ {X_t ≥ X_t^{α₀}, ∀t},

{τ_z > s} = { sup_{0≤t≤s} J(X_t) < z } ⊇ {X_t < X_t^{α₀}, ∀t}.

It follows from the Yao-Chen formula and the duality axiom that Υ(s) = M{τ_z ≤ s} = 1 − α₀. Next assume z < J(X_0) and write

α₀ = sup{ α | inf_{0≤t≤s} J(X_t^α) ≤ z }.

Then we have

inf_{0≤t≤s} J(X_t^{α₀}) = z,

{τ_z ≤ s} = { inf_{0≤t≤s} J(X_t) ≤ z } ⊇ {X_t ≤ X_t^{α₀}, ∀t},

{τ_z > s} = { inf_{0≤t≤s} J(X_t) > z } ⊇ {X_t > X_t^{α₀}, ∀t},

and hence

Υ(s) = M{τ_z ≤ s} = sup{ α | inf_{0≤t≤s} J(X_t^α) ≤ z }.

The theorem is verified.
Theorem 15.16 (Yao [220]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.111)

with an initial value X_0, respectively. Then for any given level z and strictly decreasing function J(x), the first hitting time τ_z that J(X_t) reaches z has an uncertainty distribution

Υ(s) = sup{ α | sup_{0≤t≤s} J(X_t^α) ≥ z },  if z > J(X_0);
Υ(s) = 1 − inf{ α | inf_{0≤t≤s} J(X_t^α) ≤ z },  if z < J(X_0).    (15.112)

Proof: Assume z > J(X_0) and write

α₀ = sup{ α | sup_{0≤t≤s} J(X_t^α) ≥ z }.

Then we have

sup_{0≤t≤s} J(X_t^{α₀}) = z,

{τ_z ≤ s} = { sup_{0≤t≤s} J(X_t) ≥ z } ⊇ {X_t ≤ X_t^{α₀}, ∀t},

{τ_z > s} = { sup_{0≤t≤s} J(X_t) < z } ⊇ {X_t > X_t^{α₀}, ∀t},

and hence

Υ(s) = M{τ_z ≤ s} = sup{ α | sup_{0≤t≤s} J(X_t^α) ≥ z }.

Next assume z < J(X_0) and write

α₀ = inf{ α | inf_{0≤t≤s} J(X_t^α) ≤ z }.

Then we have

inf_{0≤t≤s} J(X_t^{α₀}) = z,

{τ_z ≤ s} = { inf_{0≤t≤s} J(X_t) ≤ z } ⊇ {X_t ≥ X_t^{α₀}, ∀t},

{τ_z > s} = { inf_{0≤t≤s} J(X_t) > z } ⊇ {X_t < X_t^{α₀}, ∀t},

and hence Υ(s) = M{τ_z ≤ s} = 1 − α₀. The theorem is verified.
15.8  Time Integral
Theorem 15.17 (Yao [220]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t,    (15.113)

respectively. Then for any time s > 0 and strictly increasing function J(x), the time integral

∫_0^s J(X_t) dt    (15.114)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = ∫_0^s J(X_t^α) dt.    (15.115)

Proof: Since J is a strictly increasing function, it follows from the Yao-Chen formula that

M{ ∫_0^s J(X_t) dt ≤ ∫_0^s J(X_t^α) dt } ≥ M{X_t ≤ X_t^α, ∀t} = α.    (15.116)

Similarly, we have

M{ ∫_0^s J(X_t) dt > ∫_0^s J(X_t^α) dt } ≥ M{X_t > X_t^α, ∀t} = 1 − α.    (15.117)

Since the two events are opposite, the duality axiom proves (15.115).
Theorem 15.18 (Yao [220]) Let X_t and X_t^α be the solution and α-path of the uncertain differential equation

dX_t = f(t, X_t) dt + g(t, X_t) dC_t,    (15.119)

respectively. Then for any time s > 0 and strictly decreasing function J(x), the time integral

∫_0^s J(X_t) dt    (15.120)

has an inverse uncertainty distribution

Ψ_s^{−1}(α) = ∫_0^s J(X_t^{1−α}) dt.    (15.121)

Proof: Since J is a strictly decreasing function, it follows from the Yao-Chen formula that

M{ ∫_0^s J(X_t) dt ≤ ∫_0^s J(X_t^{1−α}) dt } ≥ M{X_t ≥ X_t^{1−α}, ∀t} = α.    (15.122)

Similarly, we have

M{ ∫_0^s J(X_t) dt > ∫_0^s J(X_t^{1−α}) dt } ≥ M{X_t < X_t^{1−α}, ∀t} = 1 − α.    (15.123)

Hence the duality axiom proves (15.121).
15.9
Bibliographic Notes
Chapter 16
Uncertain Finance
This chapter will introduce uncertain stock model, uncertain interest rate
model, and uncertain currency model by using the tool of uncertain differential equation.
16.1  Uncertain Stock Model
Liu [116] supposed that the stock price follows an uncertain differential equation and presented an uncertain stock model in which the bond price Xt and
the stock price Yt are determined by
dX_t = r X_t dt,
dY_t = e Y_t dt + σ Y_t dC_t    (16.1)
where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion, and C_t is a canonical Liu process. Note that the bond price is X_t = X_0 exp(rt) and the stock price is

Y_t = Y_0 exp(et + σC_t)    (16.2)
whose inverse uncertainty distribution is

Υ_t^{−1}(α) = Y_0 exp( et + (√3 σt/π) ln(α/(1 − α)) ).    (16.3)
European Option
Definition 16.1 A European call option is a contract that gives the holder
the right to buy a stock at an expiration time s for a strike price K.
The payoff from a European call option is (Y_s − K)⁺ since the option is rationally exercised if and only if Y_s > K. Considering the time value of money resulted from the bond, the present value of the payoff is exp(−rs)(Y_s − K)⁺.
Hence the European call option price should be the expected present value
of the payoff.
Definition 16.2 Assume a European call option has a strike price K and an expiration time s. Then the European call option price is

f_c = exp(−rs) E[(Y_s − K)⁺].    (16.4)
[Figure: a sample path of the stock price Y_t, with the payoff (Y_s − K)⁺ of a European call option determined at the expiration time s.]
Theorem 16.1 (Liu [116]) Assume a European call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the European call option price is

f_c = exp(−rs) ∫_0^1 ( Y_0 exp( es + (√3 σs/π) ln(α/(1 − α)) ) − K )⁺ dα.    (16.5)

Proof: Since (Y_s − K)⁺ is an increasing function with respect to Y_s, it has an inverse uncertainty distribution

Ψ_s^{−1}(α) = ( Y_0 exp( es + (√3 σs/π) ln(α/(1 − α)) ) − K )⁺.

It follows from Definition 16.2 that the European call option price formula is just (16.5).
Remark 16.1: It is clear that the European call option price is a decreasing
function of interest rate r. That is, the European call option will devaluate
if the interest rate is raised; and the European call option will appreciate in
value if the interest rate is reduced. In addition, the European call option
price is also a decreasing function of the strike price K.
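Formula (16.5) is a one-dimensional integral over α and is easy to evaluate with a midpoint rule; a minimal sketch (the parameter values and grid size are arbitrary assumptions):

```python
import math

def euro_call(Y0, e, sigma, r, K, s, n=20000):
    # European call price (16.5): exp(-r s) times the integral over alpha
    # of (Y0 exp(e s + (sqrt(3) sigma s / pi) ln(a/(1-a))) - K)^+
    coef = sigma * s * math.sqrt(3.0) / math.pi
    total = 0.0
    for i in range(n):
        a = (i + 0.5) / n
        ys = Y0 * math.exp(e * s + coef * math.log(a / (1.0 - a)))
        total += max(ys - K, 0.0)
    return math.exp(-r * s) * total / n

print(round(euro_call(Y0=100.0, e=0.06, sigma=0.25, r=0.04, K=100.0, s=1.0), 2))
```

Consistent with Remark 16.1, the computed price falls when either the interest rate r or the strike price K is raised.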
Definition 16.4 Assume a European put option has a strike price K and an expiration time s. Then the European put option price is

f_p = exp(−rs) E[(K − Y_s)⁺].    (16.6)
Theorem 16.2 (Liu [116]) Assume a European put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the European put option price is

f_p = exp(−rs) ∫_0^1 ( K − Y_0 exp( es + (√3 σs/π) ln((1 − α)/α) ) )⁺ dα.    (16.7)

Proof: Since (K − Y_s)⁺ is a decreasing function with respect to Y_s, it has an inverse uncertainty distribution

Ψ_s^{−1}(α) = ( K − Y_0 exp( es + (√3 σs/π) ln((1 − α)/α) ) )⁺.

It follows from Definition 16.4 that the European put option price formula is just (16.7).
Remark 16.2: It is easy to verify that the option price is a decreasing
function of the interest rate r, and is an increasing function of the strike
price K.
Definition 16.5 An American call option is a contract that gives the holder the right to buy a stock at any time before an expiration time s for a strike price K. If the option is exercised at time t, the present value of the payoff is exp(−rt)(Y_t − K)⁺, so the payoff from rationally exercising the option is

sup_{0≤t≤s} exp(−rt)(Y_t − K)⁺.    (16.8)

Hence the American call option price should be the expected present value of the payoff.
Definition 16.6 Assume an American call option has a strike price K and an expiration time s. Then the American call option price is

f_c = E[ sup_{0≤t≤s} exp(−rt)(Y_t − K)⁺ ].    (16.9)
Theorem 16.3 (Chen [10]) Assume an American call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the American call option price is

f_c = ∫_0^1 sup_{0≤t≤s} exp(−rt) ( Y_0 exp( et + (√3 σt/π) ln(α/(1 − α)) ) − K )⁺ dα.

Proof: It follows from Theorem 15.13 that sup_{0≤t≤s} exp(−rt)(Y_t − K)⁺ has an inverse uncertainty distribution

Ψ_s^{−1}(α) = sup_{0≤t≤s} exp(−rt) ( Y_0 exp( et + (√3 σt/π) ln(α/(1 − α)) ) − K )⁺.

Hence the American call option price formula follows from Definition 16.6 immediately.
Remark 16.3: It is easy to verify that the option price is a decreasing
function with respect to either the interest rate r or the strike price K.
Definition 16.7 An American put option is a contract that gives the holder the right to sell a stock at any time before an expiration time s for a strike price K. The payoff from rationally exercising the option is

sup_{0≤t≤s} exp(−rt)(K − Y_t)⁺.    (16.10)

Hence the American put option price should be the expected present value of the payoff.
Definition 16.8 Assume an American put option has a strike price K and an expiration time s. Then the American put option price is

f_p = E[ sup_{0≤t≤s} exp(−rt)(K − Y_t)⁺ ].    (16.11)
Theorem 16.4 (Chen [10]) Assume an American put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the American put option price is

f_p = ∫_0^1 sup_{0≤t≤s} exp(−rt) ( K − Y_0 exp( et + (√3 σt/π) ln((1 − α)/α) ) )⁺ dα.

Proof: It follows from Theorem 15.14 that sup_{0≤t≤s} exp(−rt)(K − Y_t)⁺ has an inverse uncertainty distribution

Ψ_s^{−1}(α) = sup_{0≤t≤s} exp(−rt) ( K − Y_0 exp( et + (√3 σt/π) ln((1 − α)/α) ) )⁺.

Hence the American put option price formula follows from Definition 16.8 immediately.
Remark 16.4: It is easy to verify that the option price is a decreasing
function of the interest rate r, and is an increasing function of the strike
price K.
An Asian call option is a contract whose payoff at the expiration time s is ((1/s) ∫_0^s Y_t dt − K)⁺, and its price is defined as the expected present value of the payoff (Definitions 16.9 and 16.10). For the uncertain stock model (16.1), the Asian call option price is

f_c = exp(−rs) ∫_0^1 ( (Y_0/s) ∫_0^s exp( et + (√3 σt/π) ln(α/(1 − α)) ) dt − K )⁺ dα.

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

∫_0^s Y_t dt

is

Ψ_s^{−1}(α) = Y_0 ∫_0^s exp( et + (√3 σt/π) ln(α/(1 − α)) ) dt.

Hence the Asian call option price formula follows from Definition 16.10 immediately.
Similarly, an Asian put option pays (K − (1/s) ∫_0^s Y_t dt)⁺ at the expiration time s (Definitions 16.11 and 16.12), and its price for the uncertain stock model (16.1) is

f_p = exp(−rs) ∫_0^1 ( K − (Y_0/s) ∫_0^s exp( et + (√3 σt/π) ln((1 − α)/α) ) dt )⁺ dα.

Proof: It follows from Theorem 15.18 that (K − (1/s) ∫_0^s Y_t dt)⁺ has an inverse uncertainty distribution

Ψ_s^{−1}(α) = ( K − (Y_0/s) ∫_0^s exp( et + (√3 σt/π) ln((1 − α)/α) ) dt )⁺.

Hence the Asian put option price formula follows from Definition 16.12 immediately.
General Stock Model
Generally, we may assume the stock price follows a general uncertain differential equation and obtain a general stock model in which the bond price Xt
and the stock price Yt are determined by
dX_t = r X_t dt,
dY_t = F(t, Y_t) dt + G(t, Y_t) dC_t    (16.18)
where r is the riskless interest rate, F and G are two functions, and Ct is a
canonical Liu process.
Note that the α-path Y_t^α of the stock price Yt can be calculated by some
numerical methods. Assume the strike price is K and the expiration time is
s. It follows from Definition 16.2 and Theorem 15.12 that the European call
option price is
$$f_c = \exp(-rs)\int_0^1 (Y_s^\alpha - K)^+\,d\alpha.\tag{16.19}$$
It follows from Definition 16.4 and Theorem 15.12 that the European put
option price is
$$f_p = \exp(-rs)\int_0^1 (K - Y_s^\alpha)^+\,d\alpha.\tag{16.20}$$
It follows from Definition 16.6 and Theorem 15.13 that the American call
option price is
$$f_c = \int_0^1 \sup_{0\le t\le s}\exp(-rt)(Y_t^\alpha - K)^+\,d\alpha.\tag{16.21}$$
It follows from Definition 16.8 and Theorem 15.14 that the American put
option price is
$$f_p = \int_0^1 \sup_{0\le t\le s}\exp(-rt)(K - Y_t^\alpha)^+\,d\alpha.\tag{16.22}$$
It follows from Definition 16.9 and Theorem 15.17 that the Asian call option
price is
$$f_c = \exp(-rs)\int_0^1\left(\frac{1}{s}\int_0^s Y_t^\alpha\,dt - K\right)^+ d\alpha.\tag{16.23}$$
It follows from Definition 16.11 and Theorem 15.18 that the Asian put option
price is
$$f_p = \exp(-rs)\int_0^1\left(K - \frac{1}{s}\int_0^s Y_t^\alpha\,dt\right)^+ d\alpha.\tag{16.24}$$
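The α-paths used in the formulas above can be computed with the Yao–Chen ordinary differential equation dY = F(t, Y)dt + |G(t, Y)|Φ⁻¹(α)dt, where Φ⁻¹(α) = (√3/π)ln(α/(1−α)) is the inverse standard normal uncertainty distribution. A minimal Euler sketch (function names and step counts are mine) that prices the European call (16.19):

```python
import math

def phi_inv(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def alpha_path_end(F, G, y0, s, alpha, n=1000):
    """Euler scheme for the Yao-Chen ODE of the alpha-path:
    dY = (F(t, Y) + |G(t, Y)| * phi_inv(alpha)) dt; returns Y_s^alpha."""
    h, y, c = s / n, y0, phi_inv(alpha)
    for j in range(n):
        t = j * h
        y += (F(t, y) + abs(G(t, y)) * c) * h
    return y

def european_call(F, G, y0, r, K, s, n_alpha=99):
    # f_c = exp(-r s) * integral_0^1 (Y_s^alpha - K)^+ dalpha   (16.19)
    total = 0.0
    for i in range(1, n_alpha + 1):
        a = (i - 0.5) / n_alpha
        total += max(alpha_path_end(F, G, y0, s, a) - K, 0.0)
    return math.exp(-r * s) * total / n_alpha
```

For the linear choice F(t, y) = e·y and G(t, y) = σ·y this reproduces the closed-form price of model (16.1) up to discretization error.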
Multifactor Stock Model
Now we assume that there are multiple stocks whose prices are determined
by multiple Liu processes. In this case, we have a multifactor stock model in
which the bond price Xt and the stock prices Yit are determined by
$$\begin{cases} dX_t = rX_t\,dt\\ dY_{it} = e_i Y_{it}\,dt + \displaystyle\sum_{j=1}^n \sigma_{ij} Y_{it}\,dC_{jt},\quad i = 1,2,\ldots,m\end{cases}\tag{16.25}$$
where r is the riskless interest rate, ei are the log-drifts, σij are the log-diffusions, and Cjt are independent Liu processes, i = 1, 2, …, m, j = 1, 2, …, n.
Portfolio Selection
For the multifactor stock model (16.25), we have the choice of m + 1 different
investments. At each time t we may choose a portfolio (βt, β1t, …, βmt) (i.e.,
the investment fractions meeting βt + β1t + … + βmt = 1). Then the wealth
Zt at time t should follow the uncertain differential equation

$$dZ_t = r\beta_t Z_t\,dt + \sum_{i=1}^m e_i\beta_{it}Z_t\,dt + \sum_{i=1}^m\sum_{j=1}^n \sigma_{ij}\beta_{it}Z_t\,dC_{jt}.\tag{16.26}$$
That is,

$$Z_t = Z_0\exp(rt)\exp\left(\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds + \sum_{j=1}^n\int_0^t\sum_{i=1}^m\sigma_{ij}\beta_{is}\,dC_{js}\right).\tag{16.27}$$
The multifactor stock model (16.25) is said to be no-arbitrage if there is no portfolio (βt, β1t, …, βmt) such that for some time t > 0,

$$\mathcal{M}\{\exp(-rt)Z_t\ge Z_0\} = 1\quad\text{and}\quad\mathcal{M}\{\exp(-rt)Z_t > Z_0\} > 0\tag{16.28}$$

where Zt is determined by (16.26) and represents the wealth at time t.
Theorem 16.7 The multifactor stock model (16.25) is no-arbitrage if and only if the system of linear equations

$$\begin{pmatrix}\sigma_{11}&\sigma_{12}&\cdots&\sigma_{1n}\\ \sigma_{21}&\sigma_{22}&\cdots&\sigma_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ \sigma_{m1}&\sigma_{m2}&\cdots&\sigma_{mn}\end{pmatrix}\begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix} = \begin{pmatrix}e_1-r\\e_2-r\\\vdots\\e_m-r\end{pmatrix}\tag{16.29}$$

has a solution x1, x2, …, xn.
Proof: It follows from (16.27) that

$$Z_t = Z_0\exp(rt)\exp\left(\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds + \sum_{j=1}^n\int_0^t\sum_{i=1}^m\sigma_{ij}\beta_{is}\,dC_{js}\right).$$
Thus

$$\ln(\exp(-rt)Z_t) - \ln Z_0 = \int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds + \sum_{j=1}^n\int_0^t\sum_{i=1}^m\sigma_{ij}\beta_{is}\,dC_{js}$$

is a normal uncertain variable with expected value

$$\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds$$

and variance

$$\left(\sum_{j=1}^n\int_0^t\left|\sum_{i=1}^m\sigma_{ij}\beta_{is}\right|ds\right)^2.$$
Assume the system (16.29) has a solution. The argument breaks down
into two cases. Case I: for any given time t and portfolio (βt, β1t, …, βmt),
suppose

$$\sum_{j=1}^n\int_0^t\left|\sum_{i=1}^m\sigma_{ij}\beta_{is}\right|ds = 0.$$

Then

$$\sum_{i=1}^m\sigma_{ij}\beta_{is} = 0,\quad j = 1,2,\ldots,n,\ s\in(0,t].$$

Since the system (16.29) has a solution, it follows that

$$\sum_{i=1}^m(e_i-r)\beta_{is} = 0,\quad s\in(0,t]$$

and

$$\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds = 0.$$

Thus exp(−rt)Zt ≡ Z0, and M{exp(−rt)Zt > Z0} = 0.
Conversely, assume the system (16.29) has no solution. Then there exist real numbers β1, β2, …, βm such that

$$\sum_{i=1}^m\sigma_{ij}\beta_i = 0,\quad j = 1,2,\ldots,n$$

and

$$\sum_{i=1}^m(e_i-r)\beta_i > 0.$$

Taking the constant portfolio (β1, β2, …, βm), we get

$$\ln(\exp(-rt)Z_t) - \ln Z_0 = \int_0^t\sum_{i=1}^m(e_i-r)\beta_i\,ds > 0.$$

Thus we have

$$\mathcal{M}\{\exp(-rt)Z_t > Z_0\} = 1.$$

Hence the multifactor stock model (16.25) admits an arbitrage. The theorem is thus proved.
Theorem 16.8 The multifactor stock model (16.25) is no-arbitrage if its
log-diffusion matrix

$$\begin{pmatrix}\sigma_{11}&\sigma_{12}&\cdots&\sigma_{1n}\\ \sigma_{21}&\sigma_{22}&\cdots&\sigma_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ \sigma_{m1}&\sigma_{m2}&\cdots&\sigma_{mn}\end{pmatrix}\tag{16.30}$$

has rank m. In this case the system of linear equations

$$\sum_{j=1}^n\sigma_{ij}x_j = e_i - r,\quad i = 1,2,\ldots,m\tag{16.31}$$

has a solution, and the result follows from Theorem 16.7.
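Theorem 16.7 turns the no-arbitrage question into checking whether a linear system is consistent. A small self-contained sketch (helper names are mine) decides consistency by Gaussian elimination:

```python
def has_solution(A, b, eps=1e-12):
    """Return True iff the linear system A x = b is consistent,
    i.e. rank(A) == rank([A|b]), via Gauss-Jordan elimination."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):               # eliminate coefficient columns only
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < eps:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    # inconsistent iff some row reduced to (0,...,0 | nonzero)
    return all(abs(row[-1]) < 1e-9 or any(abs(v) > eps for v in row[:-1])
               for row in M)

def is_no_arbitrage(sigma, e, r):
    # Theorem 16.7: no-arbitrage iff sigma x = e - r is solvable
    return has_solution(sigma, [ei - r for ei in e])
```

In particular, when the log-diffusion matrix has full row rank (Theorem 16.8), the system is always consistent and the test returns True.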
16.2 Interest Rate Model
Real interest rates do not remain unchanged. Chen and Gao [18] assumed
that the interest rate follows an uncertain differential equation and presented
an uncertain interest rate model,
$$dX_t = (m - aX_t)\,dt + \sigma\,dC_t\tag{16.32}$$
where m, a, σ are positive numbers. Chen and Gao [18] also investigated the
uncertain interest rate model

$$dX_t = (m - aX_t)\,dt + \sigma\sqrt{X_t}\,dC_t.\tag{16.33}$$
More generally, we may assume the interest rate Xt follows a general uncertain differential equation and obtain a general interest rate model,
$$dX_t = F(t,X_t)\,dt + G(t,X_t)\,dC_t.\tag{16.34}$$
The price of a zero-coupon bond with maturity time s is then

$$f = \int_0^1 \exp\left(-\int_0^s X_t^\alpha\,dt\right)d\alpha.\tag{16.36}$$
Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

$$\int_0^s X_t\,dt$$

is

$$\Psi_s^{-1}(\alpha) = \int_0^s X_t^\alpha\,dt.$$
Hence the price formula of zero-coupon bond follows from Definition 16.13
immediately.
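For the mean-reverting model (16.32), the α-path solves the ordinary differential equation X′ = (m − aX) + σΦ⁻¹(α), so formula (16.36) can be evaluated with two nested loops. The helper below, including its step counts, is a sketch of mine:

```python
import math

def zero_coupon_bond(m, a, sigma, x0, s, n_alpha=99, n_t=400):
    """Evaluate f = int_0^1 exp(-int_0^s X_t^alpha dt) dalpha  (16.36)
    for dX_t = (m - a X_t)dt + sigma dC_t, via Euler steps on the
    alpha-path ODE X' = (m - a X) + sigma * Phi_inv(alpha)."""
    h, price = s / n_t, 0.0
    for i in range(1, n_alpha + 1):
        alpha = (i - 0.5) / n_alpha
        c = math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))
        x, integral = x0, 0.0
        for _ in range(n_t):
            integral += x * h          # left-endpoint quadrature of X_t
            x += ((m - a * x) + sigma * c) * h
        price += math.exp(-integral) / n_alpha
    return price
```

With m = a = σ = 0 the rate is frozen at X0 and the price collapses to exp(−X0·s), a handy sanity check.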
16.3 Currency Model
Liu, Chen and Ralescu [142] assumed that the exchange rate follows an uncertain differential equation and proposed an uncertain currency model

$$\begin{cases} dX_t = uX_t\,dt\\ dY_t = vY_t\,dt\\ dZ_t = eZ_t\,dt + \sigma Z_t\,dC_t\end{cases}\tag{16.37}$$

where Xt represents the domestic currency with domestic interest rate u, Yt represents the foreign currency with foreign interest rate v, and Zt represents the exchange rate with log-drift e and log-diffusion σ. The exchange rate Zt has inverse uncertainty distribution

$$\Phi_t^{-1}(\alpha) = Z_0\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right).\tag{16.39}$$

A European currency option gives the holder the right to exchange one unit of foreign currency at the expiration time s for K units of domestic currency. Suppose the price of this contract is f. Then the investor pays f for buying the contract at time 0 and receives (Zs − K)⁺ in domestic currency at the expiration time s, so the expected return of the investor at time 0 is

$$-f + \exp(-us)E[(Z_s-K)^+].\tag{16.40}$$
On the other hand, the bank receives f for selling the contract at time 0,
and pays (1 − K/Zs)⁺ in foreign currency at the expiration time s. Thus the
expected return of the bank at time 0 is

$$f - \exp(-vs)Z_0E[(1-K/Z_s)^+].\tag{16.41}$$

The fair price of this contract should make the investor and the bank have
an identical expected return, i.e.,

$$-f + \exp(-us)E[(Z_s-K)^+] = f - \exp(-vs)Z_0E[(1-K/Z_s)^+].\tag{16.42}$$
Thus the European currency option price is given by the definition below.
Definition 16.15 (Liu, Chen and Ralescu [142]) Assume a European currency option has a strike price K and an expiration time s. Then the European currency option price is

$$f = \frac{1}{2}\exp(-us)E[(Z_s-K)^+] + \frac{1}{2}\exp(-vs)Z_0E[(1-K/Z_s)^+].\tag{16.43}$$
Theorem 16.11 (Liu, Chen and Ralescu [142]) Assume a European currency option for the uncertain currency model (16.37) has a strike price K
and an expiration time s. Then the European currency option price is
$$f = \frac{1}{2}\exp(-us)\int_0^1\left(Z_0\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+ d\alpha + \frac{1}{2}\exp(-vs)\int_0^1\left(Z_0 - K\Big/\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+ d\alpha.$$
Proof: Since (Zs − K)⁺ and Z0(1 − K/Zs)⁺ are increasing functions with
respect to Zs, they have inverse uncertainty distributions

$$\Phi_s^{-1}(\alpha) = \left(Z_0\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+,$$

$$\Psi_s^{-1}(\alpha) = \left(Z_0 - K\Big/\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$
respectively. Thus the European currency option price formula follows from
Definition 16.15 immediately.
Remark 16.5: The European currency option price of the uncertain currency model (16.37) is a decreasing function of K, u and v.
Example 16.5: Assume the domestic interest rate u = 0.08, the foreign interest rate v = 0.07, the log-drift e = 0.06, the log-diffusion = 0.32, the initial exchange rate Z0 = 5, the strike price K = 6 and the expiration time s =
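The expiration time in Example 16.5 is cut off in this copy, so the sketch below simply takes s = 1 as a stand-in; it evaluates the price formula of Theorem 16.11 by a midpoint rule (the function name is mine):

```python
import math

def european_currency_option(u, v, e, sigma, Z0, K, s, n_alpha=999):
    """Midpoint-rule evaluation of the European currency option price of
    Theorem 16.11 for the currency model (16.37)."""
    part1 = part2 = 0.0
    for i in range(1, n_alpha + 1):
        a = (i - 0.5) / n_alpha
        g = math.exp(e * s + sigma * s * math.sqrt(3.0) / math.pi
                     * math.log(a / (1.0 - a)))
        part1 += max(Z0 * g - K, 0.0)     # domestic-currency leg
        part2 += max(Z0 - K / g, 0.0)     # foreign-currency leg
    return (0.5 * math.exp(-u * s) * part1 / n_alpha
            + 0.5 * math.exp(-v * s) * part2 / n_alpha)

# Example 16.5 parameters; s = 1.0 is an assumption, not from the text
price = european_currency_option(u=0.08, v=0.07, e=0.06, sigma=0.32,
                                 Z0=5.0, K=6.0, s=1.0)
```

As Remark 16.5 states, raising the strike K (or either interest rate) lowers the computed price.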
For an American currency option with strike price K and expiration time s, the investor pays f for buying the contract at time 0, and the present value of the payoff is

$$\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+.\tag{16.44}$$
On the other hand, the bank receives f for selling the contract, and the present value of its payment is

$$\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+.\tag{16.46}$$

The fair price of this contract should make the investor and the bank have
an identical expected return, i.e.,

$$-f + E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+\right] = f - E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+\right].\tag{16.48}$$
Thus the American currency option price is given by the definition below.
Definition 16.17 (Liu, Chen and Ralescu [142]) Assume an American currency option has a strike price K and an expiration time s. Then the American currency option price is
$$f = \frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+\right] + \frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+\right].$$
Theorem 16.12 (Liu, Chen and Ralescu [142]) Assume an American currency option for the uncertain currency model (16.37) has a strike price K
and an expiration time s. Then the American currency option price is
$$f = \frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+ d\alpha + \frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-vt)\left(Z_0 - K\Big/\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+ d\alpha.$$
Proof: It follows from Theorem 15.13 that
$\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+$ and
$\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+$ have inverse uncertainty distributions

$$\Phi_s^{-1}(\alpha) = \sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+,$$

$$\Psi_s^{-1}(\alpha) = \sup_{0\le t\le s}\exp(-vt)\left(Z_0 - K\Big/\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$
respectively. Thus the American currency option price formula follows from
Definition 16.17 immediately.
General Currency Model
If the exchange rate follows a general uncertain differential equation, then we
have a general currency model,

$$\begin{cases} dX_t = uX_t\,dt\\ dY_t = vY_t\,dt\\ dZ_t = F(t,Z_t)\,dt + G(t,Z_t)\,dC_t.\end{cases}$$
16.4 Bibliographic Notes
The classical finance theory assumed that stock price, interest rate, and exchange rate follow stochastic differential equations. However, this presumption was challenged by, among others, Liu [125], in which a convincing paradox was presented to show why the real stock price cannot follow any stochastic differential equation. As an alternative, Liu [125] suggested developing a theory of uncertain finance.
Uncertain differential equations were first introduced into finance by Liu
[116] in 2009 in which an uncertain stock model was proposed and European
option price formulas were provided. Besides, Chen [10] derived American
option price formulas, Sun and Chen [198] verified Asian option price formulas, and Yao [224] proved a no-arbitrage theorem for this type of uncertain
stock model. It is emphasized that other uncertain stock models were also
actively investigated by Peng and Yao [170], Yu [233], and Chen, Liu and
Ralescu [16], among others.
Uncertain differential equations were used to simulate interest rate by
Chen and Gao [18] in 2013 and an uncertain interest rate model was presented. On the basis of this model, the price of zero-coupon bond was also
produced.
Uncertain differential equations were employed to model currency exchange rate by Liu, Chen and Ralescu [142] in which an uncertain currency
model was proposed and some currency option price formulas were also derived for the uncertain currency markets.
Appendix A
Probability Theory
Probability theory (Kolmogorov [79]) is a branch of mathematics for studying the behavior of random phenomena. The emphasis in this appendix is
mainly on probability measure, random variable, probability distribution, independence, operational law, expected value, variance, law of large numbers,
conditional probability, stochastic process, stochastic calculus, and stochastic
differential equation.
A.1
Probability Measure
For every countable sequence of mutually disjoint events A1, A2, …, the additivity axiom requires

$$\Pr\left\{\bigcup_{i=1}^\infty A_i\right\} = \sum_{i=1}^\infty\Pr\{A_i\}.\tag{A.1}$$
Definition A.1 The set function Pr is called a probability measure if it satisfies the normality, nonnegativity, and additivity axioms.
Example A.1: Let Ω = {ω1, ω2, …}, and let A be the power set of Ω.
Assume that p1, p2, … are nonnegative numbers such that p1 + p2 + … = 1.
Define a set function on A as

$$\Pr\{A\} = \sum_{\omega_i\in A} p_i.\tag{A.2}$$
For a sequence of probability spaces (Ωk, Ak, Prk), k = 1, 2, …, the product probability measure satisfies

$$\Pr\left\{\prod_{k=1}^\infty A_k\right\} = \prod_{k=1}^\infty\Pr{}_k\{A_k\}\tag{A.6}$$

where Ak are arbitrarily chosen events from Ak, k = 1, 2, …, respectively.
A.2 Random Variable
Definition A.4 A random variable ξ is a measurable function from a probability space (Ω, A, Pr) to the set of real numbers, i.e., {ξ ∈ B} is an event for any Borel set B.
A.3 Probability Distribution
Definition A.6 The probability distribution Φ: ℜ → [0, 1] of a random variable ξ is defined by

$$\Phi(x) = \Pr\{\xi\le x\}.\tag{A.9}$$

That is, Φ(x) is the probability that the random variable ξ takes a value less
than or equal to x. A function Φ: ℜ → [0, 1] is a probability distribution if
and only if it is an increasing and right-continuous function with

$$\lim_{x\to-\infty}\Phi(x) = 0;\qquad \lim_{x\to+\infty}\Phi(x) = 1.\tag{A.10}$$
$$\Phi(x) = \begin{cases}0, & \text{if } x < -1\\ 0.5, & \text{if } -1\le x < 1\\ 1, & \text{if } x\ge 1.\end{cases}$$
Definition A.7 The probability density function φ: ℜ → [0, +∞) of a random variable ξ is a function such that

$$\Phi(x) = \int_{-\infty}^x \varphi(y)\,dy\tag{A.11}$$

holds for all x ∈ ℜ, where Φ is the probability distribution of the random
variable ξ.
Let ξ be a random variable whose probability density function φ exists. Then for any Borel set B of real numbers,

$$\Pr\{\xi\in B\} = \int_B\varphi(y)\,dy.\tag{A.12}$$
Proof: Assume that C is the class of all subsets C of ℜ for which the relation

$$\Pr\{\xi\in C\} = \int_C\varphi(y)\,dy\tag{A.13}$$

holds. We will show that C contains all Borel sets. On the one hand, we
may prove that C is a monotone class (if Ai ∈ C and Ai ↑ A or Ai ↓ A, then
A ∈ C). On the other hand, we may verify that C contains all intervals of the
form (−∞, a], (a, b], (b, ∞) and ∅ since

$$\Pr\{\xi\in(-\infty,a]\} = \Phi(a) = \int_{-\infty}^a\varphi(y)\,dy,$$
$$\Pr\{\xi\in(b,+\infty)\} = \Phi(+\infty)-\Phi(b) = \int_b^{+\infty}\varphi(y)\,dy,$$
$$\Pr\{\xi\in(a,b]\} = \Phi(b)-\Phi(a) = \int_a^b\varphi(y)\,dy,$$
$$\Pr\{\xi\in\emptyset\} = 0 = \int_\emptyset\varphi(y)\,dy$$

where Φ is the probability distribution of ξ. Let F be the algebra consisting of all finite unions of disjoint sets of these forms. For any disjoint sets C1, C2, …, Cm of F and C = C1 ∪ C2 ∪ … ∪ Cm, we have

$$\Pr\{\xi\in C\} = \sum_{j=1}^m\Pr\{\xi\in C_j\} = \sum_{j=1}^m\int_{C_j}\varphi(y)\,dy = \int_C\varphi(y)\,dy.$$

That is, C ∈ C. Hence we have F ⊂ C. Since the smallest σ-algebra containing F is just the Borel algebra, the monotone class theorem (if F ⊂ C and
σ(F) is the smallest σ-algebra containing F, then σ(F) ⊂ C) implies that C
contains all Borel sets.
Example A.5: A random variable ξ has a uniform distribution if its probability density function is

$$\varphi(x) = \frac{1}{b-a},\quad a\le x\le b.\tag{A.14}$$
A.4 Independence

The random variables ξ1, ξ2, …, ξn are said to be independent if

$$\Pr\left\{\bigcap_{i=1}^n(\xi_i\in B_i)\right\} = \prod_{i=1}^n\Pr\{\xi_i\in B_i\}$$

for any Borel sets B1, B2, …, Bn. If ξ1, ξ2, …, ξn are independent and f1, f2, …, fn are measurable functions, then f1(ξ1), f2(ξ2), …, fn(ξn) are also independent, i.e.,

$$\Pr\left\{\bigcap_{i=1}^n(f_i(\xi_i)\in B_i)\right\} = \prod_{i=1}^n\Pr\{f_i(\xi_i)\in B_i\}.$$
A.5
Operational Law
Theorem A.3 Let ξ1, ξ2, …, ξn be independent random variables with probability distributions Φ1, Φ2, …, Φn, respectively, and f: ℜⁿ → ℜ a measurable function. Then

$$\xi = f(\xi_1,\xi_2,\ldots,\xi_n)\tag{A.19}$$

is a random variable with probability distribution

$$\Phi(x) = \int_{f(x_1,x_2,\ldots,x_n)\le x} d\Phi_1(x_1)\,d\Phi_2(x_2)\cdots d\Phi_n(x_n).\tag{A.20}$$
In particular, the sum ξ = ξ1 + ξ2 + … + ξn of independent random variables has probability distribution

$$\Phi(x) = \int_{x_1+x_2+\cdots+x_n\le x} d\Phi_1(x_1)\,d\Phi_2(x_2)\cdots d\Phi_n(x_n),$$

and for two independent random variables the convolution formula

$$\Phi(x) = \int_{-\infty}^{+\infty}\Phi_1(x-y)\,d\Phi_2(y)\tag{A.24}$$

holds.
where

$$\varphi_i(x_i) = \begin{cases}a_i, & \text{if }x_i = 1\\ 1-a_i, & \text{if }x_i = 0\end{cases}\tag{A.31}$$

for i = 1, 2, …, n.

Exercise A.4: Let ξ1, ξ2, …, ξn be independent Boolean random variables
defined by (A.29). Show that

$$\xi = \xi_1\wedge\xi_2\wedge\cdots\wedge\xi_n\tag{A.32}$$

is a Boolean random variable with Pr{ξ = 1} = a1 a2 ⋯ an.
More generally, if ξ1, ξ2, …, ξn are independent Boolean random variables and f is a Boolean function, then ξ = f(ξ1, ξ2, …, ξn) is a Boolean random variable with

$$\Pr\{\xi = 1\} = \sum_{f(x_1,x_2,\ldots,x_n)=1}\ \prod_{i=1}^n\varphi_i(x_i)\tag{A.38}$$

where

$$\varphi_i(x_i) = \begin{cases}a_i, & \text{if }x_i=1\\ 1-a_i, & \text{if }x_i=0\end{cases}\qquad(i = 1,2,\ldots,n).$$

A.6 Expected Value
If ξ is a simple random variable taking values x1, x2, …, xm with probabilities p1, p2, …, pm, respectively, then its expected value is

$$E[\xi] = \sum_{i=1}^m p_i x_i.$$
Proof: It follows from the probability inversion theorem that for almost all
numbers x, we have Pr{ξ ≥ x} = 1 − Φ(x) and Pr{ξ ≤ x} = Φ(x). By using
the definition of the expected value operator, we obtain

$$E[\xi] = \int_0^{+\infty}\Pr\{\xi\ge x\}\,dx - \int_{-\infty}^0\Pr\{\xi\le x\}\,dx = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx.$$
Proof: It follows from the change of variables of integral and Theorem A.5
that the expected value is

$$E[\xi] = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx = \int_0^{+\infty}x\,d\Phi(x) + \int_{-\infty}^0 x\,d\Phi(x) = \int_{-\infty}^{+\infty}x\,d\Phi(x).$$
Proof: It follows from the change of variables of integral and Theorem A.5
that the expected value is

$$E[\xi] = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx = \int_{\Phi(0)}^1\Phi^{-1}(\alpha)\,d\alpha + \int_0^{\Phi(0)}\Phi^{-1}(\alpha)\,d\alpha = \int_0^1\Phi^{-1}(\alpha)\,d\alpha.$$
Theorem A.9 Let ξ1, ξ2, …, ξn be independent random variables with probability density functions φ1, φ2, …, φn, respectively, and f: ℜⁿ → ℜ a measurable function. Then ξ = f(ξ1, ξ2, …, ξn) has an expected value

$$E[\xi] = \int_{\Re^n} f(x_1,x_2,\ldots,x_n)\varphi_1(x_1)\varphi_2(x_2)\cdots\varphi_n(x_n)\,dx_1dx_2\cdots dx_n.\tag{A.45}$$
Theorem A.10 Let ξ and η be random variables with finite expected values.
Then

$$E[a\xi + b\eta] = aE[\xi] + bE[\eta]\tag{A.46}$$

for any numbers a and b. Furthermore, if the two random variables are also
independent, then

$$E[\xi\eta] = E[\xi]E[\eta].\tag{A.47}$$
A.7 Variance
Theorem A.13 If ξ1, ξ2, …, ξn are independent random variables with finite variances, then

$$V[\xi_1+\xi_2+\cdots+\xi_n] = V[\xi_1]+V[\xi_2]+\cdots+V[\xi_n].\tag{A.50}$$
A.8 Law of Large Numbers

Let ξ1, ξ2, … be a sequence of iid random variables with finite expected value e, and write Sn = ξ1 + ξ2 + … + ξn. The law of large numbers states that

$$\lim_{n\to\infty}\frac{S_n(\omega)}{n} = e,\quad\text{a.s.}\tag{A.55}$$

A.9 Conditional Probability

We consider the probability of an event A after it has been learned that some
other event B has occurred. This new probability is called the conditional
probability of A given B.
Definition A.11 Let (Ω, A, Pr) be a probability space, and A, B ∈ A. Then
the conditional probability of A given B is defined by

$$\Pr\{A|B\} = \frac{\Pr\{A\cap B\}}{\Pr\{B\}}\tag{A.56}$$

provided that Pr{B} > 0.
which means that the conditional probability is identical to the original probability. This is the so-called memoryless property of exponential distribution.
In other words, it is as good as new if it is functioning on inspection.
Definition A.12 The conditional probability distribution Φ: ℜ → [0, 1] of a
random variable ξ given B is defined by

$$\Phi(x|B) = \Pr\{\xi\le x|B\}.\tag{A.57}$$
A.10 Stochastic Process
Renewal Process
Let ξi denote the times between the (i − 1)th and the ith events, known as
the interarrival times, i = 1, 2, …, respectively. Define S0 = 0 and

$$S_n = \xi_1+\xi_2+\cdots+\xi_n,\quad n\ge 1.\tag{A.60}$$
(A.60)
Then Sn can be regarded as the waiting time until the occurrence of the nth
event after time t = 0.
Definition A.17 Let ξ1, ξ2, … be iid positive interarrival times. Define
S0 = 0 and Sn = ξ1 + ξ2 + … + ξn for n ≥ 1. Then the stochastic process

$$N_t = \max_{n\ge 0}\{n\ |\ S_n\le t\}\tag{A.61}$$

is called a renewal process. If the interarrival times are exponential random
variables with probability distribution Φ(x) = 1 − exp(−λx), x ≥ 0, then Nt
is called a Poisson process with rate λ.
Let Nt be a Poisson process with rate λ. Since the sum of n iid exponential
random variables with rate λ follows an Erlang distribution with parameters
n and λ, we immediately have

$$\Pr\{N_t\ge n\} = \sum_{k=n}^\infty\exp(-\lambda t)\frac{(\lambda t)^k}{k!}.\tag{A.66}$$
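Formula (A.66) can be checked by simulating the renewal construction directly; with exponential interarrival times the counting process is exactly Poisson. Helper names below are mine:

```python
import math, random

def poisson_tail(lam, t, n):
    # Pr{N_t >= n} = 1 - sum_{k<n} exp(-lam t) (lam t)^k / k!   (A.66)
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

def simulate_Nt(lam, t, rng):
    """Count renewals up to time t with exponential interarrival times."""
    s, count = 0.0, 0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return count
        count += 1

rng = random.Random(7)
est = sum(simulate_Nt(2.0, 3.0, rng) >= 5 for _ in range(20000)) / 20000
exact = poisson_tail(2.0, 3.0, 5)
```

The Monte Carlo estimate `est` agrees with the closed-form tail `exact` up to sampling noise.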
Wiener Process
Brownian motion is the irregular movement of pollen grains suspended in
liquid. In 1923 Norbert Wiener modeled Brownian motion by the following
Wiener process.
Definition A.19 A stochastic process Wt is said to be a standard Wiener
process if
(i) W0 = 0 and almost all sample paths are continuous,
(ii) Wt has stationary and independent increments,
(iii) every increment Ws+t Ws is a normal random variable with expected
value 0 and variance t.
Note that almost all sample paths of a Wiener process have infinite length
on any fixed time interval and are nowhere differentiable. Furthermore, the
squared variation of a Wiener process on [0, t] is equal to t both in mean
square and almost surely.
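The squared-variation property is easy to see in simulation: summing squared increments of a sampled path over a fine grid on [0, t] gives approximately t (the helper function is mine):

```python
import random

def squared_variation(t, k, rng):
    """Sum of squared increments of a simulated standard Wiener path on
    [0, t] over k uniform steps; tends to t as k grows."""
    h = t / k
    total = 0.0
    for _ in range(k):
        dw = rng.gauss(0.0, h ** 0.5)   # increment W_{s+h} - W_s ~ N(0, h)
        total += dw * dw
    return total

rng = random.Random(1)
qv = squared_variation(2.0, 100000, rng)   # close to 2.0
```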
A.11 Stochastic Calculus
Ito calculus, named after Kiyoshi Ito, is the most popular topic of stochastic
calculus. The central concept is the Ito integral that allows one to integrate
a stochastic process with respect to Wiener process. This section provides a
brief introduction to Ito calculus.
Definition A.20 Let Xt be a stochastic process and let Wt be a standard
Wiener process. For any partition of closed interval [a, b] with a = t1 < t2 <
… < tk+1 = b, the mesh is written as

$$\Delta = \max_{1\le i\le k}|t_{i+1}-t_i|.$$

Then the Ito integral of Xt with respect to Wt is

$$\int_a^b X_t\,dW_t = \lim_{\Delta\to 0}\sum_{i=1}^k X_{t_i}(W_{t_{i+1}}-W_{t_i})\tag{A.67}$$

provided that the limit exists in mean square and is a random variable.
Example A.10: Let Wt be a standard Wiener process. It follows from the
definition of Ito integral that

$$\int_0^s dW_t = W_s,\qquad \int_0^s W_t\,dW_t = \frac{1}{2}W_s^2 - \frac{1}{2}s.$$
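The second identity can be reproduced with the left-endpoint Riemann sums of Definition A.20 (the helper function is mine):

```python
import random

def ito_integral_w_dw(s, k, rng):
    """Left-endpoint Riemann sum for int_0^s W_t dW_t over k steps,
    returned together with the terminal value W_s."""
    h = s / k
    w = integral = 0.0
    for _ in range(k):
        dw = rng.gauss(0.0, h ** 0.5)
        integral += w * dw      # Ito: integrand evaluated at the left endpoint
        w += dw
    return integral, w

rng = random.Random(3)
I, Ws = ito_integral_w_dw(1.0, 200000, rng)
# Example A.10 predicts I to be close to Ws**2 / 2 - 1.0 / 2
```

Evaluating the integrand at the left endpoint (rather than the midpoint, which would give the Stratonovich value) is exactly what produces the extra −s/2 term.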
Theorem A.16 (Ito Formula) Let Wt be a standard Wiener process, and let
h(t, w) be a twice continuously differentiable function. Then Xt = h(t, Wt)
has an Ito differential,

$$dX_t = \frac{\partial h}{\partial t}(t,W_t)\,dt + \frac{\partial h}{\partial w}(t,W_t)\,dW_t + \frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t,W_t)\,dt.\tag{A.68}$$
Example A.11: Ito formula is the fundamental theorem of stochastic calculus. Applying Ito formula, we obtain
$$d(tW_t) = W_t\,dt + t\,dW_t,\qquad d(W_t^2) = 2W_t\,dW_t + dt.$$
Definition A.21 Let Wt be a standard Wiener process and let Zt be a
stochastic process. If there exist two stochastic processes μt and σt such that

$$Z_t = Z_0 + \int_0^t\mu_s\,ds + \int_0^t\sigma_s\,dW_s\tag{A.69}$$

for any t ≥ 0, then Zt is called an Ito process with drift μt and diffusion σt.
Furthermore, Zt has a stochastic differential

$$dZ_t = \mu_t\,dt + \sigma_t\,dW_t.\tag{A.70}$$

A.12 Stochastic Differential Equation
For example, the stochastic differential equation dXt = aXt dt + bXt dWt
with X0 = 1 has the solution

$$X_t = \exp\left(\left(a - \frac{b^2}{2}\right)t + bW_t\right).$$
Theorem A.17 (Existence and Uniqueness Theorem) The stochastic differential equation

$$dX_t = f(t,X_t)\,dt + g(t,X_t)\,dW_t\tag{A.72}$$

has a unique solution if the coefficients f(t, x) and g(t, x) satisfy the linear
growth condition

$$|f(t,x)| + |g(t,x)| \le L(1+|x|),\quad \forall x\in\Re,\ t\ge 0\tag{A.73}$$

and the Lipschitz condition

$$|f(t,x)-f(t,y)| + |g(t,x)-g(t,y)| \le L|x-y|,\quad \forall x,y\in\Re,\ t\ge 0\tag{A.74}$$

for some constant L.
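Under the conditions of Theorem A.17 the solution can be approximated by the Euler–Maruyama scheme. A minimal sketch (names and parameters are mine), checked against the geometric model whose exact solution appears above:

```python
import random

def euler_maruyama(f, g, x0, t, k, rng):
    """One Euler-Maruyama sample path of dX = f(t,X)dt + g(t,X)dW,
    returning X_t."""
    h, x, s = t / k, x0, 0.0
    for _ in range(k):
        dw = rng.gauss(0.0, h ** 0.5)
        x += f(s, x) * h + g(s, x) * dw
        s += h
    return x

# geometric model dX = a X dt + b X dW with a=0.05, b=0.2, X0=1;
# its mean at t=1 is exp(0.05)
rng = random.Random(5)
paths = [euler_maruyama(lambda t, x: 0.05 * x, lambda t, x: 0.2 * x,
                        1.0, 1.0, 500, rng) for _ in range(2000)]
mean_est = sum(paths) / len(paths)
```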
Appendix B
Chance Theory
Uncertainty and randomness are two basic types of indeterminacy. Chance
theory is a mathematical methodology for modeling complex systems with
not only uncertainty but also randomness. This appendix will introduce the
concepts of chance measure, uncertain random variable, chance distribution,
operational law, expected value, variance, and law of large numbers. As applications of chance theory, this appendix will also provide uncertain random
programming, uncertain random risk analysis, uncertain random reliability
analysis, uncertain random graph, and uncertain random network.
B.1 Chance Measure
The product σ-algebra L × A is the smallest σ-algebra containing measurable rectangles of the form Λ × A, where Λ ∈ L and A ∈ A. Any element Θ
in L × A is called an event in the chance space.

What is the product measure M × Pr? In order to answer this question,
let us consider an event Θ in L × A. For each ω ∈ Ω, the set

$$\Theta_\omega = \{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta\}\tag{B.3}$$
is an event in L. Write

$$\Theta^r = \{\omega\in\Omega\ |\ \mathcal{M}\{\Theta_\omega\}\ge r\}\tag{B.4}$$

for each r ∈ [0, 1], and stipulate

$$\Pr\{\Theta^r\} = \begin{cases}\displaystyle\inf_{A\in\mathcal{A},\,A\supset\Theta^r}\Pr\{A\}, & \text{if }\displaystyle\inf_{A\in\mathcal{A},\,A\supset\Theta^r}\Pr\{A\} < 0.5\\[3mm] \displaystyle\sup_{A\in\mathcal{A},\,A\subset\Theta^r}\Pr\{A\}, & \text{if }\displaystyle\sup_{A\in\mathcal{A},\,A\subset\Theta^r}\Pr\{A\} > 0.5\\[3mm] 0.5, & \text{otherwise.}\end{cases}\tag{B.5}$$

Then the product measure (chance measure) of Θ is

$$\mathcal{M}\times\Pr\{\Theta\} = \int_0^1\Pr\{\Theta^r\}\,dr.\tag{B.6}$$
The chance measure is normal, i.e.,

$$\mathrm{Ch}\{\Gamma\times\Omega\} = 1.\tag{B.9}$$

It is also monotone: if Θ1 ⊂ Θ2, then

$$\mathrm{Ch}\{\Theta_1\} \le \int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta_2\}\ge r\}\,dr = \mathrm{Ch}\{\Theta_2\}.$$

Moreover, for a measurable rectangle Λ × A,

$$\int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Lambda\times A\}\ge r\}\,dr = \int_0^{\mathcal{M}\{\Lambda\}}\Pr\{A\}\,dr = \mathcal{M}\{\Lambda\}\times\Pr\{A\}.\tag{B.10}$$
Proof: Since both uncertain measure and probability measure are self-dual,
we have

$$\mathrm{Ch}\{\Theta\} = \int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta\}\ge r\}\,dr$$
$$= \int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta^c\}\le 1-r\}\,dr$$
$$= \int_0^1\left(1-\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta^c\} > 1-r\}\right)dr$$
$$= 1 - \int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta^c\} > r\}\,dr$$
$$= 1 - \mathrm{Ch}\{\Theta^c\}.$$

That is, Ch{Θ} + Ch{Θᶜ} = 1, i.e., the chance measure is self-dual.
Theorem B.3 (Hou [58], Subadditivity Theorem) The chance measure is
subadditive. That is, for any countable sequence of events Θ1, Θ2, …, we
have

$$\mathrm{Ch}\left\{\bigcup_{i=1}^\infty\Theta_i\right\} \le \sum_{i=1}^\infty\mathrm{Ch}\{\Theta_i\}.\tag{B.11}$$

Proof: Since uncertain measure is subadditive, for each ω ∈ Ω we have

$$\mathcal{M}\left\{\gamma\ \Big|\ (\gamma,\omega)\in\bigcup_{i=1}^\infty\Theta_i\right\} \le \sum_{i=1}^\infty\mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta_i\}.$$

Thus for each r,

$$\left\{\omega\ \Big|\ \mathcal{M}\left\{\gamma\ |\ (\gamma,\omega)\in\bigcup_{i=1}^\infty\Theta_i\right\}\ge r\right\} \subset \left\{\omega\ \Big|\ \sum_{i=1}^\infty\mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta_i\}\ge r\right\}.$$

It follows from the monotonicity and subadditivity of probability measure that

$$\mathrm{Ch}\left\{\bigcup_{i=1}^\infty\Theta_i\right\} = \int_0^1\Pr\left\{\omega\ \Big|\ \mathcal{M}\left\{\gamma\ |\ (\gamma,\omega)\in\bigcup_{i=1}^\infty\Theta_i\right\}\ge r\right\}dr$$
$$\le \int_0^1\Pr\left\{\omega\ \Big|\ \sum_{i=1}^\infty\mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta_i\}\ge r\right\}dr$$
$$\le \sum_{i=1}^\infty\int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ (\gamma,\omega)\in\Theta_i\}\ge r\}\,dr = \sum_{i=1}^\infty\mathrm{Ch}\{\Theta_i\}.$$

The theorem is proved.
B.2 Uncertain Random Variable

Definition B.4 (Liu [140]) An uncertain random variable is a function ξ
from a chance space (Γ, L, M) × (Ω, A, Pr) to the set of real numbers such
that {ξ ∈ B} is an event in L × A for any Borel set B.
Example B.2: Let η1, η2, …, ηm be random variables, and let τ1, τ2, …, τn
be uncertain variables. If f is a measurable function, then

$$\xi = f(\eta_1,\eta_2,\ldots,\eta_m,\tau_1,\tau_2,\ldots,\tau_n)\tag{B.14}$$

is an uncertain random variable determined by

$$\xi(\gamma,\omega) = f(\eta_1(\omega),\ldots,\eta_m(\omega),\tau_1(\gamma),\ldots,\tau_n(\gamma))\tag{B.15}$$

for all (γ, ω) ∈ Γ × Ω.
Theorem B.5 (Liu [140]) Let ξ be an uncertain random variable on the
chance space (Γ, L, M) × (Ω, A, Pr), and let B be a Borel set. Then {ξ ∈ B}
is an uncertain random event with chance measure

$$\mathrm{Ch}\{\xi\in B\} = \int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ \xi(\gamma,\omega)\in B\}\ge r\}\,dr.\tag{B.16}$$
Theorem B.6 (Liu [140]) Let ξ be an uncertain random variable. Then the
chance measure Ch{ξ ∈ B} is a monotone increasing function of B and

$$\mathrm{Ch}\{\xi\in\emptyset\} = 0,\qquad \mathrm{Ch}\{\xi\in\Re\} = 1.\tag{B.19}$$
B.3 Chance Distribution

The chance distribution Φ: ℜ → [0, 1] of an uncertain random variable ξ is defined by

$$\Phi(x) = \mathrm{Ch}\{\xi\le x\}.\tag{B.21}$$
(B.22)
Example B.4: As a special uncertain random variable, the chance distribution of an uncertain variable τ is just its uncertainty distribution, that
is,

$$\Phi(x) = \mathrm{Ch}\{\tau\le x\} = \mathcal{M}\{\tau\le x\}.\tag{B.23}$$
Theorem B.8 (Liu [140], Sufficient and Necessary Condition for Chance
Distribution) A function Φ: ℜ → [0, 1] is a chance distribution if and only if
it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.
$$\lim_{x\to-\infty}\int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ \xi(\gamma,\omega)\le x\}\ge r\}\,dr = 0,$$
$$\lim_{x\to+\infty}\int_0^1\Pr\{\omega\ |\ \mathcal{M}\{\gamma\ |\ \xi(\gamma,\omega)\le x\}\ge r\}\,dr = 1.$$

In addition, the chance inversion theorem gives

$$\mathrm{Ch}\{\xi > x\} = 1 - \Phi(x).\tag{B.24}$$
B.4 Operational Law
Theorem (Liu [140]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be uncertain variables. Then the uncertain random variable ξ = f(η1, …, ηm, τ1, …, τn) has chance distribution

$$\Phi(x) = \int_{\Re^m} F(x;y_1,\ldots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\tag{B.28}$$

where F(x; y1, …, ym) is the uncertainty distribution of the uncertain variable f(y1, …, ym, τ1, …, τn).
Proof: For any given numbers y1, …, ym, it follows from the operational law
of uncertain variables that f(y1, …, ym, τ1, …, τn) is an uncertain variable
with uncertainty distribution F(x; y1, …, ym). By using (B.27), the chance
distribution of ξ is

$$\Phi(x) = \int_{\Re^m}\mathcal{M}\{f(y_1,\ldots,y_m,\tau_1,\ldots,\tau_n)\le x\}\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$
$$= \int_{\Re^m}F(x;y_1,\ldots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m).$$

The theorem is proved.
Example: if η1, …, ηm are independent random variables and τ1, …, τn are independent uncertain variables, then ξ = η1+…+ηm+τ1+…+τn has chance distribution

$$\Phi(x) = \int_{-\infty}^{+\infty}\Upsilon(x-y)\,d\Psi(y)\tag{B.31}$$

where

$$\Psi(y) = \int_{y_1+y_2+\cdots+y_m\le y} d\Psi_1(y_1)\,d\Psi_2(y_2)\cdots d\Psi_m(y_m)\tag{B.32}$$

is the probability distribution of η1 + η2 + … + ηm, and

$$\Upsilon(z) = \sup_{z_1+z_2+\cdots+z_n=z}\Upsilon_1(z_1)\wedge\Upsilon_2(z_2)\wedge\cdots\wedge\Upsilon_n(z_n)\tag{B.33}$$

is the uncertainty distribution of τ1 + τ2 + … + τn.
Similarly, the product ξ = η1η2⋯ηm·τ1τ2⋯τn of independent positive random variables and independent positive uncertain variables has chance distribution

$$\Phi(x) = \int_0^{+\infty}\Upsilon(x/y)\,d\Psi(y)\tag{B.35}$$

where

$$\Psi(y) = \int_{y_1y_2\cdots y_m\le y} d\Psi_1(y_1)\,d\Psi_2(y_2)\cdots d\Psi_m(y_m)\tag{B.36}$$

is the probability distribution of η1η2⋯ηm, and

$$\Upsilon(z) = \sup_{z_1z_2\cdots z_n=z}\Upsilon_1(z_1)\wedge\Upsilon_2(z_2)\wedge\cdots\wedge\Upsilon_n(z_n)\tag{B.37}$$

is the uncertainty distribution of τ1τ2⋯τn.
Analogous formulas hold for the minimum ξ = η1∧⋯∧ηm∧τ1∧⋯∧τn, where
Ψ is the probability distribution of η1∧η2∧⋯∧ηm and

$$\Upsilon(x) = \Upsilon_1(x)\vee\Upsilon_2(x)\vee\cdots\vee\Upsilon_n(x)\tag{B.41}$$

is the uncertainty distribution of τ1∧τ2∧⋯∧τn, and for the maximum
ξ = η1∨⋯∨ηm∨τ1∨⋯∨τn, where

$$\Psi(x) = \Psi_1(x)\Psi_2(x)\cdots\Psi_m(x)\tag{B.44}$$

is the probability distribution of η1∨η2∨⋯∨ηm and Υ(x) = Υ1(x)∧Υ2(x)∧⋯∧Υn(x) is the uncertainty distribution of τ1∨τ2∨⋯∨τn.
Proof: It follows from the definition of chance measure that for any numbers
y1, …, ym, the theorem is true if the function G is

$$G(y_1,\ldots,y_m) = \mathcal{M}\{f(y_1,\ldots,y_m,\tau_1,\ldots,\tau_n)\le 0\}.$$

Furthermore, by using Theorem 2.21, we know that G is just the root α. The
theorem is proved.
Remark B.5: Sometimes, the equation may not have a root. In this case,
if

$$f(y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\ldots,\Upsilon_n^{-1}(1-\alpha)) < 0$$

for all α, then we set the root α = 1; and if it is positive for all α, then we
set the root α = 0.

[Figure B.2: the graph of f(y1, …, ym, Υ1⁻¹(α), …, Υk⁻¹(α), Υk+1⁻¹(1−α), …, Υn⁻¹(1−α))]
Theorem B.13 (Liu [143]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm and let τ1, τ2, …, τn be
independent uncertain variables with regular uncertainty distributions Υ1, Υ2,
…, Υn, respectively. If f(η1, …, ηm, τ1, …, τn) is strictly increasing with
respect to τ1, …, τk and strictly decreasing with respect to τk+1, …, τn, then

$$\mathrm{Ch}\{f(\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n) > 0\} = \int_{\Re^m}G(y_1,\ldots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m).$$
Proof: It follows from the definition of chance measure that for any numbers
y1, …, ym, the theorem is true if the function G is

$$G(y_1,\ldots,y_m) = \mathcal{M}\{f(y_1,\ldots,y_m,\tau_1,\ldots,\tau_n) > 0\}.$$

Furthermore, by using Theorem 2.22, we know that G is just the root α. The
theorem is proved.
Remark B.7: Sometimes, the equation may not have a root. In this case,
if

$$f(y_1,\ldots,y_m,\Upsilon_1^{-1}(1-\alpha),\ldots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha)) < 0$$

for all α, then we set the root α = 0; and if it is positive for all α, then we
set the root α = 1.

[Figure B.3: the graph of f(y1, …, ym, Υ1⁻¹(1−α), …, Υk⁻¹(1−α), Υk+1⁻¹(α), …, Υn⁻¹(α))]
For a Boolean function f of independent Boolean random variables η1, …, ηm
and independent Boolean uncertain variables τ1, …, τn, the chance measure
of ξ = f(η1, …, ηm, τ1, …, τn) = 1 is

$$\mathrm{Ch}\{\xi=1\} = \sum_{(x_1,\ldots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^m\varphi_i(x_i)\right)f^*(x_1,\ldots,x_m)\tag{B.49}$$

where

$$f^*(x_1,\ldots,x_m) = \begin{cases}\displaystyle\sup_{f(x_1,\ldots,x_m,y_1,\ldots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j), & \text{if }\displaystyle\sup_{f(x_1,\ldots,x_m,y_1,\ldots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j) < 0.5\\[3mm] 1-\displaystyle\sup_{f(x_1,\ldots,x_m,y_1,\ldots,y_n)=0}\ \min_{1\le j\le n}\nu_j(y_j), & \text{if }\displaystyle\sup_{f(x_1,\ldots,x_m,y_1,\ldots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)\ge 0.5,\end{cases}\tag{B.50}$$

$$\varphi_i(x_i) = \begin{cases}a_i, & \text{if }x_i=1\\1-a_i, & \text{if }x_i=0\end{cases}\quad(i=1,2,\ldots,m),\tag{B.51}$$

$$\nu_j(y_j) = \begin{cases}b_j, & \text{if }y_j=1\\1-b_j, & \text{if }y_j=0\end{cases}\quad(j=1,2,\ldots,n).\tag{B.52}$$
Proof: At first, when (x1, …, xm) is given, f(x1, …, xm, τ1, …, τn) is essentially a Boolean function of uncertain variables. It follows from the operational law of uncertain variables that

$$\mathcal{M}\{f(x_1,\ldots,x_m,\tau_1,\ldots,\tau_n)=1\} = f^*(x_1,\ldots,x_m)$$

that is determined by (B.50). On the other hand, it follows from the operational law of uncertain random variables that

$$\mathrm{Ch}\{\xi=1\} = \sum_{(x_1,\ldots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^m\varphi_i(x_i)\right)\mathcal{M}\{f(x_1,\ldots,x_m,\tau_1,\ldots,\tau_n)=1\}.$$
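The operational law just proved can be implemented by brute-force enumeration for small systems. The sketch below (names are mine; it assumes at least one uncertain element) follows (B.49)–(B.52) literally:

```python
from itertools import product

def boolean_chance(f, a, b):
    """Ch{f(eta_1..eta_m, tau_1..tau_n) = 1} for independent Boolean random
    variables with Pr{eta_i = 1} = a[i] and independent Boolean uncertain
    variables with M{tau_j = 1} = b[j], following (B.49)-(B.52)."""
    m, n = len(a), len(b)

    def uncertain_part(xs):
        # M{f(xs, tau) = 1} by the sup-min operational law (B.50)
        def supmin(target):
            best = 0.0
            for ys in product((0, 1), repeat=n):
                if f(*xs, *ys) == target:
                    nu = min(b[j] if ys[j] == 1 else 1 - b[j]
                             for j in range(n))
                    best = max(best, nu)
            return best
        s1 = supmin(1)
        return s1 if s1 < 0.5 else 1.0 - supmin(0)

    ch = 0.0
    for xs in product((0, 1), repeat=m):
        p = 1.0
        for i in range(m):
            p *= a[i] if xs[i] == 1 else 1 - a[i]
        ch += p * uncertain_part(xs)
    return ch
```

For a two-element series system Ch = a·b, and for a parallel system Ch = a + (1 − a)b, matching the closed forms one obtains by hand.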
Remark B.10: When the random variables disappear, the operational law
becomes

$$\mathcal{M}\{\xi=1\} = \begin{cases}\displaystyle\sup_{f(y_1,\ldots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j), & \text{if }\displaystyle\sup_{f(y_1,\ldots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j) < 0.5\\[3mm] 1-\displaystyle\sup_{f(y_1,\ldots,y_n)=0}\ \min_{1\le j\le n}\nu_j(y_j), & \text{if }\displaystyle\sup_{f(y_1,\ldots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)\ge 0.5.\end{cases}\tag{B.55}$$
For a k-out-of-(m + n) system,

$$\mathrm{Ch}\{\xi=1\} = \sum_{(x_1,\ldots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^m\varphi_i(x_i)\right)f^*(x_1,x_2,\ldots,x_m)\tag{B.60}$$

where

$$f^*(x_1,x_2,\ldots,x_m) = k\text{-max}\,[x_1,x_2,\ldots,x_m,b_1,b_2,\ldots,b_n],\tag{B.61}$$

$$\varphi_i(x_i) = \begin{cases}a_i, & \text{if }x_i=1\\1-a_i, & \text{if }x_i=0\end{cases}\quad(i=1,2,\ldots,m).\tag{B.62}$$

B.5 Expected Value
Let ξ be an uncertain random variable. Then its expected value is

$$E[\xi] = \int_0^{+\infty}\mathrm{Ch}\{\xi\ge r\}\,dr - \int_{-\infty}^0\mathrm{Ch}\{\xi\le r\}\,dr\tag{B.63}$$

provided that at least one of the two integrals is finite. If ξ has chance
distribution Φ, then

$$E[\xi] = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx.\tag{B.64}$$
Proof: It follows from the chance inversion theorem that for almost all
numbers x, we have Ch{ξ ≥ x} = 1 − Φ(x) and Ch{ξ ≤ x} = Φ(x). By using
the definition of the expected value operator, we obtain

$$E[\xi] = \int_0^{+\infty}\mathrm{Ch}\{\xi\ge x\}\,dx - \int_{-\infty}^0\mathrm{Ch}\{\xi\le x\}\,dx = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx.$$
Proof: It follows from the change of variables of integral and Theorem B.15
that the expected value is

$$E[\xi] = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx = \int_0^{+\infty}x\,d\Phi(x) + \int_{-\infty}^0 x\,d\Phi(x) = \int_{-\infty}^{+\infty}x\,d\Phi(x).$$
Proof: It follows from the change of variables of integral and Theorem B.15
that the expected value is

$$E[\xi] = \int_0^{+\infty}(1-\Phi(x))\,dx - \int_{-\infty}^0\Phi(x)\,dx = \int_{\Phi(0)}^1\Phi^{-1}(\alpha)\,d\alpha + \int_0^{\Phi(0)}\Phi^{-1}(\alpha)\,d\alpha = \int_0^1\Phi^{-1}(\alpha)\,d\alpha.$$
Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be uncertain variables (not necessarily independent). Then the uncertain random variable

$$\xi = f(\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n)\tag{B.67}$$

has an expected value

$$E[\xi] = \int_{\Re^m}E[f(y_1,\ldots,y_m,\tau_1,\ldots,\tau_n)]\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\tag{B.68}$$

where E[f(y1, …, ym, τ1, …, τn)] is the expected value of the uncertain variable f(y1, …, ym, τ1, …, τn) for any real numbers y1, …, ym.
Proof: For simplicity, we only prove the case m = n = 2. Write the
uncertainty distribution of f(y1, y2, τ1, τ2) as F(x; y1, y2) for any real numbers
y1 and y2. Then

$$E[f(y_1,y_2,\tau_1,\tau_2)] = \int_0^{+\infty}(1-F(x;y_1,y_2))\,dx - \int_{-\infty}^0 F(x;y_1,y_2)\,dx.$$

Since the chance distribution of ξ is Φ(x) = ∫ F(x; y1, y2) dΨ1(y1)dΨ2(y2) over ℜ², we get

$$E[\xi] = \int_0^{+\infty}\left(1-\int_{\Re^2}F(x;y_1,y_2)\,d\Psi_1(y_1)d\Psi_2(y_2)\right)dx - \int_{-\infty}^0\int_{\Re^2}F(x;y_1,y_2)\,d\Psi_1(y_1)d\Psi_2(y_2)\,dx$$
$$= \int_{\Re^2}\left(\int_0^{+\infty}(1-F(x;y_1,y_2))\,dx - \int_{-\infty}^0 F(x;y_1,y_2)\,dx\right)d\Psi_1(y_1)d\Psi_2(y_2)$$
$$= \int_{\Re^2}E[f(y_1,y_2,\tau_1,\tau_2)]\,d\Psi_1(y_1)d\Psi_2(y_2).$$

The theorem is proved.
In particular, for a random variable η and an uncertain variable τ,

$$E[\eta+\tau] = E[\eta] + E[\tau].\tag{B.69}$$
Theorem B.19 (Liu [141]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be
independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn,
respectively. If f(η1, …, ηm, τ1, …, τn) is a strictly increasing function or
a strictly decreasing function with respect to τ1, …, τn, then the uncertain
random variable

$$\xi = f(\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n)\tag{B.71}$$

has an expected value

$$E[\xi] = \int_{\Re^m}\int_0^1 f(y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha))\,d\alpha\,d\Psi_1(y_1)\cdots d\Psi_m(y_m).\tag{B.72}$$
In particular,

$$E[\eta\tau] = \int_\Re\int_0^1 y\,\Upsilon^{-1}(\alpha)\,d\alpha\,d\Psi(y).\tag{B.73}$$
Proof: Since τ1 and τ2 are independent uncertain variables, for any real
numbers y1 and y2, the functions f1(y1, τ1) and f2(y2, τ2) are also independent
uncertain variables. Thus

$$E[f_1(y_1,\tau_1) + f_2(y_2,\tau_2)] = E[f_1(y_1,\tau_1)] + E[f_2(y_2,\tau_2)].$$

Let Ψ1 and Ψ2 be the probability distributions of random variables η1 and
η2, respectively. Then we have

$$E[f_1(\eta_1,\tau_1)+f_2(\eta_2,\tau_2)] = \int_{\Re^2}E[f_1(y_1,\tau_1)+f_2(y_2,\tau_2)]\,d\Psi_1(y_1)d\Psi_2(y_2)$$
$$= \int_{\Re^2}(E[f_1(y_1,\tau_1)]+E[f_2(y_2,\tau_2)])\,d\Psi_1(y_1)d\Psi_2(y_2)$$
$$= \int_\Re E[f_1(y_1,\tau_1)]\,d\Psi_1(y_1) + \int_\Re E[f_2(y_2,\tau_2)]\,d\Psi_2(y_2)$$
$$= E[f_1(\eta_1,\tau_1)] + E[f_2(\eta_2,\tau_2)].$$

The theorem is proved.
B.6 Variance
Definition B.5 (Liu [140]) Let ξ be an uncertain random variable with finite
expected value e. Then the variance of ξ is

$$V[\xi] = E[(\xi-e)^2].\tag{B.76}$$
Theorem B.22 (Liu [140]) Let ξ be an uncertain random variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e} = 1.

Proof: We first assume V[ξ] = 0. It follows from the equation (B.77) that

$$V[\xi] = \int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,dx = 0,$$

which implies Ch{(ξ − e)² ≥ x} = 0 for any x > 0 and hence Ch{ξ = e} = 1.
Conversely, if Ch{ξ = e} = 1, then Ch{(ξ − e)² ≥ x} = 0 for any x > 0 and
V[ξ] = 0.

Since the variance cannot in general be derived from the chance distribution alone, for ξ = f(η1, …, ηm, τ1, …, τn) we have the following stipulation:

$$V[\xi] = \int_{\Re^m}\int_0^{+\infty}\left(1 - F(e+\sqrt{x};y_1,\ldots,y_m) + F(e-\sqrt{x};y_1,\ldots,y_m)\right)dx\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\tag{B.81}$$

where F(x; y1, …, ym) is the uncertainty distribution of the uncertain variable
f(y1, …, ym, τ1, …, τn) and is determined by Υ1, Υ2, …, Υn.
In particular, for ξ = η + τ with E[ξ] = e,

$$V[\eta+\tau] = \int_\Re\int_0^{+\infty}\left(1-\Upsilon(e+\sqrt{x}-y)+\Upsilon(e-\sqrt{x}-y)\right)dx\,d\Psi(y).\tag{B.83}$$

B.7 Law of Large Numbers
Theorem B.23 (Yao and Gao [227], Law of Large Numbers) Let η1, η2, …
be iid random variables with a common probability distribution Ψ, and let
τ1, τ2, … be iid uncertain variables. If f is a monotone function, then

$$S_n = f(\eta_1,\tau_1) + f(\eta_2,\tau_2) + \cdots + f(\eta_n,\tau_n)\tag{B.84}$$

is a sequence of uncertain random variables and

$$\frac{S_n}{n}\ \to\ \int_{-\infty}^{+\infty}f(y,\tau_1)\,d\Psi(y)\tag{B.85}$$

in the sense of convergence in distribution as n → ∞.
B.8 Uncertain Random Programming

Uncertain random programming (Liu [141]) is a type of mathematical programming involving uncertain random variables, with the standard form

$$\begin{cases}\displaystyle\min_x\ E[f(x,\xi)]\\ \text{subject to:}\\ \mathrm{Ch}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\quad j = 1,2,\ldots,p\end{cases}\tag{B.95}$$

where x is a decision vector, ξ is an uncertain random vector, f is an objective
function, gj are constraint functions, and αj are confidence levels, j = 1, 2, …, p.
Definition B.6 (Liu [141]) A vector x is called a feasible solution to the
uncertain random programming model (B.95) if

$$\mathrm{Ch}\{g_j(x,\xi)\le 0\}\ge\alpha_j\tag{B.96}$$

for j = 1, 2, …, p.
Definition B.7 (Liu [141]) A feasible solution x* is called an optimal solution to the uncertain random programming model (B.95) if

$$E[f(x^*,\xi)]\le E[f(x,\xi)]\tag{B.97}$$

for any feasible solution x.
By Theorem B.19, the objective can be computed as

$$E[f(x,\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n)] = \int_{\Re^m}\int_0^1 f(x,y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha))\,d\alpha\,d\Psi_1(y_1)\cdots d\Psi_m(y_m).$$
Theorem B.25 (Liu [141]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be
independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn,
respectively. If gj(x, η1, …, ηm, τ1, …, τn) is strictly increasing with respect
to τ1, …, τn, then the chance constraint

$$\mathrm{Ch}\{g_j(x,\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n)\le 0\}\ge\alpha_j\tag{B.99}$$

holds if and only if

$$\int_{\Re^m}G_j(y_1,\ldots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\ge\alpha_j\tag{B.100}$$

where Gj(y1, …, ym) is the root α of the equation

$$g_j(x,y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha)) = 0.\tag{B.101}$$

Proof: By Theorem B.13, the chance measure on the left side of (B.99) equals the integral in (B.100). Hence the chance constraint (B.99) holds if and only if (B.100) is true. The theorem is verified.
Remark B.13: Sometimes, the equation (B.101) may not have a root. In
this case, if

$$g_j(x,y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha)) < 0\tag{B.102}$$

for all α, then we set the root α = 1; and if

$$g_j(x,y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha)) > 0\tag{B.103}$$

for all α, then we set the root α = 0.
Theorem B.26 (Liu [141]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be
independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn,
respectively. Then the uncertain random programming

$$\begin{cases}\displaystyle\min_x\ E[f(x,\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n)]\\ \text{subject to:}\\ \mathrm{Ch}\{g_j(x,\eta_1,\ldots,\eta_m,\tau_1,\ldots,\tau_n)\le 0\}\ge\alpha_j,\quad j = 1,2,\ldots,p\end{cases}$$

is equivalent to the crisp mathematical programming

$$\begin{cases}\displaystyle\min_x\ \int_{\Re^m}\int_0^1 f(x,y_1,\ldots,y_m,\Upsilon_1^{-1}(\alpha),\ldots,\Upsilon_n^{-1}(\alpha))\,d\alpha\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\\ \text{subject to:}\\ \displaystyle\int_{\Re^m}G_j(y_1,\ldots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\ge\alpha_j,\quad j = 1,2,\ldots,p\end{cases}\tag{B.104}$$

where Gj(y1, …, ym) are the roots determined by Theorem B.25
for j = 1, 2, …, p, respectively.

Proof: It follows from Theorems B.24 and B.25 immediately.
After an uncertain random programming is converted into a crisp mathematical programming, we may solve it by any classical numerical methods
(e.g. iterative method) or intelligent algorithms (e.g. genetic algorithm).
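After the conversion, the objective of the crisp program is just a double integral. With the probability integral also transformed to (0, 1), it becomes two nested midpoint loops. The sketch below is mine: psi_inv is the inverse probability distribution of the random variable, ups_inv the inverse uncertainty distribution of the uncertain one (one of each, for brevity):

```python
def expected_value(f, psi_inv, ups_inv, n_y=200, n_a=200):
    """Midpoint-rule evaluation of
    E[f(eta, tau)] = int int_0^1 f(y, Ups^-1(alpha)) dalpha dPsi(y),
    with dPsi(y) rewritten as a uniform integral via y = Psi^-1(beta)."""
    total = 0.0
    for i in range(1, n_y + 1):
        y = psi_inv((i - 0.5) / n_y)
        for j in range(1, n_a + 1):
            total += f(y, ups_inv((j - 0.5) / n_a))
    return total / (n_y * n_a)
```

For η uniform on (0, 1) and τ linear on (0, 2), E[η + τ] = 0.5 + 1 = 1.5, which the midpoint rule reproduces exactly since the integrand is linear.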
B.9 Uncertain Random Risk Analysis
The study of uncertain random risk analysis was started by Liu and Ralescu
[143] with the concept of risk index.
Definition B.8 (Liu and Ralescu [143]) Assume that a system contains uncertain random factors 1 , 2 , , n , and has a loss function f . Then the risk
index is the chance measure that the system is loss-positive, i.e.,
Risk = Ch{f (1 , 2 , , n ) > 0}.
(B.105)
If all uncertain random factors degenerate to random ones, then the risk
index is the probability measure that the system is loss-positive (Roy [185]).
If all uncertain random factors degenerate to uncertain ones, then the risk
index is the uncertain measure that the system is loss-positive (Liu [119]).
Theorem B.27 (Liu and Ralescu [143], Risk Index Theorem) Assume a
system contains independent random variables η1, η2, …, ηm with probability
distributions Ψ1, Ψ2, …, Ψm and independent uncertain variables τ1, τ2, …, τn
with uncertainty distributions Υ1, Υ2, …, Υn. If the loss function is

$$f = T - \eta_1\wedge\eta_2\wedge\cdots\wedge\eta_m\wedge\tau_1\wedge\tau_2\wedge\cdots\wedge\tau_n,\tag{B.107}$$

then the risk index is

$$\text{Risk} = a + (1-a)b\tag{B.108}$$

where

$$a = 1 - (1-\Psi_1(T))(1-\Psi_2(T))\cdots(1-\Psi_m(T)),\tag{B.109}$$
$$b = \Upsilon_1(T)\vee\Upsilon_2(T)\vee\cdots\vee\Upsilon_n(T).\tag{B.110}$$
396
m with probability distributions 1 , 2 , , m and n elements whose lifetimes are independent uncertain variables 1 , 2 , , n with uncertainty distributions 1 , 2 , , n , respectively. If the loss is understood as the case
that the system fails before the time T , then the loss function is
f = T 1 2 m 1 2 n .
(B.111)
(B.112)
a = 1 (T )2 (T ) m (T ),
(B.113)
b = 1 (T ) 2 (T ) n (T ).
(B.114)
More generally, if the loss function f(η1, …, ηm, τ1, …, τn) is strictly monotone with respect to the uncertain variables, then the risk index is

$$\text{Risk} = \int_{\Re^m}G(y_1,\ldots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\tag{B.117}$$

where G(y1, …, ym) is the root α determined by the operational law of uncertain random variables.
Remark B.18: As a substitute for the risk index, Liu and Ralescu [144] suggested
a concept of value-at-risk,

$$\mathrm{VaR}(\alpha) = \sup\{x\ |\ \mathrm{Ch}\{f(\xi_1,\xi_2,\ldots,\xi_n)\ge x\}\ge\alpha\}.\tag{B.118}$$

Note that VaR(α) represents the maximum possible loss when α percent
of the right tail distribution is ignored. In other words, the loss will exceed VaR(α) with chance measure α. Let Φ be the chance distribution of
f(ξ1, ξ2, …, ξn). It is easy to verify that

$$\mathrm{VaR}(\alpha) = \Phi^{-1}(1-\alpha).\tag{B.119}$$
B.10 Uncertain Random Reliability Analysis

The study of uncertain random reliability analysis was started by Wen and
Kang [209] with the concept of reliability index.
Definition B.9 (Wen and Kang [209]) Assume a Boolean system has uncertain random elements ξ1, ξ2, ..., ξn and a structure function f. Then the reliability index is the chance measure that the system is working, i.e.,

Reliability = Ch{f(ξ1, ξ2, ..., ξn) = 1}.        (B.120)
Assume that the Boolean system has independent random elements η1, η2, ..., ηm with reliabilities a1, a2, ..., am in probability measure, and independent uncertain elements τ1, τ2, ..., τn with reliabilities b1, b2, ..., bn in uncertain measure. Wen and Kang [209] proved that the reliability index is

Reliability = Σ_{(x1,...,xm) ∈ {0,1}^m} ( ∏_{i=1}^{m} νi(xi) ) f*(x1, ..., xm)        (B.121)

where

f*(x1, ..., xm) =
{ sup_{f(x1,...,xm,y1,...,yn)=1} min_{1≤j≤n} νj(yj),        if sup_{f(x1,...,xm,y1,...,yn)=1} min_{1≤j≤n} νj(yj) < 0.5
{ 1 − sup_{f(x1,...,xm,y1,...,yn)=0} min_{1≤j≤n} νj(yj),    if sup_{f(x1,...,xm,y1,...,yn)=1} min_{1≤j≤n} νj(yj) ≥ 0.5        (B.122)

and

νi(xi) = { ai, if xi = 1;  1 − ai, if xi = 0 }        (i = 1, 2, ..., m),        (B.123)

νj(yj) = { bj, if yj = 1;  1 − bj, if yj = 0 }        (j = 1, 2, ..., n).        (B.124)
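The reliability index formula above can be evaluated by brute-force enumeration when m and n are small. The sketch below takes an arbitrary Boolean structure function and the reliabilities ai, bj as inputs; the series system at the end is only an illustrative check, with assumed values.

```python
# Sketch of the reliability formula: enumerate random-element states x,
# weight by prod nu_i(x_i), and evaluate f*(x) by the sup/min rule over
# uncertain-element states y. 'structure' is any Boolean structure function.
from itertools import product

def reliability(structure, a, b):
    m, n = len(a), len(b)
    total = 0.0
    for x in product((0, 1), repeat=m):
        weight = 1.0
        for ai, xi in zip(a, x):
            weight *= ai if xi == 1 else 1.0 - ai
        # sup over y with f = 1 (resp. f = 0) of min_j nu_j(y_j)
        sup1, sup0 = 0.0, 0.0
        for y in product((0, 1), repeat=n):
            nu = min(bj if yj == 1 else 1.0 - bj for bj, yj in zip(b, y)) if n else 1.0
            if structure(*x, *y) == 1:
                sup1 = max(sup1, nu)
            else:
                sup0 = max(sup0, nu)
        fstar = sup1 if sup1 < 0.5 else 1.0 - sup0
        total += weight * fstar
    return total

# series system: works iff every element works
series = lambda *z: int(all(z))
print(round(reliability(series, [0.9, 0.8], [0.7]), 6))  # 0.504
```

For the series system this reproduces the product a1·a2·(b1) = 0.9 × 0.8 × 0.7 = 0.504, as expected.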
In particular, for a k-out-of-(m + n) system, which works whenever at least k of its m + n elements work, the reliability index reduces to

Reliability = Σ_{(x1,...,xm) ∈ {0,1}^m} ( ∏_{i=1}^{m} νi(xi) ) f*(x1, x2, ..., xm)        (B.130)

where

f*(x1, x2, ..., xm) = k-max [x1, x2, ..., xm, b1, b2, ..., bn],        (B.131)

νi(xi) = { ai, if xi = 1;  1 − ai, if xi = 0 }        (i = 1, 2, ..., m).        (B.132)
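The k-out-of-(m + n) reduction replaces the inner sup/min computation by a single k-max, i.e., the k-th largest value. A sketch, with the element reliabilities assumed for illustration:

```python
# Sketch of the k-out-of-(m + n) reduction: for each realization x of the
# random elements, f*(x) is the k-th largest value among the realized x_i
# and the uncertain reliabilities b_j.
from itertools import product

def k_max(values, k):
    # k-th largest element of the list
    return sorted(values, reverse=True)[k - 1]

def k_out_of_reliability(k, a, b):
    total = 0.0
    for x in product((0, 1), repeat=len(a)):
        weight = 1.0
        for ai, xi in zip(a, x):
            weight *= ai if xi == 1 else 1.0 - ai
        total += weight * k_max(list(x) + list(b), k)
    return total

# k = 1 with one random and one uncertain element is a parallel system:
# a + b - ab = 0.9 + 0.6 - 0.54 = 0.96
print(round(k_out_of_reliability(1, [0.9], [0.6]), 6))  # 0.96
```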
B.11 Uncertain Random Graph
In classical graph theory, the edges and vertices are all deterministic: each either exists or does not. In practical applications, however, indeterminacy factors inevitably appear in graphs. Thus it is reasonable to assume that some edges exist with certain degrees in probability measure and others exist with certain degrees in uncertain measure. In order to model this situation, let us introduce the concept of uncertain random graph by using chance theory.
We say a graph is of order n if it has n vertices labeled 1, 2, ..., n. In this section, we assume the graph is always of order n and has a collection of vertices,

V = {1, 2, ..., n}.        (B.133)
Let us define two collections of edges,

U = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are uncertain edges},        (B.134)
R = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are random edges}.        (B.135)
Note that all deterministic edges are regarded as special uncertain ones. Then U ∪ R = {(i, j) | 1 ≤ i < j ≤ n} contains n(n − 1)/2 edges. We will call

T = [ α11  α12  ···  α1n ]
    [ α21  α22  ···  α2n ]
    [  ···               ]
    [ αn1  αn2  ···  αnn ]        (B.136)

the edge truth value matrix, where αij are the probabilities that the random edges (i, j) ∈ R exist and the uncertain measures that the uncertain edges (i, j) ∈ U exist, respectively.
For example, the symmetric matrix

T = [ 0    0.8  0    0.5 ]
    [ 0.8  0    1    0   ]
    [ 0    1    0    0.3 ]
    [ 0.5  0    0.3  0   ]

may serve as the edge truth value matrix of an uncertain random graph of order 4.
Write

X = [ x11  x12  ···  x1n ]
    [ x21  x22  ···  x2n ]
    [  ···               ]
    [ xn1  xn2  ···  xnn ]        (B.137)

and

X = { X |  xij = 0 or 1, if (i, j) ∈ R;
           xij = 0, if (i, j) ∈ U;
           xij = xji, i, j = 1, 2, ..., n;
           xii = 0, i = 1, 2, ..., n }.        (B.138)
Similarly, write

Y = [ y11  y12  ···  y1n ]
    [ y21  y22  ···  y2n ]
    [  ···               ]
    [ yn1  yn2  ···  ynn ]        (B.139)

and

Y = { X |  xij = 0 or 1, if (i, j) ∈ U;
           xij = xji, i, j = 1, 2, ..., n;
           xii = 0, i = 1, 2, ..., n }.        (B.140)
Example B.6: (Connectivity Index) An uncertain random graph is connected for some realizations of the uncertain and random edges, and disconnected for some other realizations. In order to show how likely it is that an uncertain random graph is connected, the connectivity index of an uncertain random graph is defined as the chance measure that the uncertain random graph is connected.
Let (V, U, R, T) be an uncertain random graph. Liu [130] proved that the connectivity index is

α = Σ_{Y ∈ X} ( ∏_{(i,j) ∈ R} νij(Y) ) f*(Y)        (B.141)
where

f*(Y) =
{ sup_{X ⊃ Y, f(X)=1} min_{(i,j) ∈ U} νij(X),        if sup_{X ⊃ Y, f(X)=1} min_{(i,j) ∈ U} νij(X) < 0.5
{ 1 − sup_{X ⊃ Y, f(X)=0} min_{(i,j) ∈ U} νij(X),    if sup_{X ⊃ Y, f(X)=1} min_{(i,j) ∈ U} νij(X) ≥ 0.5
and

νij(X) = { αij, if xij = 1;  1 − αij, if xij = 0 }        for (i, j) ∈ U,        (B.142)

f(X) = { 1, if I + X + X² + ··· + X^(n−1) > 0;  0, otherwise. }        (B.143)
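The connectivity test f(X) in (B.143) can be checked mechanically: the graph with adjacency matrix X is connected if and only if every entry of I + X + X² + ··· + X^(n−1) is positive. A minimal sketch; the two small graphs at the end are assumed for illustration:

```python
# Sketch of the connectivity test f(X) in (B.143): accumulate
# S = I + X + X^2 + ... + X^(n-1) and check that every entry is positive.
def is_connected(X):
    n = len(X)
    S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # starts at I
    P = [row[:] for row in S]  # P holds successive powers of X, starting at X^0
    for _ in range(n - 1):
        P = [[sum(P[i][k] * X[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return int(all(S[i][j] > 0 for i in range(n) for j in range(n)))

path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # 1-2-3 path: connected
split = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]  # vertex 3 isolated: disconnected
print(is_connected(path), is_connected(split))  # 1 0
```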
Remark B.19: If the uncertain random graph becomes a random graph, then the connectivity index is

α = Σ_{X ∈ X} ( ∏_{1≤i<j≤n} νij(X) ) f(X)        (B.144)

where

X = { X |  xij = 0 or 1, i, j = 1, 2, ..., n;
           xij = xji, i, j = 1, 2, ..., n;
           xii = 0, i = 1, 2, ..., n }.        (B.145)
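For a purely random graph, the sum (B.144) can be evaluated by brute force for small n: enumerate all symmetric 0-1 matrices, weight each by the product of αij or 1 − αij, and add the weight whenever the realization is connected. The 3-vertex edge probabilities below are assumed for illustration.

```python
# Sketch of (B.144) for a purely random graph of small order.
from itertools import product

def is_connected(X):
    # connected iff every entry of I + X + ... + X^(n-1) is positive (B.143)
    n = len(X)
    S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in S]
    for _ in range(n - 1):
        P = [[sum(P[i][k] * X[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return all(S[i][j] > 0 for i in range(n) for j in range(n))

def connectivity_index(alpha):
    n = len(alpha)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    index = 0.0
    for bits in product((0, 1), repeat=len(pairs)):
        X = [[0] * n for _ in range(n)]
        w = 1.0
        for (i, j), xij in zip(pairs, bits):
            X[i][j] = X[j][i] = xij
            w *= alpha[i][j] if xij else 1.0 - alpha[i][j]
        if is_connected(X):
            index += w
    return index

alpha = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]  # assumed edge probabilities
print(connectivity_index(alpha))  # 0.5
```

With all edge probabilities 1/2, exactly 4 of the 8 labeled graphs on 3 vertices are connected, giving index 4/8 = 0.5.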
Remark B.20: (Gao and Gao [44]) If the uncertain random graph becomes an uncertain graph, then the connectivity index is

α =
{ sup_{X ∈ X, f(X)=1} min_{1≤i<j≤n} νij(X),        if sup_{X ∈ X, f(X)=1} min_{1≤i<j≤n} νij(X) < 0.5
{ 1 − sup_{X ∈ X, f(X)=0} min_{1≤i<j≤n} νij(X),    if sup_{X ∈ X, f(X)=1} min_{1≤i<j≤n} νij(X) ≥ 0.5

where X becomes

X = { X |  xij = 0 or 1, i, j = 1, 2, ..., n;
           xij = xji, i, j = 1, 2, ..., n;
           xii = 0, i = 1, 2, ..., n }.        (B.146)
Exercise B.20: An Euler circuit in a graph is a circuit that passes through each edge exactly once. In other words, a graph has an Euler circuit if it can be drawn on paper without ever lifting the pencil and without retracing any edge. It has been proved that a graph has an Euler circuit if and only if it is connected and each vertex has an even degree (the degree being the number of edges adjacent to that vertex). In order to measure how likely it is that an uncertain random graph has an Euler circuit, the Euler index is defined as the chance measure that the uncertain random graph has an Euler circuit. Please give a formula for calculating the Euler index.
B.12 Uncertain Random Network
The term network is a synonym for a weighted graph, where the weights may
be understood as cost, distance or time consumed. In this section, we assume
the uncertain random network is always of order n, and has a collection of
nodes,
N = {1, 2, ..., n}
(B.147)
where 1 is always the source node, and n is always the destination node.
Let us define two collections of arcs,

U = {(i, j) | (i, j) are uncertain arcs},        (B.148)
R = {(i, j) | (i, j) are random arcs}.        (B.149)
Note that all deterministic arcs are regarded as special uncertain ones. Let wij denote the weights of the arcs (i, j) ∈ U ∪ R. Then wij are uncertain variables if (i, j) ∈ U, and random variables if (i, j) ∈ R. Write

W = {wij | (i, j) ∈ U ∪ R}.        (B.150)
and f may be calculated by the Dijkstra algorithm (Dijkstra [30]) for each given realization of the arc weights.
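For one fixed realization of all arc weights, the shortest path length f is an ordinary deterministic quantity, and the Dijkstra algorithm computes it. A minimal sketch; the 4-node network and its weights are assumptions for illustration:

```python
# Minimal Dijkstra sketch for evaluating the shortest-path length f for one
# realization of the arc weights (node 1 = source, node n = destination).
import heapq

def dijkstra(n, weights):
    # weights: dict {(i, j): w} on an undirected network with nodes 1..n
    adj = {i: [] for i in range(1, n + 1)}
    for (i, j), w in weights.items():
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = {i: float('inf') for i in range(1, n + 1)}
    dist[1] = 0.0
    heap = [(0.0, 1)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist[n]

# assumed 4-node example: route 1-2-4 costs 5, route 1-3-4 costs 4
print(dijkstra(4, {(1, 2): 2, (2, 4): 3, (1, 3): 1, (3, 4): 3}))  # 4.0
```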
Remark B.21: If the uncertain random network becomes a random network, then the probability distribution of the shortest path length is

Φ(x) = ∫_{f(yij, (i,j)∈R) ≤ x}  ∏_{(i,j) ∈ R} dΨij(yij).        (B.158)
B.13
Bibliographic Notes
As a breakthrough approach, uncertain network was first explored by Liu [120] in 2010 for modeling project scheduling problems with uncertain duration times. More generally, Liu [130] assumed that some weights are random variables and others are uncertain variables, and initiated the concept of uncertain random network. Finally, it is worth mentioning that Liu [145] designed an uncertain random logic, and Yao and Gao [228] initiated the study of uncertain random processes in the light of chance theory.
Appendix C
Frequently Asked Questions
This appendix will answer some frequently asked questions related to uncertainty theory and applications.
C.1
The word uncertainty has been widely used or abused. In a wide sense,
Knight (1921) and Keynes (1936) used uncertainty to represent any nonprobabilistic phenomena. This type of uncertainty is also known as Knightian
uncertainty, Keynesian uncertainty, or true uncertainty. Unfortunately, it
seems impossible for us to develop a decent mathematical theory to deal
with such a broad class of uncertainty because non-probability represents
too many things. In a narrow sense, Liu (2007) declared that uncertainty is
anything that satisfies the axioms of uncertainty theory. It is emphasized that
uncertainty in the narrow sense is a scientific terminology, but uncertainty
in the wide sense is not. Some people think that uncertainty and probability
are synonymous. This is a wrong viewpoint either in the wide sense or in the
narrow sense.
C.2
Probability theory (Kolmogorov, 1933) is a branch of mathematics for studying the behavior of random phenomena, while uncertainty theory (Liu, 2007)
is a branch of mathematics for modeling human uncertainty. What is the
difference between probability theory and uncertainty theory? The main difference is that the product probability measure of a compound event is the product of the probability measures of the individual events, i.e.,

Pr{Λ1 × Λ2} = Pr{Λ1} × Pr{Λ2},        (C.1)

while the product uncertain measure of a compound event is the minimum of the uncertain measures of the individual events, i.e.,

M{Λ1 × Λ2} = M{Λ1} ∧ M{Λ2}.        (C.2)
This difference implies that random variables and uncertain variables obey
different operational laws.
Probability theory and uncertainty theory are complementary mathematical systems that provide two acceptable mathematical models of the indeterminate world. Probability is interpreted as frequency, while uncertainty is interpreted as personal belief degree.
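The difference between the two product measures can be seen with two independent events; the measures 0.8 and 0.9 below are purely illustrative:

```python
# Illustration of the two product measures: probability multiplies,
# uncertain measure takes the minimum. Event measures are illustrative.
p1, p2 = 0.8, 0.9
product_probability = p1 * p2      # Pr{L1 x L2} = Pr{L1} * Pr{L2}
product_uncertainty = min(p1, p2)  # M{L1 x L2} = M{L1} ^ M{L2}
print(round(product_probability, 2), product_uncertainty)  # 0.72 0.8
```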
C.3
We frequently lack observed data, and the estimated probability distribution may then be far from the cumulative frequency. Liu [122] asserted that probability theory may lead to counterintuitive results in this case. However, some people still affirm that probability theory is the only legitimate approach.
Perhaps this misconception is rooted in Cox's theorem [23] that any measure of belief is isomorphic to a probability measure. However, uncertain measure is considered coherent but is not isomorphic to any probability measure. What goes wrong with Cox's theorem? Personally, I think that Cox's theorem presumes the truth value of the conjunction P ∧ Q is a twice differentiable function f of the truth values of the two propositions P and Q, i.e.,

T(P ∧ Q) = f(T(P), T(Q)),

and then excludes uncertain measure from the start, because the function f(x, y) = x ∧ y used in uncertainty theory is not differentiable with respect to x and y. In fact, there is no evidence that the truth value of a conjunction is completely determined by the truth values of the individual propositions, let alone by a twice differentiable function.
On the one hand, I fully recognize that probability theory is a legitimate approach to dealing with frequencies. On the other hand, it cannot be that probability theory is the unique tool for modeling indeterminacy. In fact, it has been demonstrated in this book that uncertainty theory is a consistent mathematical system that succeeds in dealing with belief degrees.
C.4
Human belief degrees obey the maximum rule

Pos{A ∪ B} = Pos{A} ∨ Pos{B}        (C.4)

only for independent events A and B. However, a lot of surveys showed that the measure of the union of events is usually greater than the maximum when the events are not independent. This fact indicates that human brains do not behave like fuzziness.
Both uncertainty theory and possibility theory attempt to model human
belief degrees, where the former uses the tool of uncertain measure and the
latter uses the tool of possibility measure. Thus they are complete competitors.
C.5
Suppose that the bridge strength is regarded as a fuzzy variable ξ with membership function

μ(x) = { 0,             if x ≤ 80
         (x − 80)/10,   if 80 ≤ x ≤ 90
         1,             if 90 ≤ x ≤ 110
         (120 − x)/10,  if 110 ≤ x ≤ 120
         0,             if x ≥ 120,        (C.5)

that is just the trapezoidal fuzzy variable (80, 90, 110, 120). Please do not argue about why I choose such a membership function, because it is not important for the focus of the debate. Based on the membership function and the definition of possibility measure

Pos{ξ ∈ B} = sup_{x ∈ B} μ(x),        (C.6)
possibility theory will immediately conclude the following three propositions:
(a) the bridge strength is "exactly 100 tons" with possibility measure 1,
(b) the bridge strength is "not 100 tons" with possibility measure 1,
(c) "exactly 100 tons" is as possible as "not 100 tons".
The first proposition says we are 100% sure that the bridge strength is exactly 100 tons, neither less nor more. What a coincidence that would be! It is doubtless that the belief degree of "exactly 100 tons" is almost zero, and nobody is so naive as to expect that "exactly 100 tons" is the true bridge strength. The second proposition sounds fine. The third proposition says "exactly 100 tons" and "not 100 tons" have the same possibility measure, so we would have to regard them as equally likely. No human being can accept this conclusion, because "exactly 100 tons" is almost impossible compared with "not 100 tons". This paradox shows that indeterminate quantities like the bridge strength cannot be quantified by possibility measure, and hence they are not fuzzy concepts.
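The paradox can be checked numerically. The sketch below encodes the trapezoidal membership function (80, 90, 110, 120) of (C.5) and evaluates the possibilities of "exactly 100 tons" and "not 100 tons" under (C.6), the latter approximated on a fine grid:

```python
# The bridge-strength paradox in numbers: with the trapezoidal membership
# function (80, 90, 110, 120), both "exactly 100 tons" and "not 100 tons"
# receive possibility measure 1.
def mu(x):
    if x <= 80 or x >= 120:
        return 0.0
    if x <= 90:
        return (x - 80) / 10.0
    if x <= 110:
        return 1.0
    return (120 - x) / 10.0

# Pos{B} = sup of mu over B, approximated on a fine grid of strengths
grid = [x / 100.0 for x in range(7000, 13001)]
pos_exactly_100 = mu(100.0)
pos_not_100 = max(mu(x) for x in grid if x != 100.0)
print(pos_exactly_100, pos_not_100)  # 1.0 1.0
```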
C.6
Suppose that "young" is regarded as a fuzzy set with membership function

μ(x) = { 0,            if x ≤ 15
         (x − 15)/5,   if 15 ≤ x ≤ 20
         1,            if 20 ≤ x ≤ 30
         (40 − x)/10,  if 30 ≤ x ≤ 40
         0,            if x ≥ 40.

It follows from fuzzy set theory that "young" may take any α-cut of μ as its value. Thus we immediately conclude two propositions:
(a) "young" includes [20yr, 30yr] with possibility measure 1,
(b) "young" is included in [20yr, 30yr] with possibility measure 1.
The first proposition sounds good. However, the second proposition seems unacceptable, because the belief degree that "young" lies within [20yr, 30yr] cannot possibly reach 1 (in fact, the belief degree should be almost 0, since 19yr and 31yr are also nearly sure to be "young"). This result says that "young" cannot be regarded as a fuzzy set.
C.7
It is assumed in stochastic finance theory that the stock price Xt follows the stochastic differential equation

dXt = e·Xt dt + σ·Xt dWt        (C.7)

where e is the log-drift, σ is the log-diffusion, and Wt is a Wiener process. It follows that

Wt = (ln Xt − ln X0 − (e − σ²/2)t) / σ        (C.9)

whose increment is

ΔWt = (ln X_{t+Δt} − ln Xt − (e − σ²/2)Δt) / σ.        (C.10)

Write

A = −(e − σ²/2)Δt / σ.        (C.11)
Note that the stock price Xt is actually a step function of time with a finite number of jumps, although it looks like a curve. During a fixed period (e.g. one week), without loss of generality, we assume that Xt is observed to have 100 jumps. Now we divide the period into 10000 equal intervals. Then we may observe 10000 samples of Xt. It follows from (C.10) that ΔWt has 10000 samples that consist of 9900 A's and 100 other numbers:
A, A, ..., A (9900 terms), B, C, ..., Z (100 terms).        (C.12)
Nobody can believe that those 10000 samples follow a normal probability distribution with expected value 0 and variance Δt. This fact contradicts the property of a Wiener process that the increment ΔWt is a normal random variable. Therefore, the real stock price Xt does not follow the stochastic differential equation.
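The counting argument can be reproduced with a toy simulation. The jump times and jump sizes below are assumptions; the point is only that a path with 100 jumps sampled on 10000 intervals produces 9900 identical increments:

```python
# Liu's observation in numbers: a price path with only 100 jumps in a week,
# sampled on 10000 equal intervals, yields Delta(ln X_t) = 0 on 9900
# intervals -- so Delta W_t equals the single constant A of (C.11) 9900
# times, which no normal sample looks like.
import random

random.seed(0)
jump_times = set(random.sample(range(1, 10000), 100))  # 100 assumed jump instants

log_increments = []
for t in range(10000):
    if t in jump_times:
        log_increments.append(random.choice([-0.01, 0.01]))  # a genuine jump in ln X_t
    else:
        log_increments.append(0.0)  # flat piece: Delta ln X_t = 0, hence Delta W_t = A

n_equal_A = sum(1 for d in log_increments if d == 0.0)
print(n_equal_A)  # 9900
```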
Figure C.1: There does not exist any continuous probability distribution (curve) that can approximate the frequency (histogram) of ΔWt, about 99% of whose samples equal the constant A. Hence it is impossible that the real stock price Xt follows any Itô stochastic differential equation.
Perhaps some people think that the stock price does behave like a geometric Wiener process (or an Ornstein-Uhlenbeck process) macroscopically, even though they recognize the paradox microscopically. However, as the very core of stochastic finance theory, Itô's calculus is built on the microscopic structure (i.e., the differential dWt) of the Wiener process rather than on its macroscopic structure. More precisely, Itô's calculus depends on the presumption that dWt is a normal random variable with expected value 0 and variance dt. This unreasonable presumption is what causes the second-order term in
Itô's formula,

dXt = (∂h/∂t)(t, Wt) dt + (∂h/∂w)(t, Wt) dWt + (1/2)(∂²h/∂w²)(t, Wt) dt.        (C.13)

In fact, the increment of a stock price cannot follow any continuous probability distribution.
On the basis of the above paradox, personally I do not think Itô's calculus can serve as the essential tool of finance theory, because Itô's stochastic differential equation cannot model stock prices. As a substitute, uncertain calculus may play this role.
Bibliography
[1] Barbacioru IC, Uncertainty functional differential equations for finance, Surveys in Mathematics and its Applications, Vol.5, 275-284, 2010.
[2] Bedford T, and Cooke MR, Probabilistic Risk Analysis, Cambridge University
Press, 2001.
[3] Bellman RE, Dynamic Programming, Princeton University Press, New Jersey,
1957.
[4] Bellman RE, and Zadeh LA, Decision making in a fuzzy environment, Management Science, Vol.17, 141-164, 1970.
[5] Black F, and Scholes M, The pricing of options and corporate liabilities, Journal of Political Economy, Vol.81, 637-654, 1973.
[6] Bouchon-Meunier B, Mesiar R, and Ralescu DA, Linear non-additive set-functions, International Journal of General Systems, Vol.33, No.1, 89-98, 2004.
[7] Buckley JJ, Possibility and necessity in optimization, Fuzzy Sets and Systems,
Vol.25, 1-13, 1988.
[8] Charnes A, and Cooper WW, Management Models and Industrial Applications of Linear Programming, Wiley, New York, 1961.
[9] Chen XW, and Liu B, Existence and uniqueness theorem for uncertain differential equations, Fuzzy Optimization and Decision Making, Vol.9, No.1,
69-81, 2010.
[10] Chen XW, American option pricing formula for uncertain financial market,
International Journal of Operations Research, Vol.8, No.2, 32-37, 2011.
[11] Chen XW, and Ralescu DA, A note on truth value in uncertain logic, Expert
Systems with Applications, Vol.38, No.12, 15582-15586, 2011.
[12] Chen XW, and Dai W, Maximum entropy principle for uncertain variables,
International Journal of Fuzzy Systems, Vol.13, No.3, 232-236, 2011.
[13] Chen XW, Kar S, and Ralescu DA, Cross-entropy measure of uncertain variables, Information Sciences, Vol.201, 53-60, 2012.
[14] Chen XW, Variation analysis of uncertain stationary independent increment
process, European Journal of Operational Research, Vol.222, No.2, 312-316,
2012.
[15] Chen XW, and Ralescu DA, B-spline method of uncertain statistics with
applications to estimate travel distance, Journal of Uncertain Systems, Vol.6,
No.4, 256-262, 2012.
[16] Chen XW, Liu YH, and Ralescu DA, Uncertain stock model with periodic
dividends, Fuzzy Optimization and Decision Making, Vol.12, No.1, 111-123,
2013.
[17] Chen XW, and Ralescu DA, Liu process and uncertain calculus, Journal of
Uncertainty Analysis and Applications, Vol.1, Article 3, 2013.
[18] Chen XW, and Gao J, Uncertain term structure model of interest rate, Soft
Computing, Vol.17, No.4, 597-604, 2013.
[19] Chen XW, Uncertain Calculus and Uncertain Finance, http://orsc.edu.cn/
xwchen/ucf.pdf.
[20] Chen Y, Fung RYK, and Yang J, Fuzzy expected value modelling approach for
determining target values of engineering characteristics in QFD, International
Journal of Production Research, Vol.43, No.17, 3583-3604, 2005.
[21] Chen Y, Fung RYK, and Tang JF, Rating technical attributes in fuzzy QFD
by integrating fuzzy weighted average method and fuzzy expected value operator, European Journal of Operational Research, Vol.174, No.3, 1553-1566,
2006.
[22] Choquet G, Theory of capacities, Annales de l'Institut Fourier, Vol.5, 131-295, 1954.
[23] Cox RT, Probability, frequency and reasonable expectation, American Journal of Physics, Vol.14, 1-13, 1946.
[24] Dai W, and Chen XW, Entropy of function of uncertain variables, Mathematical and Computer Modelling, Vol.55, Nos.3-4, 754-760, 2012.
[25] Dantzig GB, Linear programming under uncertainty, Management Science,
Vol.1, 197-206, 1955.
[26] Das B, Maity K, Maiti A, A two warehouse supply-chain model under possibility/necessity/credibility measures, Mathematical and Computer Modelling,
Vol.46, No.3-4, 398-409, 2007.
[27] De Cooman G, Possibility theory I-III, International Journal of General Systems, Vol.25, 291-371, 1997.
[28] De Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[29] Dempster AP, Upper and lower probabilities induced by a multivalued mapping, Annals of Mathematical Statistics, Vol.38, No.2, 325-339, 1967.
[30] Dijkstra EW, A note on two problems in connexion with graphs, Numerische Mathematik, Vol.1, 269-271, 1959.
[31] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[32] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.
[33] Elkan C, The paradoxical controversy over fuzzy logic, IEEE Expert, Vol.9,
No.4, 47-49, 1994.
[34] Erdős P, and Rényi A, On random graphs, Publicationes Mathematicae, Vol.6, 290-297, 1959.
[35] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[36] Feng Y, and Yang LX, A two-objective fuzzy k-cardinality assignment problem, Journal of Computational and Applied Mathematics, Vol.197, No.1, 233-244, 2006.
[37] Feng YQ, Wu WC, Zhang BM, and Li WY, Power system operation risk
assessment using credibility theory, IEEE Transactions on Power Systems,
Vol.23, No.3, 1309-1318, 2008.
[38] Frank H, and Hakimi SL, Probabilistic flows through a communication network, IEEE Transactions on Circuit Theory, Vol.12, 413-414, 1965.
[39] Fung RYK, Chen YZ, and Chen L, A fuzzy expected value-based goal programing model for product planning using quality function deployment, Engineering Optimization, Vol.37, No.6, 633-647, 2005.
[40] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[41] Gao J, Uncertain bimatrix game with applications, Fuzzy Optimization and
Decision Making, Vol.12, No.1, 65-78, 2013.
[42] Gao X, Some properties of continuous uncertain measure, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.17, No.3, 419-426, 2009.
[43] Gao X, Gao Y, and Ralescu DA, On Lius inference rule for uncertain systems, International Journal of Uncertainty, Fuzziness and Knowledge-Based
Systems, Vol.18, No.1, 1-11, 2010.
[44] Gao XL, and Gao Y, Connectedness index of uncertain graphs, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.21, No.1,
127-137, 2013.
[45] Gao Y, Shortest path problem with uncertain arc lengths, Computers and
Mathematics with Applications, Vol.62, No.6, 2591-2600, 2011.
[46] Gao Y, Uncertain inference control for balancing inverted pendulum, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 481-492, 2012.
[47] Gao Y, Existence and uniqueness theorem on uncertain differential equations
with local Lipschitz condition, Journal of Uncertain Systems, Vol.6, No.3,
223-232, 2012.
[48] Ge XT, and Zhu Y, Existence and uniqueness theorem for uncertain delay
differential equations, Journal of Computational Information Systems, Vol.8,
No.20, 8341-8347, 2012.
[49] Ge XT, and Zhu Y, A necessary condition of optimality for uncertain optimal control problem, Fuzzy Optimization and Decision Making, Vol.12, No.1, 41-51, 2013.
[50] Gilbert EN, Random graphs, Annals of Mathematical Statistics, Vol.30, No.4,
1141-1144, 1959.
[51] Guo HY, and Wang XS, Variance of uncertain random variables, http://orsc.
edu.cn/online/130411.pdf.
[52] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[53] Ha MH, Li Y, and Wang XF, Fuzzy knowledge representation and reasoning
using a generalized fuzzy petri net and a similarity measure, Soft Computing,
Vol.11, No.4, 323-327, 2007.
[54] Han SW, and Peng ZX, The maximum flow problem of uncertain network,
http://orsc.edu.cn/online/101228.pdf.
[55] He Y, and Xu JP, A class of random fuzzy programming model and its application to vehicle routing problem, World Journal of Modelling and Simulation, Vol.1, No.1, 3-11, 2005.
[56] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[57] Hou YC, Distance between uncertain random variables, http://orsc.edu.cn/
online/130510.pdf.
[58] Hou YC, Subadditivity of chance measure, http://orsc.edu.cn/online/
130602.pdf.
[59] Inuiguchi M, and Ramík J, Possibilistic linear programming: A brief review
of fuzzy mathematical programming and a comparison with stochastic programming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111,
No.1, 3-28, 2000.
[60] Ito K, Stochastic integral, Proceedings of the Japan Academy Series A, Vol.20,
No.8, 519-524, 1944.
[61] Ito K, On stochastic differential equations, Memoirs of the American Mathematical Society, No.4, 1-51, 1951.
[62] Iwamura K, and Kageyama M, Exact construction of Liu process, Applied
Mathematical Sciences, Vol.6, No.58, 2871-2880, 2012.
[63] Iwamura K, and Xu YL, Estimating the variance of the square of canonical
process, Applied Mathematical Sciences, Vol.7, No.75, 3731-3738, 2013.
[64] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[65] Ji XY, and Shao Z, Model and algorithm for bilevel newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[66] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.
[67] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main developments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[68] Kacprzyk J, and Yager RR, Linguistic summaries of data using fuzzy logic,
International Journal of General Systems, Vol.30, 133-154, 2001.
[69] Kahneman D, and Tversky A, Prospect theory: An analysis of decision under
risk, Econometrica, Vol.47, No.2, 263-292, 1979.
[70] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[71] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of randomness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[72] Ke H, and Liu B, Fuzzy project scheduling problem and its hybrid intelligent
algorithm, Applied Mathematical Modelling, Vol.34, No.2, 301-308, 2010.
[73] Ke H, Ma WM, Gao X, and Xu WH, New fuzzy models for time-cost tradeoff problem, Fuzzy Optimization and Decision Making, Vol.9, No.2, 219-231,
2010.
[74] Ke H, Uncertain random multilevel programming with application to product
control problem, http://orsc.edu.cn/online/121027.pdf.
[75] Keynes JM, The General Theory of Employment, Interest, and Money, Harcourt, New York, 1936.
[76] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random variables, Proceedings of the Royal Society of London Series A, Vol.407, 171-182, 1986.
[77] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[78] Knight FH, Risk, Uncertainty, and Profit, Houghton Mifflin, Boston, 1921.
[79] Kolmogorov AN, Grundbegriffe der Wahrscheinlichkeitsrechnung, Julius
Springer, Berlin, 1933.
[80] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[81] Kwakernaak H, Fuzzy random variablesI: Definitions and theorems, Information Sciences, Vol.15, 1-29, 1978.
[82] Kwakernaak H, Fuzzy random variablesII: Algorithms and examples for the
discrete case, Information Sciences, Vol.17, 253-278, 1979.
[83] Li J, Xu JP, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[84] Li PK, and Liu B, Entropy of credibility distributions for fuzzy variables,
IEEE Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[85] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.
[86] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[87] Li X, and Liu B, Maximum entropy principle for fuzzy variables, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[88] Li X, and Liu B, On distance between fuzzy variables, Journal of Intelligent
& Fuzzy Systems, Vol.19, No.3, 197-204, 2008.
[89] Li X, and Liu B, Chance measure for hybrid events with fuzziness and randomness, Soft Computing, Vol.13, No.2, 105-115, 2009.
[90] Li X, and Liu B, Foundation of credibilistic logic, Fuzzy Optimization and
Decision Making, Vol.8, No.1, 91-102, 2009.
[91] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[92] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[93] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[94] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[95] Liu B, and Iwamura K, Chance constrained programming with fuzzy parameters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[96] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[97] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[98] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transactions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[99] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[100] Liu B, Uncertain Programming, Wiley, New York, 1999.
[101] Liu B, Dependent-chance programming in fuzzy environments, Fuzzy Sets
and Systems, Vol.109, No.1, 97-106, 2000.
[102] Liu B, and Iwamura K, Fuzzy programming with fuzzy decisions and fuzzy
simulation-based genetic algorithm, Fuzzy Sets and Systems, Vol.122, No.2,
253-262, 2001.
[103] Liu B, Fuzzy random chance-constrained programming, IEEE Transactions
on Fuzzy Systems, Vol.9, No.5, 713-720, 2001.
[104] Liu B, Fuzzy random dependent-chance programming, IEEE Transactions on
Fuzzy Systems, Vol.9, No.5, 721-726, 2001.
[105] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Heidelberg, 2002.
[147] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value operator, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[148] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[149] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[150] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[151] Liu YK, Fuzzy programming with recourse, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[152] Liu YK, and Gao J, The independence of fuzzy variables with applications to
fuzzy random optimization, International Journal of Uncertainty, Fuzziness
& Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[153] Lu M, On crisp equivalents and solutions of fuzzy programming with different chance measures, Information: An International Journal, Vol.6, No.2, 125-133, 2003.
[154] Luhandjula MK, Fuzzy stochastic linear programming: Survey and future
research directions, European Journal of Operational Research, Vol.174, No.3,
1353-1367, 2006.
[155] Maiti MK, and Maiti MA, Fuzzy inventory model with two warehouses under
possibility constraints, Fuzzy Sets and Systems, Vol.157, No.1, 52-73, 2006.
[156] Mamdani EH, Applications of fuzzy algorithms for control of a simple dynamic plant, Proceedings of the IEE, Vol.121, No.12, 1585-1588, 1974.
[157] Marano GC, and Quaranta G, A new possibilistic reliability index definition,
Acta Mechanica, Vol.210, 291-303, 2010.
[158] Matheron G, Random Sets and Integral Geometry, Wiley, New York, 1975.
[159] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[160] Möller B, and Beer M, Engineering computation under uncertainty, Computers and Structures, Vol.86, 1024-1041, 2008.
[161] Morgan JP, RiskMetrics Technical Document, 4th edn, Morgan Guaranty Trust Companies, New York, 1996.
[162] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[163] Negoita CV, and Ralescu DA, Representation theorems for fuzzy concepts,
Kybernetes, Vol.4, 169-174, 1975.
[164] Negoita CV, and Ralescu DA, Simulation, Knowledge-based Computing, and
Fuzzy Statistics, Van Nostrand Reinhold, New York, 1987.
[165] Nguyen HT, Nguyen NT, and Wang TH, On capacity functionals in interval
probabilities, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.5, 359-377, 1997.
[166] Nguyen VH, Fuzzy stochastic goal programming problems, European Journal
of Operational Research, Vol.176, No.1, 77-86, 2007.
[167] Nilsson NJ, Probabilistic logic, Artificial Intelligence, Vol.28, 71-87, 1986.
[168] Øksendal B, Stochastic Differential Equations, 6th edn, Springer-Verlag,
Berlin, 2005.
[169] Peng J, and Liu B, Parallel machine scheduling models with fuzzy processing
times, Information Sciences, Vol.166, Nos.1-4, 49-66, 2004.
[170] Peng J, and Yao K, A new option pricing model for stocks in uncertainty
markets, International Journal of Operations Research, Vol.8, No.2, 18-26,
2011.
[171] Peng J, Risk metrics of loss function for uncertain system, Fuzzy Optimization
and Decision Making, Vol.12, No.1, 53-64, 2013.
[172] Peng ZX, and Iwamura K, A sufficient and necessary condition of uncertainty
distribution, Journal of Interdisciplinary Mathematics, Vol.13, No.3, 277-285,
2010.
[173] Peng ZX, and Iwamura K, Some properties of product uncertain measure,
Journal of Uncertain Systems, Vol.6, No.4, 263-269, 2012.
[174] Peng ZX, and Chen XW, Uncertain systems are universal approximators,
http://orsc.edu.cn/online/100110.pdf.
[175] Pugsley AG, A philosophy of strength factors, Aircraft Engineering and
Aerospace Technology, Vol.16, No.1, 18-19, 1944.
[176] Puri ML, and Ralescu DA, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[177] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[178] Qin ZF, and Gao X, Fractional Liu process with application to finance, Mathematical and Computer Modelling, Vol.50, Nos.9-10, 1538-1543, 2009.
[179] Qin ZF, Uncertain random goal programming, http://orsc.edu.cn/online/
130323.pdf.
[180] Ralescu AL, and Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets
and Systems, Vol.86, No.3, 321-330, 1997.
[181] Ralescu DA, A generalization of representation theorem, Fuzzy Sets and Systems, Vol.51, 309-311, 1992.
[182] Ralescu DA, Cardinality, quantifiers, and the aggregation of fuzzy criteria,
Fuzzy Sets and Systems, Vol.69, No.3, 355-365, 1995.
[183] Ralescu DA, and Sugeno M, Fuzzy integral representation, Fuzzy Sets and
Systems, Vol.84, No.2, 127-133, 1996.
[184] Robbins HE, On the measure of a random set, Annals of Mathematical Statistics, Vol.15, No.1, 70-74, 1944.
[185] Roy AD, Safety-first and the holding of assets, Econometrica, Vol.20, 431-449,
1952.
[186] Sakawa M, Nishizaki I, and Uemura Y, Interactive fuzzy programming for two-level linear fractional programming problems with fuzzy parameters, Fuzzy
Sets and Systems, Vol.115, 93-103, 2000.
[205] Wang XS, Gao ZC, and Guo HY, Delphi method for estimating uncertainty distributions, Information: An International Interdisciplinary Journal,
Vol.15, No.2, 449-460, 2012.
[206] Wang XS, and Ha MH, Quadratic entropy of uncertain sets, Fuzzy Optimization and Decision Making, Vol.12, No.1, 99-109, 2013.
[207] Wang XS, and Peng ZX, Method of moments for estimating uncertainty distributions, http://orsc.edu.cn/online/100408.pdf.
[208] Wang XS, and Wang LL, Delphi method for estimating membership function
of the uncertain set, http://orsc.edu.cn/online/130330.pdf.
[209] Wen ML, and Kang R, Reliability analysis in uncertain random system,
http://orsc.edu.cn/online/120419.pdf.
[210] Wiener N, Differential space, Journal of Mathematical Physics, Vol.2, 131-174, 1923.
[211] Yager RR, A new approach to the summarization of data, Information Sciences, Vol.28, 69-86, 1982.
[212] Yager RR, Quantified propositions in a linguistic logic, International Journal
of Man-Machine Studies, Vol.19, 195-227, 1983.
[213] Yang LX, and Liu B, On inequalities and critical values of fuzzy random
variable, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[214] Yang N, and Wen FS, A chance constrained programming approach to transmission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[215] Yang XH, Moments and tails inequality within the framework of uncertainty
theory, Information: An International Interdisciplinary Journal, Vol.14,
No.8, 2599-2604, 2011.
[216] Yang XH, On comonotonic functions of uncertain variables, Fuzzy Optimization and Decision Making, Vol.12, No.1, 89-98, 2013.
[217] Yao K, Uncertain calculus with renewal process, Fuzzy Optimization and
Decision Making, Vol.11, No.3, 285-297, 2012.
[218] Yao K, and Li X, Uncertain alternating renewal process and its application,
IEEE Transactions on Fuzzy Systems, Vol.20, No.6, 1154-1160, 2012.
[219] Yao K, Gao J, and Gao Y, Some stability theorems of uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.12, No.1, 3-13, 2013.
[220] Yao K, Extreme values and integral of solution of uncertain differential equation, Journal of Uncertainty Analysis and Applications, Vol.1, Article 2, 2013.
[221] Yao K, and Ralescu DA, Age replacement policy in uncertain environment,
Iranian Journal of Fuzzy Systems, Vol.10, No.2, 29-39, 2013.
[222] Yao K, and Chen XW, A numerical method for solving uncertain differential
equations, Journal of Intelligent & Fuzzy Systems, Vol.25, No.3, 825-832,
2013.
[223] Yao K, A type of nonlinear uncertain differential equations with analytic
solution, Journal of Uncertainty Analysis and Applications, Vol.1, 2013, to
be published.
[243] Zhao R, and Liu B, Renewal process with fuzzy interarrival times and rewards,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.11, No.5, 573-586, 2003.
[244] Zhao R, and Liu B, Redundancy optimization problems with uncertainty
of combining randomness and fuzziness, European Journal of Operational
Research, Vol.157, No.3, 716-735, 2004.
[245] Zhao R, and Liu B, Standby redundancy optimization problems with fuzzy
lifetimes, Computers & Industrial Engineering, Vol.49, No.2, 318-338, 2005.
[246] Zhao R, Tang WS, and Yun HL, Random fuzzy renewal process, European
Journal of Operational Research, Vol.169, No.1, 189-201, 2006.
[247] Zhao R, and Tang WS, Some properties of fuzzy random renewal process,
IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 173-179, 2006.
[248] Zheng Y, and Liu B, Fuzzy vehicle routing model with credibility measure
and its hybrid intelligent algorithm, Applied Mathematics and Computation,
Vol.176, No.2, 673-683, 2006.
[249] Zhou J, and Liu B, New stochastic models for capacitated location-allocation
problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.
[250] Zhou J, and Liu B, Modeling capacitated location-allocation problem with
fuzzy demands, Computers & Industrial Engineering, Vol.53, No.3, 454-468,
2007.
[251] Zhou J, Yang F, and Wang K, Multi-objective optimization in uncertain
random environments, http://orsc.edu.cn/online/130322.pdf.
[252] Zhu Y, and Liu B, Continuity theorems and chance distribution of random
fuzzy variable, Proceedings of the Royal Society of London Series A, Vol.460,
2505-2519, 2004.
[253] Zhu Y, and Ji XY, Expected values of functions of fuzzy variables, Journal
of Intelligent & Fuzzy Systems, Vol.17, No.5, 471-478, 2006.
[254] Zhu Y, and Liu B, Fourier spectrum of credibility distribution for fuzzy variables, International Journal of General Systems, Vol.36, No.1, 111-123, 2007.
[255] Zhu Y, and Liu B, A sufficient and necessary condition for chance distribution
of random fuzzy variables, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 21-28, 2007.
[256] Zhu Y, Uncertain optimal control with application to a portfolio selection
model, Cybernetics and Systems, Vol.41, No.7, 535-547, 2010.
List of Frequently Used Symbols

M               uncertain measure
(Γ, L, M)       uncertainty space
ξ, η, τ         uncertain variables
Φ, Ψ, Υ         uncertainty distributions
Φ⁻¹, Ψ⁻¹, Υ⁻¹   inverse uncertainty distributions
μ, ν            membership functions
μ⁻¹, ν⁻¹        inverse membership functions
L(a, b)         linear uncertain variable
Z(a, b, c)      zigzag uncertain variable
N(e, σ)         normal uncertain variable
LOGN(e, σ)      lognormal uncertain variable
(a, b)          rectangular uncertain set
(a, b, c)       triangular uncertain set
(a, b, c, d)    trapezoidal uncertain set
E               expected value
V               variance
H               entropy
Xt, Yt, Zt      uncertain processes
Ct              Liu process
Nt              renewal process
Q               uncertain quantifier
P               uncertain proposition
∨               maximum operator
∧               minimum operator
¬               negation symbol
∀               universal quantifier
∃               existential quantifier
Pr              probability measure
(Ω, A, Pr)      probability space
Ch              chance measure
k-max           the kth largest value
k-min           the kth smallest value
∅               the empty set
ℜ               the set of real numbers
iid             independent and identically distributed
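The distribution symbols above have closed forms in uncertainty theory. As a quick illustration, here is a minimal Python sketch (the function and variable names are mine, not the book's) of the linear uncertainty distribution of L(a, b) and the normal uncertainty distribution of N(e, σ), together with the normal inverse uncertainty distribution:

```python
import math

def linear_cdf(x, a, b):
    # Uncertainty distribution of L(a, b): 0 below a, 1 above b, linear between.
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def normal_cdf(x, e, sigma):
    # Uncertainty distribution of N(e, sigma):
    # Phi(x) = (1 + exp(pi * (e - x) / (sqrt(3) * sigma)))^(-1)
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3) * sigma)))

def normal_inv(alpha, e, sigma):
    # Inverse uncertainty distribution of N(e, sigma), 0 < alpha < 1.
    return e + (sigma * math.sqrt(3) / math.pi) * math.log(alpha / (1.0 - alpha))

print(linear_cdf(1.5, 1, 2))   # 0.5
print(normal_cdf(0.0, 0, 1))   # 0.5
print(normal_inv(0.5, 0, 1))   # 0.0
```

The inverse distribution is the workhorse of the operational law: monotone functions of independent uncertain variables are evaluated through inverse distributions rather than by convolution.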
Index
absorption law, 176
age replacement policy, 289
algebra, 5
α-path, 319
alternating renewal process, 286
American option, 338
Asian option, 340
associative law, 175
belief degree, 2
bisection method, 59
block replacement policy, 281
Boolean function, 61
Boolean system calculator, 67
Boolean uncertain variable, 61
Borel algebra, 7
Borel set, 7
bridge system, 152
Brownian motion, 293
chain rule, 303
chance distribution, 375
chance inversion theorem, 376
chance measure, 370
change of variables, 304
Chebyshev inequality, 91
Chen-Ralescu theorem, 159
commutative law, 175
comonotonic function, 75
complement of uncertain set, 172, 193
complete uncertainty space, 16
compromise model, 123
compromise solution, 123
conditional uncertain measure, 25
consistency condition, 15
convergence almost surely, 93
convergence in distribution, 94
convergence in mean, 94
convergence in measure, 93
delayed renewal process, 281
Delphi method, 134
investment risk analysis, 146
Itô formula, 367
Itô integral, 366
Itô process, 367
Jensen's inequality, 93
joint uncertainty distribution, 103
k-out-of-n system, 138
law of contradiction, xiv, 174
law of excluded middle, xiv, 174
law of large numbers, 363, 390
law of truth conservation, xiv
linear uncertain variable, 35
linguistic summarizer, 237
Liu integral, 297
Liu process, 293, 301
logical equivalence theorem, 230
lognormal random variable, 357
lognormal uncertain variable, 36
loss function, 137
machine scheduling problem, 112
Markov inequality, 91
maximum entropy principle, 87
maximum uncertainty principle, xiv
measurable function, 7
measurable set, 6
measure inversion formula, 177
measure inversion theorem, 39
membership function, 177
method of moments, 132
Minkowski inequality, 92
modus ponens, 165
modus tollens, 166
moment, 80
monotone quantifier, 219
monotonicity theorem, 12
multivariate normal distribution, 105
Nash equilibrium, 125
negated quantifier, 220
no-arbitrage, 343
nonempty uncertain set, 183
normal random variable, 357
normal uncertain variable, 36
normality axiom, 10
operational law, 46, 190
optimal solution, 108
option pricing, 335
parallel system, 138
Pareto solution, 123
Peng-Iwamura theorem, 32
Poisson process, 365
polyrectangular theorem, 23
portfolio selection, 343
principle of least squares, 130, 211
probability density function, 355
probability distribution, 355
probability inversion theorem, 356
probability measure, 353
product axiom, 16
product probability, 354
product uncertain measure, 16
project scheduling problem, 119
random variable, 354
rectangular uncertain set, 179
regular membership function, 185
regular uncertainty distribution, 41
reliability index, 151, 397
renewal reward process, 283
risk index, 139, 394
ruin index, 288
ruin time, 289
rule-base, 246
sample path, 254
series system, 137
σ-algebra, 5
stability, 317
Stackelberg-Nash equilibrium, 126
standby system, 138
stationary increment, 262
stochastic calculus, 366
stochastic differential equation, 367
stochastic process, 364
strictly decreasing function, 53
strictly increasing function, 46
strictly monotone function, 56
structural risk analysis, 143
structure function, 149
subadditivity axiom, 10
time integral, 274, 330
trapezoidal uncertain set, 179
triangular uncertain set, 179
truth value, 157, 230
uncertain calculus, 293
uncertain control, 249
uncertain currency model, 347
uncertain differential equation, 307
uncertain entailment, 164
uncertain finance, 335
uncertain graph, 398
uncertain inference, 241
uncertain insurance model, 287
uncertain integral, 297
uncertain interest rate model, 346
uncertain logic, 215
uncertain measure, 11
uncertain network, 402
uncertain process, 253
uncertain programming, 107
uncertain proposition, 155, 229
uncertain quantifier, 216
uncertain random programming, 391
uncertain random variable, 373
uncertain reliability analysis, 150
uncertain renewal process, 277
uncertain risk analysis, 137
Baoding Liu
Uncertainty Theory
When no samples are available to estimate a probability distribution, we have
to invite some domain experts to evaluate the belief degree that each event
will occur. Some people take the belief degree to be subjective probability or a fuzzy concept; however, this is usually inappropriate, because both probability theory and fuzzy set theory may lead to counterintuitive results in this case.
In order to deal with belief degrees rationally, uncertainty theory was founded in 2007 and has since been studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling human uncertainty.
This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process,
uncertain calculus, and uncertain differential equation. This textbook also
shows applications of uncertainty theory to scheduling, logistics, networks,
data mining, control, and finance.
Axiom 1. (Normality Axiom) M{Γ} = 1 for the universal set Γ.
Axiom 2. (Duality Axiom) M{Λ} + M{Λᶜ} = 1 for any event Λ.
Axiom 3. (Subadditivity Axiom) For every countable sequence of events Λ₁, Λ₂, …, we have

    M{ ∪_{i=1}^∞ Λᵢ } ≤ Σ_{i=1}^∞ M{Λᵢ}.
Axiom 4. (Product Axiom) Let (Γₖ, Lₖ, Mₖ) be uncertainty spaces for k = 1, 2, … The product uncertain measure M satisfies

    M{ ∏_{k=1}^∞ Λₖ } = ⋀_{k=1}^∞ Mₖ{Λₖ}

where Λₖ are arbitrarily chosen events from Lₖ for k = 1, 2, …, respectively.
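As a sanity check on the axioms above, here is a minimal sketch (not from the book; the three-point universe and the measure values are illustrative assumptions) that verifies normality, duality, and finite subadditivity for a set function on a finite universe, plus the product-axiom minimum for a single rectangle:

```python
import itertools

def powerset(universe):
    # All events (subsets) of a finite universe, as frozensets.
    s = list(universe)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in itertools.combinations(s, r)]

def satisfies_axioms(universe, m, tol=1e-9):
    """Check normality, duality, and (finite) subadditivity of the map m."""
    full = frozenset(universe)
    events = powerset(universe)
    if abs(m[full] - 1.0) > tol:                 # Axiom 1: normality
        return False
    for ev in events:                            # Axiom 2: duality
        if abs(m[ev] + m[full - ev] - 1.0) > tol:
            return False
    for a in events:                             # Axiom 3: subadditivity
        for b in events:
            if m[a | b] > m[a] + m[b] + tol:
                return False
    return True

universe = {1, 2, 3}
# One admissible assignment (illustrative): dual pairs sum to 1,
# and no union exceeds the sum of its parts.
m = {frozenset(): 0.0,
     frozenset({1}): 0.4, frozenset({2}): 0.4, frozenset({3}): 0.3,
     frozenset({1, 2}): 0.7, frozenset({1, 3}): 0.6, frozenset({2, 3}): 0.6,
     frozenset({1, 2, 3}): 1.0}
print(satisfies_axioms(universe, m))   # True

# Axiom 4 on a single rectangle Λ1 × Λ2: the product measure is the minimum.
print(min(0.4, 0.7))                   # 0.4
```

On an infinite universe subadditivity must hold for countable unions, which no finite loop can verify; the sketch only checks the finite analogue.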
[Figure: back-cover illustration contrasting curves labeled "Probability" and "Uncertainty"]