
Uncertainty Theory

Fourth Edition

Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu

http://orsc.edu.cn/liu/ut.pdf
4th Edition © 2013 by Uncertainty Theory Laboratory
3rd Edition © 2010 by Springer-Verlag Berlin
2nd Edition © 2007 by Springer-Verlag Berlin
1st Edition © 2004 by Springer-Verlag Berlin

Contents

Preface

0 Toward Uncertainty Theory
  0.1 Indeterminacy
  0.2 From Samples to Probability Theory
  0.3 From Belief Degrees to Uncertainty Theory
  0.4 Measurable Space: A Preliminary Knowledge

1 Uncertain Measure
  1.1 Events
  1.2 Uncertain Measure
  1.3 Uncertainty Space
  1.4 Product Uncertain Measure
  1.5 Independence
  1.6 Polyrectangular Theorem
  1.7 Conditional Uncertain Measure
  1.8 Bibliographic Notes

2 Uncertain Variable
  2.1 Uncertain Variable
  2.2 Uncertainty Distribution
  2.3 Independence
  2.4 Operational Law
  2.5 Expected Value
  2.6 Variance
  2.7 Moments
  2.8 Entropy
  2.9 Distance
  2.10 Inequalities
  2.11 Sequence Convergence
  2.12 Conditional Uncertainty Distribution
  2.13 Uncertain Vector
  2.14 Bibliographic Notes

3 Uncertain Programming
  3.1 Uncertain Programming
  3.2 Numerical Method
  3.3 Machine Scheduling Problem
  3.4 Vehicle Routing Problem
  3.5 Project Scheduling Problem
  3.6 Uncertain Multiobjective Programming
  3.7 Uncertain Goal Programming
  3.8 Uncertain Multilevel Programming
  3.9 Bibliographic Notes

4 Uncertain Statistics
  4.1 Experts' Experimental Data
  4.2 Questionnaire Survey
  4.3 Empirical Uncertainty Distribution
  4.4 Principle of Least Squares
  4.5 Method of Moments
  4.6 Multiple Domain Experts
  4.7 Delphi Method
  4.8 Bibliographic Notes

5 Uncertain Risk Analysis
  5.1 Loss Function
  5.2 Risk Index
  5.3 Series System
  5.4 Parallel System
  5.5 k-out-of-n System
  5.6 Standby System
  5.7 Hazard Distribution
  5.8 Structural Risk Analysis
  5.9 Investment Risk Analysis
  5.10 Bibliographic Notes

6 Uncertain Reliability Analysis
  6.1 Structure Function
  6.2 Reliability Index
  6.3 Series System
  6.4 Parallel System
  6.5 k-out-of-n System
  6.6 General System
  6.7 Bibliographic Notes

7 Uncertain Propositional Logic
  7.1 Uncertain Proposition
  7.2 Truth Value
  7.3 Chen-Ralescu Theorem
  7.4 Boolean System Calculator
  7.5 Bibliographic Notes

8 Uncertain Entailment
  8.1 Uncertain Entailment Model
  8.2 Uncertain Modus Ponens
  8.3 Uncertain Modus Tollens
  8.4 Uncertain Hypothetical Syllogism
  8.5 Bibliographic Notes

9 Uncertain Set
  9.1 Uncertain Set
  9.2 Membership Function
  9.3 Independence
  9.4 Set Operational Law
  9.5 Arithmetic Operational Law
  9.6 Expected Value
  9.7 Variance
  9.8 Entropy
  9.9 Distance
  9.10 Conditional Membership Function
  9.11 Uncertain Statistics
  9.12 Bibliographic Notes

10 Uncertain Logic
  10.1 Individual Feature Data
  10.2 Uncertain Quantifier
  10.3 Uncertain Subject
  10.4 Uncertain Predicate
  10.5 Uncertain Proposition
  10.6 Truth Value
  10.7 Algorithm
  10.8 Linguistic Summarizer
  10.9 Bibliographic Notes

11 Uncertain Inference
  11.1 Uncertain Inference Rule
  11.2 Uncertain System
  11.3 Uncertain Control
  11.4 Inverted Pendulum
  11.5 Bibliographic Notes

12 Uncertain Process
  12.1 Uncertain Process
  12.2 Uncertainty Distribution
  12.3 Independence
  12.4 Independent Increment Process
  12.5 Stationary Independent Increment Process
  12.6 Extreme Value Theorem
  12.7 First Hitting Time
  12.8 Time Integral
  12.9 Bibliographic Notes

13 Uncertain Renewal Process
  13.1 Uncertain Renewal Process
  13.2 Delayed Renewal Process
  13.3 Renewal Reward Process
  13.4 Alternating Renewal Process
  13.5 Uncertain Insurance Model
  13.6 Age Replacement Policy
  13.7 Bibliographic Notes

14 Uncertain Calculus
  14.1 Liu Process
  14.2 Liu Integral
  14.3 Fundamental Theorem
  14.4 Chain Rule
  14.5 Change of Variables
  14.6 Integration by Parts
  14.7 Bibliographic Notes

15 Uncertain Differential Equation
  15.1 Uncertain Differential Equation
  15.2 Existence and Uniqueness
  15.3 Stability
  15.4 Yao-Chen Formula
  15.5 Uncertainty Distribution of Solution
  15.6 Extreme Value of Solution
  15.7 First Hitting Time of Solution
  15.8 Time Integral of Solution
  15.9 Bibliographic Notes

16 Uncertain Finance
  16.1 Uncertain Stock Model
  16.2 Uncertain Interest Rate Model
  16.3 Uncertain Currency Model
  16.4 Bibliographic Notes

A Probability Theory
  A.1 Probability Measure
  A.2 Random Variable
  A.3 Probability Distribution
  A.4 Independence
  A.5 Operational Law
  A.6 Expected Value
  A.7 Variance
  A.8 Law of Large Numbers
  A.9 Conditional Probability
  A.10 Stochastic Process
  A.11 Ito's Stochastic Calculus
  A.12 Stochastic Differential Equation

B Chance Theory
  B.1 Chance Measure
  B.2 Uncertain Random Variable
  B.3 Chance Distribution
  B.4 Operational Law
  B.5 Expected Value
  B.6 Variance
  B.7 Law of Large Numbers
  B.8 Uncertain Random Programming
  B.9 Uncertain Random Risk Analysis
  B.10 Uncertain Random Reliability Analysis
  B.11 Uncertain Random Graph
  B.12 Uncertain Random Network
  B.13 Bibliographic Notes

C Frequently Asked Questions
  C.1 How did uncertainty evolve over the past 100 years?
  C.2 What is the difference between probability theory and uncertainty theory?
  C.3 Why is probability theory not the only legitimate approach?
  C.4 What is the difference between possibility theory and uncertainty theory?
  C.5 Why is fuzzy variable unable to model indeterminacy quantity?
  C.6 Why is fuzzy set unable to model unsharp concepts?
  C.7 Does the stock price follow stochastic differential equation or uncertain differential equation?

Bibliography

List of Frequently Used Symbols

Index

Preface
When no samples are available to estimate a probability distribution, we
have to invite some domain experts to evaluate the belief degree that each
event will occur. Perhaps some people think that the belief degree is a subjective probability or a fuzzy concept. However, this is usually inappropriate
because both probability theory and fuzzy set theory may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, an
uncertainty theory was founded in 2007 and subsequently studied by many
researchers. Nowadays, uncertainty theory has become a branch of axiomatic
mathematics for modeling human uncertainty.

Uncertain Measure
The most fundamental concept is uncertain measure that is a type of set
function satisfying the axioms of uncertainty theory. It is used to indicate
the belief degree that an uncertain event may occur. Chapter 1 will introduce normality, duality, subadditivity and product axioms. From those four
axioms, this chapter will also present uncertain measure, product uncertain
measure, and conditional uncertain measure.

Uncertain Variable
Uncertain variable is a measurable function from an uncertainty space to
the set of real numbers. It is used to represent quantities with uncertainty.
Chapter 2 is devoted to the uncertain variable, uncertainty distribution, operational law, expected value, variance, and so on.

Uncertain Programming
Uncertain programming is a type of mathematical programming involving
uncertain variables. Chapter 3 will provide a type of uncertain programming model with applications to machine scheduling problem, vehicle routing
problem, and project scheduling problem. In addition, uncertain multiobjective programming, uncertain goal programming and uncertain multilevel
programming are also documented.


Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting experts' experimental data by uncertainty theory. Chapter 4 will present a questionnaire survey for collecting experts' experimental data. In order to determine uncertainty distributions from those experts' experimental data, Chapter 4 will also introduce the empirical uncertainty distribution, the principle of least squares, the method of moments, and the Delphi method.
Uncertain Risk Analysis
The term risk has been used in different ways in literature. In this book
the risk is defined as the accidental loss plus the uncertain measure of such
loss, and a risk index is defined as the uncertain measure that some specified
loss occurs. Chapter 5 will introduce uncertain risk analysis that is a tool
to quantify risk via uncertainty theory. As applications of uncertain risk
analysis, Chapter 5 will also discuss structural risk analysis and investment
risk analysis.
Uncertain Reliability Analysis
Reliability index is defined as the uncertain measure that some system is
working. Chapter 6 will introduce uncertain reliability analysis that is a tool
to deal with system reliability via uncertainty theory.
Uncertain Propositional Logic
Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. Chapter 7 will present a framework of uncertain propositional logic. In
addition, uncertain entailment is a methodology for determining the truth
value of an uncertain proposition via the maximum uncertainty principle
when the truth values of other uncertain propositions are given. Chapter 8
will discuss an uncertain entailment model from which uncertain modus ponens, uncertain modus tollens and uncertain hypothetical syllogism are deduced.
Uncertain Set
Uncertain set is a set-valued function on an uncertainty space, and attempts
to model unsharp concepts. The main difference between uncertain set and
uncertain variable is that the former takes values of set and the latter takes
values of point. Uncertain set theory will be introduced in Chapter 9. In
order to determine membership functions, Chapter 9 will also provide some
methods of uncertain statistics.


Uncertain Logic
Some knowledge in the human brain is actually an uncertain set. This fact encourages us to design an uncertain logic that is a methodology for calculating
the truth values of uncertain propositions via uncertain set theory. Uncertain
logic may provide a flexible means for extracting linguistic summary from a
collection of raw data. Chapter 10 will be devoted to uncertain logic and
linguistic summarizer.
Uncertain Inference
Uncertain inference is a process of deriving consequences from human knowledge via uncertain set theory. Chapter 11 will present a set of uncertain
inference rules, uncertain system, and uncertain control with application to
an inverted pendulum system.
Uncertain Process
An uncertain process is essentially a sequence of uncertain variables indexed
by time. Thus an uncertain process is usually used to model uncertain phenomena that vary with time. Chapter 12 is devoted to basic concepts of
uncertain process as well as independent increment process, and stationary
independent increment process. In addition, extreme value theorem, first
hitting time and time integral of uncertain processes are also introduced.
Chapter 13 deals with uncertain renewal process, delayed renewal process,
renewal reward process, alternating renewal process and uncertain insurance
model.
Uncertain Calculus
Uncertain calculus is a branch of mathematics that deals with differentiation
and integration of uncertain processes. Chapter 14 will introduce Liu process
that is a stationary independent increment process whose increments are
normal uncertain variables, and discuss Liu integral that is a type of uncertain
integral with respect to Liu process. In addition, the fundamental theorem of
uncertain calculus will be proved in this chapter from which the techniques
of chain rule, change of variables, and integration by parts are also derived.
Uncertain Differential Equation
Uncertain differential equation is a type of differential equation involving
uncertain processes. Chapter 15 will discuss the existence, uniqueness and
stability of solutions of uncertain differential equations, and will introduce
Yao-Chen formula that represents the solution of an uncertain differential
equation by a family of solutions of ordinary differential equations. On the
basis of this formula, a numerical method for solving uncertain differential equations is designed. In addition, extreme value, first hitting time and time
integral of solutions are provided.
Uncertain Finance
As applications of uncertain differential equation, Chapter 16 will discuss
uncertain stock model, uncertain interest rate model, and uncertain currency
model.
Law of Truth Conservation
The law of excluded middle tells us that a proposition is either true or false,
and the law of contradiction tells us that a proposition cannot be both true
and false. In the state of indeterminacy, some people said, the law of excluded
middle and the law of contradiction are no longer valid because the truth
degree of a proposition is no longer 0 or 1. I cannot gainsay this viewpoint
to a certain extent. But it does not mean that you might go as you please.
The truth values of a proposition and its negation should sum to unity. This is
the law of truth conservation that is weaker than the law of excluded middle
and the law of contradiction. Furthermore, the law of truth conservation
agrees with the law of excluded middle and the law of contradiction when
the uncertainty vanishes.
Maximum Uncertainty Principle
An event has no uncertainty if its uncertain measure is 1 because we may
believe that the event occurs. An event has no uncertainty too if its uncertain
measure is 0 because we may believe that the event does not occur. An event
is the most uncertain if its uncertain measure is 0.5 because the event and
its complement may be regarded as equally likely. In practice, if there is
no information about the uncertain measure of an event, we should assign
0.5 to it. Sometimes, only partial information is available. In this case, the
value of uncertain measure may be specified in some range. What value does
the uncertain measure take? For any event, if there are multiple reasonable
values that an uncertain measure may take, then the value as close to 0.5 as
possible is assigned to the event. This is the maximum uncertainty principle.
Matlab Uncertainty Toolbox
Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) is a collection of functions built on Matlab for many methods of uncertainty theory,
including uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain logic, uncertain inference, uncertain differential equation, scheduling, logistics, data mining, control, and
finance.


Lecture Slides
If you need lecture slides for uncertainty theory, please download them from
the website at http://orsc.edu.cn/liu/resources.htm.
Uncertainty Theory Online
If you want to read more papers related to uncertainty theory and applications, please visit the website at http://orsc.edu.cn/online.
Purpose
The purpose is to equip the readers with an axiomatic approach to deal
with human uncertainty. The book is suitable for researchers, engineers, and
students in the field of mathematics, information science, operations research,
industrial engineering, computer science, artificial intelligence, automation,
economics, and management science.
Acknowledgment
This work was supported by a series of grants from National Natural Science
Foundation, Ministry of Education, and Ministry of Science and Technology
of China.
Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
September 15, 2013

The world is neither random nor uncertain, but sometimes it can be analyzed by probability theory, and sometimes by uncertainty theory.

Chapter 0

Toward Uncertainty Theory
Real decisions are usually made in the state of indeterminacy. For modeling indeterminacy, there exist two mathematical systems, one is probability
theory (Kolmogorov, 1933) and the other is uncertainty theory (Liu, 2007).
Probability is interpreted as frequency, while uncertainty is interpreted as
personal belief degree.
What is indeterminacy? What is frequency? What is belief degree? This
chapter will answer these questions, and show in what situation we should use
probability theory and in what situation we should use uncertainty theory.

0.1 Indeterminacy

By indeterminacy we mean the phenomena whose outcomes cannot be exactly predicted in advance. Some instances of indeterminacy include tossing dice, roulette wheel, stock price, bridge strength, lifetime, demand, etc.

0.2 From Samples to Probability Theory

Assume we have collected a set of samples for some indeterminacy quantity (e.g. stock price). By cumulative frequency we mean a function representing the percentage of all samples that fall into the left side of the current point. It is clear that the cumulative frequency looks like a step function in Figure 1, and will always have bigger values as we move from the left to right. The long-run cumulative frequency is defined as the limit that the sequence of cumulative frequencies approaches as the sample size tends to infinity.
However, the limit is only possible in theory because the real sample size is usually finite and even small. Thus a lot of methods (e.g. the principle of least squares) were invented to estimate a probability distribution from the finite set of samples. When the sample size is large enough, it is possible for you to believe your estimated probability distribution is close enough to the long-run cumulative frequency. In this case, you may use probability theory to deal with your problem on the basis of your estimated probability distributions.

[Figure 1: A Cumulative Frequency Histogram]
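As a concrete illustration (a sketch of mine, not part of the book), the following Python snippet evaluates the cumulative frequency of a finite sample set at any point x; the sample values are invented for the example.

```python
import numpy as np

def cumulative_frequency(samples, x):
    """Percentage of all samples falling into the left side of point x."""
    samples = np.asarray(samples, dtype=float)
    return np.count_nonzero(samples <= x) / samples.size

# Hypothetical stock-price samples (illustrative values only).
samples = [12.1, 12.7, 13.0, 13.2, 13.9, 14.4, 14.5, 15.0]
for x in (12.0, 13.5, 15.5):
    print(x, cumulative_frequency(samples, x))
# The values rise in steps from 0 to 1: the step function of Figure 1.
```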
Probability theory is applicable when samples are available!
Probability theory, founded by Kolmogorov [79] in 1933, is a branch of mathematics that studies the behavior of random phenomena.
Keep in mind that a fundamental premise of applying probability theory
is that the estimated probability distribution is close enough to the long-run
cumulative frequency, no matter whether the probability is interpreted as
subjective or objective. Otherwise, the law of large numbers is no longer
valid and probability theory is no longer applicable.
However, sometimes, no samples are available to estimate a probability
distribution. What can we do in this situation? Perhaps we have no choice
but to invite some domain experts to evaluate the belief degree that each
event will occur.

0.3 From Belief Degrees to Uncertainty Theory

Consider an indeterminacy quantity (e.g. bridge strength). A belief degree function represents the degree with which we believe the indeterminacy quantity falls into the left side of the current point. If we believe the indeterminacy quantity completely falls into the left side of the current point, then the belief degree is 1. If it is completely impossible, then the belief degree is 0. Usually, it is neither completely true nor completely false. In this case, we will assign a number between 0 and 1 to the belief degree.
Is probability theory applicable to belief degrees?
Does a belief degree function play the role of probability distribution? Some people do think so and call it subjective probability. However, it was shown among others by Kahneman and Tversky [69] that human beings usually overweight unlikely events. This fact makes the belief degree function deviate far from the long-run cumulative frequency. More precisely, the belief degree function may have much larger variance than the long-run cumulative frequency. In this case, Liu [122] declared that it is inappropriate to use probability theory because it may lead to counterintuitive results.
[Figure 2: A Truck is Crossing over a Bridge. The truck weighs exactly 90 tons; the bridge strength is unknown.]
Consider a counterexample presented by Liu [122]. Assume there is one
truck and 50 bridges in an experiment. Also assume the weight of the truck is
90 tons and the 50 bridge strengths are iid normal random variables N (100, 1)
in tons (I am afraid this fact cannot be verified without the help of God).
For simplicity, suppose a bridge collapses whenever its real strength is less
than the weight of the truck. Now let us have the truck cross over the 50
bridges one by one. It is easy to verify that
Pr{the truck can cross over the 50 bridges} ≈ 1.    (1)

That is to say, the truck may cross over the 50 bridges successfully.
However, when there do not exist any observed samples for the bridge
strength at the moment, we have to invite some bridge engineers to evaluate
the belief degree function about it. As we stated before, usually the belief
degree function has much larger variance than the real bridge strengths. Assume the belief degree function looks like a normal probability distribution
N (100, 100). Let us imagine what will happen if the belief degree function
is treated as a probability distribution. At first, we have no choice but to
regard the 50 bridge strengths as iid normal random variables with expected
value 100 and variance 100 in tons. If we have the truck cross over the 50
bridges one by one, then we immediately have
Pr{the truck can cross over the 50 bridges} ≈ 0.    (2)

Thus it is almost impossible that the truck crosses over the 50 bridges successfully. Unfortunately, the results (1) and (2) are at opposite poles. This conclusion seems unacceptable, and then the belief degree function cannot be treated as a probability distribution.
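The gap between (1) and (2) is easy to check numerically. The short sketch below (mine, not the book's) uses the fact that the truck crosses all 50 bridges exactly when every bridge strength exceeds 90 tons, so the crossing probability is the single-bridge survival probability raised to the 50th power.

```python
from scipy.stats import norm

# Real strengths: iid N(100, 1), i.e. standard deviation 1 ton.
p_real = (1 - norm.cdf(90, loc=100, scale=1)) ** 50
print(p_real)      # ~1.0, matching (1)

# Belief degree function misused as N(100, 100): variance 100, scale 10.
p_belief = (1 - norm.cdf(90, loc=100, scale=10)) ** 50
print(p_belief)    # ~0.00018, essentially 0, matching (2)
```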
How to obtain belief degrees?
At first, we have to admit that any destructive experiment is not allowed for
a real bridge. Thus we have no samples about the bridge strength. In this
case, there do not exist any statistical methods to estimate its probability
distribution. How do we deal with it? It seems that we have no choice
but to invite some bridge engineers to evaluate the belief degree about the
bridge strength. In practice, it is almost impossible for bridge engineers to
give a perfect description of the belief degree function. Instead, they can only
provide some statements about the belief degrees. For example, the following
statements are given by the bridge engineers:
(a) I'm 10% sure that the bridge strength does not exceed 80 tons;
(b) I'm 20% sure that the bridge strength does not exceed 90 tons;
(c) I'm 50% sure that the bridge strength does not exceed 100 tons;
(d) I'm 70% sure that the bridge strength does not exceed 110 tons;
(e) I'm 90% sure that the bridge strength does not exceed 120 tons.
From these statements, we may obtain a set of experts' experimental data as follows,

(80, 0.1), (90, 0.2), (100, 0.5), (110, 0.7), (120, 0.9).    (3)

Then some methods (e.g. the principle of least squares) were invented to determine an uncertainty distribution from the experts' experimental data like (3). If you believe your estimated uncertainty distribution is close enough to the belief degree function hidden in the mind of the domain experts, then you may use uncertainty theory to deal with your problem on the basis of your estimated uncertainty distributions.
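As one possible illustration of the principle of least squares (a sketch, not the book's algorithm), the snippet below fits a normal uncertainty distribution Φ(x) = (1 + exp(π(e − x)/(√3·σ)))⁻¹ — a standard two-parameter family in uncertainty theory — to the expert data (3) by minimizing the sum of squared deviations.

```python
import numpy as np
from scipy.optimize import minimize

# Experts' experimental data (3): (bridge strength in tons, belief degree).
x = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
alpha = np.array([0.1, 0.2, 0.5, 0.7, 0.9])

def phi(xv, e, sigma):
    """Normal uncertainty distribution with expected value e, parameter sigma."""
    return 1.0 / (1.0 + np.exp(np.pi * (e - xv) / (np.sqrt(3.0) * sigma)))

def squared_error(theta):
    e, sigma = theta
    return np.sum((phi(x, e, sigma) - alpha) ** 2)

# Principle of least squares: choose (e, sigma) minimizing the squared error.
result = minimize(squared_error, x0=[100.0, 10.0], method="Nelder-Mead")
print(result.x)   # fitted expected value e and parameter sigma
```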
Uncertainty theory is applicable when belief degrees are available!
In order to rationally deal with belief degrees, an uncertainty theory was
founded by Liu [113] in 2007 and subsequently studied by many researchers.
Nowadays, uncertainty theory has become a branch of axiomatic mathematics
for modeling human uncertainty.
When no samples are available, we have to invite some domain experts to
evaluate the belief degree function about the indeterminacy quantity. Since
the belief degree function has much larger variance than the long-run cumulative frequency, probability theory is no longer applicable. Liu [122] declared
that uncertainty theory is the only legitimate approach when only belief degree is available.

[Figure 3: When the sample size is large enough, the estimated probability distribution (left curve) may be close enough to the long-run cumulative frequency (left histogram). In this case, probability theory is the only legitimate approach. When the belief degrees are available (no samples), the estimated uncertainty distribution (right curve) may have much larger variance than the long-run cumulative frequency (right histogram). In this case, uncertainty theory is the only legitimate approach.]

0.4 Measurable Space: A Preliminary Knowledge

From the mathematical viewpoint, uncertainty theory is essentially a new type of measure theory. Thus uncertainty theory should begin with a measurable space. In order to learn uncertainty theory, let us introduce algebra, σ-algebra, measurable set, Borel algebra, Borel set, and measurable function. The main results in this section are well-known. For this reason the credit references are not provided. You may skip this section if you are familiar with them.
Algebra and σ-Algebra

Definition 0.1 Let Γ be a nonempty set (sometimes called universal set). A collection L is called an algebra over Γ if the following three conditions hold: (a) Γ ∈ L; (b) if Λ ∈ L, then Λᶜ ∈ L; and (c) if Λ1, Λ2, ⋯, Λn ∈ L, then

$$\bigcup_{i=1}^{n}\Lambda_i\in\mathcal{L}. \tag{4}$$

The collection L is called a σ-algebra over Γ if the condition (c) is replaced with closure under countable union, i.e., when Λ1, Λ2, ⋯ ∈ L, we have

$$\bigcup_{i=1}^{\infty}\Lambda_i\in\mathcal{L}. \tag{5}$$

Example 0.1: The collection {∅, Γ} is the smallest σ-algebra over Γ, and the power set (i.e., all subsets of Γ) is the largest σ-algebra.

Example 0.2: Let Λ be a proper nonempty subset of Γ. Then {∅, Λ, Λᶜ, Γ} is a σ-algebra over Γ.


Example 0.3: Let L be the collection of all finite disjoint unions of all intervals of the form

(−∞, a],  (a, b],  (b, ∞),  ∅.    (6)

Then L is an algebra over ℜ (the set of real numbers), but not a σ-algebra because Λi = (0, (i − 1)/i] ∈ L for all i but

$$\bigcup_{i=1}^{\infty}\Lambda_i=(0,1)\not\in\mathcal{L}. \tag{7}$$

Example 0.4: A σ-algebra L is closed under countable union, countable intersection, difference, and limit. That is, if Λ1, Λ2, ⋯ ∈ L, then

$$\bigcup_{i=1}^{\infty}\Lambda_i\in\mathcal{L};\quad \bigcap_{i=1}^{\infty}\Lambda_i\in\mathcal{L};\quad \Lambda_1\setminus\Lambda_2\in\mathcal{L};\quad \lim_{i\to\infty}\Lambda_i\in\mathcal{L}. \tag{8}$$

Measurable Space and Measurable Set

Definition 0.2 Let Γ be a nonempty set, and L a σ-algebra over Γ. Then (Γ, L) is called a measurable space, and any element in L is called a measurable set.

Example 0.5: Let ℜ be the set of real numbers. Then L = {∅, ℜ} is a σ-algebra over ℜ. Thus (ℜ, L) is a measurable space. Note that there exist only two measurable sets in this space, one is ∅ and another is ℜ. Keep in mind that the intervals like [0, 1] and (0, +∞) are not measurable!

Example 0.6: Let Γ = {a, b, c}. Then L = {∅, {a}, {b, c}, Γ} is a σ-algebra over Γ. Thus (Γ, L) is a measurable space. Furthermore, {a} and {b, c} are measurable sets in this space, but {b}, {c}, {a, b}, {a, c} are not.
Product σ-Algebra

Let Γ1, Γ2, ⋯, Γn be any nonempty sets (not necessarily subsets of the same space). The product

$$\Gamma=\Gamma_1\times\Gamma_2\times\cdots\times\Gamma_n \tag{9}$$

is the set of all ordered n-tuples of the form (γ1, γ2, ⋯, γn), where γi ∈ Γi for i = 1, 2, ⋯, n.

Definition 0.3 Let Li be σ-algebras over Γi, i = 1, 2, ⋯, n, respectively. A measurable rectangle in Γ is a set

$$\Lambda=\Lambda_1\times\Lambda_2\times\cdots\times\Lambda_n \tag{10}$$

where Λi ∈ Li for i = 1, 2, ⋯, n. The smallest σ-algebra containing all measurable rectangles of Γ is called the product σ-algebra, denoted by

$$\mathcal{L}=\mathcal{L}_1\times\mathcal{L}_2\times\cdots\times\mathcal{L}_n. \tag{11}$$

Note that the product σ-algebra L is the smallest σ-algebra containing measurable rectangles, rather than the product of L1, L2, ⋯, Ln. Product σ-algebra may be easily extended to the countably infinite case by defining a measurable rectangle as a set of the form

$$\Lambda=\Lambda_1\times\Lambda_2\times\cdots \tag{12}$$

where Λi ∈ Li for all i and Λi = Γi for all but finitely many i. The smallest σ-algebra containing all measurable rectangles of Γ = Γ1 × Γ2 × ⋯ is called the product σ-algebra, denoted by

$$\mathcal{L}=\mathcal{L}_1\times\mathcal{L}_2\times\cdots \tag{13}$$

Borel Algebra and Borel Set

Let ℜ be the set of all real numbers. A set O is said to be open if for any x ∈ O, there exists a small positive number δ such that {y : |y − x| < δ} ⊂ O. It is clear that ℜ itself is an open set. The open interval (a, b) is also an instance of open set. In addition, the complement of an open set is called a closed set. For example, the closed interval [a, b] is a closed set. However, the semiclosed intervals (a, b] and [a, b) are neither open nor closed.

Definition 0.4 The smallest σ-algebra B containing all open intervals is called the Borel algebra over ℜ, and any element in B is called a Borel set.

Example 0.7: It has been proved that intervals, open sets, closed sets, rational numbers, and irrational numbers are all Borel sets.

Example 0.8: There exists a non-Borel set over ℜ. Let [a] represent the set of all rational numbers plus a. Note that if a1 − a2 is not a rational number, then [a1] and [a2] are disjoint sets. Thus ℜ is divided into an infinite number of those disjoint sets. Let A be a new set containing precisely one element from each of them. Then A is not a Borel set.
Measurable Function

Definition 0.5 A real-valued function f on a measurable space (Γ, L) is said to be measurable if and only if f⁻¹(B) ∈ L for any Borel set B.

Example 0.9: Any monotone (increasing or decreasing) function f from ℜ to ℜ is measurable.

Example 0.10: Any continuous function f from ℜ to ℜ is also measurable.

Example 0.11: Assume Λ is a subset of Γ. Then its characteristic function

$$f(x)=\begin{cases}1, & \text{if } x\in\Lambda\\ 0, & \text{if } x\not\in\Lambda\end{cases} \tag{14}$$

is measurable if Λ is a measurable set, and is not measurable if Λ is not.

Example 0.12: Let f and g be two measurable functions. Then their sum f + g, product fg, and compound function f∘g are all measurable functions.

Example 0.13: Let f be a measurable function. Then its positive part

$$f^{+}(\gamma)=\begin{cases}f(\gamma), & \text{if } f(\gamma)>0\\ 0, & \text{otherwise}\end{cases} \tag{15}$$

and negative part

$$f^{-}(\gamma)=\begin{cases}-f(\gamma), & \text{if } f(\gamma)<0\\ 0, & \text{otherwise}\end{cases} \tag{16}$$

are also measurable functions. Note that both of them are nonnegative.

Example 0.14: Let f1, f2, ⋯ be a sequence of measurable functions. Then the pointwise supremum, pointwise infimum, and pointwise limitation

$$\sup_{1\le i<\infty}f_i(\gamma),\quad \inf_{1\le i<\infty}f_i(\gamma),\quad \lim_{i\to\infty}f_i(\gamma) \tag{17}$$

are also measurable functions.

Chapter 1

Uncertain Measure

Uncertainty theory was founded by Liu [113] in 2007 and subsequently studied by many researchers. Nowadays uncertainty theory has become a branch of axiomatic mathematics for modeling human uncertainty. This chapter will present the normality, duality, subadditivity and product axioms of uncertainty theory. From those four axioms, this chapter will also introduce the uncertain measure that is a fundamental concept in uncertainty theory. In addition, product uncertain measure and conditional uncertain measure will be explored at the end of this chapter.

1.1 Events

Let Γ be a nonempty set (universal set), and let L be a σ-algebra over Γ. Then (Γ, L) is called a measurable space and each element Λ in L is called a measurable set. The first action we take is to rename measurable set as event in uncertainty theory.

How do we understand those terminologies? Let us illustrate them by an indeterminacy quantity (e.g. bridge strength). At first, the universal set Γ consists of all possible outcomes of the indeterminacy quantity. If we believe that the possible bridge strengths range from 80 to 150 in tons, then the universal set is

$$\Gamma=[80,150]. \tag{1.1}$$

Note that you may replace the universal set with an enlarged interval, and it would have no impact.

The σ-algebra L should contain all events we are concerned about. Note that event and proposition are synonymous although the former is a set and the latter is a statement. Assume the first event we are concerned about corresponds to the proposition "the bridge strength is less than or equal to 100 tons". Then it may be represented by

$$\Lambda_1=[80,100]. \tag{1.2}$$


Also assume the second event we are concerned about corresponds to the proposition "the bridge strength is more than 100 tons". Then it may be represented by

$$\Lambda_2=(100,150]. \tag{1.3}$$

If we are only concerned about the above two events, then we may construct a σ-algebra L containing the two events Λ1 and Λ2, for example,

$$\mathcal{L}=\{\emptyset,\Lambda_1,\Lambda_2,\Gamma\}. \tag{1.4}$$

In this case, totally we have four events: ∅, Λ1, Λ2 and Γ. However, please note that the subsets like [80, 90] and [120, 140] are not events because they do not belong to L.

Keep in mind that different σ-algebras are used for different purposes. The minimum requirement of a σ-algebra is that it contains all events we are concerned about. It is suggested to take the minimum σ-algebra that contains those events.
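For a finite universal set, the minimum σ-algebra containing given events can be generated mechanically by closing the collection under complement and union. The Python sketch below (an illustration of mine, using a discretized stand-in for Γ = [80, 150]) recovers exactly the four events of (1.4).

```python
from itertools import combinations

def minimal_sigma_algebra(universe, events):
    """Close a family of subsets of a finite universe under complement and
    union; over a finite universe this is the minimum sigma-algebra
    containing the given events."""
    universe = frozenset(universe)
    family = {frozenset(), universe} | {frozenset(e) for e in events}
    changed = True
    while changed:
        changed = False
        snapshot = list(family)
        for a in snapshot:
            if universe - a not in family:       # closure under complement
                family.add(universe - a)
                changed = True
        for a, b in combinations(snapshot, 2):   # closure under union
            if a | b not in family:
                family.add(a | b)
                changed = True
    return family

gamma = range(80, 151)        # discrete stand-in for [80, 150]
lam1 = range(80, 101)         # "strength <= 100 tons"
lam2 = range(101, 151)        # "strength > 100 tons"
print(len(minimal_sigma_algebra(gamma, [lam1, lam2])))   # 4 events
```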

1.2 Uncertain Measure

Let us define an uncertain measure M on the σ-algebra L. That is, a number M{Λ} will be assigned to each event Λ to indicate the belief degree that Λ will occur. There is no doubt that the assignment is not arbitrary, and the uncertain measure M must have certain mathematical properties. In order to rationally deal with belief degrees, Liu [113] suggested the following three axioms:

Axiom 1. (Normality Axiom) M{Γ} = 1 for the universal set Γ.

Axiom 2. (Duality Axiom) M{Λ} + M{Λᶜ} = 1 for any event Λ.

Axiom 3. (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2, ⋯, we have

$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty}\Lambda_i\right\}\le\sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\}. \tag{1.5}$$

Remark 1.1: Uncertain measure is interpreted as the personal belief degree (not frequency) of an uncertain event that may occur. It depends on the personal knowledge concerning the event. The uncertain measure will change if the state of knowledge changes.
Remark 1.2: The duality axiom is in fact an application of the law of truth conservation in uncertainty theory. This property ensures that uncertainty theory is consistent with the law of excluded middle and the law of contradiction. In addition, human thinking is always dominated by the duality. For example, if someone says a proposition is true with belief degree 0.6, then all of us will think that the proposition is false with belief degree 0.4.
Remark 1.3: Given two events with known belief degrees, it is frequently asked how the belief degree of their union is generated from the individuals. Personally, I do not think there exists any rule to make it. A lot of surveys showed that, generally speaking, the belief degree of the union is neither the sum of individuals (e.g. probability measure) nor the maximum (e.g. possibility measure). Perhaps there is no explicit relation between the union and individuals except for the subadditivity axiom.
Remark 1.4: Pathology occurs if subadditivity axiom is not assumed. For
example, suppose that a universal set contains 3 elements. We define a set
function that takes value 0 for each singleton, and 1 for each set with at least
2 elements. Then such a set function satisfies all axioms but subadditivity.
Do you think it is strange if such a set function serves as a measure?
Remark 1.5: Although probability measure satisfies the above three axioms, probability theory is not a special case of uncertainty theory because the product probability measure does not satisfy the fourth axiom, namely the product axiom in Section 1.4.
Definition 1.1 (Liu [113]) The set function M is called an uncertain measure if it satisfies the normality, duality, and subadditivity axioms.
Exercise 1.1: Let Γ = {γ1, γ2, γ3}. It is clear that there exist 8 events in the σ-algebra

$$\mathcal{L}=\{\emptyset,\{\gamma_1\},\{\gamma_2\},\{\gamma_3\},\{\gamma_1,\gamma_2\},\{\gamma_1,\gamma_3\},\{\gamma_2,\gamma_3\},\Gamma\}. \tag{1.6}$$

Define

M{γ1} = 0.6,  M{γ2} = 0.3,  M{γ3} = 0.2,
M{γ1, γ2} = 0.8,  M{γ1, γ3} = 0.7,  M{γ2, γ3} = 0.4,
M{∅} = 0,  M{Γ} = 1.

Show that M is an uncertain measure.
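A quick mechanical check of Exercise 1.1 (my sketch, not the book's): represent events as frozensets and test the three axioms directly. On a finite space, subadditivity only needs to be checked for pairs of events.

```python
from itertools import combinations

G = frozenset({1, 2, 3})   # stands for {gamma1, gamma2, gamma3}
M = {frozenset(): 0.0, G: 1.0,
     frozenset({1}): 0.6, frozenset({2}): 0.3, frozenset({3}): 0.2,
     frozenset({1, 2}): 0.8, frozenset({1, 3}): 0.7, frozenset({2, 3}): 0.4}

assert M[G] == 1.0                                          # normality
assert all(abs(M[e] + M[G - e] - 1.0) < 1e-12 for e in M)   # duality
assert all(M[a | b] <= M[a] + M[b] + 1e-12                  # subadditivity
           for a, b in combinations(M, 2))
print("M satisfies normality, duality and subadditivity")
```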


Exercise 1.2: Suppose that λ(x) is a nonnegative function on ℜ satisfying

$$\sup_{x\ne y}\,(\lambda(x)+\lambda(y))=1. \tag{1.7}$$

Define a set function

$$\mathcal{M}\{\Lambda\}=\begin{cases}\displaystyle\sup_{x\in\Lambda}\lambda(x), & \text{if }\displaystyle\sup_{x\in\Lambda}\lambda(x)<0.5\\[3mm] \displaystyle 1-\sup_{x\in\Lambda^c}\lambda(x), & \text{if }\displaystyle\sup_{x\in\Lambda}\lambda(x)\ge 0.5\end{cases} \tag{1.8}$$

for each Borel set Λ. Show that M is an uncertain measure on ℜ.


Exercise 1.3: Suppose λ(x) is a nonnegative and integrable function on ℜ such that

$$\int_{\Re}\lambda(x)\,\mathrm{d}x\ge 1. \tag{1.9}$$

Define a set function

$$\mathcal{M}\{\Lambda\}=\begin{cases}\displaystyle\int_{\Lambda}\lambda(x)\,\mathrm{d}x, & \text{if }\displaystyle\int_{\Lambda}\lambda(x)\,\mathrm{d}x<0.5\\[3mm] \displaystyle 1-\int_{\Lambda^c}\lambda(x)\,\mathrm{d}x, & \text{if }\displaystyle\int_{\Lambda^c}\lambda(x)\,\mathrm{d}x<0.5\\[3mm] 0.5, & \text{otherwise}\end{cases} \tag{1.10}$$

for each Borel set Λ. Show that M is an uncertain measure on ℜ.
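Here is a small numeric illustration of Exercise 1.3 (my example: λ is the indicator of [0, 1], so the integral over ℜ equals 1). The rule assigns M{[0, 0.3]} = 0.3 by the first branch and M{[0, 0.3]ᶜ} = 0.7 by the second, so duality holds.

```python
import numpy as np

xs = np.linspace(-2.0, 3.0, 200001)
dx = xs[1] - xs[0]
lam = ((xs >= 0) & (xs <= 1)).astype(float)   # lambda(x): indicator of [0, 1]

def measure(indicator):
    """The set function (1.10), with integrals taken as Riemann sums."""
    inside = np.sum(lam * indicator) * dx             # integral over Lambda
    outside = np.sum(lam * (1.0 - indicator)) * dx    # integral over complement
    if inside < 0.5:
        return inside
    if outside < 0.5:
        return 1.0 - outside
    return 0.5

A = ((xs >= 0) & (xs <= 0.3)).astype(float)   # the Borel set [0, 0.3]
print(measure(A), measure(1.0 - A))           # ~0.3 and ~0.7, summing to 1
```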


Exercise 1.4: Suppose λ(x) is a nonnegative function and ρ(x) is a nonnegative and integrable function on ℜ such that

$$\sup_{x\in\Lambda}\lambda(x)+\int_{\Lambda}\rho(x)\,\mathrm{d}x\ge 0.5 \quad\text{and/or}\quad \sup_{x\in\Lambda^c}\lambda(x)+\int_{\Lambda^c}\rho(x)\,\mathrm{d}x\ge 0.5 \tag{1.11}$$

for any Borel set Λ. Show that the set function

$$\mathcal{M}\{\Lambda\}=\begin{cases}\displaystyle\sup_{x\in\Lambda}\lambda(x)+\int_{\Lambda}\rho(x)\,\mathrm{d}x, & \text{if }\displaystyle\sup_{x\in\Lambda}\lambda(x)+\int_{\Lambda}\rho(x)\,\mathrm{d}x<0.5\\[3mm] \displaystyle 1-\sup_{x\in\Lambda^c}\lambda(x)-\int_{\Lambda^c}\rho(x)\,\mathrm{d}x, & \text{if }\displaystyle\sup_{x\in\Lambda^c}\lambda(x)+\int_{\Lambda^c}\rho(x)\,\mathrm{d}x<0.5\\[3mm] 0.5, & \text{otherwise}\end{cases}$$

is an uncertain measure on ℜ.
Monotonicity Theorem

Theorem 1.1 (Monotonicity Theorem) Uncertain measure M is a monotone increasing set function. That is, for any events Λ1 ⊂ Λ2, we have

$$\mathcal{M}\{\Lambda_1\}\le\mathcal{M}\{\Lambda_2\}. \tag{1.12}$$

Proof: The normality axiom says M{Γ} = 1, and the duality axiom says M{Λ1ᶜ} = 1 − M{Λ1}. Since Λ1 ⊂ Λ2, we have Γ = Λ1ᶜ ∪ Λ2. By using the subadditivity axiom, we obtain

$$1=\mathcal{M}\{\Gamma\}\le\mathcal{M}\{\Lambda_1^c\}+\mathcal{M}\{\Lambda_2\}=1-\mathcal{M}\{\Lambda_1\}+\mathcal{M}\{\Lambda_2\}.$$

Thus M{Λ1} ≤ M{Λ2}.

Theorem 1.2 Suppose that M is an uncertain measure. Then the empty set has an uncertain measure of zero, i.e.,

$$\mathcal{M}\{\emptyset\}=0. \tag{1.13}$$

Proof: Since ∅ = Γᶜ and M{Γ} = 1, it follows from the duality axiom that M{∅} = 1 − M{Γ} = 1 − 1 = 0.

Theorem 1.3 Suppose that M is an uncertain measure. Then for any event Λ, we have

$$0\le\mathcal{M}\{\Lambda\}\le 1. \tag{1.14}$$

Proof: It follows from the monotonicity theorem that 0 ≤ M{Λ} ≤ 1 because ∅ ⊂ Λ ⊂ Γ and M{∅} = 0, M{Γ} = 1.
Null-Additivity Theorem

Null-additivity is a direct deduction from the subadditivity axiom. We first prove a more general theorem.

Theorem 1.4 Let Λ1, Λ2, ⋯ be a sequence of events with M{Λi} → 0 as i → ∞. Then for any event Λ, we have

$$\lim_{i\to\infty}\mathcal{M}\{\Lambda\cup\Lambda_i\}=\lim_{i\to\infty}\mathcal{M}\{\Lambda\setminus\Lambda_i\}=\mathcal{M}\{\Lambda\}. \tag{1.15}$$

Proof: It follows from the monotonicity theorem and subadditivity axiom that

$$\mathcal{M}\{\Lambda\}\le\mathcal{M}\{\Lambda\cup\Lambda_i\}\le\mathcal{M}\{\Lambda\}+\mathcal{M}\{\Lambda_i\}$$

for each i. Thus we get M{Λ ∪ Λi} → M{Λ} by using M{Λi} → 0. Since (Λ\Λi) ⊂ Λ ⊂ ((Λ\Λi) ∪ Λi), we have

$$\mathcal{M}\{\Lambda\setminus\Lambda_i\}\le\mathcal{M}\{\Lambda\}\le\mathcal{M}\{\Lambda\setminus\Lambda_i\}+\mathcal{M}\{\Lambda_i\}.$$

Hence M{Λ\Λi} → M{Λ} by using M{Λi} → 0.

Remark 1.6: It follows from the above theorem that the uncertain measure is null-additive, i.e., M{Λ1 ∪ Λ2} = M{Λ1} + M{Λ2} if either M{Λ1} = 0 or M{Λ2} = 0. In other words, the uncertain measure remains unchanged if the event is enlarged or reduced by an event with uncertain measure zero.
Asymptotic Theorem

Theorem 1.5 (Asymptotic Theorem) For any events Λ1, Λ2, ⋯, we have

$$\lim_{i\to\infty}\mathcal{M}\{\Lambda_i\}>0, \quad\text{if }\Lambda_i\uparrow\Gamma, \tag{1.16}$$

$$\lim_{i\to\infty}\mathcal{M}\{\Lambda_i\}<1, \quad\text{if }\Lambda_i\downarrow\emptyset. \tag{1.17}$$

Proof: Assume Λi ↑ Γ. Since Γ = ∪i Λi, it follows from the subadditivity axiom that

$$1=\mathcal{M}\{\Gamma\}\le\sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\}.$$

Since M{Λi} is increasing with respect to i, we have lim_{i→∞} M{Λi} > 0. If Λi ↓ ∅, then Λiᶜ ↑ Γ. It follows from the first inequality and the duality axiom that

$$\lim_{i\to\infty}\mathcal{M}\{\Lambda_i\}=1-\lim_{i\to\infty}\mathcal{M}\{\Lambda_i^c\}<1.$$

The theorem is proved.


Example 1.1: Assume Γ is the set of real numbers. Let α be a number with 0 < α ≤ 0.5. Define a set function as follows,

$$\mathcal{M}\{\Lambda\}=\begin{cases}0, & \text{if }\Lambda=\emptyset\\ \alpha, & \text{if }\Lambda\text{ is upper bounded}\\ 0.5, & \text{if both }\Lambda\text{ and }\Lambda^c\text{ are upper unbounded}\\ 1-\alpha, & \text{if }\Lambda^c\text{ is upper bounded}\\ 1, & \text{if }\Lambda=\Gamma. \end{cases} \tag{1.18}$$

It is easy to verify that M is an uncertain measure. Write Λi = (−∞, i] for i = 1, 2, ⋯ Then Λi ↑ Γ and lim_{i→∞} M{Λi} = α. Furthermore, we have Λiᶜ ↓ ∅ and lim_{i→∞} M{Λiᶜ} = 1 − α.
Extension Theorem

Let c1 and c2 be nonnegative numbers with c1 + c2 = 1. Then there exists an uncertain measure M on the universal set {γ1, γ2} such that

$$\mathcal{M}\{\Lambda\}=\begin{cases}c_1, & \text{if }\Lambda=\{\gamma_1\}\\ c_2, & \text{if }\Lambda=\{\gamma_2\}.\end{cases} \tag{1.19}$$

Furthermore, if M is an uncertain measure on the universal set {γ1, γ2, γ3} and c1, c2, c3 are nonnegative numbers satisfying the consistency condition

$$c_i+c_j\le 1\le c_1+c_2+c_3, \quad i\ne j, \tag{1.20}$$

then

$$\mathcal{M}\{\Lambda\}=\begin{cases}c_1, & \text{if }\Lambda=\{\gamma_1\}\\ c_2, & \text{if }\Lambda=\{\gamma_2\}\\ c_3, & \text{if }\Lambda=\{\gamma_3\}\end{cases} \tag{1.21}$$

can be uniquely extended to an uncertain measure on {γ1, γ2, γ3} as follows,

$$\mathcal{M}\{\gamma_1,\gamma_2\}=1-c_3,\quad \mathcal{M}\{\gamma_1,\gamma_3\}=1-c_2,\quad \mathcal{M}\{\gamma_2,\gamma_3\}=1-c_1. \tag{1.22}$$


However, when there are four or more elements in the universal set, the uncertain measure cannot be uniquely determined by the singletons. In this case, we have the following theorem if the maximum uncertainty principle is assumed.

Theorem 1.6 Let M be an uncertain measure on {γ1, γ2, ⋯, γn}. Then we have

$$\mathcal{M}\{\gamma_i\}+\mathcal{M}\{\gamma_j\}\le 1\le\mathcal{M}\{\gamma_1\}+\mathcal{M}\{\gamma_2\}+\cdots+\mathcal{M}\{\gamma_n\}, \quad i\ne j. \tag{1.23}$$

If c1, c2, ⋯, cn are nonnegative numbers satisfying the consistency condition

$$c_i+c_j\le 1\le c_1+c_2+\cdots+c_n, \quad i\ne j, \tag{1.24}$$

then

$$\mathcal{M}\{\Lambda\}=\begin{cases}c_1, & \text{if }\Lambda=\{\gamma_1\}\\ c_2, & \text{if }\Lambda=\{\gamma_2\}\\ \ \vdots\\ c_n, & \text{if }\Lambda=\{\gamma_n\}\end{cases} \tag{1.25}$$

can be extended to an uncertain measure on {γ1, γ2, ⋯, γn} as follows,

ci ,

1
ci ,

i 6

1
ci ,

X
M{} =
ci ,

1
ci ,

i 6

ci ,

0.5,

if

ci > 0.5,

if

_
_

ci > 0.5,

ci > 0.5,

ci > 0.5,

_
_
i 6

ci 0.5,

if

ci +

ci 1

ci < 1

i 6

ci +

ci 1

i 6

i 6

if

X
i 6

i 6

if

ci +

if

ci +

ci < 1

ci < 0.5

i 6

ci 0.5,

ci < 0.5

otherwise

provided that the maximum uncertainty principle is assumed. Especially, if c1, c2, ⋯, cn are nonnegative numbers such that c1 + c2 + ⋯ + cn = 1, then

$$\mathcal{M}\{\Lambda\}=\sum_{\gamma_i\in\Lambda}c_i. \tag{1.26}$$
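One way to read the extension above (my reading, not code from the book): monotonicity, duality and subadditivity confine M{Λ} to the interval [max(∨ ci, 1 − Σ′ ci), min(Σ ci, 1 − ∨′ ci)], where the unprimed operations run over γi ∈ Λ and the primed ones over γi ∉ Λ, and the maximum uncertainty principle selects the point of this interval closest to 0.5. The sketch below computes that value for any event of a finite universal set.

```python
def extend(c, universe, event):
    """Maximum-uncertainty extension of singleton belief degrees c[i]:
    clamp 0.5 into the feasible range implied by monotonicity,
    subadditivity and duality."""
    inside = [c[i] for i in event]
    outside = [c[i] for i in universe - set(event)]
    if not inside:
        return 0.0
    if not outside:
        return 1.0
    lower = max(max(inside), 1.0 - sum(outside))
    upper = min(sum(inside), 1.0 - max(outside))
    return min(max(0.5, lower), upper)   # value closest to 0.5 in range

# Belief degrees satisfying (1.24): ci + cj <= 1 <= c1 + ... + cn.
c = {1: 0.3, 2: 0.2, 3: 0.2, 4: 0.4}
U = set(c)
print(extend(c, U, {1, 2}))      # feasible range [0.4, 0.5] -> 0.5
print(extend(c, U, {1, 2, 3}))   # feasible range [0.6, 0.6] -> 0.6
print(extend(c, U, {4}))         # feasible range [0.4, 0.4] -> 0.4
```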

1.3 Uncertainty Space

Definition 1.2 (Liu [113]) Let Γ be a nonempty set, L a σ-algebra over Γ, and M an uncertain measure. Then the triplet (Γ, L, M) is called an uncertainty space.
For practical purposes, the study of uncertainty spaces is sometimes restricted to complete uncertainty spaces.
Definition 1.3 An uncertainty space (Γ, L, M) is called complete if for any Λ1, Λ2 ∈ L with M{Λ1} = M{Λ2} and any subset A with Λ1 ⊂ A ⊂ Λ2, one has A ∈ L. In this case, we also have

$$\mathcal{M}\{A\}=\mathcal{M}\{\Lambda_1\}=\mathcal{M}\{\Lambda_2\}. \tag{1.27}$$

Exercise 1.5: Let (Γ, L, M) be a complete uncertainty space, and let Λ be an event with M{Λ} = 0. Show that A is an event and M{A} = 0 whenever A ⊂ Λ.

Exercise 1.6: Let (Γ, L, M) be a complete uncertainty space, and let Λ be an event with M{Λ} = 1. Show that A is an event and M{A} = 1 whenever A ⊃ Λ.
Definition 1.4 (Gao [42]) An uncertainty space (Γ, L, M) is called continuous if for any events Λ1, Λ2, ⋯, we have

$$\mathcal{M}\left\{\lim_{i\to\infty}\Lambda_i\right\}=\lim_{i\to\infty}\mathcal{M}\{\Lambda_i\} \tag{1.28}$$

provided that lim_{i→∞} Λi exists.

1.4 Product Uncertain Measure

Product uncertain measure was defined by Liu [116] in 2009, thus producing the fourth axiom of uncertainty theory. Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, ⋯ Write

$$\Gamma=\Gamma_1\times\Gamma_2\times\cdots, \qquad \mathcal{L}=\mathcal{L}_1\times\mathcal{L}_2\times\cdots \tag{1.29}$$

Then the product uncertain measure M on the product σ-algebra L is defined by the following product axiom (Liu [116]).

Axiom 4. (Product Axiom) Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, ⋯ The product uncertain measure M is an uncertain measure satisfying

$$\mathcal{M}\left\{\prod_{k=1}^{\infty}\Lambda_k\right\}=\bigwedge_{k=1}^{\infty}\mathcal{M}_k\{\Lambda_k\} \tag{1.30}$$

where Λk are arbitrarily chosen events from Lk for k = 1, 2, ⋯, respectively.


Remark 1.7: Note that (1.30) defines a product uncertain measure only for rectangles. How do we extend the uncertain measure M from the class of rectangles to the product σ-algebra L? For each event Λ ∈ L, we have

$$\mathcal{M}\{\Lambda\}=\begin{cases}
\displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty}\mathcal{M}_k\{\Lambda_k\}, & \text{if }\displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty}\mathcal{M}_k\{\Lambda_k\}>0.5\\[4mm]
\displaystyle 1-\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty}\mathcal{M}_k\{\Lambda_k\}, & \text{if }\displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty}\mathcal{M}_k\{\Lambda_k\}>0.5\\[4mm]
0.5, & \text{otherwise.}
\end{cases} \tag{1.31}$$
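For two finite uncertainty spaces, (1.31) can be evaluated by brute force: enumerate every rectangle Λ1 × Λ2 contained in Λ (and then in Λᶜ) and take the supremum of min(M1{Λ1}, M2{Λ2}). The sketch below (illustrative only, mine) does this with two-point factor spaces whose measures are fixed by duality.

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def two_point_measure(c):
    """Uncertain measure on {1, 2} with M{1} = c; duality gives the rest."""
    return {frozenset(): 0.0, frozenset({1}): c,
            frozenset({2}): 1.0 - c, frozenset({1, 2}): 1.0}

def product_measure(M1, M2, G1, G2, event):
    """Formula (1.31) on a finite product: sup-min over rectangles."""
    def best_rectangle(target):
        best = 0.0
        for a in subsets(G1):
            for b in subsets(G2):
                rect = {(x, y) for x in a for y in b}
                if rect <= target:                      # rectangle inside target
                    best = max(best, min(M1[a], M2[b]))
        return best
    inside = best_rectangle(set(event))
    if inside > 0.5:
        return inside
    complement = {(x, y) for x in G1 for y in G2} - set(event)
    outside = best_rectangle(complement)
    if outside > 0.5:
        return 1.0 - outside
    return 0.5

G1 = G2 = frozenset({1, 2})
M1, M2 = two_point_measure(0.7), two_point_measure(0.6)
L_shape = {(1, 1), (1, 2), (2, 1)}               # a non-rectangular event
print(product_measure(M1, M2, G1, G2, L_shape))  # 0.7, via {1} x {1, 2}
```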

[Figure 1.1: Extension from Rectangles to Product σ-Algebra. The uncertain measure of Λ (the disk) is essentially the acreage of its inscribed rectangle Λ1 × Λ2 if it is greater than 0.5. Otherwise, we have to examine its complement Λᶜ. If the inscribed rectangle of Λᶜ is greater than 0.5, then M{Λᶜ} is just its inscribed rectangle and M{Λ} = 1 − M{Λᶜ}. If there does not exist an inscribed rectangle of Λ or Λᶜ greater than 0.5, then we set M{Λ} = 0.5.]
Remark 1.8: Note that the sum of the uncertain measures of the maximum rectangles in Λ and Λ^c is always less than or equal to 1, i.e.,

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk} ≤ 1.


This means that at most one of

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk}  and  sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk}

is greater than 0.5. Thus the expression (1.31) is reasonable.


Remark 1.9: If the sum of the uncertain measures of the maximum rectangles in and c is just 1, i.e.,
sup

min Mk {k } +

1 2 1k<

sup

min Mk {k } = 1,

1 2 c 1k<

then the product uncertain measure (1.31) is simplified as


M{} =

min Mk {k }.

sup

1 2 1k<

(1.32)
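To make (1.31) concrete, the following Python sketch computes the product uncertain measure of an event over two finite uncertainty spaces by brute-force search over inscribed rectangles. The two-point spaces and the measures assigned below are illustrative assumptions, not data from the text.

from itertools import combinations, product

def subsets(universe):
    # all nonempty subsets of a finite set, as frozensets
    items = list(universe)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def product_measure(event, G1, M1, G2, M2):
    """Product uncertain measure of `event` (a set of pairs) per (1.31)."""
    def max_rectangle(ev):
        # sup of M1{B1} ^ M2{B2} over rectangles B1 x B2 contained in ev
        best = 0.0
        for B1 in subsets(G1):
            for B2 in subsets(G2):
                if set(product(B1, B2)) <= ev:
                    best = max(best, min(M1[B1], M2[B2]))
        return best
    a = max_rectangle(event)
    if a > 0.5:
        return a
    b = max_rectangle(set(product(G1, G2)) - event)
    if b > 0.5:
        return 1.0 - b
    return 0.5

# two-point spaces with assumed measures (each satisfies normality and duality)
G1, G2 = {"g1", "g2"}, {"h1", "h2"}
M1 = {frozenset({"g1"}): 0.7, frozenset({"g2"}): 0.3, frozenset(G1): 1.0}
M2 = {frozenset({"h1"}): 0.6, frozenset({"h2"}): 0.4, frozenset(G2): 1.0}
print(product_measure({("g1", "h1")}, G1, M1, G2, M2))   # 0.7 ^ 0.6 = 0.6

For the rectangle {g1} × {h1} this reproduces (1.30) directly, as it should.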

Theorem 1.7 (Peng and Iwamura [173]) The product uncertain measure
defined by (1.31) is an uncertain measure.
Proof: In order to prove that the product uncertain measure (1.31) is indeed
an uncertain measure, we should verify that the product uncertain measure
satisfies the normality, duality and subadditivity axioms.
Step 1: The product uncertain measure is clearly normal, i.e., M{Γ} = 1.

Step 2: We prove the duality, i.e., M{Λ} + M{Λ^c} = 1. The argument breaks into three cases. Case 1: Assume

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then we immediately have

sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk} < 0.5.

It follows from (1.31) that

M{Λ} = sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},

M{Λ^c} = 1 − sup_{Λ1×Λ2×···⊂(Λ^c)^c} min_{1≤k<∞} Mk{Λk} = 1 − M{Λ}.

The duality is proved. Case 2: Assume

sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk} > 0.5.

This case may be proved by a similar process. Case 3: Assume

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5  and  sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk} ≤ 0.5.

It follows from (1.31) that M{Λ} = M{Λ^c} = 0.5, which proves the duality.
Step 3: Let us prove that M is an increasing set function. Suppose Λ and Δ are two events in L with Λ ⊂ Δ. The argument breaks into three cases. Case 1: Assume

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then

sup_{Δ1×Δ2×···⊂Δ} min_{1≤k<∞} Mk{Δk} ≥ sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

It follows from (1.31) that M{Λ} ≤ M{Δ}. Case 2: Assume

sup_{Δ1×Δ2×···⊂Δ^c} min_{1≤k<∞} Mk{Δk} > 0.5.

Then

sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk} ≥ sup_{Δ1×Δ2×···⊂Δ^c} min_{1≤k<∞} Mk{Δk} > 0.5.

Thus

M{Λ} = 1 − sup_{Λ1×Λ2×···⊂Λ^c} min_{1≤k<∞} Mk{Λk} ≤ 1 − sup_{Δ1×Δ2×···⊂Δ^c} min_{1≤k<∞} Mk{Δk} = M{Δ}.

Case 3: Assume

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5  and  sup_{Δ1×Δ2×···⊂Δ^c} min_{1≤k<∞} Mk{Δk} ≤ 0.5.

Then

M{Λ} ≤ 0.5 ≤ 1 − M{Δ^c} = M{Δ}.
Step 4: Finally, we prove the subadditivity of M. For simplicity, we only prove the case of two events Λ and Δ. The argument breaks into three cases. Case 1: Assume M{Λ} < 0.5 and M{Δ} < 0.5. For any given ε > 0, there are two rectangles

Λ1 × Λ2 × ··· ⊂ Λ^c,  Δ1 × Δ2 × ··· ⊂ Δ^c

such that

1 − min_{1≤k<∞} Mk{Λk} ≤ M{Λ} + ε/2,
1 − min_{1≤k<∞} Mk{Δk} ≤ M{Δ} + ε/2.

Note that

(Λ1 ∩ Δ1) × (Λ2 ∩ Δ2) × ··· ⊂ (Λ ∪ Δ)^c.

It follows from the duality and subadditivity axioms that

Mk{Λk ∩ Δk} = 1 − Mk{(Λk ∩ Δk)^c} = 1 − Mk{Λk^c ∪ Δk^c}
            ≥ 1 − (Mk{Λk^c} + Mk{Δk^c})
            = 1 − (1 − Mk{Λk}) − (1 − Mk{Δk})
            = Mk{Λk} + Mk{Δk} − 1

for any k. Thus

M{Λ ∪ Δ} ≤ 1 − min_{1≤k<∞} Mk{Λk ∩ Δk}
         ≤ (1 − min_{1≤k<∞} Mk{Λk}) + (1 − min_{1≤k<∞} Mk{Δk})
         ≤ M{Λ} + M{Δ} + ε.

Letting ε → 0, we obtain

M{Λ ∪ Δ} ≤ M{Λ} + M{Δ}.

Case 2: Assume M{Λ} ≥ 0.5 and M{Δ} < 0.5. When M{Λ ∪ Δ} = 0.5, the subadditivity is obvious. Now we consider the case M{Λ ∪ Δ} > 0.5, i.e., M{Λ^c ∩ Δ^c} < 0.5. By using Λ^c = (Λ^c ∩ Δ^c) ∪ (Λ^c ∩ Δ) and Case 1, we get

M{Λ^c} ≤ M{Λ^c ∩ Δ^c} + M{Δ}.

Thus

M{Λ ∪ Δ} = 1 − M{Λ^c ∩ Δ^c} ≤ 1 − M{Λ^c} + M{Δ} = M{Λ} + M{Δ}.

Case 3: If both M{Λ} ≥ 0.5 and M{Δ} ≥ 0.5, then the subadditivity is obvious because M{Λ} + M{Δ} ≥ 1 ≥ M{Λ ∪ Δ}. The theorem is proved.
Definition 1.5 Assume (Γk, Lk, Mk) are uncertainty spaces for k = 1, 2, ... Let Γ = Γ1 × Γ2 × ···, L = L1 × L2 × ··· and M = M1 ∧ M2 ∧ ··· Then the triplet (Γ, L, M) is called the product uncertainty space.

1.5 Independence

Definition 1.6 (Liu [120]) The events Λ1, Λ2, ..., Λn are said to be independent if

M{Λ1* ∩ Λ2* ∩ ··· ∩ Λn*} = M{Λ1*} ∧ M{Λ2*} ∧ ··· ∧ M{Λn*}    (1.33)

where Λi* are arbitrarily chosen from {Λi, Λi^c}, i = 1, 2, ..., n, respectively.

Remark 1.10: Note that (1.33) represents 2^n equations. For example, when n = 2, the four equations are

M{Λ1 ∩ Λ2} = M{Λ1} ∧ M{Λ2},
M{Λ1^c ∩ Λ2} = M{Λ1^c} ∧ M{Λ2},
M{Λ1 ∩ Λ2^c} = M{Λ1} ∧ M{Λ2^c},
M{Λ1^c ∩ Λ2^c} = M{Λ1^c} ∧ M{Λ2^c}.

Example 1.2: The impossible event ∅ is independent of any event Λ because ∅^c = Γ and

M{∅ ∩ Λ} = M{∅} = M{∅} ∧ M{Λ},
M{Γ ∩ Λ} = M{Λ} = M{Γ} ∧ M{Λ},
M{∅ ∩ Λ^c} = M{∅} = M{∅} ∧ M{Λ^c},
M{Γ ∩ Λ^c} = M{Λ^c} = M{Γ} ∧ M{Λ^c}.
Example 1.3: The sure event Γ is independent of any event Λ because Γ^c = ∅ and

M{Γ ∩ Λ} = M{Λ} = M{Γ} ∧ M{Λ},
M{∅ ∩ Λ} = M{∅} = M{∅} ∧ M{Λ},
M{Γ ∩ Λ^c} = M{Λ^c} = M{Γ} ∧ M{Λ^c},
M{∅ ∩ Λ^c} = M{∅} = M{∅} ∧ M{Λ^c}.
Example 1.4: Generally speaking, an event Λ is not independent of itself because

M{Λ ∩ Λ^c} = M{∅} = 0 ≠ M{Λ} ∧ M{Λ^c}

whenever M{Λ} is neither 0 nor 1.
Theorem 1.8 (Liu [120]) The events Λ1, Λ2, ..., Λn are independent if and only if

M{Λ1* ∪ Λ2* ∪ ··· ∪ Λn*} = M{Λ1*} ∨ M{Λ2*} ∨ ··· ∨ M{Λn*}    (1.34)

where Λi* are arbitrarily chosen from {Λi, Λi^c}, i = 1, 2, ..., n, respectively.

Proof: Assume Λ1, Λ2, ..., Λn are independent events. It follows from the duality of uncertain measure that

M{Λ1* ∪ ··· ∪ Λn*} = 1 − M{Λ1*^c ∩ ··· ∩ Λn*^c} = 1 − min_{1≤i≤n} M{Λi*^c} = max_{1≤i≤n} M{Λi*}.

The equation (1.34) is proved. Conversely, if the equation (1.34) holds, then

M{Λ1* ∩ ··· ∩ Λn*} = 1 − M{Λ1*^c ∪ ··· ∪ Λn*^c} = 1 − max_{1≤i≤n} M{Λi*^c} = min_{1≤i≤n} M{Λi*}.

Figure 1.2: (Λ1 × Γ2) ∩ (Γ1 × Λ2) = Λ1 × Λ2
The equation (1.33) is true. The theorem is proved.
Theorem 1.9 (Liu [131]) Let (Γi, Li, Mi) be uncertainty spaces and Λi ∈ Li for i = 1, 2, ..., n. Then the events

Γ1 × ··· × Γ_{i−1} × Λi × Γ_{i+1} × ··· × Γn,  i = 1, 2, ..., n    (1.35)

are always independent in the product uncertainty space. That is, the events

Λ1, Λ2, ..., Λn    (1.36)

are always independent if they are from different uncertainty spaces.

Proof: For simplicity, we only prove the case n = 2. It follows from the product axiom that the product uncertain measure of the intersection is

M{(Λ1 × Γ2) ∩ (Γ1 × Λ2)} = M{Λ1 × Λ2} = M1{Λ1} ∧ M2{Λ2}.

By using M{Λ1 × Γ2} = M1{Λ1} and M{Γ1 × Λ2} = M2{Λ2}, we obtain

M{(Λ1 × Γ2) ∩ (Γ1 × Λ2)} = M{Λ1 × Γ2} ∧ M{Γ1 × Λ2}.

Similarly, we may prove that

M{(Λ1 × Γ2)^c ∩ (Γ1 × Λ2)} = M{(Λ1 × Γ2)^c} ∧ M{Γ1 × Λ2},
M{(Λ1 × Γ2) ∩ (Γ1 × Λ2)^c} = M{Λ1 × Γ2} ∧ M{(Γ1 × Λ2)^c},
M{(Λ1 × Γ2)^c ∩ (Γ1 × Λ2)^c} = M{(Λ1 × Γ2)^c} ∧ M{(Γ1 × Λ2)^c}.

Thus Λ1 × Γ2 and Γ1 × Λ2 are independent events. Furthermore, since Λ1 and Λ2 are understood as Λ1 × Γ2 and Γ1 × Λ2 in the product uncertainty space, respectively, the two events Λ1 and Λ2 are also independent.

1.6 Polyrectangular Theorem

Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces, Λ1 ∈ L1 and Λ2 ∈ L2. It follows from the product axiom that the rectangle Λ1 × Λ2 has an uncertain measure

M{Λ1 × Λ2} = M1{Λ1} ∧ M2{Λ2}.    (1.37)

This section will extend this result to a more general case.


Definition 1.7 (Liu [131]) Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces. A set on Γ1 × Γ2 is called a polyrectangle if it has the form

Λ = ∪_{i=1}^m (Λ1i × Λ2i)    (1.38)

where Λ1i ∈ L1 and Λ2i ∈ L2 for i = 1, 2, ..., m, and

Λ11 ⊂ Λ12 ⊂ ··· ⊂ Λ1m,    (1.39)

Λ21 ⊃ Λ22 ⊃ ··· ⊃ Λ2m.    (1.40)
A rectangle Λ1 × Λ2 is clearly a polyrectangle. In addition, a cross-like set is also a polyrectangle. See Figure 1.3.
Figure 1.3: Three Polyrectangles


Theorem 1.10 (Liu [131], Polyrectangular Theorem) Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces. Then the polyrectangle

Λ = ∪_{i=1}^m (Λ1i × Λ2i)    (1.41)

on the product uncertainty space (Γ1, L1, M1) × (Γ2, L2, M2) has an uncertain measure

M{Λ} = max_{1≤i≤m} M1{Λ1i} ∧ M2{Λ2i}.    (1.42)

Proof: It is clear that the maximum rectangle in the polyrectangle Λ is one of Λ1i × Λ2i, i = 1, 2, ..., m. Denote the maximum rectangle by Λ1k × Λ2k. Case I: If

M{Λ1k × Λ2k} = M1{Λ1k},

then the maximum rectangle in Λ^c is Λ1k^c × Λ2,k+1^c, and

M{Λ1k^c × Λ2,k+1^c} = M1{Λ1k^c} = 1 − M1{Λ1k}.

Thus

M{Λ1k × Λ2k} + M{Λ1k^c × Λ2,k+1^c} = 1.

Case II: If

M{Λ1k × Λ2k} = M2{Λ2k},

then the maximum rectangle in Λ^c is Λ1,k−1^c × Λ2k^c, and

M{Λ1,k−1^c × Λ2k^c} = M2{Λ2k^c} = 1 − M2{Λ2k}.

Thus

M{Λ1k × Λ2k} + M{Λ1,k−1^c × Λ2k^c} = 1.

In either case, the sum of the uncertain measures of the maximum rectangles in Λ and Λ^c is always 1. It follows from the product axiom that (1.42) holds.
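As a small numerical sketch of (1.42), suppose a polyrectangle is built from three rectangles whose component measures are the assumed numbers below; the polyrectangle's measure is the largest of the rectangle measures M1{Λ1i} ∧ M2{Λ2i}.

# assumed measures of the component rectangles of a polyrectangle:
# M1 along the increasing chain Λ11 ⊂ Λ12 ⊂ Λ13, M2 along the decreasing chain
m1 = [0.2, 0.5, 0.7]
m2 = [0.6, 0.4, 0.1]
measure = max(min(a, b) for a, b in zip(m1, m2))  # formula (1.42)
print(measure)  # 0.4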
Remark 1.11: Note that the polyrectangular theorem is also applicable to polyrectangles that are unions of infinitely many rectangles. In this case, the polyrectangles may take the shapes shown in Figure 1.4.
Figure 1.4: Three Deformed Polyrectangles

1.7 Conditional Uncertain Measure

We consider the uncertain measure of an event A after it has been learned


that some other event B has occurred. This new uncertain measure of A is
called the conditional uncertain measure of A given B.


In order to define a conditional uncertain measure M{A|B}, at first we have to enlarge M{A ∩ B} because M{A ∩ B} < 1 for all events whenever M{B} < 1. It seems that we have no alternative but to divide M{A ∩ B} by M{B}. Unfortunately, M{A ∩ B}/M{B} is not always an uncertain measure. However, the value M{A|B} should not be greater than M{A ∩ B}/M{B} (otherwise the normality will be lost), i.e.,

M{A|B} ≤ M{A ∩ B}/M{B}.    (1.43)

On the other hand, in order to preserve the duality, we should have

M{A|B} = 1 − M{A^c|B} ≥ 1 − M{A^c ∩ B}/M{B}.    (1.44)

Furthermore, since (A ∩ B) ∪ (A^c ∩ B) = B, we have M{B} ≤ M{A ∩ B} + M{A^c ∩ B} by using the subadditivity axiom. Thus

0 ≤ 1 − M{A^c ∩ B}/M{B} ≤ M{A ∩ B}/M{B} ≤ 1.    (1.45)

Hence any number between 1 − M{A^c ∩ B}/M{B} and M{A ∩ B}/M{B} is a reasonable value for the conditional uncertain measure. Based on the maximum uncertainty principle, we have the following conditional uncertain measure.
Definition 1.8 (Liu [113]) Let (Γ, L, M) be an uncertainty space, and A, B ∈ L. Then the conditional uncertain measure of A given B is defined by

M{A|B} =
  M{A ∩ B}/M{B},        if M{A ∩ B}/M{B} < 0.5;
  1 − M{A^c ∩ B}/M{B},  if M{A^c ∩ B}/M{B} < 0.5;    (1.46)
  0.5,                   otherwise

provided that M{B} > 0.
Remark 1.12: It follows immediately from the definition of conditional uncertain measure that

1 − M{A^c ∩ B}/M{B} ≤ M{A|B} ≤ M{A ∩ B}/M{B}.    (1.47)

Furthermore, the conditional uncertain measure obeys the maximum uncertainty principle, and takes values as close to 0.5 as possible.

Remark 1.13: The conditional uncertain measure M{A|B} yields the posterior uncertain measure of A after the occurrence of event B.
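The case analysis of (1.46) translates directly into code. A minimal Python sketch, assuming the three measures M{A ∩ B}, M{A^c ∩ B} and M{B} are already known (the numeric inputs below are illustrative assumptions):

def conditional_measure(m_ab, m_acb, m_b):
    """M{A|B} per (1.46); m_ab = M{A ∩ B}, m_acb = M{A^c ∩ B}, m_b = M{B} > 0."""
    if m_ab / m_b < 0.5:
        return m_ab / m_b
    if m_acb / m_b < 0.5:
        return 1.0 - m_acb / m_b
    return 0.5

print(conditional_measure(0.2, 0.7, 0.8))  # 0.25  (first case)
print(conditional_measure(0.6, 0.3, 0.8))  # 0.625 (second case, 1 - 0.375)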


Theorem 1.11 Let (Γ, L, M) be an uncertainty space, and B an event with M{B} > 0. Then M{·|B} defined by (1.46) is an uncertain measure, and (Γ, L, M{·|B}) is an uncertainty space.

Proof: It is sufficient to prove that M{·|B} satisfies the normality, duality and subadditivity axioms. At first, it satisfies the normality axiom, i.e.,

M{Γ|B} = 1 − M{Γ^c ∩ B}/M{B} = 1 − M{∅}/M{B} = 1.

For any event A, if

M{A ∩ B}/M{B} ≥ 0.5  and  M{A^c ∩ B}/M{B} ≥ 0.5,

then we have M{A|B} + M{A^c|B} = 0.5 + 0.5 = 1 immediately. Otherwise, without loss of generality, suppose

M{A ∩ B}/M{B} < 0.5 < M{A^c ∩ B}/M{B},

then we have

M{A|B} + M{A^c|B} = M{A ∩ B}/M{B} + (1 − M{A ∩ B}/M{B}) = 1.

That is, M{·|B} satisfies the duality axiom. Finally, for any countable sequence {Ai} of events, if M{Ai|B} < 0.5 for all i, it follows from (1.47) and the subadditivity axiom that

M{∪_{i=1}^∞ Ai | B} ≤ M{∪_{i=1}^∞ (Ai ∩ B)}/M{B} ≤ Σ_{i=1}^∞ M{Ai ∩ B}/M{B} ≤ Σ_{i=1}^∞ M{Ai|B}.

Suppose there is one term greater than 0.5, say

M{A1|B} ≥ 0.5,  M{Ai|B} < 0.5,  i = 2, 3, ...

If M{∪_i Ai|B} = 0.5, then we immediately have

M{∪_{i=1}^∞ Ai | B} ≤ Σ_{i=1}^∞ M{Ai|B}.

If M{∪_i Ai|B} > 0.5, we may prove the above inequality by the following facts:

A1^c ∩ B ⊂ (∪_{i=2}^∞ (Ai ∩ B)) ∪ (∩_{i=1}^∞ Ai^c ∩ B),

M{A1^c ∩ B} ≤ Σ_{i=2}^∞ M{Ai ∩ B} + M{∩_{i=1}^∞ Ai^c ∩ B},

M{∪_{i=1}^∞ Ai | B} = 1 − M{∩_{i=1}^∞ Ai^c ∩ B}/M{B},

Σ_{i=1}^∞ M{Ai|B} ≥ 1 − M{A1^c ∩ B}/M{B} + Σ_{i=2}^∞ M{Ai ∩ B}/M{B}.

If there are at least two terms greater than 0.5, then the subadditivity is clearly true. Thus M{·|B} satisfies the subadditivity axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.

1.8 Bibliographic Notes

When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will occur. Perhaps some people think that the belief degree is subjective probability or a fuzzy concept. However, Liu [122] argued that this is usually inappropriate because both probability theory and fuzzy set theory may lead to counterintuitive results in this case.

In order to deal rationally with belief degrees, uncertainty theory was founded by Liu [113] in 2007 and perfected by Liu [116] in 2009 with the fundamental concept of uncertain measure. Since then, the tool of uncertain measure has been well developed and has become a rigorous cornerstone of uncertainty theory.

Independence of uncertain events was presented by Liu [120] in 2010. In addition, Liu [131] proved a polyrectangular theorem for calculating the uncertain measure of polyrectangles in the product uncertainty space.

In many applications of uncertainty theory, we have to deal with uncertain events which are not necessarily independent. In order to study the dependence of uncertain events, Liu [113] defined a conditional uncertain measure of an event after it has been learned that some other event has occurred.

Chapter 2

Uncertain Variable
Uncertain variable is a fundamental concept in uncertainty theory. It is used
to represent quantities with uncertainty. The emphasis in this chapter is
mainly on uncertain variable, uncertainty distribution, independence, operational law, expected value, variance, moment, entropy, distance, convergence,
and conditional uncertainty distribution.

2.1 Uncertain Variable

Roughly speaking, an uncertain variable is a real-valued function on an uncertainty space. A formal definition is given as follows.

Definition 2.1 (Liu [113]) An uncertain variable is a measurable function ξ from an uncertainty space (Γ, L, M) to the set of real numbers, i.e., {ξ ∈ B} is an event for any Borel set B.

Figure 2.1: An Uncertain Variable


Example 2.1: Take (Γ, L, M) to be {γ1, γ2} with M{γ1} = M{γ2} = 0.5. Then the function

ξ(γ) =
  0, if γ = γ1
  1, if γ = γ2

is an uncertain variable.

Example 2.2: A crisp number b may be regarded as a special uncertain variable. In fact, it is the constant function ξ(γ) ≡ b on the uncertainty space (Γ, L, M).
Definition 2.2 An uncertain variable ξ on the uncertainty space (Γ, L, M) is said to be (a) nonnegative if M{ξ < 0} = 0; and (b) positive if M{ξ ≤ 0} = 0.

Definition 2.3 Let ξ and η be uncertain variables defined on the uncertainty space (Γ, L, M). We say ξ = η if ξ(γ) = η(γ) for almost all γ ∈ Γ.

Definition 2.4 Let ξ1, ξ2, ..., ξn be uncertain variables, and f a real-valued measurable function. Then ξ = f(ξ1, ξ2, ..., ξn) is an uncertain variable defined by

ξ(γ) = f(ξ1(γ), ξ2(γ), ..., ξn(γ)),  ∀γ ∈ Γ.    (2.1)
Example 2.3: Let ξ1 and ξ2 be two uncertain variables. Then the sum ξ = ξ1 + ξ2 is an uncertain variable defined by

ξ(γ) = ξ1(γ) + ξ2(γ),  ∀γ ∈ Γ.

The product ξ = ξ1 ξ2 is also an uncertain variable defined by

ξ(γ) = ξ1(γ) · ξ2(γ),  ∀γ ∈ Γ.

The reader may wonder whether ξ(γ) defined by (2.1) is an uncertain variable. The following theorem answers this question.
Theorem 2.1 Let ξ1, ξ2, ..., ξn be uncertain variables, and f a real-valued measurable function. Then f(ξ1, ξ2, ..., ξn) is an uncertain variable.

Proof: Since ξ1, ξ2, ..., ξn are uncertain variables, they are measurable functions from an uncertainty space (Γ, L, M) to the set of real numbers. Thus f(ξ1, ξ2, ..., ξn) is also a measurable function from the uncertainty space (Γ, L, M) to the set of real numbers. Hence f(ξ1, ξ2, ..., ξn) is an uncertain variable.

2.2 Uncertainty Distribution

This section introduces the concept of uncertainty distribution in order to describe uncertain variables. Note that an uncertainty distribution is a carrier of incomplete information about an uncertain variable. However, in many cases it is sufficient to know the uncertainty distribution rather than the uncertain variable itself.

Definition 2.5 (Liu [113]) The uncertainty distribution Φ of an uncertain variable ξ is defined by

Φ(x) = M{ξ ≤ x}    (2.2)

for any real number x.

Figure 2.2: An Uncertainty Distribution


Exercise 2.1: Show that the uncertain variable ξ(γ) ≡ b on the uncertainty space (Γ, L, M) (i.e., a crisp number b) has an uncertainty distribution

Φ(x) =
  0, if x < b
  1, if x ≥ b.

Exercise 2.2: Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with M{γ1} = 0.7 and M{γ2} = 0.3. Show that the uncertain variable

ξ(γ) =
  0, if γ = γ1
  1, if γ = γ2

has an uncertainty distribution

Φ(x) =
  0,   if x < 0
  0.7, if 0 ≤ x < 1
  1,   if 1 ≤ x.

Exercise 2.3: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3} with

M{γ1} = 0.6,  M{γ2} = 0.3,  M{γ3} = 0.2.

Show that the uncertain variable

ξ(γ) =
  1, if γ = γ1
  2, if γ = γ2
  3, if γ = γ3

has an uncertainty distribution

Φ(x) =
  0,   if x < 1
  0.6, if 1 ≤ x < 2
  0.8, if 2 ≤ x < 3
  1,   if 3 ≤ x.

Definition 2.6 Uncertain variables are said to be identically distributed if they have the same uncertainty distribution.

It is clear that uncertain variables ξ and η are identically distributed if ξ = η. However, identical distribution does not imply ξ = η. For example, let (Γ, L, M) be {γ1, γ2} with M{γ1} = M{γ2} = 0.5. Define

ξ(γ) =
  1,  if γ = γ1
  −1, if γ = γ2,

η(γ) =
  −1, if γ = γ1
  1,  if γ = γ2.

Then ξ and η have the same uncertainty distribution,

Φ(x) =
  0,   if x < −1
  0.5, if −1 ≤ x < 1
  1,   if x ≥ 1.

Thus the two uncertain variables ξ and η are identically distributed but ξ ≠ η.
Sufficient and Necessary Condition

Theorem 2.2 (Peng-Iwamura Theorem [172]) A function Φ(x) : ℜ → [0, 1] is an uncertainty distribution if and only if it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.

Proof: It is obvious that an uncertainty distribution is a monotone increasing function. In addition, both Φ(x) ≢ 0 and Φ(x) ≢ 1 follow from the asymptotic theorem immediately. Conversely, suppose that Φ is a monotone increasing function with Φ(x) ≢ 0 and Φ(x) ≢ 1. We will prove that there is an uncertain variable whose uncertainty distribution is just Φ. Let C be a

collection of all intervals of the form (−∞, a], (b, +∞), ∅ and ℜ. We define a set function on ℜ as follows:

M{(−∞, a]} = Φ(a),  M{(b, +∞)} = 1 − Φ(b),
M{∅} = 0,  M{ℜ} = 1.

For an arbitrary Borel set B, there exists a sequence {Ai} in C such that

B ⊂ ∪_{i=1}^∞ Ai.

Note that such a sequence is not unique. Thus the set function M{B} is defined by

M{B} =
  inf_{B ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai},        if inf_{B ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai} < 0.5;
  1 − inf_{B^c ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai},  if inf_{B^c ⊂ ∪Ai} Σ_{i=1}^∞ M{Ai} < 0.5;
  0.5,                                   otherwise.

We may prove that the set function M is indeed an uncertain measure on ℜ, and the uncertain variable defined by the identity function ξ(γ) = γ from the uncertainty space (ℜ, L, M) to ℜ has the uncertainty distribution Φ.
Example 2.4: Let c be a number with 0 < c < 1. Then Φ(x) ≡ c is an uncertainty distribution. When c ≤ 0.5, we define a set function over ℜ as follows:

M{Λ} =
  0,     if Λ = ∅
  c,     if Λ is upper bounded
  0.5,   if both Λ and Λ^c are upper unbounded
  1 − c, if Λ^c is upper bounded
  1,     if Λ = ℜ.

Then (ℜ, L, M) is an uncertainty space. It is easy to verify that the identity function ξ(γ) = γ is an uncertain variable whose uncertainty distribution is just Φ(x) ≡ c. When c > 0.5, we define

M{Λ} =
  0,     if Λ = ∅
  1 − c, if Λ is upper bounded
  0.5,   if both Λ and Λ^c are upper unbounded
  c,     if Λ^c is upper bounded
  1,     if Λ = ℜ.


Then the function ξ(γ) = γ is an uncertain variable whose uncertainty distribution is just Φ(x) ≡ c.

What is a completely unknown number?

A completely unknown number may be regarded as an uncertain variable. A possible uncertainty distribution is

Φ(x) ≡ 0.5.    (2.3)

What is a large number?

A large number may be regarded as an uncertain variable. A possible uncertainty distribution is

Φ(x) = (1 + exp(1000 − x))^{−1},  x ∈ ℜ.    (2.4)
Figure 2.3: Uncertainty Distribution of a Large Number


What is a small number?

A small number may be regarded as an uncertain variable. A possible uncertainty distribution is

Φ(x) =
  0,                      if x ≤ 0
  (1 + exp(10/x))^{−1},   if x > 0.    (2.5)
How old is John?

Someone thinks John is neither younger than 24 nor older than 28, and presents an uncertainty distribution of John's age as follows:

Φ(x) =
  0,            if x ≤ 24
  (x − 24)/4,   if 24 ≤ x ≤ 28    (2.6)
  1,            if x ≥ 28.


Figure 2.4: Uncertainty Distribution of a Small Number


How tall is James?

Someone thinks James' height is between 180 and 185 centimeters, and presents the following uncertainty distribution:

Φ(x) =
  0,             if x ≤ 180
  (x − 180)/5,   if 180 ≤ x ≤ 185    (2.7)
  1,             if x ≥ 185.
Some Special Uncertainty Distributions

Definition 2.7 An uncertain variable ξ is called linear if it has a linear uncertainty distribution

Φ(x) =
  0,                if x ≤ a
  (x − a)/(b − a),  if a ≤ x ≤ b    (2.8)
  1,                if x ≥ b

denoted by L(a, b) where a and b are real numbers with a < b.


Example 2.5: Johns age (2.6) is a linear uncertain variable L(24, 28), and
James height (2.7) is another linear uncertain variable L(180, 185).
Definition 2.8 An uncertain variable ξ is called zigzag if it has a zigzag uncertainty distribution

Φ(x) =
  0,                       if x ≤ a
  (x − a)/2(b − a),        if a ≤ x ≤ b
  (x + c − 2b)/2(c − b),   if b ≤ x ≤ c    (2.9)
  1,                       if x ≥ c

denoted by Z(a, b, c) where a, b, c are real numbers with a < b < c.


Figure 2.5: Linear Uncertainty Distribution


Figure 2.6: Zigzag Uncertainty Distribution


Definition 2.9 An uncertain variable ξ is called normal if it has a normal uncertainty distribution

Φ(x) = (1 + exp(π(e − x)/(√3 σ)))^{−1},  x ∈ ℜ    (2.10)

denoted by N(e, σ) where e and σ are real numbers with σ > 0.

Definition 2.10 An uncertain variable ξ is called lognormal if ln ξ is a normal uncertain variable N(e, σ). In other words, a lognormal uncertain variable has an uncertainty distribution

Φ(x) = (1 + exp(π(e − ln x)/(√3 σ)))^{−1},  x ≥ 0    (2.11)

denoted by LOGN(e, σ), where e and σ are real numbers with σ > 0.
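The four distributions (2.8)-(2.11) are straightforward to evaluate numerically. A minimal Python sketch (the function names are my own, not from the text):

import math

def linear_cdf(x, a, b):                       # (2.8)
    return min(max((x - a) / (b - a), 0.0), 1.0)

def zigzag_cdf(x, a, b, c):                    # (2.9)
    if x <= a: return 0.0
    if x <= b: return (x - a) / (2 * (b - a))
    if x <= c: return (x + c - 2 * b) / (2 * (c - b))
    return 1.0

def normal_cdf(x, e, sigma):                   # (2.10)
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3.0) * sigma)))

def lognormal_cdf(x, e, sigma):                # (2.11): normal cdf of ln x
    return 0.0 if x <= 0 else normal_cdf(math.log(x), e, sigma)

print(linear_cdf(26, 24, 28))   # 0.5, John's age (2.6) evaluated at 26
print(normal_cdf(0, 0, 1))      # 0.5, the inflection point of N(0, 1)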


Figure 2.7: Normal Uncertainty Distribution


Figure 2.8: Lognormal Uncertainty Distribution


Definition 2.11 An uncertain variable ξ is called empirical if it has an empirical uncertainty distribution

Φ(x) =
  0,                                               if x < x1
  αi + (α_{i+1} − αi)(x − xi)/(x_{i+1} − xi),      if xi ≤ x ≤ x_{i+1}, 1 ≤ i < n    (2.12)
  1,                                               if x > xn

where x1 < x2 < ··· < xn and 0 ≤ α1 ≤ α2 ≤ ··· ≤ αn ≤ 1.
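The interpolation in (2.12) is easy to code. A short sketch; the data points below are assumed for illustration only:

def empirical_cdf(x, xs, alphas):
    """Empirical uncertainty distribution (2.12); xs strictly increasing,
    alphas nondecreasing in [0, 1]."""
    if x < xs[0]:
        return 0.0
    if x > xs[-1]:
        return 1.0
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return alphas[i] + (alphas[i + 1] - alphas[i]) * t

# assumed expert data (x_i, alpha_i)
print(empirical_cdf(2.5, [1, 2, 3, 4], [0.1, 0.4, 0.8, 1.0]))  # 0.6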

Definition 2.12 An uncertain variable ξ is called discrete if it takes values in {x1, x2, ..., xn} and

ξ =
  x1 with uncertain measure c1
  x2 with uncertain measure c2
  ···
  xn with uncertain measure cn    (2.13)


Figure 2.9: Empirical Uncertainty Distribution


where c1, c2, ..., cn are nonnegative numbers satisfying the consistency condition

ci + cj ≤ 1 ≤ c1 + c2 + ··· + cn,  i ≠ j.    (2.14)

When the maximum uncertainty principle is assumed, the discrete uncertain variable has a discrete uncertainty distribution

Φ(x) =
  max_{xi≤x} ci,     if max_{xi≤x} ci > 0.5 and max_{xi≤x} ci + Σ_{xi>x} ci ≥ 1
  1 − Σ_{xi>x} ci,   if max_{xi≤x} ci > 0.5 and max_{xi≤x} ci + Σ_{xi>x} ci < 1
  1 − max_{xi>x} ci, if max_{xi>x} ci > 0.5 and max_{xi>x} ci + Σ_{xi≤x} ci ≥ 1
  Σ_{xi≤x} ci,       if max_{xi>x} ci > 0.5 and max_{xi>x} ci + Σ_{xi≤x} ci < 1
  1 − Σ_{xi>x} ci,   if max_{1≤i≤n} ci ≤ 0.5 and Σ_{xi>x} ci < 0.5
  Σ_{xi≤x} ci,       if max_{1≤i≤n} ci ≤ 0.5 and Σ_{xi≤x} ci < 0.5
  0.5,                otherwise.

Especially, if c1, c2, ..., cn are nonnegative numbers such that c1 + c2 + ··· + cn = 1, then

Φ(x) = Σ_{xi≤x} ci.    (2.15)


Figure 2.10: Discrete Uncertainty Distribution


Measure Inversion Theorem

Theorem 2.3 (Liu [120], Measure Inversion Theorem) Let ξ be an uncertain variable with continuous uncertainty distribution Φ. Then for any real number x, we have

M{ξ ≤ x} = Φ(x),  M{ξ ≥ x} = 1 − Φ(x).    (2.16)

Proof: The equation M{ξ ≤ x} = Φ(x) follows from the definition of uncertainty distribution immediately. By using the duality of uncertain measure and the continuity of the uncertainty distribution, we get

M{ξ ≥ x} = 1 − M{ξ < x} = 1 − Φ(x).

The theorem is verified.
Theorem 2.4 Let ξ be an uncertain variable with continuous uncertainty distribution Φ. Then for any interval [a, b], we have

Φ(b) − Φ(a) ≤ M{a ≤ ξ ≤ b} ≤ Φ(b) ∧ (1 − Φ(a)).    (2.17)

Proof: It follows from the subadditivity of uncertain measure and the measure inversion theorem that

M{a ≤ ξ ≤ b} + M{ξ ≤ a} ≥ M{ξ ≤ b}.

That is,

M{a ≤ ξ ≤ b} + Φ(a) ≥ Φ(b).

Thus the inequality on the left-hand side is verified. It follows from the monotonicity of uncertain measure and the measure inversion theorem that

M{a ≤ ξ ≤ b} ≤ M{ξ ∈ (−∞, b]} = Φ(b).

On the other hand,

M{a ≤ ξ ≤ b} ≤ M{ξ ∈ [a, +∞)} = 1 − Φ(a).

Hence the inequality on the right-hand side is proved.
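A quick numerical check of the bounds (2.17), reusing John's age L(24, 28) from earlier in this section as an assumed example:

def linear_cdf(x, a, b):
    return min(max((x - a) / (b - a), 0.0), 1.0)

a, b = 25.0, 27.0
Phi_a, Phi_b = linear_cdf(a, 24, 28), linear_cdf(b, 24, 28)
lower = Phi_b - Phi_a               # 0.75 - 0.25 = 0.5
upper = min(Phi_b, 1.0 - Phi_a)     # 0.75
print(lower, upper)                 # M{25 <= xi <= 27} lies in [0.5, 0.75]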
Remark 2.1: Perhaps some readers would like to obtain an exact scalar value of the uncertain measure M{a ≤ ξ ≤ b}. Generally speaking, this is impossible (except when a = −∞ or b = +∞) if only the uncertainty distribution is available. One may ask whether there is any real need to know it; in fact, it is not necessary for practical purposes.
Theorem 2.5 Suppose that ξ is a discrete uncertain variable represented by

ξ =
  x1 with uncertain measure c1
  x2 with uncertain measure c2
  ···
  xn with uncertain measure cn    (2.18)

where c1, c2, ..., cn are nonnegative numbers satisfying the consistency condition

ci + cj ≤ 1 ≤ c1 + c2 + ··· + cn,  i ≠ j.    (2.19)

Then for any subset A of {x1, x2, ..., xn}, we have

M{ξ ∈ A} =
  max_{xi∈A} ci,     if max_{xi∈A} ci > 0.5 and max_{xi∈A} ci + Σ_{xi∉A} ci ≥ 1
  1 − Σ_{xi∉A} ci,   if max_{xi∈A} ci > 0.5 and max_{xi∈A} ci + Σ_{xi∉A} ci < 1
  1 − max_{xi∉A} ci, if max_{xi∉A} ci > 0.5 and max_{xi∉A} ci + Σ_{xi∈A} ci ≥ 1
  Σ_{xi∈A} ci,       if max_{xi∉A} ci > 0.5 and max_{xi∉A} ci + Σ_{xi∈A} ci < 1
  1 − Σ_{xi∉A} ci,   if max_{1≤i≤n} ci ≤ 0.5 and Σ_{xi∉A} ci < 0.5
  Σ_{xi∈A} ci,       if max_{1≤i≤n} ci ≤ 0.5 and Σ_{xi∈A} ci < 0.5
  0.5,                otherwise

provided that the maximum uncertainty principle is assumed. Especially, if c1, c2, ..., cn are nonnegative numbers such that c1 + c2 + ··· + cn = 1, then

M{ξ ∈ A} = Σ_{xi∈A} ci.    (2.20)
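The seven cases above are the maximum uncertainty principle in a different dress: monotonicity, duality and subadditivity confine M{ξ ∈ A} to an interval, and the value closest to 0.5 is chosen. A compact Python sketch in that equivalent form (my re-expression, not the book's algorithm):

def discrete_measure(A, xs, cs):
    """M{xi in A} for the discrete variable (2.18), by picking the feasible
    value nearest 0.5; equivalent to the case analysis of Theorem 2.5."""
    inA  = [c for x, c in zip(xs, cs) if x in A]
    outA = [c for x, c in zip(xs, cs) if x not in A]
    lower = max(max(inA, default=0.0), 1.0 - sum(outA))   # monotonicity, duality
    upper = min(sum(inA), 1.0 - max(outA, default=0.0))   # subadditivity, duality
    if lower > 0.5:
        return lower
    if upper < 0.5:
        return upper
    return 0.5

xs, cs = [1, 2, 3], [0.6, 0.3, 0.2]        # consistent: ci + cj <= 1 <= sum
print(discrete_measure({1}, xs, cs))       # 0.6
print(discrete_measure({2, 3}, xs, cs))    # 0.4 = 1 - 0.6, duality holds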


Regular Uncertainty Distribution

Definition 2.13 (Liu [120]) An uncertainty distribution Φ(x) is said to be regular if it is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1, and

lim_{x→−∞} Φ(x) = 0,  lim_{x→+∞} Φ(x) = 1.    (2.21)

For example, the linear, zigzag, normal, and lognormal uncertainty distributions are all regular.

A regular uncertainty distribution Φ(x) has an inverse function on the range of x with 0 < Φ(x) < 1, and the inverse function Φ^{-1}(α) exists on the open interval (0, 1). It is easy to verify that Φ^{-1}(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1).
For convenience, we stipulate that the uncertainty distribution of a crisp value c is regular. That is, we will say

Φ(x) =
  0, if x < c
  1, if x ≥ c    (2.22)

is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1 even though it is discontinuous at c. We will also stipulate that a crisp value c has an inverse uncertainty distribution

Φ^{-1}(α) ≡ c    (2.23)

and say Φ^{-1}(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1) even though it is not.
Inverse Uncertainty Distribution

Definition 2.14 (Liu [120]) Let ξ be an uncertain variable with regular uncertainty distribution Φ(x). Then the inverse function Φ^{-1}(α) is called the inverse uncertainty distribution of ξ.

Note that the inverse uncertainty distribution Φ^{-1}(α) is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via

Φ^{-1}(0) = lim_{α↓0} Φ^{-1}(α),  Φ^{-1}(1) = lim_{α↑1} Φ^{-1}(α).    (2.24)

Example 2.6: The inverse uncertainty distribution of the linear uncertain variable L(a, b) is

Φ^{-1}(α) = (1 − α)a + αb.    (2.25)


Figure 2.11: Inverse Linear Uncertainty Distribution


Example 2.7: The inverse uncertainty distribution of the zigzag uncertain variable Z(a, b, c) is

Φ^{-1}(α) =
  (1 − 2α)a + 2αb,         if α < 0.5    (2.26)
  (2 − 2α)b + (2α − 1)c,   if α ≥ 0.5.
Figure 2.12: Inverse Zigzag Uncertainty Distribution


Example 2.8: The inverse uncertainty distribution of the normal uncertain variable N(e, σ) is

Φ^{-1}(α) = e + (σ√3/π) ln(α/(1 − α)).    (2.27)

Example 2.9: The inverse uncertainty distribution of the lognormal uncertain variable LOGN(e, σ) is

Φ^{-1}(α) = exp(e + (σ√3/π) ln(α/(1 − α))).    (2.28)
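The inverse distributions (2.25)-(2.28) in code; a minimal sketch with names of my own choosing:

import math

def inv_linear(alpha, a, b):                  # (2.25)
    return (1 - alpha) * a + alpha * b

def inv_zigzag(alpha, a, b, c):               # (2.26)
    if alpha < 0.5:
        return (1 - 2 * alpha) * a + 2 * alpha * b
    return (2 - 2 * alpha) * b + (2 * alpha - 1) * c

def inv_normal(alpha, e, sigma):              # (2.27)
    return e + (sigma * math.sqrt(3.0) / math.pi) * math.log(alpha / (1 - alpha))

def inv_lognormal(alpha, e, sigma):           # (2.28) = exp of (2.27)
    return math.exp(inv_normal(alpha, e, sigma))

print(inv_linear(0.5, 24, 28))   # 26.0, the median of L(24, 28)
print(inv_normal(0.5, 0, 1))     # 0.0, the median of N(0, 1)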


Figure 2.13: Inverse Normal Uncertainty Distribution


Figure 2.14: Inverse Lognormal Uncertainty Distribution


Theorem 2.6 (Liu [132], Sufficient and Necessary Condition) A function Φ^{-1}(α) : (0, 1) → ℜ is an inverse uncertainty distribution if and only if it is a continuous and strictly increasing function with respect to α.

Proof: Suppose Φ^{-1}(α) is an inverse uncertainty distribution. It follows from the definition of inverse uncertainty distribution that Φ^{-1}(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1).

Conversely, suppose Φ^{-1}(α) is a continuous and strictly increasing function on (0, 1). Define

Φ(x) =
  0, if x ≤ lim_{α↓0} Φ^{-1}(α)
  α, if x = Φ^{-1}(α)
  1, if x ≥ lim_{α↑1} Φ^{-1}(α).

It follows from the Peng-Iwamura theorem that Φ(x) is an uncertainty distribution of some uncertain variable ξ. Then for each α ∈ (0, 1), we have

M{ξ ≤ Φ^{-1}(α)} = Φ(Φ^{-1}(α)) = α.

Thus Φ^{-1}(α) is just the inverse uncertainty distribution of the uncertain variable ξ. The theorem is verified.

2.3 Independence

Independence has been explained in many ways. Personally, I think some uncertain variables are independent if they can be separately defined on different uncertainty spaces. In order to ensure that we are able to do so, we may define independence in the following mathematical form.

Definition 2.15 (Liu [116]) The uncertain variables ξ1, ξ2, ..., ξn are said to be independent if

M{∩_{i=1}^n (ξi ∈ Bi)} = min_{1≤i≤n} M{ξi ∈ Bi}    (2.29)

for any Borel sets B1, B2, ..., Bn.


Example 2.10: Let ξ1 be an uncertain variable and let ξ2 be a constant c. For any Borel sets B1 and B2, if c ∈ B2, then M{ξ2 ∈ B2} = 1 and

M{(ξ1 ∈ B1) ∩ (ξ2 ∈ B2)} = M{ξ1 ∈ B1} = M{ξ1 ∈ B1} ∧ M{ξ2 ∈ B2}.

If c ∉ B2, then M{ξ2 ∈ B2} = 0 and

M{(ξ1 ∈ B1) ∩ (ξ2 ∈ B2)} = M{∅} = 0 = M{ξ1 ∈ B1} ∧ M{ξ2 ∈ B2}.

It follows from the definition of independence that an uncertain variable is always independent of a constant.
Theorem 2.7 The uncertain variables ξ1, ξ2, ..., ξn are independent if and only if

M{∪_{i=1}^n (ξi ∈ Bi)} = max_{1≤i≤n} M{ξi ∈ Bi}    (2.30)

for any Borel sets B1, B2, ..., Bn.


Proof: It follows from the duality of uncertain measure that 1 , 2 , , n
are independent if and only if
( n
)
( n
)
[
\
c
M
(i Bi ) = 1 M
(i Bi )
i=1
n
^

=1

i=1

M{i Bic } =

i=1

Thus the proof is complete.

n
_
i=1

M {i Bi } .


Theorem 2.8 Let ξ1, ξ2, ..., ξn be independent uncertain variables, and f1, f2, ..., fn measurable functions. Then f1(ξ1), f2(ξ2), ..., fn(ξn) are independent uncertain variables.

Proof: For any Borel sets B1, B2, ..., Bn, it follows from the definition of independence that

M{∩_{i=1}^n (fi(ξi) ∈ Bi)} = M{∩_{i=1}^n (ξi ∈ fi^{-1}(Bi))} = min_{1≤i≤n} M{ξi ∈ fi^{-1}(Bi)} = min_{1≤i≤n} M{fi(ξi) ∈ Bi}.

Thus f1(ξ1), f2(ξ2), ..., fn(ξn) are independent uncertain variables.
Example 2.11: Let ξ1 and ξ2 be independent uncertain variables. Then their functions ξ1 + 2 and ξ2² + 3ξ2 + 4 are also independent.
Theorem 2.9 Let ξ1, ξ2, ..., ξn be independent uncertain variables with uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Then the uncertain vector (ξ1, ξ2, ..., ξn) has a joint uncertainty distribution

Φ(x1, x2, ..., xn) = Φ1(x1) ∧ Φ2(x2) ∧ ··· ∧ Φn(xn)    (2.31)

for any real numbers x1, x2, ..., xn.

Proof: For simplicity, we only prove the case n = 2. Since ξ1 and ξ2 are independent uncertain variables, we have

Φ(x1, x2) = M{(ξ1 ≤ x1) ∩ (ξ2 ≤ x2)} = M{ξ1 ≤ x1} ∧ M{ξ2 ≤ x2} = Φ1(x1) ∧ Φ2(x2)

for any real numbers x1 and x2. The theorem is proved.

Remark 2.2: However, the equation (2.31) does not imply that the uncertain variables are independent. For example, let ξ be an uncertain variable with uncertainty distribution Φ. Then the joint uncertainty distribution Ψ of the uncertain vector (ξ, ξ) is

Ψ(x1, x2) = M{(ξ ≤ x1) ∩ (ξ ≤ x2)} = Φ(x1) ∧ Φ(x2)

for any real numbers x1 and x2. But, generally speaking, an uncertain variable is not independent of itself.


2.4 Operational Law

The operational law of independent uncertain variables was given by Liu [120] for calculating the uncertainty distribution of a strictly increasing function, strictly decreasing function, or strictly monotone function of uncertain variables. This section will also discuss the uncertainty distribution of Boolean functions of Boolean uncertain variables, and general functions of discrete uncertain variables.
Strictly Increasing Function of Uncertain Variables

A real-valued function f(x1, x2, ..., xn) is said to be strictly increasing if

f(x1, x2, ..., xn) ≤ f(y1, y2, ..., yn)    (2.32)

whenever xi ≤ yi for i = 1, 2, ..., n, and

f(x1, x2, ..., xn) < f(y1, y2, ..., yn)    (2.33)

whenever xi < yi for i = 1, 2, ..., n. The following are strictly increasing functions:

f(x1, x2, ..., xn) = x1 ∨ x2 ∨ ··· ∨ xn,
f(x1, x2, ..., xn) = x1 ∧ x2 ∧ ··· ∧ xn,
f(x1, x2, ..., xn) = x1 + x2 + ··· + xn,
f(x1, x2, ..., xn) = x1 x2 ··· xn,  x1, x2, ..., xn ≥ 0.
Theorem 2.10 (Liu [120]) Let ξ1, ξ2, ..., ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, ..., Φn, respectively. If f is a strictly increasing function, then

ξ = f(ξ1, ξ2, ..., ξn)    (2.34)

is an uncertain variable with inverse uncertainty distribution

Ψ^{-1}(α) = f(Φ1^{-1}(α), Φ2^{-1}(α), ..., Φn^{-1}(α)).    (2.35)

Proof: For simplicity, we only prove the case n = 2. At first, we always have

M{ξ ≤ Ψ^{-1}(α)} = M{f(ξ1, ξ2) ≤ f(Φ1^{-1}(α), Φ2^{-1}(α))}.

Since f is a strictly increasing function, we obtain

{ξ ≤ Ψ^{-1}(α)} ⊃ {ξ1 ≤ Φ1^{-1}(α)} ∩ {ξ2 ≤ Φ2^{-1}(α)}.

By using the independence of ξ1 and ξ2, we get

M{ξ ≤ Ψ^{-1}(α)} ≥ M{ξ1 ≤ Φ1^{-1}(α)} ∧ M{ξ2 ≤ Φ2^{-1}(α)} = α ∧ α = α.

On the other hand, since f is a strictly increasing function, we obtain

{ξ ≤ Ψ^{-1}(α)} ⊂ {ξ1 ≤ Φ1^{-1}(α)} ∪ {ξ2 ≤ Φ2^{-1}(α)}.

By using the independence of ξ1 and ξ2, we get

M{ξ ≤ Ψ^{-1}(α)} ≤ M{ξ1 ≤ Φ1^{-1}(α)} ∨ M{ξ2 ≤ Φ2^{-1}(α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ^{-1}(α)} = α. In other words, Ψ is just the uncertainty distribution of ξ. The theorem is proved.
Exercise 2.4: Let ξ1, ξ2, ..., ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the sum

ξ = ξ1 + ξ2 + ··· + ξn    (2.36)

is an uncertain variable with inverse uncertainty distribution

Ψ^{-1}(α) = Φ1^{-1}(α) + Φ2^{-1}(α) + ··· + Φn^{-1}(α).    (2.37)

Exercise 2.5: Let ξ1, ξ2, ..., ξn be independent and nonnegative uncertain variables with regular uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the product

ξ = ξ1 ξ2 ··· ξn    (2.38)

is an uncertain variable with inverse uncertainty distribution

Ψ^{-1}(α) = Φ1^{-1}(α) Φ2^{-1}(α) ··· Φn^{-1}(α).    (2.39)

Exercise 2.6: Let ξ1, ξ2, ..., ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the minimum

ξ = ξ1 ∧ ξ2 ∧ ··· ∧ ξn    (2.40)

is an uncertain variable with inverse uncertainty distribution

Ψ^{-1}(α) = Φ1^{-1}(α) ∧ Φ2^{-1}(α) ∧ ··· ∧ Φn^{-1}(α).    (2.41)

Exercise 2.7: Let ξ1, ξ2, ..., ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the maximum

ξ = ξ1 ∨ ξ2 ∨ ··· ∨ ξn    (2.42)

is an uncertain variable with inverse uncertainty distribution

Ψ^{-1}(α) = Φ1^{-1}(α) ∨ Φ2^{-1}(α) ∨ ··· ∨ Φn^{-1}(α).    (2.43)
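All four exercises are instances of one recipe: compose the inverse distributions pointwise with f, as in (2.35). A short Python sketch under that assumption; the linear inverses below are assumed examples:

def inv_of(f, *invs):
    """Inverse distribution of f(xi_1, ..., xi_n) for a strictly increasing f
    and independent variables with inverse distributions `invs`, per (2.35)."""
    return lambda alpha: f(*(inv(alpha) for inv in invs))

inv1 = lambda a: (1 - a) * 1 + a * 3      # xi_1 ~ L(1, 3), via (2.25)
inv2 = lambda a: (1 - a) * 2 + a * 6      # xi_2 ~ L(2, 6)
inv_sum = inv_of(lambda x, y: x + y, inv1, inv2)
print(inv_sum(0.25))   # 4.5 = inverse of L(3, 9) at 0.25, cf. Theorem 2.11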


Theorem 2.11 Assume that ξ1 and ξ2 are independent linear uncertain variables L(a1, b1) and L(a2, b2), respectively. Then the sum ξ1 + ξ2 is also a linear uncertain variable L(a1 + a2, b1 + b2), i.e.,

L(a1, b1) + L(a2, b2) = L(a1 + a2, b1 + b2).    (2.44)

The product of a linear uncertain variable L(a, b) and a scalar number k > 0 is also a linear uncertain variable L(ka, kb), i.e.,

k · L(a, b) = L(ka, kb).    (2.45)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty distributions Φ1 and Φ2, respectively. Then

Φ1^{-1}(α) = (1 − α)a1 + αb1,  Φ2^{-1}(α) = (1 − α)a2 + αb2.

It follows from the operational law that the inverse uncertainty distribution of ξ1 + ξ2 is

Ψ^{-1}(α) = Φ1^{-1}(α) + Φ2^{-1}(α) = (1 − α)(a1 + a2) + α(b1 + b2).

Hence the sum is also a linear uncertain variable L(a1 + a2, b1 + b2). The first part is verified. Next, suppose that the uncertainty distribution of the uncertain variable ξ ~ L(a, b) is Φ. It follows from the operational law that when k > 0, the inverse uncertainty distribution of kξ is

Ψ^{-1}(α) = kΦ^{-1}(α) = (1 − α)(ka) + α(kb).

Hence kξ is just a linear uncertain variable L(ka, kb).
Theorem 2.12 Assume that ξ1 and ξ2 are independent zigzag uncertain variables Z(a1, b1, c1) and Z(a2, b2, c2), respectively. Then the sum ξ1 + ξ2 is also a zigzag uncertain variable Z(a1 + a2, b1 + b2, c1 + c2), i.e.,

Z(a1, b1, c1) + Z(a2, b2, c2) = Z(a1 + a2, b1 + b2, c1 + c2).    (2.46)

The product of a zigzag uncertain variable Z(a, b, c) and a scalar number k > 0 is also a zigzag uncertain variable Z(ka, kb, kc), i.e.,

k · Z(a, b, c) = Z(ka, kb, kc).    (2.47)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty distributions Φ1 and Φ2, respectively. Then

Φ1^{-1}(α) =
  (1 − 2α)a1 + 2αb1,        if α < 0.5
  (2 − 2α)b1 + (2α − 1)c1,  if α ≥ 0.5,

Φ2^{-1}(α) =
  (1 − 2α)a2 + 2αb2,        if α < 0.5
  (2 − 2α)b2 + (2α − 1)c2,  if α ≥ 0.5.

It follows from the operational law that the inverse uncertainty distribution of ξ1 + ξ2 is

Ψ^{-1}(α) =
  (1 − 2α)(a1 + a2) + 2α(b1 + b2),        if α < 0.5
  (2 − 2α)(b1 + b2) + (2α − 1)(c1 + c2),  if α ≥ 0.5.

Hence the sum is also a zigzag uncertain variable Z(a1 + a2, b1 + b2, c1 + c2). The first part is verified. Next, suppose that the uncertainty distribution of the uncertain variable ξ ~ Z(a, b, c) is Φ. It follows from the operational law that when k > 0, the inverse uncertainty distribution of kξ is

Ψ^{-1}(α) = kΦ^{-1}(α) =
  (1 − 2α)(ka) + 2α(kb),        if α < 0.5
  (2 − 2α)(kb) + (2α − 1)(kc),  if α ≥ 0.5.

Hence kξ is just a zigzag uncertain variable Z(ka, kb, kc).
Theorem 2.13 Let ξ1 and ξ2 be independent normal uncertain variables N(e1, σ1) and N(e2, σ2), respectively. Then the sum ξ1 + ξ2 is also a normal uncertain variable N(e1 + e2, σ1 + σ2), i.e.,

N(e1, σ1) + N(e2, σ2) = N(e1 + e2, σ1 + σ2).    (2.48)

The product of a normal uncertain variable N(e, σ) and a scalar number k > 0 is also a normal uncertain variable N(ke, kσ), i.e.,

k · N(e, σ) = N(ke, kσ).    (2.49)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty distributions Φ1 and Φ2, respectively. Then

Φ1^{-1}(α) = e1 + (σ1√3/π) ln(α/(1 − α)),
Φ2^{-1}(α) = e2 + (σ2√3/π) ln(α/(1 − α)).

It follows from the operational law that the inverse uncertainty distribution of ξ1 + ξ2 is

Ψ^{-1}(α) = Φ1^{-1}(α) + Φ2^{-1}(α) = (e1 + e2) + ((σ1 + σ2)√3/π) ln(α/(1 − α)).

Hence the sum is also a normal uncertain variable N(e1 + e2, σ1 + σ2). The first part is verified. Next, suppose that the uncertainty distribution of the uncertain variable ξ ~ N(e, σ) is Φ. It follows from the operational law that, when k > 0, the inverse uncertainty distribution of kξ is

Ψ^{-1}(α) = kΦ^{-1}(α) = (ke) + ((kσ)√3/π) ln(α/(1 − α)).

Hence kξ is just a normal uncertain variable N(ke, kσ).
Theorem 2.14 Assume that ξ1 and ξ2 are independent lognormal uncertain variables LOGN(e1, σ1) and LOGN(e2, σ2), respectively. Then the product ξ1 ξ2 is also a lognormal uncertain variable LOGN(e1 + e2, σ1 + σ2), i.e.,

LOGN(e1, σ1) · LOGN(e2, σ2) = LOGN(e1 + e2, σ1 + σ2).    (2.50)

The product of a lognormal uncertain variable LOGN(e, σ) and a scalar number k > 0 is also a lognormal uncertain variable LOGN(e + ln k, σ), i.e.,

k · LOGN(e, σ) = LOGN(e + ln k, σ).    (2.51)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty distributions Φ1 and Φ2, respectively. Then

Φ1^{-1}(α) = exp(e1 + (σ1√3/π) ln(α/(1 − α))),
Φ2^{-1}(α) = exp(e2 + (σ2√3/π) ln(α/(1 − α))).

It follows from the operational law that the inverse uncertainty distribution of ξ1 ξ2 is

Ψ^{-1}(α) = Φ1^{-1}(α) Φ2^{-1}(α) = exp((e1 + e2) + ((σ1 + σ2)√3/π) ln(α/(1 − α))).

Hence the product is a lognormal uncertain variable LOGN(e1 + e2, σ1 + σ2). The first part is verified. Next, suppose that the uncertainty distribution of the uncertain variable ξ ~ LOGN(e, σ) is Φ. It follows from the operational law that, when k > 0, the inverse uncertainty distribution of kξ is

Ψ^{-1}(α) = kΦ^{-1}(α) = exp((e + ln k) + (σ√3/π) ln(α/(1 − α))).

Hence kξ is just a lognormal uncertain variable LOGN(e + ln k, σ).
Theorem 2.15 (Liu [120]) Let ξ1, ξ2, ..., ξn be independent uncertain variables with uncertainty distributions Φ1, Φ2, ..., Φn, respectively. If f is a strictly increasing function, then

ξ = f(ξ1, ξ2, ..., ξn)    (2.52)

is an uncertain variable with uncertainty distribution

Ψ(x) = sup_{f(x1,x2,...,xn)=x} min_{1≤i≤n} Φi(xi).    (2.53)

Proof: For simplicity, we only prove the case n = 2. Since f is strictly increasing, it follows from the definition of uncertainty distribution that

Ψ(x) = M{f(ξ1, ξ2) ≤ x} = M{∪_{f(x1,x2)=x} (ξ1 ≤ x1) ∩ (ξ2 ≤ x2)}.

Note that for each given number x, the event

∪_{f(x1,x2)=x} (ξ1 ≤ x1) ∩ (ξ2 ≤ x2)

is just a polyrectangle. It follows from the polyrectangular theorem that

Ψ(x) = sup_{f(x1,x2)=x} M{(ξ1 ≤ x1) ∩ (ξ2 ≤ x2)}
     = sup_{f(x1,x2)=x} M{ξ1 ≤ x1} ∧ M{ξ2 ≤ x2}
     = sup_{f(x1,x2)=x} Φ1(x1) ∧ Φ2(x2).

The theorem is proved.


Exercise 2.8: Let ξ be an uncertain variable with uncertainty distribution Φ, and let f be a strictly increasing function. Show that f(ξ) has an uncertainty distribution

Ψ(x) = Φ(f^{-1}(x)),  x ∈ ℜ.    (2.54)

Exercise 2.9: Let ξ1, ξ2, ..., ξn be independent uncertain variables with uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the sum

ξ = ξ1 + ξ2 + ··· + ξn    (2.55)

is an uncertain variable with uncertainty distribution

Ψ(x) = sup_{x1+x2+···+xn=x} Φ1(x1) ∧ Φ2(x2) ∧ ··· ∧ Φn(xn).    (2.56)

Especially, if ξ1, ξ2, ..., ξn are iid uncertain variables, then we immediately have

(ξ1 + ξ2 + ··· + ξn)/n ≅ ξ1    (2.57)

where ≅ means the two sides are identically distributed.

Exercise 2.10: Let ξ1, ξ2, ..., ξn be independent nonnegative uncertain variables with uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the product

ξ = ξ1 ξ2 ··· ξn    (2.58)


is an uncertain variable with uncertainty distribution

Ψ(x) = sup_{x1 x2 ··· xn = x} Φ1(x1) ∧ Φ2(x2) ∧ ··· ∧ Φn(xn).    (2.59)

Especially, if ξ1, ξ2, ..., ξn are iid nonnegative uncertain variables, then we immediately have

(ξ1 ξ2 ··· ξn)^{1/n} ≅ ξ1.    (2.60)

Exercise 2.11: Let ξ1, ξ2, ..., ξn be independent uncertain variables with uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the minimum

ξ = ξ1 ∧ ξ2 ∧ ··· ∧ ξn    (2.61)

has an uncertainty distribution

Ψ(x) = Φ1(x) ∨ Φ2(x) ∨ ··· ∨ Φn(x).    (2.62)

Especially, if ξ1, ξ2, ..., ξn are iid uncertain variables, then we immediately have

ξ1 ∧ ξ2 ∧ ··· ∧ ξn ≅ ξ1.    (2.63)

Exercise 2.12: Let ξ1, ξ2, ..., ξn be independent uncertain variables with uncertainty distributions Φ1, Φ2, ..., Φn, respectively. Show that the maximum

ξ = ξ1 ∨ ξ2 ∨ ··· ∨ ξn    (2.64)

has an uncertainty distribution

Ψ(x) = Φ1(x) ∧ Φ2(x) ∧ ··· ∧ Φn(x).    (2.65)

Especially, if ξ1, ξ2, ..., ξn are iid uncertain variables, then we immediately have

ξ1 ∨ ξ2 ∨ ··· ∨ ξn ≅ ξ1.    (2.66)
Theorem 2.16 (Liu [126], Extreme Value Theorem) Let ξ1, ξ2, ..., ξn be independent uncertain variables. Assume that

Si = ξ1 + ξ2 + ··· + ξi    (2.67)

have uncertainty distributions Ψi for i = 1, 2, ..., n, respectively. Then the maximum

S = S1 ∨ S2 ∨ ··· ∨ Sn    (2.68)

has an uncertainty distribution

Υ(x) = Ψ1(x) ∧ Ψ2(x) ∧ ··· ∧ Ψn(x);    (2.69)

and the minimum

S = S1 ∧ S2 ∧ ··· ∧ Sn    (2.70)

has an uncertainty distribution

Υ(x) = Ψ1(x) ∨ Ψ2(x) ∨ ··· ∨ Ψn(x).    (2.71)
53

Section 2.4 - Operational Law

Proof: Assume that the uncertainty distributions of the uncertain variables


1 , 2 , , n are 1 , 2 , , n , respectively. Define
f (x1 , x2 , , xn ) = x1 (x1 + x2 ) (x1 + x2 + + xn ).
Then f is a strictly increasing function and
S = f (1 , 2 , , n ).
It follows from Theorem 2.15 that S has an uncertainty distribution
(x) =

1 (x1 ) 2 (x2 ) n (xn )

sup
f (x1 ,x2 , ,xn )=x

= min

sup

1in x1 +x2 ++xi =x

1 (x1 ) 2 (x2 ) i (xi )

= min i (x).
1in

Thus (2.69) is verified. Similarly, define


f (x1 , x2 , , xn ) = x1 (x1 + x2 ) (x1 + x2 + + xn ).
Then f is a strictly increasing function and
S = f (1 , 2 , , n ).
It follows from Theorem 2.15 that S has an uncertainty distribution
(x) =

1 (x1 ) 2 (x2 ) n (xn )

sup
f (x1 ,x2 , ,xn )=x

= max

sup

1in x1 +x2 ++xi =x

1 (x1 ) 2 (x2 ) i (xi )

= max i (x).
1in

Thus (2.71) is verified.


Strictly Decreasing Function of Uncertain Variables

A real-valued function f(x₁, x₂, ..., xₙ) is said to be strictly decreasing if

    f(x₁, x₂, ..., xₙ) ≥ f(y₁, y₂, ..., yₙ)    (2.72)

whenever xᵢ ≤ yᵢ for i = 1, 2, ..., n, and

    f(x₁, x₂, ..., xₙ) > f(y₁, y₂, ..., yₙ)    (2.73)

whenever xᵢ < yᵢ for i = 1, 2, ..., n. If f(x₁, x₂, ..., xₙ) is a strictly increasing function, then −f(x₁, x₂, ..., xₙ) is a strictly decreasing function. Furthermore, 1/f(x₁, x₂, ..., xₙ) is also a strictly decreasing function provided that f is positive. Especially, the following are strictly decreasing functions,

    f(x) = −x,    f(x) = exp(−x),    f(x) = 1/x (x > 0).
Theorem 2.17 (Liu [120]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f is a strictly decreasing function, then

    ξ = f(ξ₁, ξ₂, ..., ξₙ)    (2.74)

is an uncertain variable with inverse uncertainty distribution

    Ψ⁻¹(α) = f(Φ₁⁻¹(1−α), Φ₂⁻¹(1−α), ..., Φₙ⁻¹(1−α)).    (2.75)
Proof: For simplicity, we only prove the case n = 2. At first, we always have

    M{ξ ≤ Ψ⁻¹(α)} = M{f(ξ₁, ξ₂) ≤ f(Φ₁⁻¹(1−α), Φ₂⁻¹(1−α))}.

Since f is a strictly decreasing function, we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊃ {ξ₁ ≥ Φ₁⁻¹(1−α)} ∩ {ξ₂ ≥ Φ₂⁻¹(1−α)}.

By using the independence of ξ₁ and ξ₂, we get

    M{ξ ≤ Ψ⁻¹(α)} ≥ M{ξ₁ ≥ Φ₁⁻¹(1−α)} ∧ M{ξ₂ ≥ Φ₂⁻¹(1−α)} = α ∧ α = α.

On the other hand, since f is a strictly decreasing function, we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊂ {ξ₁ ≥ Φ₁⁻¹(1−α)} ∪ {ξ₂ ≥ Φ₂⁻¹(1−α)}.

By using the independence of ξ₁ and ξ₂, we get

    M{ξ ≤ Ψ⁻¹(α)} ≤ M{ξ₁ ≥ Φ₁⁻¹(1−α)} ∨ M{ξ₂ ≥ Φ₂⁻¹(1−α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ⁻¹(α)} = α. In other words, Ψ is just the uncertainty distribution of ξ. The theorem is proved.
Exercise 2.13: Let ξ be a positive uncertain variable with regular uncertainty distribution Φ. Show that the reciprocal 1/ξ is an uncertain variable with inverse uncertainty distribution

    Ψ⁻¹(α) = 1/Φ⁻¹(1−α).    (2.76)

Exercise 2.14: Let ξ be an uncertain variable with regular uncertainty distribution Φ. Show that exp(−ξ) is an uncertain variable with inverse uncertainty distribution

    Ψ⁻¹(α) = exp(−Φ⁻¹(1−α)).    (2.77)
Theorem 2.18 (Liu [120]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with continuous uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f is a strictly decreasing function, then

    ξ = f(ξ₁, ξ₂, ..., ξₙ)    (2.78)

is an uncertain variable with uncertainty distribution

    Ψ(x) = sup_{f(x₁,x₂,...,xₙ)=x} min_{1≤i≤n} (1 − Φᵢ(xᵢ)).    (2.79)
Proof: For simplicity, we only prove the case n = 2. Since f is strictly decreasing, it follows from the definition of uncertainty distribution that

    Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{ ∪_{f(x₁,x₂)=x} (ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂) }.

Note that for each given number x, the event

    ∪_{f(x₁,x₂)=x} (ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂)

is just a polyrectangle. It follows from the polyrectangular theorem that

    Ψ(x) = sup_{f(x₁,x₂)=x} M{(ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂)}
         = sup_{f(x₁,x₂)=x} M{ξ₁ ≥ x₁} ∧ M{ξ₂ ≥ x₂}
         = sup_{f(x₁,x₂)=x} (1 − Φ₁(x₁)) ∧ (1 − Φ₂(x₂)).

The theorem is proved.
Exercise 2.15: Let ξ be an uncertain variable with continuous uncertainty distribution Φ, and let f be a strictly decreasing function. Show that f(ξ) has an uncertainty distribution

    Ψ(x) = 1 − Φ(f⁻¹(x)),  ∀x ∈ ℜ.    (2.80)

Exercise 2.16: Let ξ be an uncertain variable with continuous uncertainty distribution Φ, and let a and b be real numbers with a < 0. Show that aξ + b is an uncertain variable with uncertainty distribution

    Ψ(x) = 1 − Φ((x − b)/a),  ∀x ∈ ℜ.    (2.81)

Exercise 2.17: Let ξ be a positive uncertain variable with continuous uncertainty distribution Φ. Show that 1/ξ is an uncertain variable with uncertainty distribution

    Ψ(x) = 1 − Φ(1/x),  x > 0.    (2.82)

Exercise 2.18: Let ξ be an uncertain variable with continuous uncertainty distribution Φ. Show that exp(−ξ) is a positive uncertain variable with uncertainty distribution

    Ψ(x) = 1 − Φ(−ln(x)),  x > 0.    (2.83)
Strictly Monotone Function of Uncertain Variables

A real-valued function f(x₁, x₂, ..., xₙ) is said to be strictly monotone if it is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, that is,

    f(x₁, ..., xₘ, xₘ₊₁, ..., xₙ) ≤ f(y₁, ..., yₘ, yₘ₊₁, ..., yₙ)    (2.84)

whenever xᵢ ≤ yᵢ for i = 1, 2, ..., m and xᵢ ≥ yᵢ for i = m+1, m+2, ..., n, and

    f(x₁, ..., xₘ, xₘ₊₁, ..., xₙ) < f(y₁, ..., yₘ, yₘ₊₁, ..., yₙ)    (2.85)

whenever xᵢ < yᵢ for i = 1, 2, ..., m and xᵢ > yᵢ for i = m+1, m+2, ..., n. The following are strictly monotone functions,

    f(x₁, x₂) = x₁ − x₂,
    f(x₁, x₂) = x₁/x₂,  x₁, x₂ > 0,
    f(x₁, x₂) = x₁/(x₁ + x₂),  x₁, x₂ > 0.
Theorem 2.19 (Liu [120]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If the function f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then

    ξ = f(ξ₁, ξ₂, ..., ξₙ)

is an uncertain variable with inverse uncertainty distribution

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).    (2.86)
Proof: We only prove the case of m = 1 and n = 2. At first, we always have

    M{ξ ≤ Ψ⁻¹(α)} = M{f(ξ₁, ξ₂) ≤ f(Φ₁⁻¹(α), Φ₂⁻¹(1−α))}.

Since the function f(x₁, x₂) is strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊃ {ξ₁ ≤ Φ₁⁻¹(α)} ∩ {ξ₂ ≥ Φ₂⁻¹(1−α)}.

By using the independence of ξ₁ and ξ₂, we get

    M{ξ ≤ Ψ⁻¹(α)} ≥ M{ξ₁ ≤ Φ₁⁻¹(α)} ∧ M{ξ₂ ≥ Φ₂⁻¹(1−α)} = α ∧ α = α.

On the other hand, since the function f(x₁, x₂) is strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊂ {ξ₁ ≤ Φ₁⁻¹(α)} ∪ {ξ₂ ≥ Φ₂⁻¹(1−α)}.

By using the independence of ξ₁ and ξ₂, we get

    M{ξ ≤ Ψ⁻¹(α)} ≤ M{ξ₁ ≤ Φ₁⁻¹(α)} ∨ M{ξ₂ ≥ Φ₂⁻¹(1−α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ⁻¹(α)} = α. In other words, Ψ is just the uncertainty distribution of ξ. The theorem is proved.
Exercise 2.19: Let ξ₁ and ξ₂ be independent uncertain variables with regular uncertainty distributions Φ₁ and Φ₂, respectively. Show that the inverse uncertainty distribution of the difference ξ₁ − ξ₂ is

    Ψ⁻¹(α) = Φ₁⁻¹(α) − Φ₂⁻¹(1−α).    (2.87)

Exercise 2.20: Let ξ₁ and ξ₂ be independent and positive uncertain variables with regular uncertainty distributions Φ₁ and Φ₂, respectively. Show that the inverse uncertainty distribution of the quotient ξ₁/ξ₂ is

    Ψ⁻¹(α) = Φ₁⁻¹(α)/Φ₂⁻¹(1−α).    (2.88)

Exercise 2.21: Assume ξ₁ and ξ₂ are independent and positive uncertain variables with regular uncertainty distributions Φ₁ and Φ₂, respectively. Show that the inverse uncertainty distribution of ξ₁/(ξ₁ + ξ₂) is

    Ψ⁻¹(α) = Φ₁⁻¹(α) / (Φ₁⁻¹(α) + Φ₂⁻¹(1−α)).    (2.89)
Theorem 2.20 (Liu [120]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with continuous uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If the function f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then

    ξ = f(ξ₁, ξ₂, ..., ξₙ)    (2.90)

is an uncertain variable with uncertainty distribution

    Ψ(x) = sup_{f(x₁,x₂,...,xₙ)=x} ( min_{1≤i≤m} Φᵢ(xᵢ) ∧ min_{m+1≤i≤n} (1 − Φᵢ(xᵢ)) ).    (2.91)
Proof: For simplicity, we only prove the case of m = 1 and n = 2. Since f(x₁, x₂) is strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, it follows from the definition of uncertainty distribution that

    Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{ ∪_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂) }.

Note that for each given number x, the event

    ∪_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂)

is just a polyrectangle. It follows from the polyrectangular theorem that

    Ψ(x) = sup_{f(x₁,x₂)=x} M{(ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂)}
         = sup_{f(x₁,x₂)=x} M{ξ₁ ≤ x₁} ∧ M{ξ₂ ≥ x₂}
         = sup_{f(x₁,x₂)=x} Φ₁(x₁) ∧ (1 − Φ₂(x₂)).

The theorem is proved.
Exercise 2.22: Let ξ₁ and ξ₂ be independent uncertain variables with continuous uncertainty distributions Φ₁ and Φ₂, respectively. Show that ξ₁ − ξ₂ is an uncertain variable with uncertainty distribution

    Ψ(x) = sup_{y∈ℜ} Φ₁(x + y) ∧ (1 − Φ₂(y)).    (2.92)

Exercise 2.23: Let ξ₁ and ξ₂ be independent and positive uncertain variables with continuous uncertainty distributions Φ₁ and Φ₂, respectively. Show that ξ₁/ξ₂ is an uncertain variable with uncertainty distribution

    Ψ(x) = sup_{y>0} Φ₁(xy) ∧ (1 − Φ₂(y)).    (2.93)
Some Useful Theorems

In many cases, it is required to calculate M{f(ξ₁, ξ₂, ..., ξₙ) ≤ 0}. Perhaps the first idea is to produce the uncertainty distribution Ψ(x) of f(ξ₁, ξ₂, ..., ξₙ) by the operational law, after which the uncertain measure is just Ψ(0). However, for convenience, we may use the following theorems.
Theorem 2.21 (Liu [119]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f(ξ₁, ξ₂, ..., ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ..., ξₘ and strictly decreasing with respect to ξₘ₊₁, ξₘ₊₂, ..., ξₙ, then

    M{f(ξ₁, ξ₂, ..., ξₙ) ≤ 0}    (2.94)

is just the root α of the equation

    f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) = 0.    (2.95)

Proof: It follows from Theorem 2.19 that f(ξ₁, ξ₂, ..., ξₙ) is an uncertain variable whose inverse uncertainty distribution is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

Since M{f(ξ₁, ξ₂, ..., ξₙ) ≤ 0} = Ψ(0), it is the solution α of the equation Ψ⁻¹(α) = 0. The theorem is proved.
Remark 2.3: Keep in mind that sometimes the equation (2.95) may not have a root. In this case, if

    f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) < 0    (2.96)

for all α, then we set the root α = 1; and if

    f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) > 0    (2.97)

for all α, then we set the root α = 0.
Remark 2.4: Since f(ξ₁, ξ₂, ..., ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ..., ξₘ and strictly decreasing with respect to ξₘ₊₁, ξₘ₊₂, ..., ξₙ, the function f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) is a strictly increasing function with respect to α. See Figure 2.15. Thus its root α may be estimated by the bisection method:

Step 1. Set a = 0, b = 1 and c = (a + b)/2.
Step 2. If f(Φ₁⁻¹(c), ..., Φₘ⁻¹(c), Φₘ₊₁⁻¹(1−c), ..., Φₙ⁻¹(1−c)) ≤ 0, then set a = c. Otherwise, set b = c.
Step 3. If |b − a| > ε (a predetermined precision), then set c = (a + b)/2 and go to Step 2. Otherwise, output b as the root α.
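As a concrete illustration of this procedure (our own Python sketch, not the book's Matlab toolbox; the variable choices are hypothetical), consider M{ξ₁ − ξ₂ ≤ 0} with ξ₁ ~ L(0, 2) and ξ₂ ~ L(1, 3). Here f(x₁, x₂) = x₁ − x₂ is increasing in x₁ and decreasing in x₂, so the bisection runs on g(α) = Φ₁⁻¹(α) − Φ₂⁻¹(1−α):

    # Sketch: bisection for the root alpha of
    #   f(Phi1^{-1}(alpha), Phi2^{-1}(1 - alpha)) = 0,
    # which equals M{xi1 - xi2 <= 0} by Theorem 2.21.
    def bisect_root(g, eps=1e-8):
        """Find the root of the increasing function g on (0, 1)."""
        a, b = 0.0, 1.0
        while b - a > eps:
            c = (a + b) / 2
            if g(c) <= 0:
                a = c
            else:
                b = c
        return b

    phi1_inv = lambda u: 0.0 + 2.0 * u   # inverse distribution of L(0, 2)
    phi2_inv = lambda u: 1.0 + 2.0 * u   # inverse distribution of L(1, 3)
    g = lambda alpha: phi1_inv(alpha) - phi2_inv(1 - alpha)
    print(bisect_root(g))                # M{xi1 - xi2 <= 0} = 0.75

Since g(α) = 4α − 3 in this toy case, the exact root is 0.75, which the bisection reproduces.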
[Figure 2.15: the curve α ↦ f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) is strictly increasing in α and crosses zero at the root. Figure omitted.]

Theorem 2.22 (Liu [119]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f(ξ₁, ξ₂, ..., ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ..., ξₘ and strictly decreasing with respect to ξₘ₊₁, ξₘ₊₂, ..., ξₙ, then

    M{f(ξ₁, ξ₂, ..., ξₙ) > 0}    (2.98)

is just the root α of the equation

    f(Φ₁⁻¹(1−α), ..., Φₘ⁻¹(1−α), Φₘ₊₁⁻¹(α), ..., Φₙ⁻¹(α)) = 0.    (2.99)
Proof: It follows from Theorem 2.19 that f(ξ₁, ξ₂, ..., ξₙ) is an uncertain variable whose inverse uncertainty distribution is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

Since M{f(ξ₁, ξ₂, ..., ξₙ) > 0} = 1 − Ψ(0), it is the solution α of the equation Ψ⁻¹(1−α) = 0. The theorem is proved.
Remark 2.5: Keep in mind that sometimes the equation (2.99) may not have a root. In this case, if

    f(Φ₁⁻¹(1−α), ..., Φₘ⁻¹(1−α), Φₘ₊₁⁻¹(α), ..., Φₙ⁻¹(α)) < 0    (2.100)

for all α, then we set the root α = 0; and if

    f(Φ₁⁻¹(1−α), ..., Φₘ⁻¹(1−α), Φₘ₊₁⁻¹(α), ..., Φₙ⁻¹(α)) > 0    (2.101)

for all α, then we set the root α = 1.
Remark 2.6: Since f(ξ₁, ξ₂, ..., ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ..., ξₘ and strictly decreasing with respect to ξₘ₊₁, ξₘ₊₂, ..., ξₙ, the function f(Φ₁⁻¹(1−α), ..., Φₘ⁻¹(1−α), Φₘ₊₁⁻¹(α), ..., Φₙ⁻¹(α)) is a strictly decreasing function with respect to α. See Figure 2.16. Thus its root α may be estimated by the bisection method:

Step 1. Set a = 0, b = 1 and c = (a + b)/2.
Step 2. If f(Φ₁⁻¹(1−c), ..., Φₘ⁻¹(1−c), Φₘ₊₁⁻¹(c), ..., Φₙ⁻¹(c)) > 0, then set a = c. Otherwise, set b = c.
Step 3. If |b − a| > ε (a predetermined precision), then set c = (a + b)/2 and go to Step 2. Otherwise, output b as the root α.

[Figure 2.16: the curve α ↦ f(Φ₁⁻¹(1−α), ..., Φₘ⁻¹(1−α), Φₘ₊₁⁻¹(α), ..., Φₙ⁻¹(α)) is strictly decreasing in α and crosses zero at the root. Figure omitted.]
Theorem 2.23 Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If the function f(ξ₁, ξ₂, ..., ξₙ) is strictly increasing with respect to ξ₁, ξ₂, ..., ξₘ and strictly decreasing with respect to ξₘ₊₁, ξₘ₊₂, ..., ξₙ, then

    M{f(ξ₁, ξ₂, ..., ξₙ) ≤ 0} ≥ α    (2.102)

if and only if

    f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) ≤ 0.    (2.103)

Proof: It follows from Theorem 2.19 that the inverse uncertainty distribution of f(ξ₁, ξ₂, ..., ξₙ) is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

Thus (2.102) holds if and only if Ψ⁻¹(α) ≤ 0. The theorem is thus verified.
Boolean Function of Boolean Uncertain Variables

A function is said to be Boolean if it is a mapping from {0, 1}ⁿ to {0, 1}. For example,

    f(x₁, x₂, x₃) = x₁ ∨ x₂ ∧ x₃    (2.104)

is a Boolean function. An uncertain variable is said to be Boolean if it takes values either 0 or 1. For example, the following is a Boolean uncertain variable,

    ξ = { 1 with uncertain measure a,
          0 with uncertain measure 1 − a }    (2.105)

where a is a number between 0 and 1. This subsection introduces an operational law for Boolean systems.
Theorem 2.24 (Liu [120]) Assume ξ₁, ξ₂, ..., ξₙ are independent Boolean uncertain variables, i.e.,

    ξᵢ = { 1 with uncertain measure aᵢ,
           0 with uncertain measure 1 − aᵢ }    (2.106)

for i = 1, 2, ..., n. If f is a Boolean function (not necessarily monotone), then ξ = f(ξ₁, ξ₂, ..., ξₙ) is a Boolean uncertain variable such that

    M{ξ = 1} = { sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ),
                     if sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) < 0.5
                 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ),
                     if sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5    (2.107)

where xᵢ take values either 0 or 1, and νᵢ are defined by

    νᵢ(xᵢ) = { aᵢ, if xᵢ = 1,
               1 − aᵢ, if xᵢ = 0 }    (2.108)

for i = 1, 2, ..., n, respectively.
Proof: Let B₁, B₂, ..., Bₙ be nonempty subsets of {0, 1}. In other words, they take values of {0}, {1} or {0, 1}. Write

    Λ = {ξ = 1},  Λᶜ = {ξ = 0},  Λᵢ = {ξᵢ ∈ Bᵢ}

for i = 1, 2, ..., n. It is easy to verify that

    Λ₁ ∩ Λ₂ ∩ ⋯ ∩ Λₙ ⊂ Λ if and only if f(B₁, B₂, ..., Bₙ) = {1},
    Λ₁ ∩ Λ₂ ∩ ⋯ ∩ Λₙ ⊂ Λᶜ if and only if f(B₁, B₂, ..., Bₙ) = {0}.

It follows from the product axiom that

    M{ξ = 1} = { sup_{f(B₁,...,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ},
                     if sup_{f(B₁,...,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} > 0.5
                 1 − sup_{f(B₁,...,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ},
                     if sup_{f(B₁,...,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} > 0.5
                 0.5, otherwise.    (2.109)

Please note that

    νᵢ(1) = M{ξᵢ = 1},  νᵢ(0) = M{ξᵢ = 0}

for i = 1, 2, ..., n. The argument breaks down into four cases.

Case 1: Assume sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) < 0.5. Then we have

    sup_{f(B₁,...,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 1 − sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

It follows from (2.109) that

    M{ξ = 1} = sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ).

Case 2: Assume sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) > 0.5. Then we have

    sup_{f(B₁,...,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

It follows from (2.109) that

    M{ξ = 1} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ).

Case 3: Assume

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = 0.5,  sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 0.5.

Then we have

    sup_{f(B₁,...,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 0.5,  sup_{f(B₁,...,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 0.5.

It follows from (2.109) that

    M{ξ = 1} = 0.5 = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ).

Case 4: Assume

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = 0.5,  sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) < 0.5.

Then we have

    sup_{f(B₁,...,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

It follows from (2.109) that

    M{ξ = 1} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ).

Hence the equation (2.107) is proved for the four cases.
Theorem 2.25 Assume that ξ₁, ξ₂, ..., ξₙ are independent Boolean uncertain variables, i.e.,

    ξᵢ = { 1 with uncertain measure aᵢ,
           0 with uncertain measure 1 − aᵢ }    (2.110)

for i = 1, 2, ..., n. Then the minimum

    ξ = ξ₁ ∧ ξ₂ ∧ ⋯ ∧ ξₙ    (2.111)

is a Boolean uncertain variable such that

    M{ξ = 1} = a₁ ∧ a₂ ∧ ⋯ ∧ aₙ,    (2.112)

    M{ξ = 0} = (1 − a₁) ∨ (1 − a₂) ∨ ⋯ ∨ (1 − aₙ).    (2.113)
Proof: Since ξ is the minimum of Boolean uncertain variables, the corresponding Boolean function is

    f(x₁, x₂, ..., xₙ) = x₁ ∧ x₂ ∧ ⋯ ∧ xₙ.    (2.114)

Without loss of generality, we assume a₁ ≥ a₂ ≥ ⋯ ≥ aₙ. Then we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = min_{1≤i≤n} νᵢ(1) = aₙ,

    sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = (1 − aₙ) ∧ min_{1≤i<n} (aᵢ ∨ (1 − aᵢ))

where νᵢ(xᵢ) are defined by (2.108) for i = 1, 2, ..., n, respectively. When aₙ < 0.5, we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₙ < 0.5.

It follows from Theorem 2.24 that

    M{ξ = 1} = sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₙ.

When aₙ ≥ 0.5, we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₙ ≥ 0.5.

It follows from Theorem 2.24 that

    M{ξ = 1} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − aₙ) = aₙ.

Thus M{ξ = 1} is always aₙ, i.e., the minimum value of a₁, a₂, ..., aₙ. Thus the equation (2.112) is proved. The equation (2.113) may be verified by the duality of uncertain measure.
Theorem 2.26 Assume that ξ₁, ξ₂, ..., ξₙ are independent Boolean uncertain variables, i.e.,

    ξᵢ = { 1 with uncertain measure aᵢ,
           0 with uncertain measure 1 − aᵢ }    (2.115)

for i = 1, 2, ..., n. Then the maximum

    ξ = ξ₁ ∨ ξ₂ ∨ ⋯ ∨ ξₙ    (2.116)

is a Boolean uncertain variable such that

    M{ξ = 1} = a₁ ∨ a₂ ∨ ⋯ ∨ aₙ,    (2.117)

    M{ξ = 0} = (1 − a₁) ∧ (1 − a₂) ∧ ⋯ ∧ (1 − aₙ).    (2.118)
Proof: Since ξ is the maximum of Boolean uncertain variables, the corresponding Boolean function is

    f(x₁, x₂, ..., xₙ) = x₁ ∨ x₂ ∨ ⋯ ∨ xₙ.    (2.119)

Without loss of generality, we assume a₁ ≥ a₂ ≥ ⋯ ≥ aₙ. Then we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = a₁ ∧ min_{1<i≤n} (aᵢ ∨ (1 − aᵢ)),

    sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = min_{1≤i≤n} νᵢ(0) = 1 − a₁

where νᵢ(xᵢ) are defined by (2.108) for i = 1, 2, ..., n, respectively. When a₁ ≥ 0.5, we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5.

It follows from Theorem 2.24 that

    M{ξ = 1} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − a₁) = a₁.

When a₁ < 0.5, we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = a₁ < 0.5.

It follows from Theorem 2.24 that

    M{ξ = 1} = sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = a₁.

Thus M{ξ = 1} is always a₁, i.e., the maximum value of a₁, a₂, ..., aₙ. Thus the equation (2.117) is proved. The equation (2.118) may be verified by the duality of uncertain measure.
Theorem 2.27 Assume that ξ₁, ξ₂, ..., ξₙ are independent Boolean uncertain variables, i.e.,

    ξᵢ = { 1 with uncertain measure aᵢ,
           0 with uncertain measure 1 − aᵢ }    (2.120)

for i = 1, 2, ..., n. Then

    ξ = { 1, if ξ₁ + ξ₂ + ⋯ + ξₙ ≥ k,
          0, if ξ₁ + ξ₂ + ⋯ + ξₙ < k }    (2.121)

is a Boolean uncertain variable such that

    M{ξ = 1} = k-max [a₁, a₂, ..., aₙ]    (2.122)

and

    M{ξ = 0} = k-min [1 − a₁, 1 − a₂, ..., 1 − aₙ]    (2.123)

where k-max represents the kth largest value, and k-min represents the kth smallest value.
Proof: This is the so-called k-out-of-n system. The corresponding Boolean function is

    f(x₁, x₂, ..., xₙ) = { 1, if x₁ + x₂ + ⋯ + xₙ ≥ k,
                           0, if x₁ + x₂ + ⋯ + xₙ < k. }    (2.124)

Without loss of generality, we assume a₁ ≥ a₂ ≥ ⋯ ≥ aₙ. Then we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ ∧ min_{k<i≤n} (aᵢ ∨ (1 − aᵢ)),

    sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = (1 − aₖ) ∧ min_{k<i≤n} (aᵢ ∨ (1 − aᵢ))

where νᵢ(xᵢ) are defined by (2.108) for i = 1, 2, ..., n, respectively. When aₖ ≥ 0.5, we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5.

It follows from Theorem 2.24 that

    M{ξ = 1} = 1 − sup_{f(x₁,...,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − aₖ) = aₖ.

When aₖ < 0.5, we have

    sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ < 0.5.

It follows from Theorem 2.24 that

    M{ξ = 1} = sup_{f(x₁,...,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ.

Thus M{ξ = 1} is always aₖ, i.e., the kth largest value of a₁, a₂, ..., aₙ. Thus the equation (2.122) is proved. The equation (2.123) may be verified by the duality of uncertain measure.
Boolean System Calculator

Boolean System Calculator is a function in the Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) for computing uncertain measures like

    M{f(ξ₁, ξ₂, ..., ξₙ) = 1},  M{f(ξ₁, ξ₂, ..., ξₙ) = 0}    (2.125)

where ξ₁, ξ₂, ..., ξₙ are independent Boolean uncertain variables and f is a Boolean function. For example, let ξ₁, ξ₂, ξ₃ be independent Boolean uncertain variables,

    ξ₁ = { 1 with uncertain measure 0.8, 0 with uncertain measure 0.2 },
    ξ₂ = { 1 with uncertain measure 0.7, 0 with uncertain measure 0.3 },
    ξ₃ = { 1 with uncertain measure 0.6, 0 with uncertain measure 0.4 }.

We also assume the Boolean function is

    f(x₁, x₂, x₃) = { 1, if x₁ + x₂ + x₃ = 0 or 2,
                      0, if x₁ + x₂ + x₃ = 1 or 3. }

The Boolean System Calculator yields M{f(ξ₁, ξ₂, ξ₃) = 1} = 0.4.
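For readers without Matlab, the following Python sketch (ours; the function name boolean_system is hypothetical) evaluates formula (2.107) by enumerating {0, 1}ⁿ, and it reproduces the value 0.4 for this example:

    from itertools import product

    def boolean_system(f, a):
        """M{f(xi1,...,xin) = 1} for independent Boolean uncertain
        variables with M{xi_i = 1} = a[i], via formula (2.107)."""
        n = len(a)
        nu = lambda i, x: a[i] if x == 1 else 1 - a[i]
        sup1 = max((min(nu(i, x[i]) for i in range(n))
                    for x in product((0, 1), repeat=n) if f(*x) == 1),
                   default=0.0)
        if sup1 < 0.5:
            return sup1
        sup0 = max((min(nu(i, x[i]) for i in range(n))
                    for x in product((0, 1), repeat=n) if f(*x) == 0),
                   default=0.0)
        return 1 - sup0

    f = lambda x1, x2, x3: 1 if x1 + x2 + x3 in (0, 2) else 0
    print(boolean_system(f, [0.8, 0.7, 0.6]))   # 0.4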
General Function of Discrete Uncertain Variables

Theorem 2.28 Let ξ₁, ξ₂, ..., ξₙ be independent discrete uncertain variables represented by

    ξᵢ = { xᵢ₁ with uncertain measure cᵢ₁,
           xᵢ₂ with uncertain measure cᵢ₂,
           ...,
           xᵢₘᵢ with uncertain measure cᵢₘᵢ }    (2.126)

where cᵢ₁, cᵢ₂, ..., cᵢₘᵢ are nonnegative numbers satisfying the consistency condition

    cᵢⱼ + cᵢₖ ≤ 1 ≤ cᵢ₁ + cᵢ₂ + ⋯ + cᵢₘᵢ,  j ≠ k    (2.127)

for i = 1, 2, ..., n. Then ξ = f(ξ₁, ξ₂, ..., ξₙ) is a discrete uncertain variable taking values f(x₁ₖ₁, x₂ₖ₂, ..., xₙₖₙ) with uncertain measures c₁ₖ₁ ∧ c₂ₖ₂ ∧ ⋯ ∧ cₙₖₙ for kᵢ = 1, 2, ..., mᵢ, i = 1, 2, ..., n.
Example 2.12: Let ξ₁ and ξ₂ be independent discrete uncertain variables,

    ξᵢ = { xᵢ₁ with uncertain measure cᵢ₁,
           xᵢ₂ with uncertain measure cᵢ₂ }    (2.128)

for i = 1, 2. Then ξ = ξ₁ + ξ₂ is a discrete uncertain variable,

    ξ = { x₁₁ + x₂₁ with uncertain measure c₁₁ ∧ c₂₁,
          x₁₁ + x₂₂ with uncertain measure c₁₁ ∧ c₂₂,
          x₁₂ + x₂₁ with uncertain measure c₁₂ ∧ c₂₁,
          x₁₂ + x₂₂ with uncertain measure c₁₂ ∧ c₂₂. }    (2.129)
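A small Python sketch (ours, with illustrative numbers that satisfy the consistency condition (2.127)) that enumerates this operational law:

    from itertools import product

    def discrete_op(f, variables):
        """Apply f to independent discrete uncertain variables, each given
        as a list of (value, measure) pairs; returns the (f-value, measure)
        pairs with measure c1 ∧ c2 ∧ ... per Theorem 2.28."""
        results = []
        for combo in product(*variables):
            values = [v for v, _ in combo]
            measure = min(c for _, c in combo)
            results.append((f(*values), measure))
        return results

    xi1 = [(0, 0.6), (1, 0.4)]   # values with their uncertain measures
    xi2 = [(2, 0.7), (3, 0.3)]
    print(discrete_op(lambda x, y: x + y, [xi1, xi2]))
    # [(2, 0.6), (3, 0.3), (3, 0.4), (4, 0.3)]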

2.5 Expected Value

Expected value is the average value of an uncertain variable in the sense of uncertain measure, and represents the size of the uncertain variable.
Definition 2.16 (Liu [113]) Let ξ be an uncertain variable. Then the expected value of ξ is defined by

    E[ξ] = ∫₀^{+∞} M{ξ ≥ r} dr − ∫_{−∞}^0 M{ξ ≤ r} dr    (2.130)

provided that at least one of the two integrals is finite.
Theorem 2.29 (Liu [113]) Let ξ be an uncertain variable with uncertainty distribution Φ. If the expected value exists, then

    E[ξ] = ∫₀^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.    (2.131)

Proof: It follows from the measure inversion theorem that for almost all numbers x, we have M{ξ ≥ x} = 1 − Φ(x) and M{ξ ≤ x} = Φ(x). By using the definition of the expected value operator, we obtain

    E[ξ] = ∫₀^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^0 M{ξ ≤ x} dx
         = ∫₀^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.

See Figure 2.17. The theorem is proved.

[Figure 2.17: E[ξ] = ∫₀^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx, shown as the area above the curve Φ to the right of 0 minus the area under Φ to the left of 0. Figure omitted.]
Theorem 2.30 Let ξ be an uncertain variable with uncertainty distribution Φ. Then we have

    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).    (2.132)

Proof: It follows from integration by parts and Theorem 2.29 that the expected value is

    E[ξ] = ∫₀^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
         = ∫₀^{+∞} x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).

See Figure 2.18. The theorem is proved.
Remark 2.7: If the uncertainty distribution Φ(x) has a derivative φ(x), then we immediately have

    E[ξ] = ∫_{−∞}^{+∞} x φ(x) dx.    (2.133)

However, it is inappropriate to regard φ(x) as an uncertainty density function, because uncertain measure is not additive.

[Figure 2.18: E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫₀¹ Φ⁻¹(α) dα. Figure omitted.]
Theorem 2.31 (Liu [120]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then we have

    E[ξ] = ∫₀¹ Φ⁻¹(α) dα.    (2.134)

Proof: Substituting Φ(x) with α and x with Φ⁻¹(α), it follows from the change of variables of integral and Theorem 2.30 that the expected value is

    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫₀¹ Φ⁻¹(α) dα.

See Figure 2.18. The theorem is proved.
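Formula (2.134) also gives a convenient numerical recipe. The sketch below (ours; a simple midpoint rule with a hypothetical helper expected_value) recovers E[ξ] = e for a normal uncertain variable N(e, σ), whose inverse distribution is Φ⁻¹(α) = e + (σ√3/π) ln(α/(1−α)):

    import math

    def expected_value(phi_inv, n=200001):
        """E[xi] = integral over (0,1) of phi_inv(alpha), midpoint rule."""
        h = 1.0 / n
        return h * sum(phi_inv((k + 0.5) * h) for k in range(n))

    e, sigma = 2.0, 1.0
    phi_inv = lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))
    print(expected_value(phi_inv))   # ~2.0, matching E[xi] = e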


Exercise 2.24: Show that the linear uncertain variable ξ ~ L(a, b) has an expected value

    E[ξ] = (a + b)/2.    (2.135)

Exercise 2.25: Show that the zigzag uncertain variable ξ ~ Z(a, b, c) has an expected value

    E[ξ] = (a + 2b + c)/4.    (2.136)

Exercise 2.26: Show that the normal uncertain variable ξ ~ N(e, σ) has an expected value e, i.e.,

    E[ξ] = e.    (2.137)
Exercise 2.27: Show that the lognormal uncertain variable ξ ~ LOGN(e, σ) has an expected value

    E[ξ] = { √3 σ exp(e) csc(√3 σ), if σ < π/√3,
             +∞, if σ ≥ π/√3. }    (2.138)

This formula was first discovered by Dr. Zhongfeng Qin with the help of Maple software, and was verified again by Dr. Kai Yao through a rigorous mathematical derivation.
Exercise 2.28: Let ξ be an uncertain variable with empirical uncertainty distribution

    Φ(x) = { 0, if x < x₁,
             αᵢ + (αᵢ₊₁ − αᵢ)(x − xᵢ)/(xᵢ₊₁ − xᵢ), if xᵢ ≤ x ≤ xᵢ₊₁, 1 ≤ i < n,
             1, if x > xₙ }

where x₁ < x₂ < ⋯ < xₙ and 0 ≤ α₁ ≤ α₂ ≤ ⋯ ≤ αₙ ≤ 1. Show that

    E[ξ] = ((α₁ + α₂)/2) x₁ + Σᵢ₌₂ⁿ⁻¹ ((αᵢ₊₁ − αᵢ₋₁)/2) xᵢ + (1 − (αₙ₋₁ + αₙ)/2) xₙ.    (2.139)
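Formula (2.139) is just a weighted average of the expert points, so it codes up in a few lines; a minimal sketch (ours, with hypothetical expert data):

    def empirical_expected_value(xs, alphas):
        """E[xi] for the empirical uncertainty distribution through the
        expert points (x_i, alpha_i), per formula (2.139)."""
        n = len(xs)
        e = (alphas[0] + alphas[1]) / 2 * xs[0]
        e += sum((alphas[i + 1] - alphas[i - 1]) / 2 * xs[i]
                 for i in range(1, n - 1))
        e += (1 - (alphas[-2] + alphas[-1]) / 2) * xs[-1]
        return e

    # hypothetical expert data (x_i, alpha_i)
    print(empirical_expected_value([1, 2, 4], [0.2, 0.5, 0.9]))   # 2.25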

Example 2.13: Let ξ be a discrete uncertain variable taking values x₁, x₂, ..., xₙ with uncertain measures c₁, c₂, ..., cₙ, respectively, i.e.,

    ξ = { x₁ with uncertain measure c₁,
          x₂ with uncertain measure c₂,
          ...,
          xₙ with uncertain measure cₙ }    (2.140)

where c₁, c₂, ..., cₙ are nonnegative numbers satisfying the consistency condition

    cᵢ + cⱼ ≤ 1 ≤ c₁ + c₂ + ⋯ + cₙ,  i ≠ j.    (2.141)
When the maximum uncertainty principle is assumed, we have

    E[ξ] = Σₖ₌₁ⁿ wₖ xₖ    (2.142)

where

    wₖ = qₖ − qₖ₋₁    (2.143)

and qₖ is the value assigned to M{ξ ≤ xₖ} by the maximum uncertainty principle, i.e., the point of the feasible interval

    [ (∨_{xᵢ≤xₖ} cᵢ) ∨ (1 − Σ_{xᵢ>xₖ} cᵢ),  (Σ_{xᵢ≤xₖ} cᵢ) ∧ (1 − ∨_{xᵢ>xₖ} cᵢ) ]

that is closest to 0.5:

    qₖ = { (∨_{xᵢ≤xₖ} cᵢ) ∨ (1 − Σ_{xᵢ>xₖ} cᵢ), if this lower bound > 0.5,
           (Σ_{xᵢ≤xₖ} cᵢ) ∧ (1 − ∨_{xᵢ>xₖ} cᵢ), if this upper bound < 0.5,
           0.5, otherwise }

for k = 1, 2, ..., n. Note that q₀ ≡ 0, qₙ ≡ 1 and w₁, w₂, ..., wₙ are nonnegative numbers with w₁ + w₂ + ⋯ + wₙ = 1. Especially, if c₁, c₂, ..., cₙ are nonnegative numbers such that c₁ + c₂ + ⋯ + cₙ = 1, then

    wₖ ≡ cₖ,  k = 1, 2, ..., n.    (2.144)
Expected Value of Monotone Function of Uncertain Variables

Theorem 2.32 (Liu and Ha [138]) Assume ξ₁, ξ₂, ..., ξₙ are independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then the uncertain variable ξ = f(ξ₁, ξ₂, ..., ξₙ) has an expected value

    E[ξ] = ∫₀¹ f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) dα.    (2.145)

Proof: Since the function f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, it follows from Theorem 2.19 that the inverse uncertainty distribution of ξ is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

By using Theorem 2.31, we obtain (2.145). The theorem is proved.
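A numerical reading of (2.145), as a Python sketch (ours): increasing arguments are evaluated at α, decreasing ones at 1 − α, and the result is integrated over (0, 1). For ξ₁ ~ L(0, 2), ξ₂ ~ L(1, 3) and f(x₁, x₂) = x₁ − x₂ this returns E[ξ₁ − ξ₂] = −1, consistent with the linearity result that follows.

    def expected_monotone(f, inc_invs, dec_invs, n=100000):
        """E[f(xi_1,...,xi_n)] via formula (2.145), midpoint rule:
        increasing arguments get alpha, decreasing ones get 1 - alpha."""
        h = 1.0 / n
        total = 0.0
        for k in range(n):
            a = (k + 0.5) * h
            xs = [g(a) for g in inc_invs] + [g(1 - a) for g in dec_invs]
            total += f(*xs)
        return total * h

    phi1_inv = lambda a: 0.0 + 2.0 * a   # L(0, 2)
    phi2_inv = lambda a: 1.0 + 2.0 * a   # L(1, 3)
    # f(x1, x2) = x1 - x2 is increasing in x1 and decreasing in x2
    print(expected_monotone(lambda x1, x2: x1 - x2, [phi1_inv], [phi2_inv]))
    # ~ -1.0 = E[xi1] - E[xi2]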


Exercise 2.29: Let ξ be an uncertain variable with regular uncertainty distribution Φ, and let f(x) be a strictly monotone (increasing or decreasing) function. Show that

    E[f(ξ)] = ∫₀¹ f(Φ⁻¹(α)) dα.    (2.146)
Exercise 2.30: Let ξ be an uncertain variable with uncertainty distribution Φ, and let f(x) be a strictly monotone (increasing or decreasing) function. Show that

    E[f(ξ)] = ∫_{−∞}^{+∞} f(x) dΦ(x).    (2.147)
Exercise 2.31: Let ξ and η be independent and nonnegative uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

    E[ξη] = ∫₀¹ Φ⁻¹(α)Ψ⁻¹(α) dα.    (2.148)

Exercise 2.32: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

    E[ξ/η] = ∫₀¹ (Φ⁻¹(α)/Ψ⁻¹(1−α)) dα.    (2.149)

Exercise 2.33: Assume ξ and η are independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

    E[ξ/(ξ + η)] = ∫₀¹ (Φ⁻¹(α)/(Φ⁻¹(α) + Ψ⁻¹(1−α))) dα.    (2.150)
Linearity of Expected Value Operator

Theorem 2.33 (Liu [120]) Let ξ and η be independent uncertain variables with finite expected values. Then for any real numbers a and b, we have

    E[aξ + bη] = aE[ξ] + bE[η].    (2.151)
Proof: Without loss of generality, suppose ξ and η have regular uncertainty distributions Φ and Ψ, respectively. Otherwise, we may give the uncertainty distributions a small perturbation such that they become regular.

Step 1: We first prove E[aξ] = aE[ξ]. If a = 0, then the equation holds trivially. If a > 0, then the inverse uncertainty distribution of aξ is

    Υ⁻¹(α) = aΦ⁻¹(α).

It follows from Theorem 2.31 that

    E[aξ] = ∫₀¹ aΦ⁻¹(α) dα = a ∫₀¹ Φ⁻¹(α) dα = aE[ξ].

If a < 0, then the inverse uncertainty distribution of aξ is

    Υ⁻¹(α) = aΦ⁻¹(1−α).

It follows from Theorem 2.31 that

    E[aξ] = ∫₀¹ aΦ⁻¹(1−α) dα = a ∫₀¹ Φ⁻¹(α) dα = aE[ξ].

Thus we always have E[aξ] = aE[ξ].

Step 2: We prove E[ξ + η] = E[ξ] + E[η]. The inverse uncertainty distribution of the sum ξ + η is

    Υ⁻¹(α) = Φ⁻¹(α) + Ψ⁻¹(α).

It follows from Theorem 2.31 that

    E[ξ + η] = ∫₀¹ Υ⁻¹(α) dα = ∫₀¹ Φ⁻¹(α) dα + ∫₀¹ Ψ⁻¹(α) dα = E[ξ] + E[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that

    E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].

The theorem is proved.
Example 2.14: Generally speaking, the expected value operator is not necessarily linear if the independence is not assumed. For example, take (Γ, L, M) to be {γ₁, γ₂, γ₃} with M{γ₁} = 0.7, M{γ₂} = 0.3 and M{γ₃} = 0.2. It follows from the extension theorem that M{γ₁, γ₂} = 0.8, M{γ₁, γ₃} = 0.7, M{γ₂, γ₃} = 0.3. Define two uncertain variables as follows,

    ξ(γ) = { 1, if γ = γ₁; 0, if γ = γ₂; 2, if γ = γ₃ },
    η(γ) = { 0, if γ = γ₁; 2, if γ = γ₂; 3, if γ = γ₃ }.

Note that ξ and η are not independent, and their sum is

    (ξ + η)(γ) = { 1, if γ = γ₁; 2, if γ = γ₂; 5, if γ = γ₃ }.

It is easy to verify that E[ξ] = 0.9, E[η] = 0.8, and E[ξ + η] = 1.9. Thus we have

    E[ξ + η] > E[ξ] + E[η].

If the uncertain variables ξ and η are defined by

    ξ(γ) = { 0, if γ = γ₁; 1, if γ = γ₂; 2, if γ = γ₃ },
    η(γ) = { 0, if γ = γ₁; 3, if γ = γ₂; 1, if γ = γ₃ },

then

    (ξ + η)(γ) = { 0, if γ = γ₁; 4, if γ = γ₂; 3, if γ = γ₃ }.

It is easy to verify that E[ξ] = 0.5, E[η] = 0.9, and E[ξ + η] = 1.2. Thus we have

    E[ξ + η] < E[ξ] + E[η].
Comonotonic Functions of Uncertain Variable

Two real-valued functions f and g are said to be comonotonic if for any numbers x and y, we always have

    (f(x) − f(y))(g(x) − g(y)) ≥ 0.    (2.152)

It is easy to verify that (i) any function is comonotonic with itself (or a positive constant multiple of the function); (ii) any monotone increasing functions are comonotonic with each other; and (iii) any monotone decreasing functions are also comonotonic with each other.
Theorem 2.34 (Yang [216]) Let f and g be comonotonic functions. Then for any uncertain variable ξ, we have

    E[f(ξ) + g(ξ)] = E[f(ξ)] + E[g(ξ)].    (2.153)

Proof: Let f(ξ) and g(ξ) have uncertainty distributions Φ and Ψ, respectively. Since f and g are comonotonic functions, at least one of the following relations is true,

    {f(ξ) ≤ Φ⁻¹(α)} ⊂ {g(ξ) ≤ Ψ⁻¹(α)},
    {f(ξ) ≤ Φ⁻¹(α)} ⊃ {g(ξ) ≤ Ψ⁻¹(α)}.

On the one hand, we have

    M{f(ξ) + g(ξ) ≤ Φ⁻¹(α) + Ψ⁻¹(α)}
    ≥ M{(f(ξ) ≤ Φ⁻¹(α)) ∩ (g(ξ) ≤ Ψ⁻¹(α))}
    = M{f(ξ) ≤ Φ⁻¹(α)} ∧ M{g(ξ) ≤ Ψ⁻¹(α)}
    = α ∧ α = α.

On the other hand, we have

    M{f(ξ) + g(ξ) ≤ Φ⁻¹(α) + Ψ⁻¹(α)}
    ≤ M{(f(ξ) ≤ Φ⁻¹(α)) ∪ (g(ξ) ≤ Ψ⁻¹(α))}
    = M{f(ξ) ≤ Φ⁻¹(α)} ∨ M{g(ξ) ≤ Ψ⁻¹(α)}
    = α ∨ α = α.

It follows that

    M{f(ξ) + g(ξ) ≤ Φ⁻¹(α) + Ψ⁻¹(α)} = α

holds for each α. That is, Φ⁻¹(α) + Ψ⁻¹(α) is the inverse uncertainty distribution of f(ξ) + g(ξ). By using Theorem 2.31, we obtain

    E[f(ξ) + g(ξ)] = ∫₀¹ (Φ⁻¹(α) + Ψ⁻¹(α)) dα = ∫₀¹ Φ⁻¹(α) dα + ∫₀¹ Ψ⁻¹(α) dα
                   = E[f(ξ)] + E[g(ξ)].

The theorem is verified.
Example 2.15: Let ξ be a positive uncertain variable. Since ln x and exp(x) are comonotonic functions (both are increasing) on (0, +∞), we have

    E[ln ξ + exp(ξ)] = E[ln ξ] + E[exp(ξ)].    (2.154)

Example 2.16: Let ξ be a nonnegative uncertain variable. Since x, x², ..., xⁿ are comonotonic functions (all are increasing) on [0, +∞), we have

    E[ξ + ξ² + ⋯ + ξⁿ] = E[ξ] + E[ξ²] + ⋯ + E[ξⁿ].    (2.155)
Convex Function of Uncertain Variable

Theorem 2.35 Let f be a convex function on [a, b], and ξ an uncertain variable that takes values in [a, b] and has an expected value e. Then

    E[f(ξ)] ≤ ((b − e)/(b − a)) f(a) + ((e − a)/(b − a)) f(b).    (2.156)

Proof: For each γ, we have a ≤ ξ(γ) ≤ b and

    ξ(γ) = ((b − ξ(γ))/(b − a)) a + ((ξ(γ) − a)/(b − a)) b.

It follows from the convexity of f that

    f(ξ(γ)) ≤ ((b − ξ(γ))/(b − a)) f(a) + ((ξ(γ) − a)/(b − a)) f(b).

Taking expected values on both sides, we obtain the inequality.

2.6 Variance

The variance of an uncertain variable provides a degree of the spread of the distribution around its expected value. A small value of variance indicates that the uncertain variable is tightly concentrated around its expected value; a large value of variance indicates that the uncertain variable has a wide spread around its expected value.
Definition 2.17 (Liu [113]) Let ξ be an uncertain variable with finite expected value e. Then the variance of ξ is

    V[ξ] = E[(ξ − e)²].    (2.157)

This definition tells us that the variance is just the expected value of (ξ − e)². Since (ξ − e)² is a nonnegative uncertain variable, we also have

    V[ξ] = ∫₀^{+∞} M{(ξ − e)² ≥ r} dr.    (2.158)
Theorem 2.36 If ξ is an uncertain variable with finite expected value, and a and b are real numbers, then

    V[aξ + b] = a²V[ξ].    (2.159)

Proof: Let e be the expected value of ξ. Then aξ + b has an expected value ae + b. It follows from the definition of variance that

    V[aξ + b] = E[(aξ + b − (ae + b))²] = a²E[(ξ − e)²] = a²V[ξ].

The theorem is thus verified.
Theorem 2.37 Let ξ be an uncertain variable with expected value e. Then V[ξ] = 0 if and only if M{ξ = e} = 1. That is, the uncertain variable ξ is essentially the constant e.

Proof: We first assume V[ξ] = 0. It follows from the equation (2.158) that

    ∫₀^{+∞} M{(ξ − e)² ≥ r} dr = 0

which implies M{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have

    M{(ξ − e)² = 0} = 1.

That is, M{ξ = e} = 1. Conversely, assume M{ξ = e} = 1. Then we immediately have M{(ξ − e)² = 0} = 1 and M{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

    V[ξ] = ∫₀^{+∞} M{(ξ − e)² ≥ r} dr = 0.

The theorem is proved.
Maximum Variance Theorem

Let ξ be an uncertain variable that takes values in [a, b], but whose uncertainty distribution is otherwise arbitrary. When its expected value is given, what is the possible maximum variance? The maximum variance theorem will answer this question, thus playing an important role in treating games against nature.
Theorem 2.38 (Maximum Variance Theorem) Let ξ be an uncertain variable that takes values in [a, b] and has an expected value e. Then

    V[ξ] ≤ (e − a)(b − e)    (2.160)

and equality holds if the uncertain variable ξ is determined by

    ξ = { a with uncertain measure (b − e)/(b − a),
          b with uncertain measure (e − a)/(b − a). }    (2.161)

Proof: It follows from Theorem 2.35 immediately by defining f(x) = (x − e)². It is also easy to verify that the uncertain variable determined by (2.161) has variance (e − a)(b − e). The theorem is proved.
Exercise 2.34: Let ξ be an uncertain variable that takes values in [a, b]. Show that V[ξ] ≤ (b − a)²/4.
How to Obtain Variance from Uncertainty Distribution?

Let ξ be an uncertain variable with expected value e. If we only know its uncertainty distribution Φ, then the variance

    V[ξ] = ∫₀^{+∞} M{(ξ − e)² ≥ x} dx
         = ∫₀^{+∞} M{(ξ ≥ e + √x) ∪ (ξ ≤ e − √x)} dx
         ≤ ∫₀^{+∞} (M{ξ ≥ e + √x} + M{ξ ≤ e − √x}) dx
         = ∫₀^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.

Thus we have the following stipulation.

Stipulation 2.1 Let ξ be an uncertain variable with uncertainty distribution Φ and expected value e. Then its variance is

    V[ξ] = ∫₀^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.    (2.162)
Theorem 2.39 Let ξ be an uncertain variable with uncertainty distribution Φ. Then we have

    V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x).    (2.163)

Proof: This theorem is based on the stipulation (2.162), which says the variance is

    V[ξ] = ∫₀^{+∞} (1 − Φ(e + √y)) dy + ∫₀^{+∞} Φ(e − √y) dy.

Substituting e + √y with x and y with (x − e)², the change of variables and integration by parts produce

    ∫₀^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)² = ∫_e^{+∞} (x − e)² dΦ(x).

Similarly, substituting e − √y with x and y with (x − e)², we obtain

    ∫₀^{+∞} Φ(e − √y) dy = ∫_{−∞}^e (x − e)² dΦ(x).

It follows that the variance is

    V[ξ] = ∫_e^{+∞} (x − e)² dΦ(x) + ∫_{−∞}^e (x − e)² dΦ(x) = ∫_{−∞}^{+∞} (x − e)² dΦ(x).

The theorem is verified.
Theorem 2.40 (Yao [231]) Let ξ be an uncertain variable with regular uncertainty distribution Φ and finite expected value e. Then

    V[ξ] = ∫₀¹ (Φ⁻¹(α) − e)² dα.    (2.164)

Proof: Substituting Φ(x) with α and x with Φ⁻¹(α), it follows from the change of variables of integral and Theorem 2.39 that the variance is

    V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫₀¹ (Φ⁻¹(α) − e)² dα.

The theorem is verified.
Exercise 2.35: Show that the linear uncertain variable ξ ~ L(a, b) has a variance

    V[ξ] = (b − a)²/12.    (2.165)

Exercise 2.36: Show that the normal uncertain variable ξ ~ N(e, σ) has a variance

    V[ξ] = σ².    (2.166)
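Using Theorem 2.40, the variance can be computed numerically from the inverse distribution; the following Python sketch (ours) checks Exercise 2.36 for σ = 1.5:

    import math

    def variance(phi_inv, n=200001):
        """V[xi] per Theorem 2.40: integral over (0,1) of
        (phi_inv(alpha) - e)^2, with e from the same midpoint rule."""
        h = 1.0 / n
        pts = [phi_inv((k + 0.5) * h) for k in range(n)]
        e = h * sum(pts)
        return h * sum((x - e) ** 2 for x in pts)

    sigma = 1.5
    inv = lambda a: sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))
    print(variance(inv))   # ~ sigma^2 = 2.25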
Theorem 2.41 (Yao [231]) Assume ξ₁, ξ₂, ..., ξₙ are independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then the uncertain variable ξ = f(ξ₁, ξ₂, ..., ξₙ) has a variance

    V[ξ] = ∫₀¹ (f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) − e)² dα

where e is the expected value of ξ.

Proof: Since the function f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, the inverse uncertainty distribution of ξ is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

It follows from Theorem 2.40 that the result holds.
2.7 Moments

Definition 2.18 (Liu [113]) Let ξ be an uncertain variable and let k be a positive integer. Then E[ξᵏ] is called the k-th moment of ξ.
Theorem 2.42 Let ξ be a nonnegative uncertain variable with uncertainty distribution Φ, and let k be a positive integer. Then the k-th moment of ξ is

    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√x)) dx.    (2.167)

Proof: Since ξ is a nonnegative uncertain variable, we immediately have

    E[ξᵏ] = ∫₀^{+∞} M{ξᵏ ≥ x} dx = ∫₀^{+∞} M{ξ ≥ ᵏ√x} dx = ∫₀^{+∞} (1 − Φ(ᵏ√x)) dx.

The theorem is proved.
Theorem 2.43 Let ξ be an uncertain variable with uncertainty distribution Φ, and let k be an odd number. Then the k-th moment of ξ is

    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√x)) dx − ∫_{−∞}^0 Φ(ᵏ√x) dx.    (2.168)

Proof: Since k is an odd number, it follows from the definition of the expected value operator that

    E[ξᵏ] = ∫₀^{+∞} M{ξᵏ ≥ x} dx − ∫_{−∞}^0 M{ξᵏ ≤ x} dx
          = ∫₀^{+∞} M{ξ ≥ ᵏ√x} dx − ∫_{−∞}^0 M{ξ ≤ ᵏ√x} dx
          = ∫₀^{+∞} (1 − Φ(ᵏ√x)) dx − ∫_{−∞}^0 Φ(ᵏ√x) dx.

The theorem is proved.
However, when k is an even number, the k-th moment of ξ cannot be uniquely determined by the uncertainty distribution Φ. In this case, we have

    E[ξᵏ] = ∫₀^{+∞} M{ξᵏ ≥ x} dx
          = ∫₀^{+∞} M{(ξ ≥ ᵏ√x) ∪ (ξ ≤ −ᵏ√x)} dx
          ≤ ∫₀^{+∞} (M{ξ ≥ ᵏ√x} + M{ξ ≤ −ᵏ√x}) dx
          = ∫₀^{+∞} (1 − Φ(ᵏ√x) + Φ(−ᵏ√x)) dx.

Thus for the even number k, we have the following stipulation.

Stipulation 2.2 Let ξ be an uncertain variable with uncertainty distribution Φ, and let k be an even number. Then the k-th moment of ξ is

    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√x) + Φ(−ᵏ√x)) dx.    (2.169)
Theorem 2.44 Let ξ be an uncertain variable with uncertainty distribution Φ, and let k be a positive integer. Then the k-th moment of ξ is

    E[ξᵏ] = ∫_{−∞}^{+∞} xᵏ dΦ(x).    (2.170)

Proof: When k is an odd number, Theorem 2.43 says that the k-th moment is

    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√y)) dy − ∫_{−∞}^0 Φ(ᵏ√y) dy.

Substituting ᵏ√y with x and y with xᵏ, the change of variables and integration by parts produce

    ∫₀^{+∞} (1 − Φ(ᵏ√y)) dy = ∫₀^{+∞} (1 − Φ(x)) dxᵏ = ∫₀^{+∞} xᵏ dΦ(x)

and

    ∫_{−∞}^0 Φ(ᵏ√y) dy = ∫_{−∞}^0 Φ(x) dxᵏ = −∫_{−∞}^0 xᵏ dΦ(x).

Thus we have

    E[ξᵏ] = ∫₀^{+∞} xᵏ dΦ(x) + ∫_{−∞}^0 xᵏ dΦ(x) = ∫_{−∞}^{+∞} xᵏ dΦ(x).

When k is an even number, the theorem is based on the stipulation (2.169), which says the k-th moment is

    E[ξᵏ] = ∫₀^{+∞} (1 − Φ(ᵏ√y) + Φ(−ᵏ√y)) dy.

Substituting ᵏ√y with x and y with xᵏ, the change of variables and integration by parts produce

    ∫₀^{+∞} (1 − Φ(ᵏ√y)) dy = ∫₀^{+∞} (1 − Φ(x)) dxᵏ = ∫₀^{+∞} xᵏ dΦ(x).

Similarly, substituting −ᵏ√y with x and y with xᵏ, we obtain

    ∫₀^{+∞} Φ(−ᵏ√y) dy = ∫_{−∞}^0 xᵏ dΦ(x).

It follows that the k-th moment is

    E[ξᵏ] = ∫₀^{+∞} xᵏ dΦ(x) + ∫_{−∞}^0 xᵏ dΦ(x) = ∫_{−∞}^{+∞} xᵏ dΦ(x).

The theorem is thus verified for any positive integer k.
Theorem 2.45 (Sheng [194]) Let ξ be an uncertain variable with regular uncertainty distribution Φ, and let k be a positive integer. Then the k-th moment of ξ is

    E[ξᵏ] = ∫₀¹ (Φ⁻¹(α))ᵏ dα.    (2.171)

Proof: Substituting Φ(x) with α and x with Φ⁻¹(α), it follows from the change of variables of integral and Theorem 2.44 that the k-th moment is

    E[ξᵏ] = ∫_{−∞}^{+∞} xᵏ dΦ(x) = ∫₀¹ (Φ⁻¹(α))ᵏ dα.

The theorem is verified.
Exercise 2.37: Show that the second moment of the linear uncertain variable ξ ~ L(a, b) is

    E[ξ²] = (a² + ab + b²)/3.    (2.172)

Exercise 2.38: Show that the second moment of the normal uncertain variable ξ ~ N(e, σ) is

    E[ξ²] = e² + σ².    (2.173)
Theorem 2.46 (Sheng [194]) Assume ξ₁, ξ₂, ..., ξₙ are independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively, and k is a positive integer. If f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then the k-th moment of ξ = f(ξ₁, ξ₂, ..., ξₙ) is

    E[ξᵏ] = ∫₀¹ fᵏ(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) dα.

Proof: Since the function f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, the inverse uncertainty distribution of ξ is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

It follows from Theorem 2.45 that the result holds.
2.8 Entropy

This section provides a definition of entropy to characterize the uncertainty of uncertain variables.

Definition 2.19 (Liu [116]) Suppose that ξ is an uncertain variable with uncertainty distribution Φ. Then its entropy is defined by

    H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx    (2.174)

where S(t) = −t ln t − (1 − t) ln(1 − t).
Example 2.17: Let ξ be an uncertain variable with uncertainty distribution

    Φ(x) = { 0, if x < a,
             1, if x ≥ a. }    (2.175)

Essentially, ξ is a constant a. It follows from the definition of entropy that

    H[ξ] = −∫_{−∞}^a (0 ln 0 + 1 ln 1) dx − ∫_a^{+∞} (1 ln 1 + 0 ln 0) dx = 0.

This means a constant has no uncertainty.

[Figure 2.19: the function S(t) = −t ln t − (1 − t) ln(1 − t). It is easy to verify that S(t) is a symmetric function about t = 0.5, strictly increasing on the interval [0, 0.5], strictly decreasing on the interval [0.5, 1], and reaches its unique maximum ln 2 at t = 0.5. Figure omitted.]
Example 2.18: Let ξ be a linear uncertain variable L(a, b). Then its entropy is

    H[ξ] = −∫_a^b ( ((x−a)/(b−a)) ln((x−a)/(b−a)) + ((b−x)/(b−a)) ln((b−x)/(b−a)) ) dx = (b − a)/2.    (2.176)

Exercise 2.39: Show that the zigzag uncertain variable ξ ~ Z(a, b, c) has an entropy

    H[ξ] = (c − a)/2.    (2.177)

Exercise 2.40: Show that the normal uncertain variable ξ ~ N(e, σ) has an entropy

    H[ξ] = πσ/√3.    (2.178)
Theorem 2.47 Let ξ be an uncertain variable. Then H[ξ] ≥ 0 and equality holds if ξ is essentially a constant.

Proof: The nonnegativity is clear. In addition, when an uncertain variable tends to a constant, its entropy tends to the minimum 0.

Theorem 2.48 Let ξ be an uncertain variable taking values on the interval [a, b]. Then

    H[ξ] ≤ (b − a) ln 2    (2.179)

and equality holds if ξ has an uncertainty distribution Φ(x) = 0.5 on [a, b].

Proof: The theorem follows from the fact that the function S(t) reaches its maximum ln 2 at t = 0.5.
Theorem 2.49 Let ξ be an uncertain variable, and let c be a real number. Then

    H[ξ + c] = H[ξ].    (2.180)

That is, the entropy is invariant under arbitrary translations.

Proof: Write the uncertainty distribution of ξ as Φ. Then the uncertain variable ξ + c has an uncertainty distribution Φ(x − c). It follows from the definition of entropy that

    H[ξ + c] = ∫_{−∞}^{+∞} S(Φ(x − c)) dx = ∫_{−∞}^{+∞} S(Φ(x)) dx = H[ξ].

The theorem is proved.
Theorem 2.50 (Dai and Chen [24]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then

    H[ξ] = ∫₀¹ Φ⁻¹(α) ln(α/(1−α)) dα.    (2.181)

Proof: It is clear that S(α) is a derivable function with S′(α) = −ln(α/(1−α)). Since

    S(Φ(x)) = ∫₀^{Φ(x)} S′(α) dα = −∫_{Φ(x)}^1 S′(α) dα,

we have

    H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx = ∫_{−∞}^0 ∫₀^{Φ(x)} S′(α) dα dx − ∫₀^{+∞} ∫_{Φ(x)}^1 S′(α) dα dx.

It follows from the Fubini theorem that

    H[ξ] = ∫₀^{Φ(0)} ∫_{Φ⁻¹(α)}^0 S′(α) dx dα − ∫_{Φ(0)}^1 ∫₀^{Φ⁻¹(α)} S′(α) dx dα
         = −∫₀^{Φ(0)} Φ⁻¹(α) S′(α) dα − ∫_{Φ(0)}^1 Φ⁻¹(α) S′(α) dα
         = −∫₀¹ Φ⁻¹(α) S′(α) dα = ∫₀¹ Φ⁻¹(α) ln(α/(1−α)) dα.

The theorem is verified.
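Formula (2.181) is convenient numerically; the Python sketch below (ours) reproduces the normal entropy πσ/√3 of Exercise 2.40:

    import math

    def entropy(phi_inv, n=200001):
        """H[xi] per Theorem 2.50: integral over (0,1) of
        phi_inv(alpha) * ln(alpha/(1-alpha)), midpoint rule."""
        h = 1.0 / n
        return h * sum(phi_inv(a) * math.log(a / (1 - a))
                       for a in ((k + 0.5) * h for k in range(n)))

    e, sigma = 2.0, 1.5
    inv = lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))
    print(entropy(inv))                     # ~ pi*sigma/sqrt(3)
    print(math.pi * sigma / math.sqrt(3))   # 2.7206...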


Entropy of Monotone Function of Uncertain Variables

Theorem 2.51 (Dai and Chen [24]) Let ξ₁, ξ₂, ..., ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, ..., Φₙ, respectively. If f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, then the uncertain variable ξ = f(ξ₁, ξ₂, ..., ξₙ) has an entropy

    H[ξ] = ∫₀¹ f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)) ln(α/(1−α)) dα.

Proof: Since the function f(x₁, x₂, ..., xₙ) is strictly increasing with respect to x₁, x₂, ..., xₘ and strictly decreasing with respect to xₘ₊₁, xₘ₊₂, ..., xₙ, it follows from Theorem 2.19 that the inverse uncertainty distribution of ξ is

    Ψ⁻¹(α) = f(Φ₁⁻¹(α), ..., Φₘ⁻¹(α), Φₘ₊₁⁻¹(1−α), ..., Φₙ⁻¹(1−α)).

By using Theorem 2.50, we get the entropy formula.
Exercise 2.41: Let ξ and η be independent and nonnegative uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

    H[ξη] = ∫₀¹ Φ⁻¹(α)Ψ⁻¹(α) ln(α/(1−α)) dα.

Exercise 2.42: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

    H[ξ/η] = ∫₀¹ (Φ⁻¹(α)/Ψ⁻¹(1−α)) ln(α/(1−α)) dα.

Exercise 2.43: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

    H[ξ/(ξ + η)] = ∫₀¹ (Φ⁻¹(α)/(Φ⁻¹(α) + Ψ⁻¹(1−α))) ln(α/(1−α)) dα.
Positive Linearity of Entropy

Theorem 2.52 (Dai and Chen [24]) Let ξ and η be independent uncertain variables. Then for any real numbers a and b, we have

    H[aξ + bη] = |a|H[ξ] + |b|H[η].    (2.182)

Proof: Without loss of generality, suppose ξ and η have regular uncertainty distributions Φ and Ψ, respectively.

Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the inverse uncertainty distribution of aξ is

    Υ⁻¹(α) = aΦ⁻¹(α).

It follows from Theorem 2.50 that

    H[aξ] = ∫₀¹ aΦ⁻¹(α) ln(α/(1−α)) dα = a ∫₀¹ Φ⁻¹(α) ln(α/(1−α)) dα = |a|H[ξ].

If a = 0, then we immediately have H[aξ] = 0 = |a|H[ξ]. If a < 0, then the inverse uncertainty distribution of aξ is

    Υ⁻¹(α) = aΦ⁻¹(1−α).

It follows from Theorem 2.50 that

    H[aξ] = ∫₀¹ aΦ⁻¹(1−α) ln(α/(1−α)) dα = (−a) ∫₀¹ Φ⁻¹(α) ln(α/(1−α)) dα = |a|H[ξ].

Thus we always have H[aξ] = |a|H[ξ].

Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the inverse uncertainty distribution of ξ + η is

    Υ⁻¹(α) = Φ⁻¹(α) + Ψ⁻¹(α).

It follows from Theorem 2.50 that

    H[ξ + η] = ∫₀¹ (Φ⁻¹(α) + Ψ⁻¹(α)) ln(α/(1−α)) dα = H[ξ] + H[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that

    H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].

The theorem is proved.
Maximum Entropy Principle

Given some constraints, for example, expected value and variance, there are usually multiple compatible uncertainty distributions. Which uncertainty distribution shall we take? The maximum entropy principle attempts to select the uncertainty distribution that has maximum entropy and satisfies the prescribed constraints.

Theorem 2.53 (Chen and Dai [12]) Let ξ be an uncertain variable whose uncertainty distribution is arbitrary but whose expected value is e and variance is σ². Then

    H[ξ] ≤ πσ/√3    (2.183)

and the equality holds if ξ is a normal uncertain variable N(e, σ).
Proof: Let Φ(x) be the uncertainty distribution of ξ and write Ψ(x) = Φ(2e − x) for x ≤ e. It follows from the stipulation (2.162) and the change of variable of integral that the variance is

    V[ξ] = 2 ∫_e^{+∞} (x − e)(1 − Φ(x)) dx + 2 ∫_e^{+∞} (x − e)Ψ(x) dx = σ².

Thus there exists a real number κ such that

    2 ∫_e^{+∞} (x − e)(1 − Φ(x)) dx = κσ²,

    2 ∫_e^{+∞} (x − e)Ψ(x) dx = (1 − κ)σ².

The maximum entropy distribution should maximize the entropy

    H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx = ∫_e^{+∞} S(Φ(x)) dx + ∫_e^{+∞} S(Ψ(x)) dx

subject to the above two constraints. The Lagrangian is

    L = ∫_e^{+∞} S(Φ(x)) dx + ∫_e^{+∞} S(Ψ(x)) dx
        − λ ( 2 ∫_e^{+∞} (x − e)(1 − Φ(x)) dx − κσ² )
        − μ ( 2 ∫_e^{+∞} (x − e)Ψ(x) dx − (1 − κ)σ² ).

The maximum entropy distribution meets the Euler–Lagrange equations

    ln Φ(x) − ln(1 − Φ(x)) = 2λ(x − e),
    ln Ψ(x) − ln(1 − Ψ(x)) = 2μ(e − x).

Thus Φ and Ψ have the forms

    Φ(x) = (1 + exp(2λ(e − x)))⁻¹,  Ψ(x) = (1 + exp(2μ(x − e)))⁻¹.

Substituting them into the variance constraints, we get

    Φ(x) = ( 1 + exp( π(e − x)/(√(6κ) σ) ) )⁻¹,

    Ψ(x) = ( 1 + exp( π(x − e)/(√(6(1−κ)) σ) ) )⁻¹.

Then the entropy is

    H[ξ] = πσ√κ/√6 + πσ√(1−κ)/√6

which achieves the maximum when κ = 1/2. Thus the maximum entropy distribution is just the normal uncertainty distribution N(e, σ).

2.9 Distance
Definition 2.20 (Liu [113]) The distance between uncertain variables ξ and η is defined as

    d(ξ, η) = E[|ξ − η|].    (2.184)

That is, the distance between ξ and η is just the expected value of |ξ − η|. Since |ξ − η| is a nonnegative uncertain variable, we always have

    d(ξ, η) = ∫₀^{+∞} M{|ξ − η| ≥ r} dr.    (2.185)
Theorem 2.54 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(η, τ).

Proof: The parts (a), (b) and (c) follow immediately from the definition. Now we prove the part (d). It follows from the subadditivity axiom that

    d(ξ, η) = ∫₀^{+∞} M{|ξ − η| ≥ r} dr
            ≤ ∫₀^{+∞} M{|ξ − τ| + |τ − η| ≥ r} dr
            ≤ ∫₀^{+∞} M{(|ξ − τ| ≥ r/2) ∪ (|τ − η| ≥ r/2)} dr
            ≤ ∫₀^{+∞} (M{|ξ − τ| ≥ r/2} + M{|τ − η| ≥ r/2}) dr
            = 2E[|ξ − τ|] + 2E[|τ − η|] = 2d(ξ, τ) + 2d(τ, η).
Example 2.19: Let Γ = {γ₁, γ₂, γ₃}. Define M{∅} = 0, M{Γ} = 1 and M{Λ} = 1/2 for any other subset Λ. We set uncertain variables ξ, η and τ as follows,

    ξ(γ) = { 0, if γ = γ₁; 1, if γ = γ₂; −1, if γ = γ₃ },
    η(γ) = { 1, if γ = γ₁; −1, if γ = γ₂; 0, if γ = γ₃ },
    τ(γ) ≡ 0.

It is easy to verify that d(ξ, τ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus

    d(ξ, η) = (3/2)(d(ξ, τ) + d(τ, η)).

A conjecture is d(ξ, η) ≤ 1.5(d(ξ, τ) + d(τ, η)) for arbitrary uncertain variables ξ, η and τ. This is an open problem.
How to Obtain Distance from Uncertainty Distributions?

Let ξ and η be independent uncertain variables with uncertainty distributions Φ and Ψ, respectively. If ξ − η has an uncertainty distribution Υ, then the distance is

    d(ξ, η) = ∫₀^{+∞} M{|ξ − η| ≥ x} dx
            = ∫₀^{+∞} M{(ξ − η ≥ x) ∪ (ξ − η ≤ −x)} dx
            ≤ ∫₀^{+∞} (M{ξ − η ≥ x} + M{ξ − η ≤ −x}) dx
            = ∫₀^{+∞} (1 − Υ(x) + Υ(−x)) dx.

Thus we stipulate that the distance between ξ and η is

    d(ξ, η) = ∫₀^{+∞} (1 − Υ(x) + Υ(−x)) dx.    (2.186)

Mention that (2.186) is a stipulation rather than a precise formula! Furthermore, substituting Υ(x) with α and x with Υ⁻¹(α), the change of variables and integration by parts produce

    ∫₀^{+∞} (1 − Υ(x)) dx = ∫_{Υ(0)}^1 Υ⁻¹(α) dα.

Similarly, substituting Υ(x) with α and x with Υ⁻¹(α), we obtain

    ∫_{−∞}^0 Υ(x) dx = −∫₀^{Υ(0)} Υ⁻¹(α) dα.
Based on the distance formula (2.186), we have

    d(ξ, η) = ∫_{Υ(0)}^1 Υ⁻¹(α) dα − ∫₀^{Υ(0)} Υ⁻¹(α) dα = ∫₀¹ |Υ⁻¹(α)| dα.

Since Υ⁻¹(α) = Φ⁻¹(α) − Ψ⁻¹(1−α), we immediately obtain a new distance formula,

    d(ξ, η) = ∫₀¹ |Φ⁻¹(α) − Ψ⁻¹(1−α)| dα    (2.187)

where Φ⁻¹ and Ψ⁻¹ are the inverse uncertainty distributions of ξ and η, respectively.
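Formula (2.187) is straightforward to evaluate numerically; a Python sketch (ours), for ξ ~ L(0, 2) and η ~ L(1, 3), where the exact value is ∫₀¹ |4α − 3| dα = 1.25:

    def distance(phi_inv, psi_inv, n=100000):
        """d(xi, eta) per formula (2.187): integral over (0,1) of
        |phi_inv(alpha) - psi_inv(1 - alpha)|, midpoint rule."""
        h = 1.0 / n
        return h * sum(abs(phi_inv(a) - psi_inv(1 - a))
                       for a in ((k + 0.5) * h for k in range(n)))

    phi_inv = lambda a: 0.0 + 2.0 * a   # xi  ~ L(0, 2)
    psi_inv = lambda a: 1.0 + 2.0 * a   # eta ~ L(1, 3)
    print(distance(phi_inv, psi_inv))   # ~1.25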
2.10 Inequalities
Theorem 2.55 (Liu [113]) Let ξ be an uncertain variable, and f a nonnegative function. If f is even and increasing on [0, ∞), then for any given number t > 0, we have

    M{|ξ| ≥ t} ≤ E[f(ξ)]/f(t).    (2.188)

Proof: It is clear that M{|ξ| ≥ f⁻¹(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that

    E[f(ξ)] = ∫₀^{+∞} M{f(ξ) ≥ r} dr = ∫₀^{+∞} M{|ξ| ≥ f⁻¹(r)} dr
            ≥ ∫₀^{f(t)} M{|ξ| ≥ f⁻¹(r)} dr
            ≥ ∫₀^{f(t)} dr · M{|ξ| ≥ f⁻¹(f(t))}
            = f(t) · M{|ξ| ≥ t}

which proves the inequality.
Theorem 2.56 (Liu [113], Markov Inequality) Let ξ be an uncertain variable. Then for any given numbers t > 0 and p > 0, we have

    M{|ξ| ≥ t} ≤ E[|ξ|ᵖ]/tᵖ.    (2.189)

Proof: It is a special case of Theorem 2.55 when f(x) = |x|ᵖ.

Example 2.20: For any given positive number t, we define an uncertain variable as follows,

    ξ = { 0 with uncertain measure 1/2,
          t with uncertain measure 1/2. }

Then E[ξᵖ] = tᵖ/2 and M{ξ ≥ t} = 1/2 = E[ξᵖ]/tᵖ.
Theorem 2.57 (Liu [113], Chebyshev Inequality) Let ξ be an uncertain variable whose variance V[ξ] exists. Then for any given number t > 0, we have

    M{|ξ − E[ξ]| ≥ t} ≤ V[ξ]/t².    (2.190)

Proof: It is a special case of Theorem 2.55 when the uncertain variable ξ is replaced with ξ − E[ξ], and f(x) = x².

Example 2.21: For any given positive number t, we define an uncertain variable as follows,

    ξ = { −t with uncertain measure 1/2,
           t with uncertain measure 1/2. }

Then V[ξ] = t² and M{|ξ − E[ξ]| ≥ t} = 1 = V[ξ]/t².
Theorem 2.58 (Liu [113], Hölder's Inequality) Let p and q be positive numbers with 1/p + 1/q = 1, and let ξ and η be independent uncertain variables with E[|ξ|ᵖ] < ∞ and E[|η|^q] < ∞. Then we have

    E[|ξη|] ≤ (E[|ξ|ᵖ])^{1/p} (E[|η|^q])^{1/q}.    (2.191)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|ᵖ] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that

    f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀),  ∀x ≥ 0, y ≥ 0.

Letting x₀ = E[|ξ|ᵖ], y₀ = E[|η|^q], x = |ξ|ᵖ and y = |η|^q, we have

    f(|ξ|ᵖ, |η|^q) − f(E[|ξ|ᵖ], E[|η|^q]) ≤ a(|ξ|ᵖ − E[|ξ|ᵖ]) + b(|η|^q − E[|η|^q]).

Taking the expected values on both sides, we obtain

    E[f(|ξ|ᵖ, |η|^q)] ≤ f(E[|ξ|ᵖ], E[|η|^q]).

Hence the inequality (2.191) holds.
Theorem 2.59 (Liu [113], Minkowski Inequality) Let p be a real number with p ≥ 1, and let ξ and η be independent uncertain variables with E[|ξ|ᵖ] < ∞ and E[|η|ᵖ] < ∞. Then we have
\[ \sqrt[p]{E[|\xi + \eta|^p]} \le \sqrt[p]{E[|\xi|^p]} + \sqrt[p]{E[|\eta|^p]}. \tag{2.192} \]
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|ᵖ] > 0 and E[|η|ᵖ] > 0. It is easy to prove that the function f(x, y) = \((\sqrt[p]{x} + \sqrt[p]{y})^p\) is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that
\[ f(x, y) - f(x_0, y_0) \le a(x - x_0) + b(y - y_0), \quad \forall x \ge 0,\ y \ge 0. \]
Letting x₀ = E[|ξ|ᵖ], y₀ = E[|η|ᵖ], x = |ξ|ᵖ and y = |η|ᵖ, we have
\[ f(|\xi|^p, |\eta|^p) - f(E[|\xi|^p], E[|\eta|^p]) \le a(|\xi|^p - E[|\xi|^p]) + b(|\eta|^p - E[|\eta|^p]). \]
Taking the expected values on both sides, we obtain
\[ E[f(|\xi|^p, |\eta|^p)] \le f(E[|\xi|^p], E[|\eta|^p]). \]
Hence the inequality (2.192) holds.

Theorem 2.60 (Liu [113], Jensen's Inequality) Let ξ be an uncertain variable, and f a convex function. If E[ξ] and E[f(ξ)] are finite, then
\[ f(E[\xi]) \le E[f(\xi)]. \tag{2.193} \]
Especially, when f(x) = |x|ᵖ and p ≥ 1, we have |E[ξ]|ᵖ ≤ E[|ξ|ᵖ].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
\[ f(\xi) - f(E[\xi]) \ge k \cdot (\xi - E[\xi]). \]
Taking the expected values on both sides, we have
\[ E[f(\xi)] - f(E[\xi]) \ge k \cdot (E[\xi] - E[\xi]) = 0 \]
which proves the inequality.
Exercise 2.44: (Zhang [241]) Let ξ₁, ξ₂, …, ξₙ be independent uncertain variables with finite expected values, and let f be a convex function. Show that
\[ f(E[\xi_1], E[\xi_2], \ldots, E[\xi_n]) \le E[f(\xi_1, \xi_2, \ldots, \xi_n)]. \tag{2.194} \]

2.11 Sequence Convergence

This section introduces four convergence concepts of uncertain sequence: convergence almost surely (a.s.), convergence in measure, convergence in mean, and convergence in distribution.

Table 2.1: Relationship among Convergence Concepts

    Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution

Definition 2.21 (Liu [113]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables defined on the uncertainty space (Γ, L, M). The sequence {ξᵢ} is said to be convergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that
\[ \lim_{i \to \infty} |\xi_i(\gamma) - \xi(\gamma)| = 0 \tag{2.195} \]
for every γ ∈ Λ. In that case we write ξᵢ → ξ, a.s.

Definition 2.22 (Liu [113]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables. We say that the sequence {ξᵢ} converges in measure to ξ if
\[ \lim_{i \to \infty} \mathcal{M}\{|\xi_i - \xi| \ge \varepsilon\} = 0 \tag{2.196} \]
for every ε > 0.

Definition 2.23 (Liu [113]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables with finite expected values. We say that the sequence {ξᵢ} converges in mean to ξ if
\[ \lim_{i \to \infty} E[|\xi_i - \xi|] = 0. \tag{2.197} \]

Definition 2.24 (Liu [113]) Suppose that Φ, Φ₁, Φ₂, … are the uncertainty distributions of uncertain variables ξ, ξ₁, ξ₂, …, respectively. We say that {ξᵢ} converges in distribution to ξ if
\[ \lim_{i \to \infty} \Phi_i(x) = \Phi(x) \tag{2.198} \]
for every x ∈ ℜ at which Φ(x) is continuous.

Convergence in Mean vs. Convergence in Measure


Theorem 2.61 (Liu [113]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables. If {ξᵢ} converges in mean to ξ, then {ξᵢ} converges in measure to ξ.

Proof: It follows from the Markov inequality that for any given number ε > 0, we have
\[ \mathcal{M}\{|\xi_i - \xi| \ge \varepsilon\} \le \frac{E[|\xi_i - \xi|]}{\varepsilon} \to 0 \]
as i → ∞. Thus {ξᵢ} converges in measure to ξ. The theorem is proved.


Example 2.22: Convergence in measure does not imply convergence in mean. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, …} with
\[ \mathcal{M}\{\Lambda\} = \begin{cases} \sup\limits_{\gamma_i \in \Lambda} 1/i, & \text{if } \sup\limits_{\gamma_i \in \Lambda} 1/i < 0.5 \\ 1 - \sup\limits_{\gamma_i \notin \Lambda} 1/i, & \text{if } \sup\limits_{\gamma_i \notin \Lambda} 1/i < 0.5 \\ 0.5, & \text{otherwise.} \end{cases} \]
The uncertain variables are defined by
\[ \xi_i(\gamma_j) = \begin{cases} i, & \text{if } j = i \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, … and ξ ≡ 0. For some small number ε > 0, we have
\[ \mathcal{M}\{|\xi_i - \xi| \ge \varepsilon\} = \mathcal{M}\{\gamma_i\} = \frac{1}{i} \to 0. \]
That is, the sequence {ξᵢ} converges in measure to ξ. However, for each i, we have
\[ E[|\xi_i - \xi|] = 1. \]
That is, the sequence {ξᵢ} does not converge in mean to ξ.


Convergence in Measure vs. Convergence in Distribution


Theorem 2.62 (Liu [113]) Suppose ξ, ξ₁, ξ₂, … are uncertain variables. If {ξᵢ} converges in measure to ξ, then {ξᵢ} converges in distribution to ξ.

Proof: Let x be a given continuity point of the uncertainty distribution Φ. On the one hand, for any y > x, we have
\[ \{\xi_i \le x\} = \{\xi_i \le x, \xi \le y\} \cup \{\xi_i \le x, \xi > y\} \subset \{\xi \le y\} \cup \{|\xi_i - \xi| \ge y - x\}. \]
It follows from the subadditivity axiom that
\[ \Phi_i(x) \le \Phi(y) + \mathcal{M}\{|\xi_i - \xi| \ge y - x\}. \]
Since {ξᵢ} converges in measure to ξ, we have M{|ξᵢ − ξ| ≥ y − x} → 0 as i → ∞. Thus we obtain lim supᵢ Φᵢ(x) ≤ Φ(y) for any y > x. Letting y → x, we get
\[ \limsup_{i \to \infty} \Phi_i(x) \le \Phi(x). \tag{2.199} \]
On the other hand, for any z < x, we have
\[ \{\xi \le z\} = \{\xi_i \le x, \xi \le z\} \cup \{\xi_i > x, \xi \le z\} \subset \{\xi_i \le x\} \cup \{|\xi_i - \xi| \ge x - z\} \]
which implies that
\[ \Phi(z) \le \Phi_i(x) + \mathcal{M}\{|\xi_i - \xi| \ge x - z\}. \]
Since M{|ξᵢ − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim infᵢ Φᵢ(x) for any z < x. Letting z → x, we get
\[ \Phi(x) \le \liminf_{i \to \infty} \Phi_i(x). \tag{2.200} \]
It follows from (2.199) and (2.200) that Φᵢ(x) → Φ(x). The theorem is proved.
Example 2.23: Convergence in distribution does not imply convergence in measure. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂} with M{γ₁} = M{γ₂} = 1/2. We define an uncertain variable as
\[ \xi(\gamma) = \begin{cases} -1, & \text{if } \gamma = \gamma_1 \\ 1, & \text{if } \gamma = \gamma_2. \end{cases} \]
We also define ξᵢ = −ξ for i = 1, 2, … Then ξᵢ and ξ have the same uncertainty distribution. Thus {ξᵢ} converges in distribution to ξ. However, for some small number ε > 0, we have
\[ \mathcal{M}\{|\xi_i - \xi| \ge \varepsilon\} = \mathcal{M}\{\Gamma\} = 1. \]
That is, the sequence {ξᵢ} does not converge in measure to ξ.


Convergence Almost Surely vs. Convergence in Measure


Example 2.24: Convergence a.s. does not imply convergence in measure. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, …} with
\[ \mathcal{M}\{\Lambda\} = \begin{cases} \sup\limits_{\gamma_i \in \Lambda} i/(2i+1), & \text{if } \sup\limits_{\gamma_i \in \Lambda} i/(2i+1) < 0.5 \\ 1 - \sup\limits_{\gamma_i \notin \Lambda} i/(2i+1), & \text{if } \sup\limits_{\gamma_i \notin \Lambda} i/(2i+1) < 0.5 \\ 0.5, & \text{otherwise.} \end{cases} \]
Then we define uncertain variables as
\[ \xi_i(\gamma_j) = \begin{cases} i, & \text{if } j = i \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, … and ξ ≡ 0. The sequence {ξᵢ} converges a.s. to ξ. However, for some small number ε > 0, we have
\[ \mathcal{M}\{|\xi_i - \xi| \ge \varepsilon\} = \mathcal{M}\{\gamma_i\} = \frac{i}{2i+1} \to \frac{1}{2}. \]
That is, the sequence {ξᵢ} does not converge in measure to ξ.


Example 2.25: Convergence in measure does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. For any positive integer i, there is an integer j such that i = 2ʲ + k, where k is an integer between 0 and 2ʲ − 1. Then we define uncertain variables as
\[ \xi_i(\gamma) = \begin{cases} 1, & \text{if } k/2^j \le \gamma \le (k+1)/2^j \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, … and ξ ≡ 0. For some small number ε > 0, we have
\[ \mathcal{M}\{|\xi_i - \xi| \ge \varepsilon\} = \frac{1}{2^j} \to 0 \]
as i → ∞. That is, the sequence {ξᵢ} converges in measure to ξ. However, for any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2ʲ, (k+1)/2ʲ] containing γ. Thus ξᵢ(γ) does not converge to 0. In other words, the sequence {ξᵢ} does not converge a.s. to ξ.

Convergence Almost Surely vs. Convergence in Mean


Example 2.26: Convergence a.s. does not imply convergence in mean. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, …} with
\[ \mathcal{M}\{\Lambda\} = \sum_{\gamma_i \in \Lambda} \frac{1}{2^i}. \]
The uncertain variables are defined by
\[ \xi_i(\gamma_j) = \begin{cases} 2^i, & \text{if } j = i \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, … and ξ ≡ 0. Then ξᵢ converges a.s. to ξ. However, the sequence {ξᵢ} does not converge in mean to ξ because E[|ξᵢ − ξ|] ≡ 1.
Example 2.27: Convergence in mean does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. For any positive integer i, there is an integer j such that i = 2ʲ + k, where k is an integer between 0 and 2ʲ − 1. The uncertain variables are defined by
\[ \xi_i(\gamma) = \begin{cases} 1, & \text{if } k/2^j \le \gamma \le (k+1)/2^j \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, … and ξ ≡ 0. Then
\[ E[|\xi_i - \xi|] = \frac{1}{2^j} \to 0. \]
That is, the sequence {ξᵢ} converges in mean to ξ. However, for any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2ʲ, (k+1)/2ʲ] containing γ. Thus ξᵢ(γ) does not converge to 0. In other words, the sequence {ξᵢ} does not converge a.s. to ξ.
Convergence Almost Surely vs. Convergence in Distribution
Example 2.28: Convergence in distribution does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂} with M{γ₁} = M{γ₂} = 1/2. We define an uncertain variable as
\[ \xi(\gamma) = \begin{cases} -1, & \text{if } \gamma = \gamma_1 \\ 1, & \text{if } \gamma = \gamma_2. \end{cases} \]
We also define ξᵢ = −ξ for i = 1, 2, … Then ξᵢ and ξ have the same uncertainty distribution. Thus {ξᵢ} converges in distribution to ξ. However, the sequence {ξᵢ} does not converge a.s. to ξ.
Example 2.29: Convergence a.s. does not imply convergence in distribution. Take an uncertainty space (Γ, L, M) to be {γ₁, γ₂, …} with
\[ \mathcal{M}\{\Lambda\} = \begin{cases} \sup\limits_{\gamma_i \in \Lambda} i/(2i+1), & \text{if } \sup\limits_{\gamma_i \in \Lambda} i/(2i+1) < 0.5 \\ 1 - \sup\limits_{\gamma_i \notin \Lambda} i/(2i+1), & \text{if } \sup\limits_{\gamma_i \notin \Lambda} i/(2i+1) < 0.5 \\ 0.5, & \text{otherwise.} \end{cases} \]
The uncertain variables are defined by
\[ \xi_i(\gamma_j) = \begin{cases} i, & \text{if } j = i \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, … and ξ ≡ 0. Then the sequence {ξᵢ} converges a.s. to ξ. However, the uncertainty distributions of ξᵢ are
\[ \Phi_i(x) = \begin{cases} 0, & \text{if } x < 0 \\ (i+1)/(2i+1), & \text{if } 0 \le x < i \\ 1, & \text{if } x \ge i \end{cases} \]
for i = 1, 2, …, respectively. The uncertainty distribution of ξ is
\[ \Phi(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1, & \text{if } x \ge 0. \end{cases} \]
It is clear that Φᵢ(x) does not converge to Φ(x) at x > 0. That is, the sequence {ξᵢ} does not converge in distribution to ξ.

2.12 Conditional Uncertainty Distribution

Definition 2.25 (Liu [113]) The conditional uncertainty distribution Φ of an uncertain variable ξ given B is defined by
\[ \Phi(x \mid B) = \mathcal{M}\{\xi \le x \mid B\} \tag{2.201} \]
provided that M{B} > 0.


Theorem 2.63 Let ξ be an uncertain variable with uncertainty distribution Φ(x), and t a real number with Φ(t) < 1. Then the conditional uncertainty distribution of ξ given ξ > t is
\[ \Phi(x \mid (t, +\infty)) = \begin{cases} 0, & \text{if } \Phi(x) \le \Phi(t) \\[4pt] \dfrac{\Phi(x)}{1 - \Phi(t)} \wedge 0.5, & \text{if } \Phi(t) < \Phi(x) \le (1 + \Phi(t))/2 \\[4pt] \dfrac{\Phi(x) - \Phi(t)}{1 - \Phi(t)}, & \text{if } (1 + \Phi(t))/2 \le \Phi(x). \end{cases} \]
Proof: It follows from Φ(x | (t, +∞)) = M{ξ ≤ x | ξ > t} and the definition of conditional uncertainty that
\[ \Phi(x \mid (t, +\infty)) = \begin{cases} \dfrac{\mathcal{M}\{(\xi \le x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}}, & \text{if } \dfrac{\mathcal{M}\{(\xi \le x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} < 0.5 \\[8pt] 1 - \dfrac{\mathcal{M}\{(\xi > x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}}, & \text{if } \dfrac{\mathcal{M}\{(\xi > x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} < 0.5 \\[8pt] 0.5, & \text{otherwise.} \end{cases} \]
When Φ(x) ≤ Φ(t), we have x ≤ t, and
\[ \frac{\mathcal{M}\{(\xi \le x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} = \frac{\mathcal{M}\{\emptyset\}}{1 - \Phi(t)} = 0 < 0.5. \]
Thus
\[ \Phi(x \mid (t, +\infty)) = \frac{\mathcal{M}\{(\xi \le x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} = 0. \]
When Φ(t) < Φ(x) ≤ (1 + Φ(t))/2, we have x > t, and
\[ \frac{\mathcal{M}\{(\xi > x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} = \frac{1 - \Phi(x)}{1 - \Phi(t)} \ge \frac{1 - (1 + \Phi(t))/2}{1 - \Phi(t)} = 0.5 \]
and
\[ \frac{\mathcal{M}\{(\xi \le x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} \le \frac{\Phi(x)}{1 - \Phi(t)}. \]
It follows from the maximum uncertainty principle that
\[ \Phi(x \mid (t, +\infty)) = \frac{\Phi(x)}{1 - \Phi(t)} \wedge 0.5. \]
When (1 + Φ(t))/2 ≤ Φ(x), we have x > t, and
\[ \frac{\mathcal{M}\{(\xi > x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} = \frac{1 - \Phi(x)}{1 - \Phi(t)} \le \frac{1 - (1 + \Phi(t))/2}{1 - \Phi(t)} \le 0.5. \]
Thus
\[ \Phi(x \mid (t, +\infty)) = 1 - \frac{\mathcal{M}\{(\xi > x) \cap (\xi > t)\}}{\mathcal{M}\{\xi > t\}} = 1 - \frac{1 - \Phi(x)}{1 - \Phi(t)} = \frac{\Phi(x) - \Phi(t)}{1 - \Phi(t)}. \]
The theorem is proved.
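The three branches of Theorem 2.63 translate directly into code. Below is a small Python sketch (an illustration, not part of the book) that builds the conditional distribution from a callable Φ; with the linear distribution L(0, 10) and t = 2 it agrees with the formula in the exercise that follows.

```python
# A sketch of Theorem 2.63, assuming Phi is given as a Python callable;
# the linear distribution L(0, 10) below is an illustrative choice.
def conditional_gt(phi, t):
    """Conditional uncertainty distribution of xi given xi > t."""
    pt = phi(t)
    def cond(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return cond

phi = lambda x: min(max(x / 10.0, 0.0), 1.0)   # linear L(0, 10)
cond = conditional_gt(phi, 2.0)
print([round(cond(x), 3) for x in (1.0, 4.0, 6.0, 9.0)])  # [0.0, 0.5, 0.5, 0.875]
```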


Exercise 2.45: Let ξ be a linear uncertain variable L(a, b), and t a real number with a < t < b. Show that the conditional uncertainty distribution of ξ given ξ > t is
\[ \Phi(x \mid (t, +\infty)) = \begin{cases} 0, & \text{if } x \le t \\[4pt] \dfrac{x - a}{b - t} \wedge 0.5, & \text{if } t < x \le (b + t)/2 \\[4pt] \dfrac{x - t}{b - t} \wedge 1, & \text{if } (b + t)/2 \le x. \end{cases} \]
[Figure 2.20: Conditional Uncertainty Distribution Φ(x | (t, +∞))]

Theorem 2.64 Let ξ be an uncertain variable with uncertainty distribution Φ(x), and t a real number with Φ(t) > 0. Then the conditional uncertainty distribution of ξ given ξ ≤ t is
\[ \Phi(x \mid (-\infty, t]) = \begin{cases} \dfrac{\Phi(x)}{\Phi(t)}, & \text{if } \Phi(x) \le \Phi(t)/2 \\[4pt] \dfrac{\Phi(x) + \Phi(t) - 1}{\Phi(t)} \vee 0.5, & \text{if } \Phi(t)/2 \le \Phi(x) < \Phi(t) \\[4pt] 1, & \text{if } \Phi(t) \le \Phi(x). \end{cases} \]
Proof: It follows from Φ(x | (−∞, t]) = M{ξ ≤ x | ξ ≤ t} and the definition of conditional uncertainty that
\[ \Phi(x \mid (-\infty, t]) = \begin{cases} \dfrac{\mathcal{M}\{(\xi \le x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}}, & \text{if } \dfrac{\mathcal{M}\{(\xi \le x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} < 0.5 \\[8pt] 1 - \dfrac{\mathcal{M}\{(\xi > x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}}, & \text{if } \dfrac{\mathcal{M}\{(\xi > x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} < 0.5 \\[8pt] 0.5, & \text{otherwise.} \end{cases} \]
When Φ(x) ≤ Φ(t)/2, we have x < t, and
\[ \frac{\mathcal{M}\{(\xi \le x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} = \frac{\Phi(x)}{\Phi(t)} \le \frac{\Phi(t)/2}{\Phi(t)} = 0.5. \]
Thus
\[ \Phi(x \mid (-\infty, t]) = \frac{\mathcal{M}\{(\xi \le x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} = \frac{\Phi(x)}{\Phi(t)}. \]
When Φ(t)/2 ≤ Φ(x) < Φ(t), we have x < t, and
\[ \frac{\mathcal{M}\{(\xi \le x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} = \frac{\Phi(x)}{\Phi(t)} \ge \frac{\Phi(t)/2}{\Phi(t)} = 0.5 \]
and
\[ \frac{\mathcal{M}\{(\xi > x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} \le \frac{1 - \Phi(x)}{\Phi(t)}, \]
i.e.,
\[ 1 - \frac{\mathcal{M}\{(\xi > x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} \ge \frac{\Phi(x) + \Phi(t) - 1}{\Phi(t)}. \]
It follows from the maximum uncertainty principle that
\[ \Phi(x \mid (-\infty, t]) = \frac{\Phi(x) + \Phi(t) - 1}{\Phi(t)} \vee 0.5. \]
When Φ(t) ≤ Φ(x), we have x ≥ t, and
\[ \frac{\mathcal{M}\{(\xi > x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} = \frac{\mathcal{M}\{\emptyset\}}{\Phi(t)} = 0 < 0.5. \]
Thus
\[ \Phi(x \mid (-\infty, t]) = 1 - \frac{\mathcal{M}\{(\xi > x) \cap (\xi \le t)\}}{\mathcal{M}\{\xi \le t\}} = 1 - 0 = 1. \]
The theorem is proved.
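A companion sketch for Theorem 2.64 (conditioning on ξ ≤ t), again with Φ passed in as a callable; the parameter values are hypothetical, not from the book.

```python
# A sketch of Theorem 2.64 for a callable Phi; values are illustrative.
def conditional_le(phi, t):
    """Conditional uncertainty distribution of xi given xi <= t."""
    pt = phi(t)
    def cond(x):
        px = phi(x)
        if px <= pt / 2:
            return px / pt
        if px < pt:
            return max((px + pt - 1) / pt, 0.5)
        return 1.0
    return cond

phi = lambda x: min(max(x / 10.0, 0.0), 1.0)   # linear L(0, 10)
cond = conditional_le(phi, 8.0)
print([round(cond(x), 3) for x in (2.0, 5.0, 7.0, 9.0)])  # [0.25, 0.5, 0.625, 1.0]
```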


Exercise 2.46: Let ξ be a linear uncertain variable L(a, b), and t a real number with a < t < b. Show that the conditional uncertainty distribution of ξ given ξ ≤ t is
\[ \Phi(x \mid (-\infty, t]) = \begin{cases} \dfrac{x - a}{t - a} \vee 0, & \text{if } x \le (a + t)/2 \\[4pt] \left(1 - \dfrac{b - x}{t - a}\right) \vee 0.5, & \text{if } (a + t)/2 \le x < t \\[4pt] 1, & \text{if } x \ge t. \end{cases} \]

2.13 Uncertain Vector

As an extension of uncertain variable, this section introduces a concept of uncertain vector whose components are uncertain variables.

Definition 2.26 (Liu [113]) A k-dimensional uncertain vector ξ is a measurable function from an uncertainty space (Γ, L, M) to the set of k-dimensional real vectors, i.e., {ξ ∈ B} is an event for any k-dimensional Borel set B.

Theorem 2.65 (Liu [113]) The vector (ξ₁, ξ₂, …, ξ_k) is an uncertain vector if and only if ξ₁, ξ₂, …, ξ_k are uncertain variables.


[Figure 2.21: Conditional Uncertainty Distribution Φ(x | (−∞, t])]


Proof: Write ξ = (ξ₁, ξ₂, …, ξ_k). Suppose that ξ is an uncertain vector on the uncertainty space (Γ, L, M). For any Borel set B over ℜ, the set B × ℜ^{k−1} is a k-dimensional Borel set. Thus the set
\[ \{\xi_1 \in B\} = \{\xi_1 \in B, \xi_2 \in \Re, \ldots, \xi_k \in \Re\} = \{\xi \in B \times \Re^{k-1}\} \]
is an event. Hence ξ₁ is an uncertain variable. A similar process may prove that ξ₂, ξ₃, …, ξ_k are uncertain variables.

Conversely, suppose that all ξ₁, ξ₂, …, ξ_k are uncertain variables on the uncertainty space (Γ, L, M). We define
\[ \mathcal{B} = \left\{ B \subset \Re^k \,\middle|\, \{\xi \in B\} \text{ is an event} \right\}. \]
The vector ξ = (ξ₁, ξ₂, …, ξ_k) is proved to be an uncertain vector if we can prove that 𝔅 contains all k-dimensional Borel sets. First, the class 𝔅 contains all open intervals of ℜ^k because
\[ \left\{ \xi \in \prod_{i=1}^k (a_i, b_i) \right\} = \bigcap_{i=1}^k \{\xi_i \in (a_i, b_i)\} \]
is an event. Next, the class 𝔅 is a σ-algebra over ℜ^k because (i) we have ℜ^k ∈ 𝔅 since {ξ ∈ ℜ^k} = Γ; (ii) if B ∈ 𝔅, then {ξ ∈ B} is an event, and
\[ \{\xi \in B^c\} = \{\xi \in B\}^c \]
is an event. This means that B^c ∈ 𝔅; (iii) if Bᵢ ∈ 𝔅 for i = 1, 2, …, then {ξ ∈ Bᵢ} are events and
\[ \left\{ \xi \in \bigcup_{i=1}^{\infty} B_i \right\} = \bigcup_{i=1}^{\infty} \{\xi \in B_i\} \]
is an event. This means that ∪ᵢBᵢ ∈ 𝔅. Since the smallest σ-algebra containing all open intervals of ℜ^k is just the Borel algebra over ℜ^k, the class 𝔅 contains all k-dimensional Borel sets. The theorem is proved.
Definition 2.27 (Liu [113]) The joint uncertainty distribution of an uncertain vector (ξ₁, ξ₂, …, ξ_k) is defined by
\[ \Phi(x_1, x_2, \ldots, x_k) = \mathcal{M}\{\xi_1 \le x_1, \xi_2 \le x_2, \ldots, \xi_k \le x_k\} \tag{2.202} \]
for any real numbers x₁, x₂, …, x_k.


Theorem 2.66 Let ξ₁, ξ₂, …, ξ_k be independent uncertain variables with uncertainty distributions Φ₁, Φ₂, …, Φ_k, respectively. Then the uncertain vector (ξ₁, ξ₂, …, ξ_k) has a joint uncertainty distribution
\[ \Phi(x_1, x_2, \ldots, x_k) = \Phi_1(x_1) \wedge \Phi_2(x_2) \wedge \cdots \wedge \Phi_k(x_k) \tag{2.203} \]
for any real numbers x₁, x₂, …, x_k.

Proof: Since ξ₁, ξ₂, …, ξ_k are independent uncertain variables, we have
\[ \Phi(x_1, x_2, \ldots, x_k) = \mathcal{M}\left\{ \bigcap_{i=1}^k (\xi_i \le x_i) \right\} = \bigwedge_{i=1}^k \mathcal{M}\{\xi_i \le x_i\} = \bigwedge_{i=1}^k \Phi_i(x_i) \]
for any real numbers x₁, x₂, …, x_k. The theorem is proved.
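Formula (2.203) is straightforward to evaluate: the joint distribution is simply the smallest marginal value. Below is a short sketch using, as marginals, the standard normal uncertainty distribution Φ(x) = (1 + exp(π(e − x)/(√3 σ)))⁻¹ from Chapter 2; the evaluation points are illustrative assumptions.

```python
# Sketch of Theorem 2.66: the joint uncertainty distribution of independent
# uncertain variables is the minimum of the marginals; marginals here are
# normal N(e, sigma) with Phi(x) = (1 + exp(pi*(e - x)/(sqrt(3)*sigma)))^-1.
import math

def normal_cdf(x, e=0.0, s=1.0):
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3) * s)))

def joint(xs):
    return min(normal_cdf(x) for x in xs)

print(joint([0.5, 1.0, -0.2]))   # equals the marginal at the smallest point
```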


Definition 2.28 (Liu [132]) The k-dimensional uncertain vectors ξ₁, ξ₂, …, ξₙ are said to be independent if for any k-dimensional Borel sets B₁, B₂, …, Bₙ, we have
\[ \mathcal{M}\left\{ \bigcap_{i=1}^n (\xi_i \in B_i) \right\} = \bigwedge_{i=1}^n \mathcal{M}\{\xi_i \in B_i\}. \tag{2.204} \]
Theorem 2.67 (Liu [132]) The k-dimensional uncertain vectors ξ₁, ξ₂, …, ξₙ are independent if and only if
\[ \mathcal{M}\left\{ \bigcup_{i=1}^n (\xi_i \in B_i) \right\} = \bigvee_{i=1}^n \mathcal{M}\{\xi_i \in B_i\} \tag{2.205} \]
for any k-dimensional Borel sets B₁, B₂, …, Bₙ.

Proof: It follows from the duality of uncertain measure that ξ₁, ξ₂, …, ξₙ are independent if and only if
\[ \mathcal{M}\left\{ \bigcup_{i=1}^n (\xi_i \in B_i) \right\} = 1 - \mathcal{M}\left\{ \bigcap_{i=1}^n (\xi_i \in B_i^c) \right\} = 1 - \bigwedge_{i=1}^n \mathcal{M}\{\xi_i \in B_i^c\} = \bigvee_{i=1}^n \mathcal{M}\{\xi_i \in B_i\}. \]
The theorem is thus proved.


Theorem 2.68 Let ξ₁, ξ₂, …, ξₙ be independent uncertain vectors, and f₁, f₂, …, fₙ vector-valued measurable functions. Then f₁(ξ₁), f₂(ξ₂), …, fₙ(ξₙ) are independent uncertain vectors.

Proof: For any Borel sets B₁, B₂, …, Bₙ, it follows from the definition of independence that
\[ \mathcal{M}\left\{ \bigcap_{i=1}^n (f_i(\xi_i) \in B_i) \right\} = \mathcal{M}\left\{ \bigcap_{i=1}^n (\xi_i \in f_i^{-1}(B_i)) \right\} = \bigwedge_{i=1}^n \mathcal{M}\{\xi_i \in f_i^{-1}(B_i)\} = \bigwedge_{i=1}^n \mathcal{M}\{f_i(\xi_i) \in B_i\}. \]
Thus f₁(ξ₁), f₂(ξ₂), …, fₙ(ξₙ) are independent uncertain vectors.


Theorem 2.69 Let ξ₁, ξ₂, …, ξₙ be independent uncertain variables. Then (ξ₁, ξ₂, …, ξ_m) and (ξ_{m+1}, ξ_{m+2}, …, ξₙ) are independent uncertain vectors.

Multivariate Normal Distribution

Definition 2.29 (Liu [132]) Let ξ₁, ξ₂, …, ξ_m be independent normal uncertain variables with expected value 0 and variance 1. Then the uncertain vector
\[ \xi = (\xi_1, \xi_2, \ldots, \xi_m) \tag{2.206} \]
is said to have a multivariate standard normal distribution.
It is easy to verify that a standard normal uncertain vector (ξ₁, ξ₂, …, ξ_m) has a joint uncertainty distribution
\[ \Phi(x_1, x_2, \ldots, x_m) = \left( 1 + \exp\left( \frac{-\pi (x_1 \wedge x_2 \wedge \cdots \wedge x_m)}{\sqrt{3}} \right) \right)^{-1} \tag{2.207} \]
for any real numbers x₁, x₂, …, x_m. It is also easy to show that
\[ \lim_{x_i \to -\infty} \Phi(x_1, x_2, \ldots, x_m) = 0, \quad \text{for each } i, \tag{2.208} \]
\[ \lim_{(x_1, x_2, \ldots, x_m) \to +\infty} \Phi(x_1, x_2, \ldots, x_m) = 1. \tag{2.209} \]
Furthermore, the limit
\[ \lim_{(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_m) \to +\infty} \Phi(x_1, x_2, \ldots, x_m) \tag{2.210} \]
is a standard normal distribution with respect to xᵢ.


Definition 2.30 (Liu [132]) Let (η₁, η₂, …, η_m) be a standard normal uncertain vector, and let eᵢ, σᵢⱼ, i = 1, 2, …, k, j = 1, 2, …, m be real numbers. Define
\[ \xi_i = e_i + \sum_{j=1}^m \sigma_{ij} \eta_j \tag{2.211} \]
for i = 1, 2, …, k. Then the uncertain vector (ξ₁, ξ₂, …, ξ_k) is said to have a multivariate normal distribution.

That is, an uncertain vector ξ has a multivariate normal distribution if it can be represented in the form
\[ \xi = e + \Sigma \eta \tag{2.212} \]
for some real vector e and some real matrix Σ, where η is a standard normal uncertain vector. Please also note that for every index i, ξᵢ is a normal uncertain variable with expected value eᵢ and standard deviation
\[ \sum_{j=1}^m |\sigma_{ij}|. \tag{2.213} \]

Theorem 2.70 Let ξ be a normal uncertain vector, c a real vector, and D a real matrix. Then
\[ \eta = c + D\xi \tag{2.214} \]
is another normal uncertain vector.

Proof: Since ξ is a normal uncertain vector, there exists a standard normal uncertain vector τ, a real vector e and a real matrix Σ such that ξ = e + Στ. It follows that
\[ \eta = c + D\xi = c + D(e + \Sigma\tau) = (c + De) + (D\Sigma)\tau. \]
Hence η is a normal uncertain vector.

2.14 Bibliographic Notes

As a fundamental concept in uncertainty theory, the uncertain variable was


presented by Liu [113] in 2007. In order to describe uncertain variable, Liu
[113] also introduced the concept of uncertainty distribution. Later, Peng and
Iwamura [172] proved a sufficient and necessary condition for uncertainty
distribution. Furthermore, a measure inversion theorem was given by Liu
[120] that may yield uncertain measures of some special events from the
uncertainty distribution of the corresponding uncertain variable.
Following the independence of uncertain variables proposed by Liu [116],
the operational law was given by Liu [120] for calculating the uncertainty
distribution of monotone function of independent uncertain variables.


In order to rank uncertain variables, Liu [113] proposed the concept of


expected value operator. In addition, the linearity of expected value operator
was verified by Liu [120]. As an important contribution, Liu and Ha [138]
derived a useful formula for calculating the expected values of monotone
functions of uncertain variables. Based on the expected value operator, Liu
[113] presented the concepts of variance, moments and distance of uncertain
variables.
The concept of entropy was proposed by Liu [116] for characterizing the
uncertainty of uncertain variables. Dai and Chen [24] verified the positive
linearity of entropy and derived some formulas for calculating the entropy
of monotone function of uncertain variables. In addition, Chen and Dai
[12] discussed the maximum entropy principle in order to select the uncertainty distribution that has maximum entropy and satisfies the prescribed
constraints. Especially, normal uncertainty distribution is proved to have
maximum entropy when the expected value and variance are fixed in advance. As an extension of entropy, Chen, Kar and Ralescu [13] proposed a
concept of cross entropy for comparing an uncertainty distribution against a
reference uncertainty distribution.
Uncertainty inequalities play an important role in uncertainty theory. Liu [113] proved the Markov inequality, Chebyshev inequality, Hölder's inequality, Minkowski inequality, and Jensen's inequality for uncertain variables. After that, Zhang [241] extended Jensen's inequality to the case with multiple uncertain variables, and Liu and Xu [137] proved a Liapunov inequality about the moments of uncertain variables.
For uncertain sequence, Liu [113] proposed the concepts of convergence
almost surely, convergence in measure, convergence in mean, and convergence in distribution. Liu [113] also discussed the relationship among those
convergence concepts. Furthermore, Gao [42], You [232] and Zhang [241]
developed some other concepts of convergence and investigated their mathematical properties.
Uncertain vector and joint uncertainty distribution were defined by Liu
[113]. In addition, Liu [132] discussed the independence of uncertain vectors
and proposed the concept of multivariate normal uncertain vector.
From the conditional uncertain measure, Liu [113] proposed the concept of
conditional uncertainty distribution of uncertain variable, and derived some
formulas for calculating it.

Chapter 3

Uncertain Programming
Uncertain programming is a type of mathematical programming involving
uncertain variables. This chapter will provide a theory of uncertain programming, and present some uncertain programming models for machine
scheduling problem, vehicle routing problem, and project scheduling problem.

3.1 Uncertain Programming

Assume that x is a decision vector, and ξ is an uncertain vector. Since an uncertain objective function f(x, ξ) cannot be directly minimized, we may minimize its expected value, i.e.,
\[ \min_x E[f(x, \xi)]. \tag{3.1} \]
In addition, since the uncertain constraints gⱼ(x, ξ) ≤ 0, j = 1, 2, …, p do not define a crisp feasible set, it is naturally desired that the uncertain constraints hold with confidence levels α₁, α₂, …, α_p. Then we have a set of chance constraints,
\[ \mathcal{M}\{g_j(x, \xi) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, p. \tag{3.2} \]
In order to obtain a decision with minimum expected objective value subject to a set of chance constraints, Liu [115] proposed the following uncertain programming model,
\[ \begin{cases} \min\limits_x E[f(x, \xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x, \xi) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, p. \end{cases} \tag{3.3} \]

Definition 3.1 (Liu [115]) A vector x is called a feasible solution to the uncertain programming model (3.3) if
\[ \mathcal{M}\{g_j(x, \xi) \le 0\} \ge \alpha_j \tag{3.4} \]
for j = 1, 2, …, p.
Definition 3.2 (Liu [115]) A feasible solution x* is called an optimal solution to the uncertain programming model (3.3) if
\[ E[f(x^*, \xi)] \le E[f(x, \xi)] \tag{3.5} \]
for any feasible solution x.


Theorem 3.1 Assume the objective function f(x, ξ₁, ξ₂, …, ξₙ) is strictly increasing with respect to ξ₁, ξ₂, …, ξ_m and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, …, ξₙ. If ξ₁, ξ₂, …, ξₙ are independent uncertain variables with uncertainty distributions Φ₁, Φ₂, …, Φₙ, respectively, then the expected objective function E[f(x, ξ₁, ξ₂, …, ξₙ)] is equal to
\[ \int_0^1 f\left(x, \Phi_1^{-1}(\alpha), \ldots, \Phi_m^{-1}(\alpha), \Phi_{m+1}^{-1}(1 - \alpha), \ldots, \Phi_n^{-1}(1 - \alpha)\right) d\alpha. \tag{3.6} \]
Proof: It follows from Theorem 2.32 immediately.
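Formula (3.6) turns the expected objective into a one-dimensional integral that can be approximated on a grid of α values. A minimal Python sketch under assumed data (f(x, ξ₁, ξ₂) = xξ₁ − ξ₂ with linear variables L(1, 2) and L(2, 3); all values are illustrative):

```python
# Sketch of formula (3.6) for f increasing in xi1 and decreasing in xi2.
import numpy as np

inv1 = lambda a: (1 - a) * 1 + a * 2        # Phi_1^{-1}, increasing argument
inv2 = lambda a: (1 - a) * 2 + a * 3        # Phi_2^{-1}, decreasing argument

def expected_objective(x, n=10_000):
    alpha = (np.arange(n) + 0.5) / n
    f = x * inv1(alpha) - inv2(1 - alpha)   # f(x, xi1, xi2) = x*xi1 - xi2
    return np.mean(f)

print(expected_objective(2.0))   # close to 2*E[xi1] - E[xi2] = 3 - 2.5 = 0.5
```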


Exercise 3.1: Assume f(x, ξ) = h₁(x)ξ₁ + h₂(x)ξ₂ + ··· + hₙ(x)ξₙ + h₀(x) where h₁(x), h₂(x), …, hₙ(x), h₀(x) are real-valued functions and ξ₁, ξ₂, …, ξₙ are independent uncertain variables. Show that
\[ E[f(x, \xi)] = h_1(x)E[\xi_1] + h_2(x)E[\xi_2] + \cdots + h_n(x)E[\xi_n] + h_0(x). \tag{3.7} \]

Theorem 3.2 Assume the constraint function g(x, ξ₁, ξ₂, …, ξₙ) is strictly increasing with respect to ξ₁, ξ₂, …, ξ_k and strictly decreasing with respect to ξ_{k+1}, ξ_{k+2}, …, ξₙ. If ξ₁, ξ₂, …, ξₙ are independent uncertain variables with uncertainty distributions Φ₁, Φ₂, …, Φₙ, respectively, then the chance constraint
\[ \mathcal{M}\{g(x, \xi_1, \xi_2, \ldots, \xi_n) \le 0\} \ge \alpha \tag{3.8} \]
holds if and only if
\[ g\left(x, \Phi_1^{-1}(\alpha), \ldots, \Phi_k^{-1}(\alpha), \Phi_{k+1}^{-1}(1 - \alpha), \ldots, \Phi_n^{-1}(1 - \alpha)\right) \le 0. \tag{3.9} \]
Proof: It follows from Theorem 2.23 immediately.


Exercise 3.2: Assume x₁, x₂, …, xₙ are nonnegative decision variables, and ξ₁, ξ₂, …, ξₙ, ξ are independent linear uncertain variables L(a₁, b₁), L(a₂, b₂), …, L(aₙ, bₙ), L(a, b), respectively. Show that for any confidence level α ∈ (0, 1), the chance constraint
\[ \mathcal{M}\left\{ \sum_{i=1}^n \xi_i x_i \le \xi \right\} \ge \alpha \tag{3.10} \]
holds if and only if
\[ \sum_{i=1}^n \left((1 - \alpha)a_i + \alpha b_i\right) x_i \le \alpha a + (1 - \alpha)b. \tag{3.11} \]
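The deterministic equivalent (3.11) can be checked mechanically: once x, the distribution parameters and α are fixed, both sides are finite numbers. A sketch with hypothetical values (the decision values and parameters below are assumptions for illustration):

```python
# Sketch of (3.11): for nonnegative x and independent linear variables,
# M{sum xi_i*x_i <= xi} >= alpha  iff
# sum ((1-alpha)*a_i + alpha*b_i)*x_i <= alpha*a + (1-alpha)*b.
def linear_chance_constraint_holds(x, ab_list, ab_rhs, alpha):
    lhs = sum(((1 - alpha) * a + alpha * b) * xi
              for xi, (a, b) in zip(x, ab_list))
    a, b = ab_rhs
    return lhs <= alpha * a + (1 - alpha) * b

x = [1.0, 2.0]                                      # hypothetical decision
print(linear_chance_constraint_holds(x, [(0, 1), (1, 2)], (5, 9), 0.9))  # True
```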

Exercise 3.3: Assume x₁, x₂, …, xₙ are nonnegative decision variables, and ξ₁, ξ₂, …, ξₙ, ξ are independent normal uncertain variables N(e₁, σ₁), N(e₂, σ₂), …, N(eₙ, σₙ), N(e, σ), respectively. Show that for any confidence level α ∈ (0, 1), the chance constraint
\[ \mathcal{M}\left\{ \sum_{i=1}^n \xi_i x_i \le \xi \right\} \ge \alpha \tag{3.12} \]
holds if and only if
\[ \sum_{i=1}^n \left( e_i + \frac{\sigma_i \sqrt{3}}{\pi} \ln \frac{\alpha}{1 - \alpha} \right) x_i \le e - \frac{\sigma \sqrt{3}}{\pi} \ln \frac{\alpha}{1 - \alpha}. \tag{3.13} \]

Exercise 3.4: Assume ξ₁, ξ₂, …, ξₙ are independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, …, Φₙ, respectively, and h₁(x), h₂(x), …, hₙ(x), h₀(x) are real-valued functions. Show that
\[ \mathcal{M}\left\{ \sum_{i=1}^n h_i(x)\xi_i \le h_0(x) \right\} \ge \alpha \tag{3.14} \]
holds if and only if
\[ \sum_{i=1}^n h_i^+(x)\Phi_i^{-1}(\alpha) - \sum_{i=1}^n h_i^-(x)\Phi_i^{-1}(1 - \alpha) \le h_0(x) \tag{3.15} \]
where
\[ h_i^+(x) = \begin{cases} h_i(x), & \text{if } h_i(x) > 0 \\ 0, & \text{if } h_i(x) \le 0, \end{cases} \tag{3.16} \]
\[ h_i^-(x) = \begin{cases} -h_i(x), & \text{if } h_i(x) < 0 \\ 0, & \text{if } h_i(x) \ge 0 \end{cases} \tag{3.17} \]
for i = 1, 2, …, n.
Theorem 3.3 Assume f(x, ξ₁, ξ₂, …, ξₙ) is strictly increasing with respect to ξ₁, ξ₂, …, ξ_m and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, …, ξₙ, and gⱼ(x, ξ₁, ξ₂, …, ξₙ) are strictly increasing with respect to ξ₁, ξ₂, …, ξ_k and strictly decreasing with respect to ξ_{k+1}, ξ_{k+2}, …, ξₙ for j = 1, 2, …, p. If ξ₁, ξ₂, …, ξₙ are independent uncertain variables with uncertainty distributions Φ₁, Φ₂, …, Φₙ, respectively, then the uncertain programming
\[ \begin{cases} \min\limits_x E[f(x, \xi_1, \xi_2, \ldots, \xi_n)] \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x, \xi_1, \xi_2, \ldots, \xi_n) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, p \end{cases} \tag{3.18} \]
is equivalent to the crisp mathematical programming
\[ \begin{cases} \min\limits_x \displaystyle\int_0^1 f\left(x, \Phi_1^{-1}(\alpha), \ldots, \Phi_m^{-1}(\alpha), \Phi_{m+1}^{-1}(1-\alpha), \ldots, \Phi_n^{-1}(1-\alpha)\right) d\alpha \\ \text{subject to:} \\ \quad g_j\left(x, \Phi_1^{-1}(\alpha_j), \ldots, \Phi_k^{-1}(\alpha_j), \Phi_{k+1}^{-1}(1-\alpha_j), \ldots, \Phi_n^{-1}(1-\alpha_j)\right) \le 0, \quad j = 1, 2, \ldots, p. \end{cases} \]
Proof: It follows from Theorems 3.1 and 3.2 immediately.

3.2 Numerical Method

When the objective functions and constraint functions are monotone with
respect to the uncertain parameters, the uncertain programming model may
be converted to a crisp mathematical programming.
It is fortunate for us that almost all objective and constraint functions
in practical problems are indeed monotone with respect to the uncertain
parameters (not decision variables).
From the mathematical viewpoint, there is no difference between crisp
mathematical programming and classical mathematical programming except
for an integral. Thus we may solve it by simplex method, branch-and-bound
method, cutting plane method, implicit enumeration method, interior point
method, gradient method, genetic algorithm, particle swarm optimization,
neural networks, tabu search, and so on.
Example 3.1: Assume that x₁, x₂, x₃ are nonnegative decision variables, ξ₁, ξ₂, ξ₃ are independent linear uncertain variables L(1, 2), L(2, 3), L(3, 4), and η₁, η₂, η₃ are independent zigzag uncertain variables Z(1, 2, 3), Z(2, 3, 4), Z(3, 4, 5), respectively. Consider the uncertain programming,
\[ \begin{cases} \max\limits_{x_1, x_2, x_3} E\left[\sqrt{x_1 + \xi_1} + \sqrt{x_2 + \xi_2} + \sqrt{x_3 + \xi_3}\right] \\ \text{subject to:} \\ \quad \mathcal{M}\{(x_1 + \eta_1)^2 + (x_2 + \eta_2)^2 + (x_3 + \eta_3)^2 \le 100\} \ge 0.9 \\ \quad x_1, x_2, x_3 \ge 0. \end{cases} \]
Note that √(x₁ + ξ₁) + √(x₂ + ξ₂) + √(x₃ + ξ₃) is a strictly increasing function with respect to ξ₁, ξ₂, ξ₃, and (x₁ + η₁)² + (x₂ + η₂)² + (x₃ + η₃)² is a strictly increasing function with respect to η₁, η₂, η₃. It is easy to verify that the uncertain programming model can be converted to the crisp model,
\[ \begin{cases} \max\limits_{x_1, x_2, x_3} \displaystyle\int_0^1 \left(\sqrt{x_1 + \Phi_1^{-1}(\alpha)} + \sqrt{x_2 + \Phi_2^{-1}(\alpha)} + \sqrt{x_3 + \Phi_3^{-1}(\alpha)}\right) d\alpha \\ \text{subject to:} \\ \quad \left(x_1 + \Psi_1^{-1}(0.9)\right)^2 + \left(x_2 + \Psi_2^{-1}(0.9)\right)^2 + \left(x_3 + \Psi_3^{-1}(0.9)\right)^2 \le 100 \\ \quad x_1, x_2, x_3 \ge 0 \end{cases} \]
where Φ₁⁻¹, Φ₂⁻¹, Φ₃⁻¹, Ψ₁⁻¹, Ψ₂⁻¹, Ψ₃⁻¹ are inverse uncertainty distributions of uncertain variables ξ₁, ξ₂, ξ₃, η₁, η₂, η₃, respectively. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and obtain an optimal solution
\[ (x_1^*, x_2^*, x_3^*) = (2.9735, 1.9735, 0.9735) \]
whose objective value is 6.3419.
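For readers without the Matlab toolbox, the crisp model can be handed to any nonlinear solver. The sketch below uses scipy's SLSQP as a stand-in (so the numbers need not match exactly); the zigzag inverse distribution Z(a, b, c) at α = 0.9 equals 0.2b + 0.8c, which is the standard formula from Chapter 2.

```python
# A sketch that feeds the crisp model of Example 3.1 to scipy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

alpha = (np.arange(2000) + 0.5) / 2000
lin_inv = lambda a, b: (1 - alpha) * a + alpha * b       # L(a, b) inverse
xi = [lin_inv(1, 2), lin_inv(2, 3), lin_inv(3, 4)]

def zig_inv_090(a, b, c):                                # Z(a, b, c) at 0.9
    return (2 - 2 * 0.9) * b + (2 * 0.9 - 1) * c

psi = [zig_inv_090(1, 2, 3), zig_inv_090(2, 3, 4), zig_inv_090(3, 4, 5)]

def neg_objective(x):                                    # max -> min
    return -sum(np.mean(np.sqrt(x[i] + xi[i])) for i in range(3))

cons = {"type": "ineq",
        "fun": lambda x: 100 - sum((x[i] + psi[i]) ** 2 for i in range(3))}
res = minimize(neg_objective, x0=[1, 1, 1], method="SLSQP",
               bounds=[(0, None)] * 3, constraints=cons)
print(res.x, -res.fun)    # approximately (2.97, 1.97, 0.97) and 6.34
```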
Example 3.2: Assume that x₁ and x₂ are decision variables, and ξ₁ and ξ₂ are iid linear uncertain variables L(0, π/2). Consider the uncertain programming,
\[ \begin{cases} \min\limits_{x_1, x_2} E\left[x_1 \sin(x_1 - \xi_1) - x_2 \cos(x_2 + \xi_2)\right] \\ \text{subject to:} \\ \quad 0 \le x_1 \le \dfrac{\pi}{2}, \quad 0 \le x_2 \le \dfrac{\pi}{2}. \end{cases} \]
It is clear that x₁ sin(x₁ − ξ₁) − x₂ cos(x₂ + ξ₂) is strictly decreasing with respect to ξ₁ and strictly increasing with respect to ξ₂. Thus the uncertain programming is equivalent to the crisp model,
\[ \begin{cases} \min\limits_{x_1, x_2} \displaystyle\int_0^1 \left(x_1 \sin(x_1 - \Phi_1^{-1}(1 - \alpha)) - x_2 \cos(x_2 + \Phi_2^{-1}(\alpha))\right) d\alpha \\ \text{subject to:} \\ \quad 0 \le x_1 \le \dfrac{\pi}{2}, \quad 0 \le x_2 \le \dfrac{\pi}{2} \end{cases} \]
where Φ₁⁻¹, Φ₂⁻¹ are inverse uncertainty distributions of ξ₁, ξ₂, respectively. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and obtain an optimal solution
\[ (x_1^*, x_2^*) = (0.4026, 0.4026) \]
whose objective value is −0.2708.

3.3 Machine Scheduling Problem

Machine scheduling problem is concerned with finding an efficient schedule


during an uninterrupted period of time for a set of machines to process a set
of jobs. A lot of research work has been done on this type of problem. The
study of machine scheduling problem with uncertain processing times was
started by Liu [120] in 2010.
[Figure 3.1: A Machine Schedule with 3 Machines and 7 Jobs]


In a machine scheduling problem, we assume that (a) each job can be
processed on any machine without interruption; (b) each machine can process
only one job at a time; and (c) the processing times are uncertain variables
with known uncertainty distributions. We also use the following indices and
parameters:
i = 1, 2, , n: jobs;
k = 1, 2, , m: machines;
ik : uncertain processing time of job i on machine k;
ik : uncertainty distribution of ik .
How to Represent a Schedule?

Liu [105] suggested that a schedule should be represented by two decision vectors x and y, where

x = (x₁, x₂, …, xₙ): integer decision vector representing n jobs with 1 ≤ xᵢ ≤ n and xᵢ ≠ xⱼ for all i ≠ j, i, j = 1, 2, …, n. That is, the sequence {x₁, x₂, …, xₙ} is a rearrangement of {1, 2, …, n};

y = (y₁, y₂, …, y_{m−1}): integer decision vector with y₀ ≡ 0 ≤ y₁ ≤ y₂ ≤ ··· ≤ y_{m−1} ≤ n ≡ y_m.

We note that the schedule is fully determined by the decision vectors x and y in the following way. For each k (1 ≤ k ≤ m), if y_k = y_{k−1}, then machine k is not used; if y_k > y_{k−1}, then machine k is used and processes jobs x_{y_{k−1}+1}, x_{y_{k−1}+2}, …, x_{y_k} in turn. Thus the schedule of all machines is as follows,

Machine 1: x_{y₀+1} → x_{y₀+2} → ··· → x_{y₁};
Machine 2: x_{y₁+1} → x_{y₁+2} → ··· → x_{y₂};
···
Machine m: x_{y_{m−1}+1} → x_{y_{m−1}+2} → ··· → x_{y_m}. (3.19)
Machine m: xym1 +1 xym1 +2 xym .

y0

...
...
.......
...
...... ........
... ....
..
.
... ...
... ..... 1......
.............
...
...
..................................
... ...

y1

...
...
.......
...
...... ........
... ....
..
.
... ...
... ..... 3......
.............
...
...
...........................................................................
.

...................
...
...
.....
.
... 2 ....
.................

M-1

y2

y3

...
...
..
...
.......
.......
.......
...
...... ........
...... ........
...... ........ ....
... ....
...
...
..
..
.. ...
.
.
.
.
.
.
.
.
... ..
... 6 ..
... 7 .. ...
...
... ..... 5......
....
.
.
.
...... ......
.............
................
...
...
.......
...
...
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
........................................................
......................................................................................
.
.

...................
...
...
.....
.
... 4 ....
.................

M-2

(3.19)

M-3

Figure 3.2: Formulation of Schedule in which Machine 1 processes Jobs x1 , x2 ,


Machine 2 processes Jobs x3 , x4 and Machine 3 processes Jobs x5 , x6 , x7 .

Completion Times

Let Cᵢ(x, y, ξ) be the completion times of jobs i, i = 1, 2, …, n, respectively. For each k with 1 ≤ k ≤ m, if machine k is used (i.e., y_k > y_{k−1}), then we have
\[ C_{x_{y_{k-1}+1}}(x, y, \xi) = \xi_{x_{y_{k-1}+1}\,k} \tag{3.20} \]
and
\[ C_{x_{y_{k-1}+j}}(x, y, \xi) = C_{x_{y_{k-1}+j-1}}(x, y, \xi) + \xi_{x_{y_{k-1}+j}\,k} \tag{3.21} \]
for 2 ≤ j ≤ y_k − y_{k−1}.

If machine k is used, then the completion time C_{x_{y_{k−1}+1}}(x, y, ξ) of job x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is
\[ \Psi^{-1}_{x_{y_{k-1}+1}}(x, y, \alpha) = \Phi^{-1}_{x_{y_{k-1}+1}\,k}(\alpha). \tag{3.22} \]
Generally, suppose the completion time C_{x_{y_{k−1}+j−1}}(x, y, ξ) has an inverse uncertainty distribution Ψ⁻¹_{x_{y_{k−1}+j−1}}(x, y, α). Then the completion time C_{x_{y_{k−1}+j}}(x, y, ξ) has an inverse uncertainty distribution
\[ \Psi^{-1}_{x_{y_{k-1}+j}}(x, y, \alpha) = \Psi^{-1}_{x_{y_{k-1}+j-1}}(x, y, \alpha) + \Phi^{-1}_{x_{y_{k-1}+j}\,k}(\alpha). \tag{3.23} \]
This recursive process may produce all inverse uncertainty distributions of completion times of jobs.


Makespan

Note that, for each k (1 ≤ k ≤ m), the value C_{x_{y_k}}(x, y, ξ) is just the time that machine k finishes all jobs assigned to it. Thus the makespan of the schedule (x, y) is determined by
\[ f(x, y, \xi) = \max_{1 \le k \le m} C_{x_{y_k}}(x, y, \xi) \tag{3.24} \]
whose inverse uncertainty distribution is
\[ \Psi^{-1}(x, y, \alpha) = \max_{1 \le k \le m} \Psi^{-1}_{x_{y_k}}(x, y, \alpha). \tag{3.25} \]

Machine Scheduling Model

In order to minimize the expected makespan E[f(x, y, ξ)], we have the following machine scheduling model,
\[ \begin{cases} \min\limits_{x, y} E[f(x, y, \xi)] \\ \text{subject to:} \\ \quad 1 \le x_i \le n, \quad i = 1, 2, \ldots, n \\ \quad x_i \ne x_j, \quad i \ne j,\ i, j = 1, 2, \ldots, n \\ \quad 0 \le y_1 \le y_2 \le \cdots \le y_{m-1} \le n \\ \quad x_i, y_j,\ i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, m-1, \quad \text{integers.} \end{cases} \tag{3.26} \]
Since Ψ⁻¹(x, y, α) is the inverse uncertainty distribution of f(x, y, ξ), the machine scheduling model is simplified as follows,
\[ \begin{cases} \min\limits_{x, y} \displaystyle\int_0^1 \Psi^{-1}(x, y, \alpha)\,d\alpha \\ \text{subject to:} \\ \quad 1 \le x_i \le n, \quad i = 1, 2, \ldots, n \\ \quad x_i \ne x_j, \quad i \ne j,\ i, j = 1, 2, \ldots, n \\ \quad 0 \le y_1 \le y_2 \le \cdots \le y_{m-1} \le n \\ \quad x_i, y_j,\ i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, m-1, \quad \text{integers.} \end{cases} \tag{3.27} \]
Numerical Experiment

Assume that there are 3 machines and 7 jobs with the following linear uncertain processing times
\[ \xi_{ik} \sim \mathcal{L}(i, i+k), \quad i = 1, 2, \ldots, 7,\ k = 1, 2, 3 \]
where i is the index of jobs and k is the index of machines. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution is
\[ x^* = (1, 4, 5, 3, 7, 2, 6), \quad y^* = (3, 5). \tag{3.28} \]
In other words, the optimal machine schedule is

Machine 1: 1 → 4 → 5
Machine 2: 3 → 7
Machine 3: 2 → 6

whose expected makespan is 12.
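Given the schedule x* and y*, the recursion (3.22)–(3.23) and formula (3.25) make the expected makespan a one-dimensional integral. The following sketch (an illustration, not the toolbox implementation) recomputes it; since Φᵢₖ⁻¹(α) = i + αk for L(i, i + k), it reproduces the value 12.

```python
# Recompute the expected makespan of the schedule above via (3.22)-(3.25),
# with xi_ik ~ L(i, i + k), so Phi_ik^{-1}(alpha) = i + alpha * k.
import numpy as np

alpha = (np.arange(10_000) + 0.5) / 10_000
x, y = [1, 4, 5, 3, 7, 2, 6], [3, 5]

bounds = [0] + y + [7]                            # y0 = 0, ym = n = 7
makespan_inv = np.zeros_like(alpha)
for k in range(3):                                # machines k + 1 = 1, 2, 3
    finish = np.zeros_like(alpha)
    for pos in range(bounds[k], bounds[k + 1]):
        i = x[pos]                                # job index
        finish = finish + (i + alpha * (k + 1))   # add Phi_ik^{-1}(alpha)
    makespan_inv = np.maximum(makespan_inv, finish)

print(np.mean(makespan_inv))                      # expected makespan, about 12
```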

3.4 Vehicle Routing Problem

Vehicle routing problem (VRP) is concerned with finding efficient routes,


beginning and ending at a central depot, for a fleet of vehicles to serve a
number of customers.
[Figure 3.3: A Vehicle Routing Plan with Single Depot and 7 Customers]
Due to its wide applicability and economic importance, vehicle routing problem has been extensively studied. Liu [120] first introduced uncertainty theory into the research area of vehicle routing problem in 2010. In this section, vehicle routing problem will be modelled by uncertain programming in which the travel times are assumed to be uncertain variables with known uncertainty distributions.

We assume that (a) a vehicle will be assigned for only one route on which there may be more than one customer; (b) a customer will be visited by one and only one vehicle; (c) each route begins and ends at the depot; and (d) each customer specifies its time window within which the delivery is permitted or preferred to start.

Let us first introduce the following indices and model parameters:

i = 0: depot;
i = 1, 2, …, n: customers;
k = 1, 2, …, m: vehicles;
Dᵢⱼ: travel distance from customers i to j, i, j = 0, 1, 2, …, n;
Tᵢⱼ: uncertain travel time from customers i to j, i, j = 0, 1, 2, …, n;
Φᵢⱼ: uncertainty distribution of Tᵢⱼ, i, j = 0, 1, 2, …, n;
[aᵢ, bᵢ]: time window of customer i, i = 1, 2, …, n.
Operational Plan

Liu [105] suggested that an operational plan should be represented by three decision vectors x, y and t, where

x = (x₁, x₂, …, xₙ): integer decision vector representing n customers with 1 ≤ xᵢ ≤ n and xᵢ ≠ xⱼ for all i ≠ j, i, j = 1, 2, …, n. That is, the sequence {x₁, x₂, …, xₙ} is a rearrangement of {1, 2, …, n};

y = (y₁, y₂, …, y_{m−1}): integer decision vector with y₀ ≡ 0 ≤ y₁ ≤ y₂ ≤ ··· ≤ y_{m−1} ≤ n ≡ y_m;

t = (t₁, t₂, …, t_m): each t_k represents the starting time of vehicle k at the depot, k = 1, 2, …, m.

We note that the operational plan is fully determined by the decision vectors x, y and t in the following way. For each k (1 ≤ k ≤ m), if y_k = y_{k−1}, then vehicle k is not used; if y_k > y_{k−1}, then vehicle k is used and starts from the depot at time t_k, and the tour of vehicle k is 0 → x_{y_{k−1}+1} → x_{y_{k−1}+2} → ··· → x_{y_k} → 0. Thus the tours of all vehicles are as follows:

Vehicle 1: 0 → x_{y₀+1} → x_{y₀+2} → ··· → x_{y₁} → 0;
Vehicle 2: 0 → x_{y₁+1} → x_{y₁+2} → ··· → x_{y₂} → 0;
···
Vehicle m: 0 → x_{y_{m−1}+1} → x_{y_{m−1}+2} → ··· → x_{y_m} → 0.


[Figure 3.4: Formulation of Operational Plan in which Vehicle 1 visits Customers x1, x2, Vehicle 2 visits Customers x3, x4 and Vehicle 3 visits Customers x5, x6, x7.]
It is clear that this type of representation is intuitive, and the total number
of decision variables is n + 2m 1. We also note that the above decision
variables x, y and t ensure that: (a) each vehicle will be used at most one
time; (b) all tours begin and end at the depot; (c) each customer will be
visited by one and only one vehicle; and (d) there is no subtour.


Arrival Times

Let fᵢ(x, y, t) be the arrival time function of some vehicles at customers i for i = 1, 2, …, n. We remind readers that fᵢ(x, y, t) are determined by the decision variables x, y and t, i = 1, 2, …, n. Since unloading can start either immediately, or later, when a vehicle arrives at a customer, the calculation of fᵢ(x, y, t) is heavily dependent on the operational strategy. Here we assume that the customer does not permit a delivery earlier than the time window. That is, the vehicle will wait to unload until the beginning of the time window if it arrives before the time window. If a vehicle arrives at a customer after the beginning of the time window, unloading will start immediately. For each k with 1 ≤ k ≤ m, if vehicle k is used (i.e., y_k > y_{k−1}), then we have
\[ f_{x_{y_{k-1}+1}}(x, y, t) = t_k + T_{0\,x_{y_{k-1}+1}} \]
and
\[ f_{x_{y_{k-1}+j}}(x, y, t) = \left( f_{x_{y_{k-1}+j-1}}(x, y, t) \vee a_{x_{y_{k-1}+j-1}} \right) + T_{x_{y_{k-1}+j-1}\,x_{y_{k-1}+j}} \]
for 2 ≤ j ≤ y_k − y_{k−1}. If vehicle k is used, i.e., y_k > y_{k−1}, then the arrival time f_{x_{y_{k−1}+1}}(x, y, t) at the customer x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is
\[ \Psi^{-1}_{x_{y_{k-1}+1}}(x, y, t, \alpha) = t_k + \Phi^{-1}_{0\,x_{y_{k-1}+1}}(\alpha). \]
Generally, suppose the arrival time f_{x_{y_{k−1}+j−1}}(x, y, t) has an inverse uncertainty distribution Ψ⁻¹_{x_{y_{k−1}+j−1}}(x, y, t, α). Then f_{x_{y_{k−1}+j}}(x, y, t) has an inverse uncertainty distribution
\[ \Psi^{-1}_{x_{y_{k-1}+j}}(x, y, t, \alpha) = \left( \Psi^{-1}_{x_{y_{k-1}+j-1}}(x, y, t, \alpha) \vee a_{x_{y_{k-1}+j-1}} \right) + \Phi^{-1}_{x_{y_{k-1}+j-1}\,x_{y_{k-1}+j}}(\alpha) \]
for 2 ≤ j ≤ y_k − y_{k−1}. This recursive process may produce all inverse uncertainty distributions of arrival times at customers.
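The recursion can be coded directly once the inverse distributions of the leg travel times are available. A small sketch for one used vehicle, with hypothetical linear travel times and window openings (all numbers are illustrative assumptions, not the data of the experiment that follows):

```python
# Sketch of the arrival-time recursion for a single used vehicle.
def arrival_inverses(t_k, leg_invs, window_opens, alpha):
    """leg_invs[j](alpha): inverse uncertainty distribution of the travel
    time on the j-th leg; window_opens[j]: a-value of the j-th customer."""
    psi = t_k + leg_invs[0](alpha)        # first customer: t_k + T_{0,x}
    out = [psi]
    for j in range(1, len(leg_invs)):
        # wait until the previous window opens, then add the next leg
        psi = max(psi, window_opens[j - 1]) + leg_invs[j](alpha)
        out.append(psi)
    return out

lin = lambda a, b: (lambda al: (1 - al) * a + al * b)   # L(a, b) inverse
legs = [lin(1, 2), lin(2, 3), lin(1, 3)]
print(arrival_inverses(8.0, legs, [10.0, 13.0], 0.9))   # [9.9, 12.9, 15.8]
```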
Travel Distance

Let g(x, y) be the total travel distance of all vehicles. Then we have
\[ g(x, y) = \sum_{k=1}^m g_k(x, y) \tag{3.29} \]
where
\[ g_k(x, y) = \begin{cases} D_{0\,x_{y_{k-1}+1}} + \displaystyle\sum_{j=y_{k-1}+1}^{y_k - 1} D_{x_j x_{j+1}} + D_{x_{y_k} 0}, & \text{if } y_k > y_{k-1} \\[6pt] 0, & \text{if } y_k = y_{k-1} \end{cases} \]
for k = 1, 2, …, m.

Vehicle Routing Model

If we hope that each customer i (1 ≤ i ≤ n) is visited within its time window [aᵢ, bᵢ] with confidence level αᵢ (i.e., the vehicle arrives at customer i before time bᵢ), then we have the following chance constraint,
\[ \mathcal{M}\{f_i(x, y, t) \le b_i\} \ge \alpha_i. \tag{3.30} \]
If we want to minimize the total travel distance of all vehicles subject to the time window constraint, then we have the following vehicle routing model,
\[ \begin{cases} \min\limits_{x, y, t} g(x, y) \\ \text{subject to:} \\ \quad \mathcal{M}\{f_i(x, y, t) \le b_i\} \ge \alpha_i, \quad i = 1, 2, \ldots, n \\ \quad 1 \le x_i \le n, \quad i = 1, 2, \ldots, n \\ \quad x_i \ne x_j, \quad i \ne j,\ i, j = 1, 2, \ldots, n \\ \quad 0 \le y_1 \le y_2 \le \cdots \le y_{m-1} \le n \\ \quad x_i, y_j,\ i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, m-1, \quad \text{integers} \end{cases} \tag{3.31} \]
which is equivalent to
\[ \begin{cases} \min\limits_{x, y, t} g(x, y) \\ \text{subject to:} \\ \quad \Psi_i^{-1}(x, y, t, \alpha_i) \le b_i, \quad i = 1, 2, \ldots, n \\ \quad 1 \le x_i \le n, \quad i = 1, 2, \ldots, n \\ \quad x_i \ne x_j, \quad i \ne j,\ i, j = 1, 2, \ldots, n \\ \quad 0 \le y_1 \le y_2 \le \cdots \le y_{m-1} \le n \\ \quad x_i, y_j,\ i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, m-1, \quad \text{integers} \end{cases} \tag{3.32} \]
where Ψᵢ⁻¹(x, y, t, α) are the inverse uncertainty distributions of fᵢ(x, y, t) for i = 1, 2, …, n, respectively.
Numerical Experiment

Assume that there are 3 vehicles and 7 customers with the following time windows,

Node 1: [7:00, 9:00]      Node 5: [15:00, 17:00]
Node 2: [7:00, 9:00]      Node 6: [19:00, 21:00]
Node 3: [15:00, 17:00]    Node 7: [19:00, 21:00]
Node 4: [15:00, 17:00]

and each customer is visited within time windows with confidence level 0.90. We also assume that the distances are
\[ D_{ij} = |i - j|, \quad i, j = 0, 1, 2, \ldots, 7 \]
and the travel times are normal uncertain variables
\[ T_{ij} \sim \mathcal{N}(2|i - j|, 1), \quad i, j = 0, 1, 2, \ldots, 7. \]
The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the optimal solution is
\[ x^* = (1, 3, 2, 5, 7, 4, 6), \quad y^* = (2, 5), \quad t^* = (6{:}18,\ 4{:}18,\ 8{:}18). \tag{3.33} \]
In other words, the optimal operational plan is

Vehicle 1: depot → 1 → 3 → depot, starting time: 6:18
Vehicle 2: depot → 2 → 5 → 7 → depot, starting time: 4:18
Vehicle 3: depot → 4 → 6 → depot, starting time: 8:18

whose total travel distance is 32.

3.5 Project Scheduling Problem

Project scheduling problem is to determine the schedule of allocating resources so as to balance the total cost and the completion time. The study of project scheduling problem with uncertain factors was started by Liu [120] in 2010. This section presents an uncertain programming model for project scheduling problem in which the duration times are assumed to be uncertain variables with known uncertainty distributions.

Project scheduling is usually represented by a directed acyclic network where nodes correspond to milestones, and arcs to activities which are basically characterized by the times and costs consumed.

Let (V, A) be a directed acyclic graph, where V = {1, 2, …, n, n+1} is the set of nodes, A is the set of arcs, and (i, j) ∈ A is the arc of the graph (V, A) from nodes i to j. It is well-known that we can rearrange the indexes of the nodes in V such that i < j for all (i, j) ∈ A.

Before we begin to study project scheduling problem with uncertain activity duration times, we first make some assumptions: (a) all of the costs needed are obtained via loans with some given interest rate; and (b) each activity can be processed only if the loan needed is allocated and all the foregoing activities are finished.

In order to model the project scheduling problem, we introduce the following indices and parameters:

ξᵢⱼ: uncertain duration time of activity (i, j) in A;



[Figure 3.5: A Project with 8 Milestones and 11 Activities]


Φᵢⱼ: uncertainty distribution of ξᵢⱼ;
cᵢⱼ: cost of activity (i, j) in A;
r: interest rate;
xᵢ: integer decision variable representing the allocating time of all loans needed for all activities (i, j) in A.
Starting Times

For simplicity, we write ξ = {ξᵢⱼ : (i, j) ∈ A} and x = (x₁, x₂, …, xₙ). Let Tᵢ(x, ξ) denote the starting time of all activities (i, j) in A. According to the assumptions, the starting time of the total project (i.e., the starting time of all activities (1, j) in A) should be
\[ T_1(x, \xi) = x_1 \tag{3.34} \]
whose inverse uncertainty distribution may be written as
\[ \Psi_1^{-1}(x, \alpha) = x_1. \tag{3.35} \]
From the starting time T₁(x, ξ), we deduce that the starting time of activity (2, 5) is
\[ T_2(x, \xi) = x_2 \vee (x_1 + \xi_{12}) \tag{3.36} \]
whose inverse uncertainty distribution may be written as
\[ \Psi_2^{-1}(x, \alpha) = x_2 \vee \left(x_1 + \Phi_{12}^{-1}(\alpha)\right). \tag{3.37} \]
Generally, suppose that the starting time T_k(x, ξ) of all activities (k, i) in A has an inverse uncertainty distribution Ψ_k⁻¹(x, α). Then the starting time Tᵢ(x, ξ) of all activities (i, j) in A should be
\[ T_i(x, \xi) = x_i \vee \max_{(k, i) \in A} \left(T_k(x, \xi) + \xi_{ki}\right) \tag{3.38} \]
whose inverse uncertainty distribution is
\[ \Psi_i^{-1}(x, \alpha) = x_i \vee \max_{(k, i) \in A} \left(\Psi_k^{-1}(x, \alpha) + \Phi_{ki}^{-1}(\alpha)\right). \tag{3.39} \]
This recursive process may produce all inverse uncertainty distributions of starting times of activities.
Completion Time

The completion time T(x, ξ) of the total project (i.e., the finish time of all activities (k, n+1) in A) is
\[ T(x, \xi) = \max_{(k, n+1) \in A} \left(T_k(x, \xi) + \xi_{k, n+1}\right) \tag{3.40} \]
whose inverse uncertainty distribution is
\[ \Psi^{-1}(x, \alpha) = \max_{(k, n+1) \in A} \left(\Psi_k^{-1}(x, \alpha) + \Phi_{k, n+1}^{-1}(\alpha)\right). \tag{3.41} \]
Total Cost

Based on the completion time T(x, ξ), the total cost of the project can be written as
\[ C(x, \xi) = \sum_{(i, j) \in A} c_{ij} (1 + r)^{\lceil T(x, \xi) - x_i \rceil} \tag{3.42} \]
where ⌈a⌉ represents the minimal integer greater than or equal to a. Note that C(x, ξ) is a discrete uncertain variable whose inverse uncertainty distribution is
\[ \Upsilon^{-1}(x, \alpha) = \sum_{(i, j) \in A} c_{ij} (1 + r)^{\lceil \Psi^{-1}(x, \alpha) - x_i \rceil} \tag{3.43} \]
for 0 < α < 1.
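The recursions (3.39), (3.41) and the cost formula (3.43) chain together naturally in code. Below is a minimal Python sketch on a small hypothetical project; the graph, durations, costs and allocation times are all assumptions for illustration, not the data of Figure 3.5.

```python
# A sketch of (3.39), (3.41) and (3.43) on a 4-node hypothetical project.
import math

arcs = [(1, 2), (1, 3), (2, 4), (3, 4)]          # node 4 plays the role of n+1
dur_inv = {arc: (lambda al, lo=lo, hi=hi: (1 - al) * lo + al * hi)
           for arc, (lo, hi) in {(1, 2): (3, 6), (1, 3): (4, 8),
                                 (2, 4): (5, 9), (3, 4): (2, 7)}.items()}
cost = {(1, 2): 3, (1, 3): 4, (2, 4): 6, (3, 4): 7}
x, r = {1: 0, 2: 5, 3: 6}, 0.02                  # loan allocation times

def completion_inv(al):                          # Psi^{-1}(x, alpha) via (3.39)
    psi = {1: x[1]}
    for n in (2, 3):
        psi[n] = max(x[n], max(psi[k] + dur_inv[(k, n)](al)
                               for (k, m) in arcs if m == n))
    return max(psi[k] + dur_inv[(k, 4)](al) for (k, m) in arcs if m == 4)

def cost_inv(al):                                # Upsilon^{-1}(x, alpha), (3.43)
    t = completion_inv(al)
    return sum(c * (1 + r) ** math.ceil(t - x[i]) for (i, j), c in cost.items())

print(completion_inv(0.85), cost_inv(0.85))
```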


Project Scheduling Model

In order to minimize the expected cost of the project under the completion time constraint, we may construct the following project scheduling model,
\[ \begin{cases} \min\limits_x E[C(x, \xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{T(x, \xi) \le T_0\} \ge \alpha_0 \\ \quad x \ge 0, \quad \text{integer vector} \end{cases} \tag{3.44} \]
where T₀ is a due date of the project, α₀ is a predetermined confidence level, T(x, ξ) is the completion time defined by (3.40), and C(x, ξ) is the total cost defined by (3.42). This model is equivalent to
\[ \begin{cases} \min\limits_x \displaystyle\int_0^1 \Upsilon^{-1}(x, \alpha)\,d\alpha \\ \text{subject to:} \\ \quad \Psi^{-1}(x, \alpha_0) \le T_0 \\ \quad x \ge 0, \quad \text{integer vector} \end{cases} \tag{3.45} \]


where Ψ⁻¹(x, α) is the inverse uncertainty distribution of T(x, ξ) determined by (3.41) and Υ⁻¹(x, α) is the inverse uncertainty distribution of C(x, ξ) determined by (3.43).
Numerical Experiment

Consider a project scheduling problem shown by Figure 3.5 in which there are 8 milestones and 11 activities. Assume that all duration times of activities are linear uncertain variables,
\[ \xi_{ij} \sim \mathcal{L}(3i, 3j), \quad (i, j) \in A \]
and assume that the costs of activities are
\[ c_{ij} = i + j, \quad (i, j) \in A. \]
In addition, we also suppose that the interest rate is r = 0.02, the due date is T₀ = 60, and the confidence level is α₀ = 0.85. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution is
\[ x^* = (7, 24, 17, 16, 35, 33, 30). \tag{3.46} \]
In other words, the optimal allocating times of all loans needed for all activities are

Date:  7   16  17  24  30  33  35
Node:  1    4   3   2   7   6   5
Loan: 12   11  27   7  15  14  13

whose expected total cost is 190.6, and M{T(x*, ξ) ≤ 60} = 0.88.

3.6 Uncertain Multiobjective Programming

It has been increasingly recognized that many real decision-making problems involve multiple, noncommensurable, and conflicting objectives which should be considered simultaneously. In order to optimize multiple objectives, multiobjective programming has been well developed and applied widely. For modelling multiobjective decision-making problems with uncertain parameters, Liu and Chen [129] presented the following uncertain multiobjective programming,
\[ \begin{cases} \min\limits_x \left(E[f_1(x, \xi)], E[f_2(x, \xi)], \ldots, E[f_m(x, \xi)]\right) \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x, \xi) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, p \end{cases} \tag{3.47} \]
where fᵢ(x, ξ) are return functions for i = 1, 2, …, m, and gⱼ(x, ξ) are constraint functions for j = 1, 2, …, p.


Since the objectives are usually in conflict, there is no optimal solution that simultaneously minimizes all the objective functions. In this case, we have to introduce the concept of Pareto solution, which means that it is impossible to improve any one objective without sacrificing one or more of the other objectives.

Definition 3.3 A feasible solution x* is said to be Pareto to the uncertain multiobjective programming (3.47) if there is no feasible solution x such that
\[ E[f_i(x, \xi)] \le E[f_i(x^*, \xi)], \quad i = 1, 2, \ldots, m \tag{3.48} \]
and E[fⱼ(x, ξ)] < E[fⱼ(x*, ξ)] for at least one index j.


If the decision maker has a real-valued preference function aggregating the m objective functions, then we may minimize the aggregating preference function subject to the same set of chance constraints. This model is referred to as a compromise model whose solution is called a compromise solution. It has been proved that the compromise solution is Pareto to the original multiobjective model.

The first well-known compromise model is set up by weighting the objective functions, i.e.,
\[ \begin{cases} \min\limits_x \displaystyle\sum_{i=1}^m \lambda_i E[f_i(x, \xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x, \xi) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, p \end{cases} \tag{3.49} \]
where the weights λ₁, λ₂, …, λ_m are nonnegative numbers with λ₁ + λ₂ + ··· + λ_m = 1, for example, λᵢ ≡ 1/m for i = 1, 2, …, m.

The second way is related to minimizing the distance function from a solution
\[ \left(E[f_1(x, \xi)], E[f_2(x, \xi)], \ldots, E[f_m(x, \xi)]\right) \tag{3.50} \]
to an ideal vector (f₁*, f₂*, …, f_m*), where fᵢ* are the optimal values of the ith objective functions without considering other objectives, i = 1, 2, …, m, respectively. That is,
\[ \begin{cases} \min\limits_x \displaystyle\sum_{i=1}^m \lambda_i \left(E[f_i(x, \xi)] - f_i^*\right)^2 \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x, \xi) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, p \end{cases} \tag{3.51} \]
where the weights λ₁, λ₂, …, λ_m are nonnegative numbers with λ₁ + λ₂ + ··· + λ_m = 1, for example, λᵢ ≡ 1/m for i = 1, 2, …, m.

By the third way a compromise solution can be found via an interactive approach consisting of a sequence of decision phases and computation phases. Various interactive approaches have been developed.

3.7 Uncertain Goal Programming

The concept of goal programming was presented by Charnes and Cooper [8] in
1961 and subsequently studied by many researchers. Goal programming can
be regarded as a special compromise model for multiobjective optimization
and has been applied in a wide variety of real-world problems. In multiobjective decision-making problems, we assume that the decision-maker is able
to assign a target level for each goal and the key idea is to minimize the deviations (positive, negative, or both) from the target levels. In the real-world
situation, the goals are achievable only at the expense of other goals and
these goals are usually incompatible. In order to balance multiple conflicting
objectives, a decision-maker may establish a hierarchy of importance among
these incompatible goals so as to satisfy as many goals as possible in the
order specified. For multiobjective decision-making problems with uncertain
parameters, Liu and Chen [129] proposed an uncertain goal programming,

l
m
P
P

min
Pj
(uij d+

i + vij di )

j=1
i=1

subject to:
(3.52)
+

E[fi (x, )] + d

i di = bi , i = 1, 2, , m

M{gj (x, ) 0} j ,
j = 1, 2, , p

+
di , di 0,
i = 1, 2, , m
where P_j is the preemptive priority factor which expresses the relative importance of various goals, P_j ≫ P_{j+1} for all j, u_ij is the weighting factor corresponding to positive deviation for goal i with priority j assigned, v_ij is the weighting factor corresponding to negative deviation for goal i with priority j assigned, d_i⁺ is the positive deviation from the target of goal i, d_i⁻ is the negative deviation from the target of goal i, f_i is a function in goal constraints, g_j is a function in real constraints, b_i is the target value according to goal i, l is the number of priorities, m is the number of goal constraints, and p is the number of real constraints. Note that the positive and negative deviations are calculated by
    d_i⁺ = E[f_i(x, ξ)] − b_i  if E[f_i(x, ξ)] > b_i,  and  d_i⁺ = 0  otherwise    (3.53)

and

    d_i⁻ = b_i − E[f_i(x, ξ)]  if E[f_i(x, ξ)] < b_i,  and  d_i⁻ = 0  otherwise    (3.54)

for each i. Sometimes, the objective function in the goal programming model is written as follows,

    lexmin { Σ_{i=1}^m (u_i1 d_i⁺ + v_i1 d_i⁻), Σ_{i=1}^m (u_i2 d_i⁺ + v_i2 d_i⁻), ..., Σ_{i=1}^m (u_il d_i⁺ + v_il d_i⁻) }

where lexmin represents lexicographically minimizing the objective vector.
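To make the deviation bookkeeping concrete, here is a minimal Python sketch (an illustration, not part of the book; the expected values E[f_i(x, ξ)] are assumed to be supplied by some external routine) that computes the deviations (3.53)-(3.54) and the objective vector to be minimized lexicographically:

    def deviations(expected_values, targets):
        """Positive and negative deviations (3.53)-(3.54) from target levels."""
        d_pos = [max(e - b, 0.0) for e, b in zip(expected_values, targets)]
        d_neg = [max(b - e, 0.0) for e, b in zip(expected_values, targets)]
        return d_pos, d_neg

    def lexmin_objective(d_pos, d_neg, u, v):
        """Objective vector for lexicographic minimization; u[j][i], v[j][i]
        are the weights of goal i at priority level j."""
        return [sum(u[j][i] * d_pos[i] + v[j][i] * d_neg[i]
                    for i in range(len(d_pos)))
                for j in range(len(u))]

    # Example: two goals with targets 5 and 3, one goal per priority level.
    d_pos, d_neg = deviations([6.0, 2.5], [5.0, 3.0])   # -> [1.0, 0.0], [0.0, 0.5]
    print(lexmin_objective(d_pos, d_neg, u=[[1, 0], [0, 1]], v=[[1, 0], [0, 1]]))
    # -> [1.0, 0.5], compared entry by entry in priority order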

3.8 Uncertain Multilevel Programming

Multilevel programming offers a means of studying decentralized decision systems in which we assume that the leader and followers may have their own decision variables and objective functions, and the leader can only influence the reactions of followers through his own decision variables, while the followers have full authority to decide how to optimize their own objective functions in view of the decisions of the leader and other followers.
Assume that in a decentralized two-level decision system there is one leader and m followers. Let x and y_i be the control vectors of the leader and the ith followers, i = 1, 2, ..., m, respectively. We also assume that the objective functions of the leader and ith followers are F(x, y_1, ..., y_m, ξ) and f_i(x, y_1, ..., y_m, ξ), i = 1, 2, ..., m, respectively, where ξ is an uncertain vector.
Let the feasible set of control vector x of the leader be defined by the chance constraint

    M{G(x, ξ) ≤ 0} ≥ α                                                   (3.55)

where G is a constraint function, and α is a predetermined confidence level. Then for each decision x chosen by the leader, the feasibility of control vectors y_i of the ith followers should be dependent on not only x but also y_1, ..., y_{i−1}, y_{i+1}, ..., y_m, and generally represented by the chance constraints,

    M{g_i(x, y_1, y_2, ..., y_m, ξ) ≤ 0} ≥ α_i                           (3.56)

where g_i are constraint functions, and α_i are predetermined confidence levels, i = 1, 2, ..., m, respectively.
Assume that the leader first chooses his control vector x, and the followers determine their control array (y_1, y_2, ..., y_m) after that. In order to minimize the expected objective of the leader, Liu and Yao [128] proposed the following uncertain multilevel programming,

    min_x  E[F(x, y_1*, y_2*, ..., y_m*, ξ)]
    subject to:
        M{G(x, ξ) ≤ 0} ≥ α
        (y_1*, y_2*, ..., y_m*) solves problems (i = 1, 2, ..., m)       (3.57)
            min_{y_i}  E[f_i(x, y_1, y_2, ..., y_m, ξ)]
            subject to:
                M{g_i(x, y_1, y_2, ..., y_m, ξ) ≤ 0} ≥ α_i.
Definition 3.4 Let x be a feasible control vector of the leader. A Nash equilibrium of followers is the feasible array (y_1*, y_2*, ..., y_m*) with respect to x if

    E[f_i(x, y_1*, ..., y_{i−1}*, y_i, y_{i+1}*, ..., y_m*, ξ)] ≥ E[f_i(x, y_1*, ..., y_{i−1}*, y_i*, y_{i+1}*, ..., y_m*, ξ)]    (3.58)

for any feasible array (y_1*, ..., y_{i−1}*, y_i, y_{i+1}*, ..., y_m*) and i = 1, 2, ..., m.


Definition 3.5 Suppose that x* is a feasible control vector of the leader and (y_1*, y_2*, ..., y_m*) is a Nash equilibrium of followers with respect to x*. We call the array (x*, y_1*, y_2*, ..., y_m*) a Stackelberg-Nash equilibrium to the uncertain multilevel programming (3.57) if

    E[F(x, ȳ_1, ȳ_2, ..., ȳ_m, ξ)] ≥ E[F(x*, y_1*, y_2*, ..., y_m*, ξ)]    (3.59)

for any feasible control vector x and the Nash equilibrium (ȳ_1, ȳ_2, ..., ȳ_m) with respect to x.

3.9 Bibliographic Notes

Uncertain programming was founded by Liu [115] in 2009 and was applied to the machine scheduling problem, vehicle routing problem and project scheduling problem by Liu [120] in 2010.
As extensions of uncertain programming theory, Liu and Chen [129] developed an uncertain multiobjective programming and an uncertain goal programming. In addition, Liu and Yao [128] suggested an uncertain multilevel programming for modeling decentralized decision systems with uncertain factors.
Since then, uncertain programming has produced fruitful results in both theory and practice. To explore more books and papers, the interested reader may visit the website at http://orsc.edu.cn/online.

Chapter 4

Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting expert's experimental data by uncertainty theory. This chapter will design a questionnaire survey for collecting expert's experimental data, and introduce the empirical uncertainty distribution (i.e., the linear interpolation method), the principle of least squares, the method of moments, and the Delphi method for determining uncertainty distributions from expert's experimental data.

4.1 Expert's Experimental Data

Uncertain statistics is based on expert's experimental data rather than historical data. How do we obtain expert's experimental data? Liu [120] proposed a questionnaire survey for collecting expert's experimental data. The starting point is to invite one or more domain experts who are asked to complete a questionnaire about the meaning of an uncertain variable ξ like "how far from Beijing to Tianjin".
We first ask the domain expert to choose a possible value x (say 110km) that the uncertain variable ξ may take, and then quiz him

    "How likely is ξ less than or equal to x?"                           (4.1)

Denote the expert's belief degree by α (say 0.6). Note that the expert's belief degree of ξ greater than x must be 1 − α due to the self-duality of uncertain measure. An expert's experimental data

    (x, α) = (110, 0.6)                                                  (4.2)

is thus acquired from the domain expert.
Repeating the above process, the following expert's experimental data are obtained by the questionnaire,

    (x_1, α_1), (x_2, α_2), ..., (x_n, α_n).                             (4.3)

[Figure 4.1: Expert's Experimental Data (x, α), marking the belief degrees M{ξ ≤ x} and M{ξ ≥ x}]


Remark 4.1: None of x, and n could be assigned a value in the questionnaire before asking the domain expert. Otherwise, the domain expert may
have no knowledge or experiments enough to answer your questions.

4.2 Questionnaire Survey

Beijing is the capital of China, and Tianjin is a coastal city. Assume that the real distance between them is not exactly known to us. It is more acceptable to regard such an unknown quantity as an uncertain variable than a random variable or a fuzzy variable. Chen and Ralescu [15] employed uncertain statistics to estimate the travel distance between Beijing and Tianjin. The consultation process is as follows:
Q1: May I ask you how far it is from Beijing to Tianjin? What do you think is the minimum distance?
A1: 100km. (an expert's experimental data (100, 0) is acquired)
Q2: What do you think is the maximum distance?
A2: 150km. (an expert's experimental data (150, 1) is acquired)
Q3: What do you think is a likely distance?
A3: 130km.
Q4: What is the belief degree that the real distance is less than 130km?
A4: 0.6. (an expert's experimental data (130, 0.6) is acquired)
Q5: Is there another number this distance may be?
A5: 140km.
Q6: What is the belief degree that the real distance is less than 140km?
A6: 0.9. (an expert's experimental data (140, 0.9) is acquired)
Q7: Is there another number this distance may be?
A7: 120km.
Q8: What is the belief degree that the real distance is less than 120km?
A8: 0.3. (an expert's experimental data (120, 0.3) is acquired)
Q9: Is there another number this distance may be?
A9: No idea.
By using the questionnaire survey, five expert's experimental data of the travel distance between Beijing and Tianjin are acquired from the domain expert,

    (100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1).              (4.4)

4.3 Empirical Uncertainty Distribution

How do we determine the uncertainty distribution for an uncertain variable? Assume that we have obtained a set of expert's experimental data

    (x_1, α_1), (x_2, α_2), ..., (x_n, α_n)                              (4.5)

that meet the following consistence condition (perhaps after a rearrangement)

    x_1 < x_2 < ... < x_n,   0 ≤ α_1 ≤ α_2 ≤ ... ≤ α_n ≤ 1.              (4.6)

Based on those expert's experimental data, Liu [120] suggested an empirical uncertainty distribution,

    Φ(x) = { 0,                                                if x < x_1
           { α_i + (α_{i+1} − α_i)(x − x_i)/(x_{i+1} − x_i),   if x_i ≤ x ≤ x_{i+1}, 1 ≤ i < n      (4.7)
           { 1,                                                if x > x_n

Essentially, it is a type of linear interpolation method.
The empirical uncertainty distribution determined by (4.7) has an expected value

    E[ξ] = (α_1 + α_2)/2 · x_1 + Σ_{i=2}^{n−1} (α_{i+1} − α_{i−1})/2 · x_i + (1 − (α_{n−1} + α_n)/2) · x_n.    (4.8)

If all x_i's are nonnegative, then the kth empirical moments are

    E[ξ^k] = α_1 x_1^k + 1/(k+1) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (α_{i+1} − α_i) x_i^j x_{i+1}^{k−j} + (1 − α_n) x_n^k.    (4.9)

Example 4.1: Recall that five expert's experimental data (100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1) of the travel distance between Beijing and Tianjin have been acquired in Section 4.2. Based on those expert's experimental data, an empirical uncertainty distribution of travel distance is shown in Figure 4.3.
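As a quick sanity check, the following Python sketch (an illustration, not from the book) implements the empirical uncertainty distribution (4.7) and the expected value formula (4.8) for these five data points; it reproduces the empirical expected distance of 125.5km noted in Figure 4.3.

    def empirical_distribution(x, data):
        """Empirical uncertainty distribution (4.7): linear interpolation
        through expert's experimental data [(x1, a1), ..., (xn, an)]."""
        if x < data[0][0]:
            return 0.0
        if x > data[-1][0]:
            return 1.0
        for (x1, a1), (x2, a2) in zip(data, data[1:]):
            if x1 <= x <= x2:
                return a1 + (a2 - a1) * (x - x1) / (x2 - x1)

    def empirical_expected_value(data):
        """Expected value (4.8) of the empirical uncertainty distribution."""
        x = [p[0] for p in data]
        a = [p[1] for p in data]
        n = len(data)
        e = (a[0] + a[1]) / 2 * x[0] + (1 - (a[n - 2] + a[n - 1]) / 2) * x[n - 1]
        e += sum((a[i + 1] - a[i - 1]) / 2 * x[i] for i in range(1, n - 1))
        return e

    data = [(100, 0.0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1.0)]
    print(empirical_distribution(125, data))   # 0.45
    print(empirical_expected_value(data))      # 125.5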

[Figure 4.2: Empirical Uncertainty Distribution Φ(x), a piecewise linear curve through the expert's experimental data (x_1, α_1), ..., (x_5, α_5)]

4.4 Principle of Least Squares

Assume that an uncertainty distribution to be determined has a known functional form Φ(x|θ) with an unknown parameter θ. In order to estimate the parameter θ, Liu [120] employed the principle of least squares that minimizes the sum of the squares of the distances of the expert's experimental data to the uncertainty distribution. This minimization can be performed in either the vertical or horizontal direction. If the expert's experimental data

    (x_1, α_1), (x_2, α_2), ..., (x_n, α_n)                              (4.10)

are obtained and the vertical direction is accepted, then we have

    min_θ  Σ_{i=1}^n (Φ(x_i|θ) − α_i)².                                  (4.11)

The optimal solution θ̂ of (4.11) is called the least squares estimate of θ, and then the least squares uncertainty distribution is Φ(x|θ̂).
Example 4.2: Assume that an uncertainty distribution has a linear form with two unknown parameters a and b, i.e.,

    Φ(x) = { 0,                 if x ≤ a
           { (x − a)/(b − a),   if a ≤ x ≤ b                             (4.12)
           { 1,                 if x ≥ b.

We also assume the following expert's experimental data,

    (1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95).               (4.13)


[Figure 4.3: Empirical Uncertainty Distribution Φ(x) of the travel distance between Beijing and Tianjin, through the points (100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1). Note that the empirical expected distance is 125.5km, while the real distance is 127km on Google Earth.]
[Figure 4.4: Principle of Least Squares, fitting the curve Φ(x|θ) to the expert's experimental data]


The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that a = 0.2273, b = 4.7727 and the least squares uncertainty distribution is

    Φ(x) = { 0,                      if x ≤ 0.2273
           { (x − 0.2273)/4.5454,    if 0.2273 ≤ x ≤ 4.7727              (4.14)
           { 1,                      if x ≥ 4.7727.
Example 4.3: Assume that an uncertainty distribution has a lognormal form with two unknown parameters e and σ, i.e.,

    Φ(x|e, σ) = (1 + exp(π(e − ln x)/(√3 σ)))⁻¹.                         (4.15)

We also assume the following expert's experimental data,

    (0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9).    (4.16)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that e = 0.4825, σ = 0.7852 and the least squares uncertainty distribution is

    Φ(x) = (1 + exp((0.4825 − ln x)/0.4329))⁻¹.                          (4.17)

4.5 Method of Moments

Assume that a nonnegative uncertain variable has an uncertainty distribution

    Φ(x|θ_1, θ_2, ..., θ_p)                                              (4.18)

with unknown parameters θ_1, θ_2, ..., θ_p. Given a set of expert's experimental data

    (x_1, α_1), (x_2, α_2), ..., (x_n, α_n)                              (4.19)

with

    0 ≤ x_1 < x_2 < ... < x_n,   0 ≤ α_1 ≤ α_2 ≤ ... ≤ α_n ≤ 1,          (4.20)

Wang and Peng [207] proposed a method of moments to estimate the unknown parameters of uncertainty distribution. At first, the kth empirical moments of the expert's experimental data are defined as those of the corresponding empirical uncertainty distribution, i.e.,

    ξ̄_k = α_1 x_1^k + 1/(k+1) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (α_{i+1} − α_i) x_i^j x_{i+1}^{k−j} + (1 − α_n) x_n^k.    (4.21)

The moment estimates θ̂_1, θ̂_2, ..., θ̂_p are then obtained by equating the first p moments of Φ(x|θ_1, θ_2, ..., θ_p) to the corresponding first p empirical moments. In other words, the moment estimates θ̂_1, θ̂_2, ..., θ̂_p should solve the system of equations,

    ∫_0^{+∞} (1 − Φ(ᵏ√x | θ_1, θ_2, ..., θ_p)) dx = ξ̄_k,   k = 1, 2, ..., p    (4.22)

where ξ̄_1, ξ̄_2, ..., ξ̄_p are empirical moments determined by (4.21).


Example 4.4: Assume that a questionnaire survey has successfully acquired
the following experts experimental data,
(1.2, 0.1), (1.5, 0.3), (1.8, 0.4), (2.5, 0.6), (3.9, 0.8), (4.6, 0.9).

(4.23)


Then the first three empirical moments are 2.5100, 7.7226 and 29.4936. We also assume that the uncertainty distribution to be determined has a zigzag form with three unknown parameters a, b and c, i.e.,

    Φ(x|a, b, c) = { 0,                        if x ≤ a
                   { (x − a)/2(b − a),         if a ≤ x ≤ b              (4.24)
                   { (x + c − 2b)/2(c − b),    if b ≤ x ≤ c
                   { 1,                        if x ≥ c.

From the expert's experimental data, we may believe that the unknown parameters must be positive numbers. Thus the first three moments of the zigzag uncertainty distribution Φ(x|a, b, c) are

    (a + 2b + c)/4,
    (a² + ab + 2b² + bc + c²)/6,
    (a³ + a²b + ab² + 2b³ + b²c + bc² + c³)/8.

It follows from the method of moments that the unknown parameters a, b, c should solve the system of equations,

    a + 2b + c = 4 × 2.5100
    a² + ab + 2b² + bc + c² = 6 × 7.7226                                 (4.25)
    a³ + a²b + ab² + 2b³ + b²c + bc² + c³ = 8 × 29.4936.

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the moment estimates are (a, b, c) = (0.9804, 2.0303, 4.9991) and the corresponding uncertainty distribution is

    Φ(x) = { 0,                      if x ≤ 0.9804
           { (x − 0.9804)/2.0998,    if 0.9804 ≤ x ≤ 2.0303              (4.26)
           { (x + 0.9385)/5.9376,    if 2.0303 ≤ x ≤ 4.9991
           { 1,                      if x ≥ 4.9991.

4.6 Multiple Domain Experts

Assume there are m domain experts and each produces an uncertainty distribution. Then we may get m uncertainty distributions Φ_1(x), Φ_2(x), ..., Φ_m(x). It was suggested by Liu [120] that the m uncertainty distributions should be aggregated to an uncertainty distribution

    Φ(x) = w_1 Φ_1(x) + w_2 Φ_2(x) + ... + w_m Φ_m(x)                    (4.27)

where w_1, w_2, ..., w_m are convex combination coefficients (i.e., they are nonnegative numbers and w_1 + w_2 + ... + w_m = 1) representing weights of the domain experts. For example, we may set

    w_i = 1/m,   i = 1, 2, ..., m.                                       (4.28)

Since Φ_1(x), Φ_2(x), ..., Φ_m(x) are uncertainty distributions, they are increasing functions taking values in [0, 1] and are not identical to either 0 or 1. It is easy to verify that their convex combination Φ(x) is also an increasing function taking values in [0, 1] with Φ(x) ≢ 0 and Φ(x) ≢ 1. Hence Φ(x) is also an uncertainty distribution by the Peng-Iwamura theorem.

4.7 Delphi Method

The Delphi method was originally developed in the 1950s by the RAND Corporation based on the assumption that group experience is more valid than individual experience. This method asks the domain experts to answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the answers from the previous round as well as the reasons that the domain experts provided for their opinions. Then the domain experts are encouraged to revise their earlier answers in light of the summary. It is believed that during this process the opinions of domain experts will converge to an appropriate answer. Wang, Gao and Guo [205] recast the Delphi method as a process to determine the uncertainty distributions. The main steps are listed as follows (a code skeleton follows the list):
Step 1. The m domain experts provide their expert's experimental data,

    (x_ij, α_ij),   j = 1, 2, ..., n_i,  i = 1, 2, ..., m.               (4.29)

Step 2. Use the ith expert's experimental data (x_i1, α_i1), (x_i2, α_i2), ..., (x_in_i, α_in_i) to generate the uncertainty distributions Φ_i of the ith domain experts, i = 1, 2, ..., m, respectively.
Step 3. Compute Φ(x) = w_1 Φ_1(x) + w_2 Φ_2(x) + ... + w_m Φ_m(x), where w_1, w_2, ..., w_m are convex combination coefficients representing weights of the domain experts.
Step 4. If |α_ij − Φ(x_ij)| are less than a given level ε > 0 for all i and j, then go to Step 5. Otherwise, the ith domain experts receive the summary (for example, the function Φ obtained in the previous round and the reasons of other experts), and then provide a set of revised expert's experimental data (x_i1, α_i1), (x_i2, α_i2), ..., (x_in_i, α_in_i) for i = 1, 2, ..., m. Go to Step 2.
Step 5. The last function Φ is the uncertainty distribution to be determined.
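The loop structure of these steps can be sketched as follows (a Python skeleton, not from the book; ask_experts and fit_distribution are hypothetical stand-ins for the questionnaire survey and for any fitting method of this chapter):

    def delphi(ask_experts, fit_distribution, weights, eps=0.05, max_rounds=10):
        """Delphi-style determination of an uncertainty distribution.
        ask_experts(summary) -> list over experts of [(x, alpha), ...];
        fit_distribution(data) -> callable Phi_i(x)."""
        summary = None
        for _ in range(max_rounds):
            data = ask_experts(summary)                    # Step 1 (or revision)
            phis = [fit_distribution(d) for d in data]     # Step 2
            phi = lambda x, ps=phis: sum(w * p(x) for w, p in zip(weights, ps))  # Step 3
            if all(abs(a - phi(x)) < eps                   # Step 4
                   for d in data for (x, a) in d):
                return phi                                 # Step 5
            summary = phi
        return phi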

4.8 Bibliographic Notes

The study of uncertain statistics was started by Liu [120] in 2010, in which a questionnaire survey for collecting expert's experimental data was designed. It was shown among others by Chen and Ralescu [15] that the questionnaire survey may successfully acquire the expert's experimental data.
Parametric uncertain statistics assumes that the uncertainty distribution to be determined has a known functional form but with unknown parameters. In order to estimate the unknown parameters, Liu [120] suggested the principle of least squares, and Wang and Peng [207] proposed the method of moments.
Nonparametric uncertain statistics does not rely on the expert's experimental data belonging to any particular uncertainty distribution. In order to determine the uncertainty distributions, Liu [120] introduced the linear interpolation method (i.e., the empirical uncertainty distribution), and Chen and Ralescu [15] proposed a series of spline interpolation methods.
When multiple domain experts are available, Wang, Gao and Guo [205] recast the Delphi method as a process to determine the uncertainty distributions.

Chapter 5

Uncertain Risk Analysis


The term "risk" has been used in different ways in the literature. Here the risk is defined as the accidental loss plus the uncertain measure of such loss. Uncertain risk analysis is a tool to quantify risk via uncertainty theory. One main feature of this topic is to model events that almost never occur. This chapter will introduce a definition of risk index and provide some useful formulas for calculating risk index. This chapter will also discuss structural risk analysis and investment risk analysis in uncertain environments.

5.1 Loss Function

A system usually contains some factors ξ_1, ξ_2, ..., ξ_n that may be understood as lifetime, strength, demand, production rate, cost, profit, and resource. Generally speaking, some specified loss is dependent on those factors. Although loss is a problem-dependent concept, usually such a loss may be represented by a loss function.
Definition 5.1 Consider a system with factors ξ_1, ξ_2, ..., ξ_n. A function f is called a loss function if some specified loss occurs if and only if

    f(ξ_1, ξ_2, ..., ξ_n) > 0.                                           (5.1)

Example 5.1: Consider a series system in which there are n elements whose lifetimes are uncertain variables ξ_1, ξ_2, ..., ξ_n. Such a system works whenever all elements work. Thus the system lifetime is

    ξ = ξ_1 ∧ ξ_2 ∧ ... ∧ ξ_n.                                           (5.2)

If the loss is understood as the case that the system fails before the time T, then we have a loss function

    f(ξ_1, ξ_2, ..., ξ_n) = T − ξ_1 ∧ ξ_2 ∧ ... ∧ ξ_n.                   (5.3)

[Figure 5.1: A Series System, with elements 1, 2, ..., n connected in a line from Input to Output]


Hence the system fails if and only if f(ξ_1, ξ_2, ..., ξ_n) > 0.
Example 5.2: Consider a parallel system in which there are n elements whose lifetimes are uncertain variables ξ_1, ξ_2, ..., ξ_n. Such a system works whenever at least one element works. Thus the system lifetime is

    ξ = ξ_1 ∨ ξ_2 ∨ ... ∨ ξ_n.                                           (5.4)

If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − ξ_1 ∨ ξ_2 ∨ ... ∨ ξ_n.                   (5.5)

Hence the system fails if and only if f(ξ_1, ξ_2, ..., ξ_n) > 0.


[Figure 5.2: A Parallel System, with n elements connected in parallel between Input and Output]


Example 5.3: Consider a k-out-of-n system in which there are n elements whose lifetimes are uncertain variables ξ_1, ξ_2, ..., ξ_n. Such a system works whenever at least k of the n elements work. Thus the system lifetime is

    ξ = k-max [ξ_1, ξ_2, ..., ξ_n].                                      (5.6)

If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − k-max [ξ_1, ξ_2, ..., ξ_n].              (5.7)

Hence the system fails if and only if f(ξ_1, ξ_2, ..., ξ_n) > 0. Note that a series system is an n-out-of-n system, and a parallel system is a 1-out-of-n system.
Example 5.4: Consider a standby system in which there are n redundant elements whose lifetimes are ξ_1, ξ_2, ..., ξ_n. For this system, only one element is active, and one of the redundant elements begins to work only when the active element fails. Thus the system lifetime is

    ξ = ξ_1 + ξ_2 + ... + ξ_n.                                           (5.8)

If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − (ξ_1 + ξ_2 + ... + ξ_n).                 (5.9)

Hence the system fails if and only if f(ξ_1, ξ_2, ..., ξ_n) > 0.


[Figure 5.3: A Standby System, with n redundant elements between Input and Output]

5.2 Risk Index

In practice, the factors ξ_1, ξ_2, ..., ξ_n of a system are usually uncertain variables rather than known constants. Thus the risk index is defined as the uncertain measure that some specified loss occurs.
Definition 5.2 (Liu [119]) Assume that a system contains uncertain factors ξ_1, ξ_2, ..., ξ_n and has a loss function f. Then the risk index is the uncertain measure that the system is loss-positive, i.e.,

    Risk = M{f(ξ_1, ξ_2, ..., ξ_n) > 0}.                                 (5.10)

Theorem 5.1 (Liu [119], Risk Index Theorem) Assume a system contains independent uncertain variables ξ_1, ξ_2, ..., ξ_n with regular uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. If the loss function f(ξ_1, ξ_2, ..., ξ_n) is strictly increasing with respect to ξ_1, ξ_2, ..., ξ_m and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, ..., ξ_n, then the risk index is just the root α of the equation

    f(Φ_1⁻¹(1 − α), ..., Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), ..., Φ_n⁻¹(α)) = 0.    (5.11)

Proof: It follows from Definition 5.2 and Theorem 2.22 immediately.


Remark 5.1: Since f(Φ_1⁻¹(1 − α), ..., Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), ..., Φ_n⁻¹(α)) is a strictly decreasing function with respect to α, its root may be estimated by the bisection method.

Remark 5.2: Keep in mind that sometimes the equation (5.11) may not have a root. In this case, if

    f(Φ_1⁻¹(1 − α), ..., Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), ..., Φ_n⁻¹(α)) < 0    (5.12)

for all α, then we set the root α = 0; and if

    f(Φ_1⁻¹(1 − α), ..., Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), ..., Φ_n⁻¹(α)) > 0    (5.13)

for all α, then we set the root α = 1.
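A minimal Python sketch of this bisection (an illustration, not the book's toolbox; f_of_alpha is assumed to be the composite function on the left of (5.11), already built from the inverse distributions) is:

    def risk_index(f_of_alpha, tol=1e-8):
        """Root of the strictly decreasing function on the left of (5.11),
        found by bisection, with the conventions of Remark 5.2 when (5.11)
        has no root. Endpoints are clipped slightly inside (0, 1) because
        inverse distributions may be unbounded at 0 and 1."""
        lo, hi = 1e-9, 1.0 - 1e-9
        if f_of_alpha(lo) < 0:          # f < 0 for all alpha: set root to 0
            return 0.0
        if f_of_alpha(hi) > 0:          # f > 0 for all alpha: set root to 1
            return 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f_of_alpha(mid) > 0:     # root lies above mid since f decreases
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Standby system (see (5.28)) of two linear elements L(0, 10) and L(0, 20)
    # with T = 6: f(alpha) = 6 - (10*alpha + 20*alpha), so the root is 0.2.
    print(risk_index(lambda a: 6 - (10 * a + 20 * a)))   # 0.2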

5.3 Series System

Consider a series system in which there are n elements whose lifetimes are independent uncertain variables ξ_1, ξ_2, ..., ξ_n with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − ξ_1 ∧ ξ_2 ∧ ... ∧ ξ_n                    (5.14)

and the risk index is

    Risk = M{f(ξ_1, ξ_2, ..., ξ_n) > 0}.                                 (5.15)

Since f is a strictly decreasing function with respect to ξ_1, ξ_2, ..., ξ_n, the risk index theorem says that the risk index is just the root α of the equation

    Φ_1⁻¹(α) ∧ Φ_2⁻¹(α) ∧ ... ∧ Φ_n⁻¹(α) = T.                            (5.16)

It is easy to verify that

    Risk = Φ_1(T) ∨ Φ_2(T) ∨ ... ∨ Φ_n(T).                               (5.17)

5.4 Parallel System

Consider a parallel system in which there are n elements whose lifetimes are independent uncertain variables ξ_1, ξ_2, ..., ξ_n with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − ξ_1 ∨ ξ_2 ∨ ... ∨ ξ_n                    (5.18)

and the risk index is

    Risk = M{f(ξ_1, ξ_2, ..., ξ_n) > 0}.                                 (5.19)

Since f is a strictly decreasing function with respect to ξ_1, ξ_2, ..., ξ_n, the risk index theorem says that the risk index is just the root α of the equation

    Φ_1⁻¹(α) ∨ Φ_2⁻¹(α) ∨ ... ∨ Φ_n⁻¹(α) = T.                            (5.20)

It is easy to verify that

    Risk = Φ_1(T) ∧ Φ_2(T) ∧ ... ∧ Φ_n(T).                               (5.21)

5.5 k-out-of-n System

Consider a k-out-of-n system in which there are n elements whose lifetimes are independent uncertain variables ξ_1, ξ_2, ..., ξ_n with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − k-max [ξ_1, ξ_2, ..., ξ_n]               (5.22)

and the risk index is

    Risk = M{f(ξ_1, ξ_2, ..., ξ_n) > 0}.                                 (5.23)

Since f is a strictly decreasing function with respect to ξ_1, ξ_2, ..., ξ_n, the risk index theorem says that the risk index is just the root α of the equation

    k-max [Φ_1⁻¹(α), Φ_2⁻¹(α), ..., Φ_n⁻¹(α)] = T.                       (5.24)

It is easy to verify that

    Risk = k-min [Φ_1(T), Φ_2(T), ..., Φ_n(T)].                          (5.25)

Note that a series system is essentially an n-out-of-n system. In this case, the risk index formula (5.25) becomes (5.17). In addition, a parallel system is essentially a 1-out-of-n system. In this case, the risk index formula (5.25) becomes (5.21).

5.6 Standby System

Consider a standby system in which there are n elements whose lifetimes are independent uncertain variables ξ_1, ξ_2, ..., ξ_n with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f(ξ_1, ξ_2, ..., ξ_n) = T − (ξ_1 + ξ_2 + ... + ξ_n)                  (5.26)

and the risk index is

    Risk = M{f(ξ_1, ξ_2, ..., ξ_n) > 0}.                                 (5.27)

Since f is a strictly decreasing function with respect to ξ_1, ξ_2, ..., ξ_n, the risk index theorem says that the risk index is just the root α of the equation

    Φ_1⁻¹(α) + Φ_2⁻¹(α) + ... + Φ_n⁻¹(α) = T.                            (5.28)

5.7 Hazard Distribution

Suppose that ξ is the lifetime of some element. Here we assume it is an uncertain variable with a prior uncertainty distribution Φ. At some time t, it is observed that the element is working. What is the residual lifetime of the element? The following definition answers this question.
Definition 5.3 (Liu [119]) Let ξ be a nonnegative uncertain variable representing the lifetime of some element. If ξ has a prior uncertainty distribution Φ, then the hazard distribution at time t is

    Φ(x|t) = { 0,                             if Φ(x) ≤ Φ(t)
             { Φ(x)/(1 − Φ(t)) ∧ 0.5,         if Φ(t) < Φ(x) ≤ (1 + Φ(t))/2       (5.29)
             { (Φ(x) − Φ(t))/(1 − Φ(t)),      if (1 + Φ(t))/2 ≤ Φ(x)

that is just the conditional uncertainty distribution of ξ given ξ > t.
The hazard distribution is essentially the posterior uncertainty distribution just after time t given that the element is working at time t.
Exercise 5.1: Let ξ be a linear uncertain variable L(a, b), and t a real number with a < t < b. Show that the hazard distribution at time t is

    Φ(x|t) = { 0,                        if x ≤ t
             { (x − a)/(b − t) ∧ 0.5,    if t < x ≤ (b + t)/2
             { (x − t)/(b − t) ∧ 1,      if (b + t)/2 ≤ x.
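A direct Python transcription of (5.29) (a sketch; phi is any prior uncertainty distribution supplied as a function) makes the three branches explicit, and reproduces the answer to Exercise 5.1:

    def hazard_distribution(phi, t):
        """Conditional (hazard) distribution (5.29) of a lifetime with prior
        uncertainty distribution phi, given survival past time t."""
        pt = phi(t)
        def phi_t(x):
            px = phi(x)
            if px <= pt:
                return 0.0
            if px <= (1 + pt) / 2:
                return min(px / (1 - pt), 0.5)
            return (px - pt) / (1 - pt)
        return phi_t

    # Linear uncertain variable L(0, 10) observed working at t = 4.
    linear = lambda x: min(max(x / 10, 0.0), 1.0)
    h = hazard_distribution(linear, 4)
    print(h(5), h(7), h(9))   # 0.5, 0.5, 0.8333..., matching Exercise 5.1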
Theorem 5.2 (Liu [119], Conditional Risk Index Theorem) Assume that a system contains uncertain factors ξ_1, ξ_2, ..., ξ_n, and has a loss function f. Suppose ξ_1, ξ_2, ..., ξ_n are independent uncertain variables with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively, and f(ξ_1, ξ_2, ..., ξ_n) is strictly increasing with respect to ξ_1, ξ_2, ..., ξ_m and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, ..., ξ_n. If it is observed that all elements are working at some time t, then the risk index is just the root α of the equation

    f(Φ_1⁻¹(1 − α|t), ..., Φ_m⁻¹(1 − α|t), Φ_{m+1}⁻¹(α|t), ..., Φ_n⁻¹(α|t)) = 0    (5.30)

where Φ_i(x|t) are hazard distributions determined by

    Φ_i(x|t) = { 0,                                 if Φ_i(x) ≤ Φ_i(t)
               { Φ_i(x)/(1 − Φ_i(t)) ∧ 0.5,         if Φ_i(t) < Φ_i(x) ≤ (1 + Φ_i(t))/2    (5.31)
               { (Φ_i(x) − Φ_i(t))/(1 − Φ_i(t)),    if (1 + Φ_i(t))/2 ≤ Φ_i(x)

for i = 1, 2, ..., n.
Proof: It follows from Definition 5.3 that each hazard distribution of an element is determined by (5.31). Thus the conditional risk index is obtained by Theorem 5.1 immediately.

5.8 Structural Risk Analysis

Consider a structural system in which the strengths and loads are assumed to be uncertain variables. We will suppose that a structural system fails whenever, for any one rod, the load variable exceeds its strength variable. If the structural risk index is defined as the uncertain measure that the structural system fails, then

    Risk = M{ ∪_{i=1}^n (ξ_i < η_i) }                                    (5.32)

where ξ_1, ξ_2, ..., ξ_n are strength variables, and η_1, η_2, ..., η_n are load variables of the n rods.
Example 5.5: (The Simplest Case) Assume there is only a single strength variable ξ and a single load variable η with continuous uncertainty distributions Φ and Ψ, respectively. In this case, the structural risk index is

    Risk = M{ξ < η}.

It follows from the risk index theorem that the risk index is just the root α of the equation

    Φ⁻¹(α) = Ψ⁻¹(1 − α).                                                 (5.33)

Especially, if the strength variable ξ has a normal uncertainty distribution N(e_s, σ_s) and the load variable η has a normal uncertainty distribution N(e_l, σ_l), then the structural risk index is

    Risk = (1 + exp(π(e_s − e_l)/(√3(σ_s + σ_l))))⁻¹.                    (5.34)

Example 5.6: (Constant Loads) Assume the uncertain strength variables ξ_1, ξ_2, ..., ξ_n are independent and have continuous uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. In many cases, the load variables η_1, η_2, ..., η_n degenerate to crisp values c_1, c_2, ..., c_n (for example, weight limits allowed by the legislation), respectively. In this case, it follows from (5.32) and independence that the structural risk index is

    Risk = M{ ∪_{i=1}^n (ξ_i < c_i) } = ∨_{i=1}^n M{ξ_i < c_i}.

That is,

    Risk = Φ_1(c_1) ∨ Φ_2(c_2) ∨ ... ∨ Φ_n(c_n).                         (5.35)

Example 5.7: (Independent Load Variables) Assume the uncertain strength variables ξ_1, ξ_2, ..., ξ_n are independent and have continuous uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. Also assume the uncertain load variables η_1, η_2, ..., η_n are independent and have continuous uncertainty distributions Ψ_1, Ψ_2, ..., Ψ_n, respectively. In this case, it follows from (5.32) and independence that the structural risk index is

    Risk = M{ ∪_{i=1}^n (ξ_i < η_i) } = ∨_{i=1}^n M{ξ_i < η_i}.

That is,

    Risk = α_1 ∨ α_2 ∨ ... ∨ α_n                                         (5.36)

where α_i are the roots of the equations

    Φ_i⁻¹(α) = Ψ_i⁻¹(1 − α)                                              (5.37)

for i = 1, 2, ..., n, respectively.
However, generally speaking, the load variables η_1, η_2, ..., η_n are neither constants nor independent. For example, the load variables η_1, η_2, ..., η_n may be functions of independent uncertain variables τ_1, τ_2, ..., τ_m. In this case, the formula (5.36) is no longer valid. Thus we have to deal with those structural systems case by case.
Example 5.8: (Series System) Consider a structural system shown in Figure 5.4 that consists of n rods in series and an object. Assume that the strength variables of the n rods are uncertain variables ξ_1, ξ_2, ..., ξ_n with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively. We also assume that the gravity of the object is an uncertain variable η with uncertainty distribution Ψ. For each i (1 ≤ i ≤ n), the load variable of the rod i is just the gravity η of the object. Thus the structural system fails whenever the load variable η exceeds at least one of the strength variables ξ_1, ξ_2, ..., ξ_n. Hence the structural risk index is

    Risk = M{ ∪_{i=1}^n (ξ_i < η) } = M{ξ_1 ∧ ξ_2 ∧ ... ∧ ξ_n < η}.

Define the loss function as

    f(ξ_1, ξ_2, ..., ξ_n, η) = η − ξ_1 ∧ ξ_2 ∧ ... ∧ ξ_n.

Then

    Risk = M{f(ξ_1, ξ_2, ..., ξ_n, η) > 0}.

Since the loss function f is strictly increasing with respect to η and strictly decreasing with respect to ξ_1, ξ_2, ..., ξ_n, it follows from the risk index theorem that the risk index is just the root α of the equation

    Ψ⁻¹(1 − α) − Φ_1⁻¹(α) ∧ Φ_2⁻¹(α) ∧ ... ∧ Φ_n⁻¹(α) = 0.               (5.38)

Or equivalently, let α_i be the roots of the equations

    Ψ⁻¹(1 − α) = Φ_i⁻¹(α)                                                (5.39)

for i = 1, 2, ..., n, respectively. Then the structural risk index is

    Risk = α_1 ∨ α_2 ∨ ... ∨ α_n.                                        (5.40)

[Figure 5.4: A Structural System with n Rods and an Object]

Example 5.9: Consider a structural system shown in Figure 5.5 that consists of 2 rods and an object. Assume that the strength variables of the left and right rods are uncertain variables ξ_1 and ξ_2 with uncertainty distributions Φ_1 and Φ_2, respectively. We also assume that the gravity of the object is an uncertain variable η with uncertainty distribution Ψ. In this case, the load variables of the left and right rods are respectively equal to

    η sin θ_2 / sin(θ_1 + θ_2),   η sin θ_1 / sin(θ_1 + θ_2).

Thus the structural system fails whenever for any one rod, the load variable exceeds its strength variable. Hence the structural risk index is

    Risk = M{ (ξ_1 < η sin θ_2 / sin(θ_1 + θ_2)) ∪ (ξ_2 < η sin θ_1 / sin(θ_1 + θ_2)) }
         = M{ (ξ_1 / sin θ_2 < η / sin(θ_1 + θ_2)) ∪ (ξ_2 / sin θ_1 < η / sin(θ_1 + θ_2)) }
         = M{ ξ_1 / sin θ_2 ∧ ξ_2 / sin θ_1 < η / sin(θ_1 + θ_2) }.

Define the loss function as

    f(ξ_1, ξ_2, η) = η / sin(θ_1 + θ_2) − ξ_1 / sin θ_2 ∧ ξ_2 / sin θ_1.

Then

    Risk = M{f(ξ_1, ξ_2, η) > 0}.

Since the loss function f is strictly increasing with respect to η and strictly decreasing with respect to ξ_1, ξ_2, it follows from the risk index theorem that the risk index is just the root α of the equation

    Ψ⁻¹(1 − α) / sin(θ_1 + θ_2) − Φ_1⁻¹(α) / sin θ_2 ∧ Φ_2⁻¹(α) / sin θ_1 = 0.    (5.41)

Or equivalently, let α_1 be the root of the equation

    Ψ⁻¹(1 − α) / sin(θ_1 + θ_2) = Φ_1⁻¹(α) / sin θ_2                     (5.42)

and let α_2 be the root of the equation

    Ψ⁻¹(1 − α) / sin(θ_1 + θ_2) = Φ_2⁻¹(α) / sin θ_1.                    (5.43)

Then the structural risk index is

    Risk = α_1 ∨ α_2.                                                    (5.44)

5.9 Investment Risk Analysis

Assume that an investor has n projects whose returns are uncertain variables ξ_1, ξ_2, ..., ξ_n. If the loss is understood as the case that the total return ξ_1 + ξ_2 + ... + ξ_n is below a predetermined value c (e.g., the interest rate), then the investment risk index is

    Risk = M{ξ_1 + ξ_2 + ... + ξ_n < c}.                                 (5.45)

If ξ_1, ξ_2, ..., ξ_n are independent uncertain variables with uncertainty distributions Φ_1, Φ_2, ..., Φ_n, respectively, then the investment risk index is just the root α of the equation

    Φ_1⁻¹(α) + Φ_2⁻¹(α) + ... + Φ_n⁻¹(α) = c.                            (5.46)
[Figure 5.5: A Structural System with 2 Rods (at angles θ_1 and θ_2) and an Object]

5.10 Bibliographic Notes

Uncertain risk analysis was proposed by Liu [119] in 2010, in which a risk index was defined and a risk index theorem was proved.
As a substitute for the risk index, Peng [171] suggested a concept of value-at-risk that is the maximum possible loss when the right tail distribution is ignored.

Chapter 6

Uncertain Reliability Analysis

Uncertain reliability analysis is a tool to deal with system reliability via uncertainty theory. This chapter will introduce a definition of reliability index and provide some useful formulas for calculating reliability index.

6.1 Structure Function

Many real systems may be simplified to a Boolean system in which each element (including the system itself) has two states: working and failure. Let Boolean variables x_i denote the states of elements i for i = 1, 2, ..., n, and

    x_i = 1 if element i works, and x_i = 0 if element i fails.          (6.1)

We also suppose the Boolean variable X indicates the state of the system, i.e.,

    X = 1 if the system works, and X = 0 if the system fails.            (6.2)

Usually, the state of the system is completely determined by the states of its elements via the so-called structure function.
Definition 6.1 Assume that X is a Boolean system containing elements x_1, x_2, ..., x_n. A Boolean function f is called a structure function of X if

    X = 1 if and only if f(x_1, x_2, ..., x_n) = 1.                      (6.3)

It is obvious that X = 0 if and only if f(x_1, x_2, ..., x_n) = 0 whenever f is indeed the structure function of the system.

Example 6.1: For a series system, the structure function is a mapping from {0, 1}^n to {0, 1}, i.e.,

    f(x_1, x_2, ..., x_n) = x_1 ∧ x_2 ∧ ... ∧ x_n.                       (6.4)

[Figure 6.1: A Series System]


Example 6.2: For a parallel system, the structure function is a mapping from {0, 1}^n to {0, 1}, i.e.,

    f(x_1, x_2, ..., x_n) = x_1 ∨ x_2 ∨ ... ∨ x_n.                       (6.5)

[Figure 6.2: A Parallel System]


Example 6.3: For a k-out-of-n system that works whenever at least k of the n elements work, the structure function is a mapping from {0, 1}^n to {0, 1}, i.e.,

    f(x_1, x_2, ..., x_n) = 1 if x_1 + x_2 + ... + x_n ≥ k, and 0 if x_1 + x_2 + ... + x_n < k.    (6.6)

Especially, when k = 1, it is a parallel system; when k = n, it is a series system.

6.2 Reliability Index

The element in a Boolean system is usually represented by a Boolean uncertain variable, i.e.,

    ξ = 1 with uncertain measure a, and ξ = 0 with uncertain measure 1 − a.    (6.7)

In this case, we will say ξ is an uncertain element with reliability a. The reliability index is defined as the uncertain measure that the system is working.


Definition 6.2 (Liu [119]) Assume a Boolean system has uncertain elements ξ_1, ξ_2, ..., ξ_n and a structure function f. Then the reliability index is the uncertain measure that the system is working, i.e.,

    Reliability = M{f(ξ_1, ξ_2, ..., ξ_n) = 1}.                          (6.8)

Theorem 6.1 (Liu [119], Reliability Index Theorem) Assume that a system contains uncertain elements ξ_1, ξ_2, ..., ξ_n, and has a structure function f. If ξ_1, ξ_2, ..., ξ_n are independent uncertain elements with reliabilities a_1, a_2, ..., a_n, respectively, then the reliability index is

    Reliability = { sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i),        if sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) < 0.5
                  {                                                                                                            (6.9)
                  { 1 − sup_{f(x_1,...,x_n)=0} min_{1≤i≤n} ν_i(x_i),    if sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) ≥ 0.5

where x_i take values either 0 or 1, and ν_i are defined by

    ν_i(x_i) = a_i if x_i = 1, and ν_i(x_i) = 1 − a_i if x_i = 0         (6.10)

for i = 1, 2, ..., n, respectively.
Proof: Since ξ_1, ξ_2, ..., ξ_n are independent Boolean uncertain variables and f is a Boolean function, the equation (6.9) follows from Definition 6.2 and Theorem 2.24 immediately.

6.3 Series System

Consider a series system having independent uncertain elements ξ_1, ξ_2, ..., ξ_n with reliabilities a_1, a_2, ..., a_n, respectively. Note that the structure function is

    f(x_1, x_2, ..., x_n) = x_1 ∧ x_2 ∧ ... ∧ x_n.                       (6.11)

It follows from the reliability index theorem that the reliability index is

    Reliability = M{ξ_1 ∧ ξ_2 ∧ ... ∧ ξ_n = 1} = a_1 ∧ a_2 ∧ ... ∧ a_n.  (6.12)

6.4 Parallel System

Consider a parallel system having independent uncertain elements ξ_1, ξ_2, ..., ξ_n with reliabilities a_1, a_2, ..., a_n, respectively. Note that the structure function is

    f(x_1, x_2, ..., x_n) = x_1 ∨ x_2 ∨ ... ∨ x_n.                       (6.13)

It follows from the reliability index theorem that the reliability index is

    Reliability = M{ξ_1 ∨ ξ_2 ∨ ... ∨ ξ_n = 1} = a_1 ∨ a_2 ∨ ... ∨ a_n.  (6.14)

6.5 k-out-of-n System

Consider a k-out-of-n system having independent uncertain elements ξ_1, ξ_2, ..., ξ_n with reliabilities a_1, a_2, ..., a_n, respectively. Note that the structure function has a Boolean form,

    f(x_1, x_2, ..., x_n) = 1 if x_1 + x_2 + ... + x_n ≥ k, and 0 if x_1 + x_2 + ... + x_n < k.    (6.15)

It follows from the reliability index theorem that the reliability index is the kth largest value of a_1, a_2, ..., a_n, i.e.,

    Reliability = k-max [a_1, a_2, ..., a_n].                            (6.16)

Note that a series system is essentially an n-out-of-n system. In this case, the reliability index formula (6.16) becomes (6.12). In addition, a parallel system is essentially a 1-out-of-n system. In this case, the reliability index formula (6.16) becomes (6.14).

6.6 General System

It is almost impossible to find an analytic formula of the reliability index for general systems. In this case, we have to employ a numerical method.
systems. In this case, we have to employ numerical method.
[Figure 6.3: A Bridge System]

Consider a bridge system shown in Figure 6.3 that consists of 5 independent uncertain elements whose states are denoted by ξ_1, ξ_2, ξ_3, ξ_4, ξ_5. Assume each path works if and only if all elements on it are working, and the system works if and only if there is a path of working elements. Then the structure function of the bridge system is

    f(x_1, x_2, x_3, x_4, x_5) = (x_1 ∧ x_4) ∨ (x_2 ∧ x_5) ∨ (x_1 ∧ x_3 ∧ x_5) ∨ (x_2 ∧ x_3 ∧ x_4).

The Boolean System Calculator, a function in the Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm), may yield the reliability index. Assume the 5 independent uncertain elements have reliabilities

    0.91, 0.92, 0.93, 0.94, 0.95

in uncertain measure. A run of the Boolean System Calculator shows that the reliability index is

    Reliability = M{f(ξ_1, ξ_2, ..., ξ_5) = 1} = 0.92

in uncertain measure.
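Since n is small here, formula (6.9) can also be evaluated by brute force over all 2^n element states; the following Python sketch (a stand-in for the Matlab Boolean System Calculator, not the book's code) reproduces the 0.92 above:

    from itertools import product

    def reliability_index(f, a):
        """Reliability index (6.9) of a Boolean system with structure
        function f and independent element reliabilities a."""
        nu = lambda x: min(a[i] if xi == 1 else 1 - a[i] for i, xi in enumerate(x))
        states = list(product((0, 1), repeat=len(a)))
        sup1 = max(nu(x) for x in states if f(*x) == 1)
        sup0 = max(nu(x) for x in states if f(*x) == 0)
        return sup1 if sup1 < 0.5 else 1 - sup0

    bridge = lambda x1, x2, x3, x4, x5: int(
        (x1 and x4) or (x2 and x5) or (x1 and x3 and x5) or (x2 and x3 and x4))
    print(reliability_index(bridge, [0.91, 0.92, 0.93, 0.94, 0.95]))   # 0.92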

6.7 Bibliographic Notes

Uncertain reliability analysis was proposed by Liu [119] in 2010, in which a reliability index was defined and a reliability index theorem was proved.

Chapter 7

Uncertain Propositional Logic

Propositional logic, which originated from the work of Aristotle (384-322 BC), is a
branch of logic that studies the properties of complex propositions composed
of simpler propositions and logical connectives. Note that the propositions
considered in propositional logic are not arbitrary statements but are the
ones that are either true or false and not both.
Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. This chapter will deal with uncertain propositional logic, including
uncertain proposition, truth value definition, and truth value theorem.

7.1 Uncertain Proposition

Definition 7.1 (Li and Liu [91]) An uncertain proposition is a statement whose truth value is quantified by an uncertain measure.
That is, if we use X to express an uncertain proposition and use α to express its truth value in uncertain measure, then the uncertain proposition X is essentially a Boolean uncertain variable

    X = 1 with uncertain measure α, and X = 0 with uncertain measure 1 − α    (7.1)

where X = 1 means X is true and X = 0 means X is false.
Example 7.1: "Tom is tall" with truth value 0.7 is an uncertain proposition, where "Tom is tall" is a statement, and its truth value is 0.7 in uncertain measure.

Example 7.2: "John is young" with truth value 0.8 is an uncertain proposition, where "John is young" is a statement, and its truth value is 0.8 in uncertain measure.
Example 7.3: "Beijing is a big city" with truth value 0.9 is an uncertain proposition, where "Beijing is a big city" is a statement, and its truth value is 0.9 in uncertain measure.
Connective Symbols
In addition to the proposition symbols X and Y, we also need the negation symbol ¬, conjunction symbol ∧, disjunction symbol ∨, conditional symbol →, and biconditional symbol ↔. Note that

    ¬X means "not X";                                                    (7.2)
    X ∧ Y means "X and Y";                                               (7.3)
    X ∨ Y means "X or Y";                                                (7.4)
    X → Y = (¬X) ∨ Y means "if X then Y";                                (7.5)
    X ↔ Y = (X → Y) ∧ (Y → X) means "X if and only if Y".                (7.6)
Boolean Function of Uncertain Propositions
Assume X_1, X_2, ..., X_n are uncertain propositions. Then their Boolean function

    Z = f(X_1, X_2, ..., X_n)                                            (7.7)

is a Boolean uncertain variable. Thus Z is also an uncertain proposition provided that it makes sense. Usually, such a Boolean function is a finite sequence of uncertain propositions and connective symbols. For example,

    Z = ¬X_1,   Z = X_1 ∧ (¬X_2),   Z = X_1 → X_2                        (7.8)

are all uncertain propositions.


Independence of Uncertain Propositions
Uncertain propositions are called independent if they are independent uncertain variables. Assume X1 , X2 , , Xn are independent uncertain propositions. Then
f1 (X1 ), f2 (X2 ) , fn (Xn )
(7.9)
are also independent uncertain propositions for any Boolean functions f1 , f2 ,
, fn . For example, if X1 , X2 , , X5 are independent uncertain propositions, then X1 , X2 X3 , X4 X5 are also independent.

7.2 Truth Value

Truth value is a key concept in uncertain propositional logic, and is defined as the uncertain measure that the uncertain proposition is true.
Definition 7.2 (Li and Liu [91]) Let X be an uncertain proposition. Then the truth value of X is defined as the uncertain measure that X is true, i.e.,

    T(X) = M{X = 1}.                                                     (7.10)

Example 7.4: Let X be an uncertain proposition with truth value α. Then

    T(¬X) = M{X = 0} = 1 − α.                                            (7.11)

Example 7.5: Let X and Y be two independent uncertain propositions with truth values α and β, respectively. Then

    T(X ∧ Y) = M{X ∧ Y = 1} = M{(X = 1) ∩ (Y = 1)} = α ∧ β,              (7.12)

    T(X ∨ Y) = M{X ∨ Y = 1} = M{(X = 1) ∪ (Y = 1)} = α ∨ β,              (7.13)

    T(X → Y) = T((¬X) ∨ Y) = (1 − α) ∨ β.                                (7.14)

Theorem 7.1 (Law of Excluded Middle) Let X be an uncertain proposition. Then X ∨ ¬X is a tautology, i.e.,

    T(X ∨ ¬X) = 1.                                                       (7.15)

Proof: It follows from the definition of truth value and the properties of uncertain measure that

    T(X ∨ ¬X) = M{X ∨ ¬X = 1} = M{(X = 1) ∪ (X = 0)} = M{Γ} = 1.

The theorem is proved.
Theorem 7.2 (Law of Contradiction) Let X be an uncertain proposition. Then X ∧ ¬X is a contradiction, i.e.,

    T(X ∧ ¬X) = 0.                                                       (7.16)

Proof: It follows from the definition of truth value and the properties of uncertain measure that

    T(X ∧ ¬X) = M{X ∧ ¬X = 1} = M{(X = 1) ∩ (X = 0)} = M{∅} = 0.

The theorem is proved.

Theorem 7.3 (Law of Truth Conservation) Let X be an uncertain proposition. Then we have

    T(X) + T(¬X) = 1.                                                    (7.17)

Proof: It follows from the duality axiom of uncertain measure that

    T(¬X) = M{X = 0} = 1 − M{X = 1} = 1 − T(X).

The theorem is proved.
Theorem 7.4 Let X be an uncertain proposition. Then X → X is a tautology, i.e.,

    T(X → X) = 1.                                                        (7.18)

Proof: It follows from the definition of the conditional symbol and the law of excluded middle that

    T(X → X) = T(¬X ∨ X) = 1.

The theorem is proved.
Theorem 7.5 Let X be an uncertain proposition. Then we have

    T(X → ¬X) = 1 − T(X).                                                (7.19)

Proof: It follows from the definition of the conditional symbol and the law of truth conservation that

    T(X → ¬X) = T(¬X ∨ ¬X) = T(¬X) = 1 − T(X).

The theorem is proved.
Theorem 7.6 (De Morgan's Law) For any uncertain propositions X and Y, we have

    T(¬(X ∨ Y)) = T((¬X) ∧ (¬Y)),                                        (7.20)

    T(¬(X ∧ Y)) = T((¬X) ∨ (¬Y)).                                        (7.21)

Proof: It follows from the basic properties of uncertain measure that

    T(¬(X ∨ Y)) = M{X ∨ Y = 0} = M{(X = 0) ∩ (Y = 0)} = M{(¬X) ∧ (¬Y) = 1} = T((¬X) ∧ (¬Y))

which proves the first equality. A similar way may verify the second equality.
Theorem 7.7 (Law of Contraposition) For any uncertain propositions X and Y, we have

    T(X → Y) = T(¬Y → ¬X).                                               (7.22)

Proof: It follows from the definition of the conditional symbol and basic properties of uncertain measure that

    T(X → Y) = M{(¬X) ∨ Y = 1} = M{(X = 0) ∪ (Y = 1)} = M{Y ∨ (¬X) = 1} = T(¬Y → ¬X).

The theorem is proved.

7.3 Chen-Ralescu Theorem

An important contribution to uncertain propositional logic is the Chen-Ralescu theorem that provides a numerical method for calculating the truth values of uncertain propositions.
Theorem 7.8 (Chen-Ralescu Theorem [11]) Assume that X_1, X_2, ..., X_n are independent uncertain propositions with truth values α_1, α_2, ..., α_n, respectively. Then for a Boolean function f, the uncertain proposition

    Z = f(X_1, X_2, ..., X_n)                                            (7.23)

has a truth value

    T(Z) = { sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i),        if sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) < 0.5
           {                                                                                                            (7.24)
           { 1 − sup_{f(x_1,...,x_n)=0} min_{1≤i≤n} ν_i(x_i),    if sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) ≥ 0.5

where x_i take values either 0 or 1, and ν_i are defined by

    ν_i(x_i) = α_i if x_i = 1, and ν_i(x_i) = 1 − α_i if x_i = 0         (7.25)

for i = 1, 2, ..., n, respectively.
Proof: Since Z = 1 if and only if f(X_1, X_2, ..., X_n) = 1, we immediately have

    T(Z) = M{f(X_1, X_2, ..., X_n) = 1}.

Thus the equation (7.24) follows from Theorem 2.24 immediately.
Exercise 7.1: Let X_1, X_2, ..., X_n be independent uncertain propositions with truth values α_1, α_2, ..., α_n, respectively. Then

    Z = X_1 ∧ X_2 ∧ ... ∧ X_n                                            (7.26)

is an uncertain proposition. Show that the truth value of Z is

    T(Z) = α_1 ∧ α_2 ∧ ... ∧ α_n.                                        (7.27)

Exercise 7.2: Let X_1, X_2, ..., X_n be independent uncertain propositions with truth values α_1, α_2, ..., α_n, respectively. Then

    Z = X_1 ∨ X_2 ∨ ... ∨ X_n                                            (7.28)

is an uncertain proposition. Show that the truth value of Z is

    T(Z) = α_1 ∨ α_2 ∨ ... ∨ α_n.                                        (7.29)

Example 7.6: Let X_1 and X_2 be independent uncertain propositions with truth values α_1 and α_2, respectively. Then

    Z = X_1 ↔ X_2                                                        (7.30)

is an uncertain proposition. It is clear that Z = f(X_1, X_2) if we define

    f(1, 1) = 1,   f(1, 0) = 0,   f(0, 1) = 0,   f(0, 0) = 1.

At first, we have

    sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = max{α_1 ∧ α_2, (1 − α_1) ∧ (1 − α_2)},

    sup_{f(x_1,x_2)=0} min_{1≤i≤2} ν_i(x_i) = max{(1 − α_1) ∧ α_2, α_1 ∧ (1 − α_2)}.

When α_1 ≥ 0.5 and α_2 ≥ 0.5, we have

    sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = α_1 ∧ α_2 ≥ 0.5.

It follows from the Chen-Ralescu theorem that

    T(Z) = 1 − sup_{f(x_1,x_2)=0} min_{1≤i≤2} ν_i(x_i) = 1 − (1 − α_1) ∨ (1 − α_2) = α_1 ∧ α_2.

When α_1 ≥ 0.5 and α_2 < 0.5, we have

    sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = (1 − α_1) ∨ α_2 ≤ 0.5.

It follows from the Chen-Ralescu theorem that

    T(Z) = sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = (1 − α_1) ∨ α_2.

When α_1 < 0.5 and α_2 ≥ 0.5, we have

    sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = α_1 ∨ (1 − α_2) ≤ 0.5.

It follows from the Chen-Ralescu theorem that

    T(Z) = sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = α_1 ∨ (1 − α_2).

When α_1 < 0.5 and α_2 < 0.5, we have

    sup_{f(x_1,x_2)=1} min_{1≤i≤2} ν_i(x_i) = (1 − α_1) ∧ (1 − α_2) > 0.5.

It follows from the Chen-Ralescu theorem that

    T(Z) = 1 − sup_{f(x_1,x_2)=0} min_{1≤i≤2} ν_i(x_i) = 1 − α_1 ∨ α_2 = (1 − α_1) ∧ (1 − α_2).

Thus we have

    T(Z) = { α_1 ∧ α_2,                 if α_1 ≥ 0.5 and α_2 ≥ 0.5
           { (1 − α_1) ∨ α_2,           if α_1 ≥ 0.5 and α_2 < 0.5
           { α_1 ∨ (1 − α_2),           if α_1 < 0.5 and α_2 ≥ 0.5       (7.31)
           { (1 − α_1) ∧ (1 − α_2),     if α_1 < 0.5 and α_2 < 0.5.

Boolean System Calculator

Boolean System Calculator is a software that may compute the truth value of an uncertain formula. This software may be downloaded from the website at http://orsc.edu.cn/liu/resources.htm. For example, assume ξ_1, ξ_2, ξ_3, ξ_4, ξ_5 are independent uncertain propositions with truth values 0.1, 0.3, 0.5, 0.7, 0.9, respectively. Consider an uncertain formula,

X = (ξ_1 ∧ ξ_2) ∨ (ξ_2 ∧ ξ_3) ∨ (ξ_3 ∧ ξ_4) ∨ (ξ_4 ∧ ξ_5).   (7.32)

It is clear that the corresponding Boolean function of X has the form

f(x_1, x_2, x_3, x_4, x_5) =
  1, if x_1 + x_2 = 2
  1, if x_2 + x_3 = 2
  1, if x_3 + x_4 = 2
  1, if x_4 + x_5 = 2
  0, otherwise.

A run of Boolean System Calculator shows that the truth value of X is 0.7 in uncertain measure.
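
Readers without the software can reproduce such results by brute force: for n propositions there are only 2^n Boolean assignments, so both suprema in the Chen-Ralescu formula (7.24) can be computed by direct enumeration. The following Python sketch (my own illustrative code, not the Boolean System Calculator itself) verifies the truth value 0.7 for the formula (7.32):

    def truth_value(f, alpha):
        """Truth value of f(X1,...,Xn) by the Chen-Ralescu theorem (7.24)."""
        n = len(alpha)
        sup1, sup0 = 0.0, 0.0  # sup of min nu_i over f=1 and f=0 assignments
        for k in range(2 ** n):
            x = [(k >> i) & 1 for i in range(n)]
            # nu_i(1) = alpha_i and nu_i(0) = 1 - alpha_i, as in (7.25)
            m = min(a if xi == 1 else 1 - a for xi, a in zip(x, alpha))
            if f(x):
                sup1 = max(sup1, m)
            else:
                sup0 = max(sup0, m)
        return sup1 if sup1 < 0.5 else 1 - sup0

    # Boolean function of formula (7.32): some adjacent pair is jointly true
    f = lambda x: (x[0] and x[1]) or (x[1] and x[2]) or (x[2] and x[3]) or (x[3] and x[4])
    print(truth_value(f, [0.1, 0.3, 0.5, 0.7, 0.9]))  # prints 0.7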

7.5  Bibliographic Notes

Uncertain propositional logic was designed by Li and Liu [91], in which every proposition is abstracted into a Boolean uncertain variable and the truth value is defined as the uncertain measure that the proposition is true. An important contribution is the Chen-Ralescu theorem [11], which provides a numerical method for calculating the truth values of uncertain propositions.

Chapter 8

Uncertain Entailment
Uncertain entailment is a methodology for calculating the truth value of an
uncertain formula via the maximum uncertainty principle when the truth
values of other uncertain formulas are given. In some sense, uncertain propositional logic and uncertain entailment are mutually inverse: the former attempts to compose a complex proposition from simpler ones, while the latter attempts to decompose a complex proposition into simpler ones.
This chapter will present an uncertain entailment model. In addition,
uncertain modus ponens, uncertain modus tollens and uncertain hypothetical
syllogism are deduced from the uncertain entailment model.

8.1  Uncertain Entailment Model

Assume X_1, X_2, ..., X_n are independent uncertain propositions with unknown truth values α_1, α_2, ..., α_n, respectively. Also assume that

Y_j = f_j(X_1, X_2, ..., X_n)   (8.1)

are uncertain propositions with known truth values c_j, j = 1, 2, ..., m, respectively. Now let

Z = f(X_1, X_2, ..., X_n)   (8.2)

be an additional uncertain proposition. What is the truth value of Z? This is just the uncertain entailment problem. In order to solve it, let us consider what values α_1, α_2, ..., α_n may take. The first constraint is

0 ≤ α_i ≤ 1,  i = 1, 2, ..., n.   (8.3)

The second type of constraints is represented by

T(Y_j) = c_j   (8.4)

where T(Y_j) are determined by α_1, α_2, ..., α_n via

T(Y_j) =
  sup_{f_j(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i),      if sup_{f_j(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) < 0.5
  1 − sup_{f_j(x_1,...,x_n)=0} min_{1≤i≤n} ν_i(x_i),  if sup_{f_j(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) ≥ 0.5   (8.5)

for j = 1, 2, ..., m and

ν_i(x_i) =
  α_i,      if x_i = 1
  1 − α_i,  if x_i = 0   (8.6)

for i = 1, 2, ..., n. Please note that the additional uncertain proposition Z = f(X_1, X_2, ..., X_n) has a truth value

T(Z) =
  sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i),      if sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) < 0.5
  1 − sup_{f(x_1,...,x_n)=0} min_{1≤i≤n} ν_i(x_i),  if sup_{f(x_1,...,x_n)=1} min_{1≤i≤n} ν_i(x_i) ≥ 0.5.   (8.7)

Since the truth values α_1, α_2, ..., α_n are not uniquely determined, the truth value T(Z) is not unique either. In this case, we have to use the maximum uncertainty principle to determine the truth value T(Z). That is, T(Z) should be assigned the value as close to 0.5 as possible. In other words, we should minimize the value |T(Z) − 0.5| via choosing appropriate values of α_1, α_2, ..., α_n. The uncertain entailment model is thus written by Liu [117] as follows,

    min |T(Z) − 0.5|
    subject to:
        0 ≤ α_i ≤ 1,  i = 1, 2, ..., n   (8.8)
        T(Y_j) = c_j,  j = 1, 2, ..., m

where T(Z) and T(Y_j), j = 1, 2, ..., m, are functions of the unknown truth values α_1, α_2, ..., α_n.
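
Since T(Z) and the T(Y_j) are piecewise functions of α_1, ..., α_n given by the Chen-Ralescu formula, a simple (if crude) way to solve (8.8) for small n is a grid search over candidate truth values. The Python sketch below is one possible implementation; the helper names, grid resolution and tolerance are my own choices, not part of the model:

    import itertools

    def truth_value(f, alpha):
        # Chen-Ralescu theorem: T(f(X1,...,Xn)) from truth values alpha
        n = len(alpha)
        sup1 = sup0 = 0.0
        for x in itertools.product((0, 1), repeat=n):
            m = min(a if xi else 1 - a for xi, a in zip(x, alpha))
            if f(x):
                sup1 = max(sup1, m)
            else:
                sup0 = max(sup0, m)
        return sup1 if sup1 < 0.5 else 1 - sup0

    def entail(f, constraints, n, step=0.01, tol=1e-6):
        """Grid search for model (8.8); constraints = [(f_j, c_j), ...].
        Returns the feasible T(Z) closest to 0.5, or None (ill-assigned)."""
        grid = [i * step for i in range(int(1 / step) + 1)]
        best = None
        for alpha in itertools.product(grid, repeat=n):
            if all(abs(truth_value(fj, alpha) - cj) < tol for fj, cj in constraints):
                t = truth_value(f, alpha)
                if best is None or abs(t - 0.5) < abs(best - 0.5):
                    best = t
        return best

    # Modus ponens data T(A) = 0.9, T(A -> B) = 0.8: expect T(B) = 0.8
    A = lambda x: x[0]
    A_implies_B = lambda x: (1 - x[0]) or x[1]
    B = lambda x: x[1]
    print(entail(B, [(A, 0.9), (A_implies_B, 0.8)], n=2))

The printed value 0.8 matches the uncertain modus ponens result derived in Section 8.2 below.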
Example 8.1: Let A and B be independent uncertain propositions. It is known that

T(A ∨ B) = a,  T(A ∧ B) = b.   (8.9)

What is the truth value of A → B? Denote the truth values of A and B by α_1 and α_2, respectively, and write

Y_1 = A ∨ B,  Y_2 = A ∧ B,  Z = A → B.

It is clear that

T(Y_1) = α_1 ∨ α_2 = a,
T(Y_2) = α_1 ∧ α_2 = b,
T(Z) = (1 − α_1) ∨ α_2.

In this case, the uncertain entailment model (8.8) becomes

    min |(1 − α_1) ∨ α_2 − 0.5|
    subject to:
        0 ≤ α_1 ≤ 1
        0 ≤ α_2 ≤ 1   (8.10)
        α_1 ∨ α_2 = a
        α_1 ∧ α_2 = b.

When a ≥ b, there are only two feasible solutions (α_1, α_2) = (a, b) and (α_1, α_2) = (b, a). If a + b < 1, the optimal solution produces

T(Z) = (1 − α_1) ∨ α_2 = 1 − a;

if a + b = 1, the optimal solution produces

T(Z) = (1 − α_1) ∨ α_2 = a or b;

if a + b > 1, the optimal solution produces

T(Z) = (1 − α_1) ∨ α_2 = b.

When a < b, there is no feasible solution and the truth values are ill-assigned. In summary, from T(A ∨ B) = a and T(A ∧ B) = b we entail

T(A → B) =
  1 − a,   if a ≥ b and a + b < 1
  a or b,  if a ≥ b and a + b = 1
  b,       if a ≥ b and a + b > 1   (8.11)
  illness, if a < b.

8.2  Uncertain Modus Ponens

Uncertain modus ponens was presented by Liu [117]. Let A and B be independent uncertain propositions. Assume A and A → B have truth values a and b, respectively. What is the truth value of B? Denote the truth values of A and B by α_1 and α_2, respectively, and write

Y_1 = A,  Y_2 = A → B,  Z = B.

It is clear that

T(Y_1) = α_1 = a,
T(Y_2) = (1 − α_1) ∨ α_2 = b,
T(Z) = α_2.

In this case, the uncertain entailment model (8.8) becomes

    min |α_2 − 0.5|
    subject to:
        0 ≤ α_1 ≤ 1
        0 ≤ α_2 ≤ 1   (8.12)
        α_1 = a
        (1 − α_1) ∨ α_2 = b.

When a + b > 1, there is a unique feasible solution and then the optimal solution is

α_1 = a,  α_2 = b.

Thus T(B) = α_2 = b. When a + b = 1, the feasible set is {a} × [0, b] and the optimal solution is

α_1 = a,  α_2 = 0.5 ∧ b.

Thus T(B) = α_2 = 0.5 ∧ b. When a + b < 1, there is no feasible solution and the truth values are ill-assigned. In summary, from

T(A) = a,  T(A → B) = b   (8.13)

we entail

T(B) =
  b,        if a + b > 1
  0.5 ∧ b,  if a + b = 1   (8.14)
  illness,  if a + b < 1.

This result coincides with the classical modus ponens that if both A and A → B are true, then B is true.

8.3  Uncertain Modus Tollens

Uncertain modus tollens was presented by Liu [117]. Let A and B be independent uncertain propositions. Assume A → B and B have truth values a and b, respectively. What is the truth value of A? Denote the truth values of A and B by α_1 and α_2, respectively, and write

Y_1 = A → B,  Y_2 = B,  Z = A.

It is clear that

T(Y_1) = (1 − α_1) ∨ α_2 = a,
T(Y_2) = α_2 = b,
T(Z) = α_1.

In this case, the uncertain entailment model (8.8) becomes

    min |α_1 − 0.5|
    subject to:
        0 ≤ α_1 ≤ 1
        0 ≤ α_2 ≤ 1   (8.15)
        (1 − α_1) ∨ α_2 = a
        α_2 = b.

When a > b, there is a unique feasible solution and then the optimal solution is

α_1 = 1 − a,  α_2 = b.

Thus T(A) = α_1 = 1 − a. When a = b, the feasible set is [1 − a, 1] × {b} and the optimal solution is

α_1 = (1 − a) ∨ 0.5,  α_2 = b.

Thus T(A) = α_1 = (1 − a) ∨ 0.5. When a < b, there is no feasible solution and the truth values are ill-assigned. In summary, from

T(A → B) = a,  T(B) = b   (8.16)

we entail

T(A) =
  1 − a,          if a > b
  (1 − a) ∨ 0.5,  if a = b   (8.17)
  illness,        if a < b.

This result coincides with the classical modus tollens that if A → B is true and B is false, then A is false.

8.4  Uncertain Hypothetical Syllogism

Uncertain hypothetical syllogism was presented by Liu [117]. Let A, B, C be independent uncertain propositions. Assume A → B and B → C have truth values a and b, respectively. What is the truth value of A → C? Denote the truth values of A, B, C by α_1, α_2, α_3, respectively, and write

Y_1 = A → B,  Y_2 = B → C,  Z = A → C.

It is clear that

T(Y_1) = (1 − α_1) ∨ α_2 = a,
T(Y_2) = (1 − α_2) ∨ α_3 = b,
T(Z) = (1 − α_1) ∨ α_3.

In this case, the uncertain entailment model (8.8) becomes

    min |(1 − α_1) ∨ α_3 − 0.5|
    subject to:
        0 ≤ α_1 ≤ 1
        0 ≤ α_2 ≤ 1
        0 ≤ α_3 ≤ 1   (8.18)
        (1 − α_1) ∨ α_2 = a
        (1 − α_2) ∨ α_3 = b.

Write the optimal solution as (α_1*, α_2*, α_3*). When a ∧ b ≥ 0.5, we have

T(A → C) = (1 − α_1*) ∨ α_3* = a ∧ b.

When a + b ≥ 1 and a ∧ b < 0.5, we have

T(A → C) = (1 − α_1*) ∨ α_3* = 0.5.

When a + b < 1, there is no feasible solution and the truth values are ill-assigned. In summary, from

T(A → B) = a,  T(B → C) = b   (8.19)

we entail

T(A → C) =
  a ∧ b,   if a ≥ 0.5 and b ≥ 0.5
  0.5,     if a + b ≥ 1 and a ∧ b < 0.5   (8.20)
  illness, if a + b < 1.

This result coincides with the classical hypothetical syllogism that if both A → B and B → C are true, then A → C is true.

8.5  Bibliographic Notes

Uncertain entailment was proposed by Liu [117] for determining the truth
value of an uncertain proposition via the maximum uncertainty principle
when the truth values of other uncertain propositions are given. From the
uncertain entailment model, Liu [117] also deduced uncertain modus ponens,
uncertain modus tollens, and uncertain hypothetical syllogism.

Chapter 9

Uncertain Set
Uncertain set is a set-valued function on an uncertainty space, and attempts to model "unsharp concepts" that are essentially sets but their boundaries are not sharply described (because of the ambiguity of human language). Some typical examples include "young", "tall", "warm", and "most".
This chapter will introduce the concepts of uncertain set, membership
function, independence, expected value, variance, entropy, and distance. This
chapter will also introduce the operational law for uncertain sets via membership functions or inverse membership functions, and uncertain statistics
for determining membership functions.

9.1  Uncertain Set

Roughly speaking, an uncertain set is a measurable function from an uncertainty space to a collection of sets. A formal definition is given as follows.
Definition 9.1 (Liu [118]) An uncertain set is a measurable function ξ from an uncertainty space (Γ, L, M) to a collection of sets, i.e., both {B ⊂ ξ} and {ξ ⊂ B} are events for any Borel set B.
Remark 9.1: It is clear that uncertain set (Liu [118]) is very different from
random set (Robbins [184] and Matheron [158]) and fuzzy set (Zadeh [234]).
The essential difference among them is that different measures are used, i.e.,
random set uses probability measure, fuzzy set uses possibility measure and
uncertain set uses uncertain measure.
Example 9.1: Take an uncertainty space (Γ, L, M) to be {γ_1, γ_2, γ_3} with power set L. Then the set-valued function

ξ(γ) =
  [1, 3], if γ = γ_1
  [2, 4], if γ = γ_2   (9.1)
  [3, 5], if γ = γ_3

is an uncertain set on (Γ, L, M).


Figure 9.1: An Uncertain Set


Example 9.2: Take an uncertainty space (Γ, L, M) to be ℜ with Borel algebra L. Then the set-valued function

ξ(γ) = [γ, γ + 1],  ∀γ ∈ Γ   (9.2)

is an uncertain set on (Γ, L, M).


Theorem 9.1 Let ξ be an uncertain set and let B be a Borel set. Then the set

{B ⊄ ξ} = {γ ∈ Γ | B ⊄ ξ(γ)}   (9.3)

is an event.

Proof: Since ξ is an uncertain set and B is a Borel set, the set {B ⊂ ξ} is an event. Thus {B ⊄ ξ} is an event by using the relation {B ⊄ ξ} = {B ⊂ ξ}^c.

Theorem 9.2 Let ξ be an uncertain set and let B be a Borel set. Then the set

{ξ ⊄ B} = {γ ∈ Γ | ξ(γ) ⊄ B}   (9.4)

is an event.

Proof: Since ξ is an uncertain set and B is a Borel set, the set {ξ ⊂ B} is an event. Thus {ξ ⊄ B} is an event by using the relation {ξ ⊄ B} = {ξ ⊂ B}^c.
Union, Intersection and Complement

Definition 9.2 Let ξ and η be two uncertain sets on the uncertainty space (Γ, L, M). Then (i) the union of the uncertain sets ξ and η is

(ξ ∪ η)(γ) = ξ(γ) ∪ η(γ),  ∀γ ∈ Γ,   (9.5)

(ii) the intersection of the uncertain sets ξ and η is

(ξ ∩ η)(γ) = ξ(γ) ∩ η(γ),  ∀γ ∈ Γ,   (9.6)

(iii) the complement ξ^c of the uncertain set ξ is

ξ^c(γ) = ξ(γ)^c,  ∀γ ∈ Γ.   (9.7)

Example 9.3: Take an uncertainty space (Γ, L, M) to be {γ_1, γ_2, γ_3}. Let ξ and η be two uncertain sets,

ξ(γ) =
  [1, 2], if γ = γ_1
  [1, 3], if γ = γ_2
  [1, 4], if γ = γ_3,

η(γ) =
  (2, 3), if γ = γ_1
  (2, 4), if γ = γ_2
  (2, 5), if γ = γ_3.

Then their union is

(ξ ∪ η)(γ) =
  [1, 3), if γ = γ_1
  [1, 4), if γ = γ_2
  [1, 5), if γ = γ_3,

their intersection is

(ξ ∩ η)(γ) =
  ∅,      if γ = γ_1
  (2, 3], if γ = γ_2
  (2, 4], if γ = γ_3,

and their complements are

ξ^c(γ) =
  (−∞, 1) ∪ (2, +∞), if γ = γ_1
  (−∞, 1) ∪ (3, +∞), if γ = γ_2
  (−∞, 1) ∪ (4, +∞), if γ = γ_3,

η^c(γ) =
  (−∞, 2] ∪ [3, +∞), if γ = γ_1
  (−∞, 2] ∪ [4, +∞), if γ = γ_2
  (−∞, 2] ∪ [5, +∞), if γ = γ_3.

Theorem 9.3 Let ξ be an uncertain set and let ℜ be the set of real numbers. Then

ξ ∪ ℜ = ℜ,  ξ ∩ ℜ = ξ.   (9.8)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is

(ξ ∪ ℜ)(γ) = ξ(γ) ∪ ℜ = ℜ.

Thus we have ξ ∪ ℜ = ℜ. In addition, the intersection is

(ξ ∩ ℜ)(γ) = ξ(γ) ∩ ℜ = ξ(γ).

Thus we have ξ ∩ ℜ = ξ.

Theorem 9.4 Let ξ be an uncertain set and let ∅ be the empty set. Then

ξ ∪ ∅ = ξ,  ξ ∩ ∅ = ∅.   (9.9)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is

(ξ ∪ ∅)(γ) = ξ(γ) ∪ ∅ = ξ(γ).

Thus we have ξ ∪ ∅ = ξ. In addition, the intersection is

(ξ ∩ ∅)(γ) = ξ(γ) ∩ ∅ = ∅.

Thus we have ξ ∩ ∅ = ∅.

Theorem 9.5 (Idempotent Law) Let ξ be an uncertain set. Then we have

ξ ∪ ξ = ξ,  ξ ∩ ξ = ξ.   (9.10)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is

(ξ ∪ ξ)(γ) = ξ(γ) ∪ ξ(γ) = ξ(γ).

Thus we have ξ ∪ ξ = ξ. In addition, the intersection is

(ξ ∩ ξ)(γ) = ξ(γ) ∩ ξ(γ) = ξ(γ).

Thus we have ξ ∩ ξ = ξ.

Theorem 9.6 (Double-Negation Law) Let ξ be an uncertain set. Then we have

(ξ^c)^c = ξ.   (9.11)

Proof: For each γ ∈ Γ, it follows from the definition of complement that

(ξ^c)^c(γ) = (ξ^c(γ))^c = (ξ(γ)^c)^c = ξ(γ).

Thus we have (ξ^c)^c = ξ.

Theorem 9.7 (Law of Excluded Middle and Law of Contradiction) Let ξ be an uncertain set and let ξ^c be its complement. Then

ξ ∪ ξ^c = ℜ,  ξ ∩ ξ^c = ∅.   (9.12)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that the union is

(ξ ∪ ξ^c)(γ) = ξ(γ) ∪ ξ^c(γ) = ξ(γ) ∪ ξ(γ)^c = ℜ.

Thus we have ξ ∪ ξ^c = ℜ. In addition, the intersection is

(ξ ∩ ξ^c)(γ) = ξ(γ) ∩ ξ^c(γ) = ξ(γ) ∩ ξ(γ)^c = ∅.

Thus we have ξ ∩ ξ^c = ∅.

Theorem 9.8 (Commutative Law) Let ξ and η be uncertain sets. Then we have

ξ ∪ η = η ∪ ξ,  ξ ∩ η = η ∩ ξ.   (9.13)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ η)(γ) = ξ(γ) ∪ η(γ) = η(γ) ∪ ξ(γ) = (η ∪ ξ)(γ).

Thus we have ξ ∪ η = η ∪ ξ. In addition, it follows that

(ξ ∩ η)(γ) = ξ(γ) ∩ η(γ) = η(γ) ∩ ξ(γ) = (η ∩ ξ)(γ).

Thus we have ξ ∩ η = η ∩ ξ.

Theorem 9.9 (Associative Law) Let ξ, η, τ be uncertain sets. Then we have

(ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ),  (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ).   (9.14)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

((ξ ∪ η) ∪ τ)(γ) = (ξ(γ) ∪ η(γ)) ∪ τ(γ) = ξ(γ) ∪ (η(γ) ∪ τ(γ)) = (ξ ∪ (η ∪ τ))(γ).

Thus we have (ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ). In addition, it follows that

((ξ ∩ η) ∩ τ)(γ) = (ξ(γ) ∩ η(γ)) ∩ τ(γ) = ξ(γ) ∩ (η(γ) ∩ τ(γ)) = (ξ ∩ (η ∩ τ))(γ).

Thus we have (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ).

Theorem 9.10 (Distributive Law) Let ξ, η, τ be uncertain sets. Then we have

ξ ∪ (η ∩ τ) = (ξ ∪ η) ∩ (ξ ∪ τ),  ξ ∩ (η ∪ τ) = (ξ ∩ η) ∪ (ξ ∩ τ).   (9.15)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ (η ∩ τ))(γ) = ξ(γ) ∪ (η(γ) ∩ τ(γ)) = (ξ(γ) ∪ η(γ)) ∩ (ξ(γ) ∪ τ(γ)) = ((ξ ∪ η) ∩ (ξ ∪ τ))(γ).

Thus we have ξ ∪ (η ∩ τ) = (ξ ∪ η) ∩ (ξ ∪ τ). In addition, it follows that

(ξ ∩ (η ∪ τ))(γ) = ξ(γ) ∩ (η(γ) ∪ τ(γ)) = (ξ(γ) ∩ η(γ)) ∪ (ξ(γ) ∩ τ(γ)) = ((ξ ∩ η) ∪ (ξ ∩ τ))(γ).

Thus we have ξ ∩ (η ∪ τ) = (ξ ∩ η) ∪ (ξ ∩ τ).

Theorem 9.11 (Absorption Law) Let ξ and η be uncertain sets. Then we have

ξ ∪ (ξ ∩ η) = ξ,  ξ ∩ (ξ ∪ η) = ξ.   (9.16)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ (ξ ∩ η))(γ) = ξ(γ) ∪ (ξ(γ) ∩ η(γ)) = ξ(γ).

Thus we have ξ ∪ (ξ ∩ η) = ξ. In addition, since

(ξ ∩ (ξ ∪ η))(γ) = ξ(γ) ∩ (ξ(γ) ∪ η(γ)) = ξ(γ),

we get ξ ∩ (ξ ∪ η) = ξ.

Theorem 9.12 (De Morgan's Law) Let ξ and η be uncertain sets. Then

(ξ ∪ η)^c = ξ^c ∩ η^c,  (ξ ∩ η)^c = ξ^c ∪ η^c.   (9.17)

Proof: For each γ ∈ Γ, it follows from the definition of complement that

((ξ ∪ η)^c)(γ) = (ξ(γ) ∪ η(γ))^c = ξ(γ)^c ∩ η(γ)^c = (ξ^c ∩ η^c)(γ).

Thus we have (ξ ∪ η)^c = ξ^c ∩ η^c. In addition, since

((ξ ∩ η)^c)(γ) = (ξ(γ) ∩ η(γ))^c = ξ(γ)^c ∪ η(γ)^c = (ξ^c ∪ η^c)(γ),

we get (ξ ∩ η)^c = ξ^c ∪ η^c.
Function of Uncertain Sets

Definition 9.3 Let ξ_1, ξ_2, ..., ξ_n be uncertain sets on the uncertainty space (Γ, L, M), and f a measurable function. Then ξ = f(ξ_1, ξ_2, ..., ξ_n) is an uncertain set defined by

ξ(γ) = f(ξ_1(γ), ξ_2(γ), ..., ξ_n(γ)),  ∀γ ∈ Γ.   (9.18)

Example 9.4: Let ξ be an uncertain set on the uncertainty space (Γ, L, M) and let A be a classical set. Then ξ + A is also an uncertain set determined by

(ξ + A)(γ) = ξ(γ) + A,  ∀γ ∈ Γ.   (9.19)
Example 9.5: Take an uncertainty space (Γ, L, M) to be {γ_1, γ_2, γ_3}. Let ξ and η be two uncertain sets,

ξ(γ) =
  [1, 2], if γ = γ_1
  [1, 3], if γ = γ_2
  [1, 4], if γ = γ_3,

η(γ) =
  (2, 3), if γ = γ_1
  (2, 4), if γ = γ_2
  (2, 5), if γ = γ_3.

Then their sum is

(ξ + η)(γ) =
  (3, 5), if γ = γ_1
  (3, 7), if γ = γ_2
  (3, 9), if γ = γ_3,

and their product is

(ξ · η)(γ) =
  (2, 6),  if γ = γ_1
  (2, 12), if γ = γ_2
  (2, 20), if γ = γ_3.

9.2  Membership Function

Definition 9.4 (Liu [124]) An uncertain set ξ is said to have a membership function μ if for any Borel set B, we have

M{B ⊂ ξ} = inf_{x∈B} μ(x),   (9.20)

M{ξ ⊂ B} = 1 − sup_{x∈B^c} μ(x).   (9.21)

The above equations will be called measure inversion formulas.


(x)

(x)
....
........
....
..
...... ...........
...
....
....
...
....
...
...
..
...
.
...
...
...
...
..
...
.
...
..
...
...
.
...
..
...
.
...
..
...
.
...
..
...
.
.
.. ...................................................................
............
.
......
... .... ...
.. .....
. ...
.
..
xB
.. ......
......
.
.
...
.
.
.. .......
.
.
.
.....
... ...
.
.
..
.
.
..
.
..
.... ...
.
............................................................................................................................................................................
.. ..
.
.
.
....
.............................
.............................
..

sup (x)

inf (x)
0

....
........
....
..
...... ...........
...
....
....
...
....
...
...
..
...
.
...
...
...
...
..
...
.
.
.................................................................................
... .
.
...
.
... ..
.
....
.
.
... .
xB c
.... .... ...
.....
....
... ... ..
......
... ... ..
..
.. ......
... ....
....
.
.. ......
.
.
...
.....
.
.
.
..
.
.....
.
... ...
.
.
.
.
..
...
.
.
.... ...
.
.
.................................................................................................................................................................................
..............................
.
.
....
..............................
...
..

Figure 9.2: M{B } = inf (x) and M{ B} = 1 sup (x)


xB

xB c

Remark 9.2: It is not true that every uncertain set has a membership function. For example, the uncertain set

ξ =
  [1, 3] with uncertain measure 0.6
  [0, 2] with uncertain measure 0.4   (9.22)

has no membership function.

Remark 9.3: When an uncertain set ξ does have a membership function μ, it follows from the first measure inversion formula that

μ(x) = M{x ∈ ξ}.   (9.23)

Remark 9.4: The value of μ(x) represents the membership degree that x belongs to the uncertain set ξ. If μ(x) = 1, then x completely belongs to ξ; if μ(x) = 0, then x does not belong to ξ at all. Thus the larger the value of μ(x) is, the more true x belongs to ξ.

Remark 9.5: If an element x belongs to an uncertain set with membership degree μ, then x does not belong to the uncertain set with membership degree 1 − μ. This fact follows from the duality property of uncertain measure. In other words, if the uncertain set ξ has a membership function μ, then for any real number x, we have M{x ∉ ξ} = 1 − M{x ∈ ξ} = 1 − μ(x). That is,

M{x ∉ ξ} = 1 − μ(x).   (9.24)

Remark 9.6: Note that membership functions may be defined for not only uncertain sets but also fuzzy sets and random sets. If the membership function is denoted by μ(x), then the membership degree of x belonging to an uncertain set is μ(x) in uncertain measure; the membership degree of x belonging to a fuzzy set is μ(x) in possibility measure; and the membership degree of x belonging to a random set is μ(x) in probability measure.
Example 9.6: Let us take an uncertainty space (Γ, L, M) to be [0, 1] with M{[0, γ]} = γ for each γ ∈ [0, 1]. Then the uncertain set

ξ(γ) = [−√(1 − γ), √(1 − γ)]   (9.25)

has a membership function

μ(x) =
  1 − x², if x ∈ [−1, 1]
  0,      otherwise.   (9.26)

Example 9.7: The set ℜ of real numbers is a special uncertain set ξ(γ) ≡ ℜ. Such an uncertain set has a membership function

μ(x) ≡ 1,  ∀x ∈ ℜ.   (9.27)

In this case, the membership function μ is identical with the characteristic function of ℜ.

Example 9.8: The empty set ∅ is a special uncertain set ξ(γ) ≡ ∅. Such an uncertain set has a membership function

μ(x) ≡ 0,  ∀x ∈ ℜ.   (9.28)

In this case, the membership function μ is identical with the characteristic function of ∅.

Example 9.9: A completely unknown set is a special uncertain set with membership function

μ(x) ≡ 0.5,  ∀x ∈ ℜ.   (9.29)
Example 9.10: Let c be a number in ℜ and let α be a number in (0, 1). Then the membership function

μ(x) =
  α, if x = c
  0, if x ≠ c   (9.30)

represents the uncertain set

ξ =
  {c} with uncertain measure α
  ∅ with uncertain measure 1 − α   (9.31)

that takes values either the singleton {c} or the empty set ∅. This fact states that uncertainty may exist even when there is a single element in the universe.
Example 9.11: By a rectangular uncertain set we mean the uncertain set fully determined by the pair (a, b) of crisp numbers with a < b, whose membership function is

μ(x) =
  1, if a ≤ x ≤ b
  0, otherwise.

Example 9.12: By a triangular uncertain set we mean the uncertain set fully determined by the triplet (a, b, c) of crisp numbers with a < b < c, whose membership function is

μ(x) =
  (x − a)/(b − a), if a ≤ x ≤ b
  (x − c)/(b − c), if b ≤ x ≤ c.

Example 9.13: By a trapezoidal uncertain set we mean the uncertain set fully determined by the quadruplet (a, b, c, d) of crisp numbers with a < b < c < d, whose membership function is

μ(x) =
  (x − a)/(b − a), if a ≤ x ≤ b
  1,               if b ≤ x ≤ c
  (x − d)/(c − d), if c ≤ x ≤ d.
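
These three families are easy to evaluate directly from their defining formulas. The following Python sketch (function names are mine, not from the book) codes them up; the last line evaluates the trapezoidal set (15, 20, 35, 45), which happens to be the membership function of "young" in (9.32) below:

    def rectangular(a, b):
        # mu(x) = 1 on [a, b] and 0 elsewhere
        return lambda x: 1.0 if a <= x <= b else 0.0

    def triangular(a, b, c):
        # linear rise on [a, b], linear fall on [b, c]
        def mu(x):
            if a <= x <= b:
                return (x - a) / (b - a)
            if b <= x <= c:
                return (x - c) / (b - c)
            return 0.0
        return mu

    def trapezoidal(a, b, c, d):
        # rise on [a, b], plateau on [b, c], fall on [c, d]
        def mu(x):
            if a <= x <= b:
                return (x - a) / (b - a)
            if b <= x <= c:
                return 1.0
            if c <= x <= d:
                return (x - d) / (c - d)
            return 0.0
        return mu

    young = trapezoidal(15, 20, 35, 45)
    print(young(18), young(30), young(40))  # 0.6 1.0 0.5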

Figure 9.3: Rectangular, Triangular and Trapezoidal Membership Functions


What is "young"?

Sometimes we say "those students are young". What ages can be considered "young"? In this case, "young" may be regarded as an uncertain set whose membership function is

μ(x) =
  0,            if x ≤ 15
  (x − 15)/5,   if 15 ≤ x ≤ 20
  1,            if 20 ≤ x ≤ 35   (9.32)
  (45 − x)/10,  if 35 ≤ x ≤ 45
  0,            if x ≥ 45.

Note that we do not say "young" if the age is below 15.
Figure 9.4: Membership Function of young

What is "tall"?

Sometimes we say "those sportsmen are tall". What heights (centimeters) can be considered "tall"? In this case, "tall" may be regarded as an uncertain set whose membership function is

μ(x) =
  0,             if x ≤ 180
  (x − 180)/5,   if 180 ≤ x ≤ 185
  1,             if 185 ≤ x ≤ 195   (9.33)
  (200 − x)/5,   if 195 ≤ x ≤ 200
  0,             if x ≥ 200.

Note that we do not say "tall" if the height is over 200cm.


Figure 9.5: Membership Function of tall

What is "warm"?

Sometimes we say "those days are warm". What temperatures can be considered "warm"? In this case, "warm" may be regarded as an uncertain set whose membership function is

μ(x) =
  0,            if x ≤ 15
  (x − 15)/3,   if 15 ≤ x ≤ 18
  1,            if 18 ≤ x ≤ 24   (9.34)
  (28 − x)/4,   if 24 ≤ x ≤ 28
  0,            if 28 ≤ x.

Figure 9.6: Membership Function of warm

What is "most"?

Sometimes we say "most students are boys". What percentages can be considered "most"? In this case, "most" may be regarded as an uncertain set


whose membership function is

μ(x) =
  0,             if 0 ≤ x ≤ 0.7
  20(x − 0.7),   if 0.7 ≤ x ≤ 0.75
  1,             if 0.75 ≤ x ≤ 0.85   (9.35)
  20(0.9 − x),   if 0.85 ≤ x ≤ 0.9
  0,             if 0.9 ≤ x ≤ 1.

Figure 9.7: Membership Function of most

What uncertain sets have membership functions?

It is not true that every uncertain set has a membership function. What uncertain sets have membership functions?

Case I: If an uncertain set ξ degenerates to a classical set A, then ξ has a membership function that is just the characteristic function of A.

Case II: Let ξ be an uncertain set taking values in a nested class of sets. That is, for any given γ_1 and γ_2, at least one of the following alternatives holds,

(i) ξ(γ_1) ⊂ ξ(γ_2),   (9.36)

(ii) ξ(γ_2) ⊂ ξ(γ_1).   (9.37)

Then the uncertain set ξ has a membership function.


Sufficient and Necessary Condition

Theorem 9.13 (Liu [121]) A real-valued function μ is a membership function if and only if

0 ≤ μ(x) ≤ 1.   (9.38)

Proof: If μ is a membership function of the uncertain set ξ, then μ(x) = M{x ∈ ξ} and 0 ≤ μ(x) ≤ 1. Conversely, suppose μ is a function such that 0 ≤ μ(x) ≤ 1. We take an uncertainty space (Γ, L, M) to be [0, 1] with M{[0, γ]} = γ for each γ ∈ [0, 1]. Then the uncertain set

ξ(γ) = {x | μ(x) ≥ γ}   (9.39)

has the membership function μ.


Membership Function of Nonempty Uncertain Set

An uncertain set ξ is said to be nonempty if ξ(γ) ≠ ∅ for almost all γ ∈ Γ. That is,

M{ξ = ∅} = 0.   (9.40)

Note that a nonempty uncertain set does not necessarily have a membership function. However, when it does have, the following theorem gives a sufficient and necessary condition of membership function.

Theorem 9.14 Let ξ be an uncertain set whose membership function μ exists. Then ξ is nonempty if and only if

sup_{x∈ℜ} μ(x) = 1.   (9.41)

Proof: Since the membership function μ exists, it follows from the measure inversion formula that

M{ξ = ∅} = M{ξ ⊂ ∅} = 1 − sup_{x∈∅^c} μ(x) = 1 − sup_{x∈ℜ} μ(x).

Thus ξ is a nonempty uncertain set if and only if (9.41) holds.


Inverse Membership Function

Definition 9.5 (Liu [124]) Let ξ be an uncertain set with membership function μ. Then the set-valued function

μ^{-1}(α) = {x ∈ ℜ | μ(x) ≥ α},  ∀α ∈ [0, 1]   (9.42)

is called the inverse membership function of ξ. Sometimes, for each given α, the set μ^{-1}(α) is also called the α-cut of μ.

Figure 9.8: Inverse Membership Function μ^{-1}(α)


Remark 9.7: It is clear that the inverse membership function always exists. Please also note that

μ^{-1}(0) ≡ ℜ   (9.43)

and μ^{-1}(α) may take value of the empty set ∅.
Example 9.14: The rectangular uncertain set ξ = (a, b) has an inverse membership function

μ^{-1}(α) ≡ [a, b].   (9.44)

Example 9.15: The triangular uncertain set ξ = (a, b, c) has an inverse membership function

μ^{-1}(α) = [(1 − α)a + αb, αb + (1 − α)c].   (9.45)

Example 9.16: The trapezoidal uncertain set ξ = (a, b, c, d) has an inverse membership function

μ^{-1}(α) = [(1 − α)a + αb, αc + (1 − α)d].   (9.46)
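
As a quick check, the α-cut of a triangular uncertain set is obtained by linear interpolation between its endpoints, as in this small Python sketch (the function name is illustrative):

    def triangular_cut(a, b, c, alpha):
        # alpha-cut of the triangular uncertain set (a, b, c), formula (9.45)
        return ((1 - alpha) * a + alpha * b, alpha * b + (1 - alpha) * c)

    print(triangular_cut(1, 2, 4, 0.5))  # (1.5, 3.0)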

Theorem 9.15 Let ξ be an uncertain set with inverse membership function μ^{-1}(α). Then the membership function of ξ is determined by

μ(x) = sup{α ∈ [0, 1] | x ∈ μ^{-1}(α)}.   (9.47)

Proof: It is easy to verify that μ^{-1} is the inverse membership function of μ. Thus μ is the membership function of ξ.
Theorem 9.16 (Liu [124], Sufficient and Necessary Condition) A function μ^{-1}(α) is an inverse membership function if and only if it is a monotone decreasing set-valued function with respect to α ∈ [0, 1]. That is,

μ^{-1}(α) ⊂ μ^{-1}(β),  if α > β.   (9.48)

Proof: Suppose μ^{-1}(α) is an inverse membership function. For any x ∈ μ^{-1}(α), we have μ(x) ≥ α. Since α > β, we have μ(x) > β and then x ∈ μ^{-1}(β). Hence μ^{-1}(α) ⊂ μ^{-1}(β). Conversely, suppose μ^{-1}(α) is a monotone decreasing set-valued function. Then

μ(x) = sup{α ∈ [0, 1] | x ∈ μ^{-1}(α)}

is a membership function of some uncertain set. It is easy to verify that μ^{-1}(α) is the inverse membership function of the uncertain set. The theorem is proved.
Uncertain set does not necessarily take values of its α-cuts!

Please keep in mind that an uncertain set does not necessarily take values of its α-cuts. In fact, an α-cut is included in the uncertain set with uncertain measure α. Conversely, the uncertain set is included in its α-cut with uncertain measure 1 − α. More precisely, we have the following theorem.

Theorem 9.17 (Liu [124]) Let ξ be an uncertain set with inverse membership function μ^{-1}(α). Then for each α ∈ [0, 1], we have

M{μ^{-1}(α) ⊂ ξ} ≥ α,   (9.49)

M{ξ ⊂ μ^{-1}(α)} ≥ 1 − α.   (9.50)

Proof: For each x ∈ μ^{-1}(α), we have μ(x) ≥ α. It follows from the measure inversion formula that

M{μ^{-1}(α) ⊂ ξ} = inf_{x∈μ^{-1}(α)} μ(x) ≥ α.

For each x ∉ μ^{-1}(α), we have μ(x) < α. It follows from the measure inversion formula that

M{ξ ⊂ μ^{-1}(α)} = 1 − sup_{x∉μ^{-1}(α)} μ(x) ≥ 1 − α.

Regular Membership Function

Definition 9.6 (Liu [124]) A membership function μ is said to be regular if there exists a point x_0 such that μ(x_0) = 1 and μ(x) is unimodal about the mode x_0. That is, μ(x) is increasing on (−∞, x_0] and decreasing on [x_0, +∞).

If μ is a regular membership function, then μ^{-1}(α) is an interval for each α. In this case, the function

μ_l^{-1}(α) = inf μ^{-1}(α)   (9.51)

is called the left inverse membership function, and the function

μ_r^{-1}(α) = sup μ^{-1}(α)   (9.52)

is called the right inverse membership function. It is clear that the left inverse membership function μ_l^{-1}(α) is increasing, and the right inverse membership function μ_r^{-1}(α) is decreasing with respect to α.

Conversely, suppose an uncertain set has a left inverse membership function μ_l^{-1}(α) and right inverse membership function μ_r^{-1}(α). Then the membership function μ is determined by

μ(x) =
  0, if x ≤ μ_l^{-1}(0)
  α, if μ_l^{-1}(0) ≤ x ≤ μ_l^{-1}(1) and μ_l^{-1}(α) = x
  1, if μ_l^{-1}(1) ≤ x ≤ μ_r^{-1}(1)   (9.53)
  α, if μ_r^{-1}(1) ≤ x ≤ μ_r^{-1}(0) and μ_r^{-1}(α) = x
  0, if x ≥ μ_r^{-1}(0).

Note that the values of α may not be unique. In this case, we will take the maximum values.

9.3  Independence

Definition 9.7 (Liu [127]) The uncertain sets ξ_1, ξ_2, ..., ξ_n are said to be independent if for any Borel sets B_1, B_2, ..., B_n, we have

M{ ∩_{i=1}^n (ξ_i^* ⊂ B_i) } = ∧_{i=1}^n M{ξ_i^* ⊂ B_i}   (9.54)

and

M{ ∪_{i=1}^n (ξ_i^* ⊂ B_i) } = ∨_{i=1}^n M{ξ_i^* ⊂ B_i}   (9.55)

where ξ_i^* are arbitrarily chosen from {ξ_i, ξ_i^c}, i = 1, 2, ..., n, respectively.

Remark 9.8: Note that (9.54) represents 2^n equations. For example, when n = 2, the four equations are

M{(ξ_1 ⊂ B_1) ∩ (ξ_2 ⊂ B_2)} = M{ξ_1 ⊂ B_1} ∧ M{ξ_2 ⊂ B_2},
M{(ξ_1^c ⊂ B_1) ∩ (ξ_2 ⊂ B_2)} = M{ξ_1^c ⊂ B_1} ∧ M{ξ_2 ⊂ B_2},
M{(ξ_1 ⊂ B_1) ∩ (ξ_2^c ⊂ B_2)} = M{ξ_1 ⊂ B_1} ∧ M{ξ_2^c ⊂ B_2},
M{(ξ_1^c ⊂ B_1) ∩ (ξ_2^c ⊂ B_2)} = M{ξ_1^c ⊂ B_1} ∧ M{ξ_2^c ⊂ B_2}.

Also note that (9.55) represents another 2^n equations. For example, when n = 2, the four equations are

M{(ξ_1 ⊂ B_1) ∪ (ξ_2 ⊂ B_2)} = M{ξ_1 ⊂ B_1} ∨ M{ξ_2 ⊂ B_2},
M{(ξ_1^c ⊂ B_1) ∪ (ξ_2 ⊂ B_2)} = M{ξ_1^c ⊂ B_1} ∨ M{ξ_2 ⊂ B_2},
M{(ξ_1 ⊂ B_1) ∪ (ξ_2^c ⊂ B_2)} = M{ξ_1 ⊂ B_1} ∨ M{ξ_2^c ⊂ B_2},
M{(ξ_1^c ⊂ B_1) ∪ (ξ_2^c ⊂ B_2)} = M{ξ_1^c ⊂ B_1} ∨ M{ξ_2^c ⊂ B_2}.

Theorem 9.18 Let ξ_1, ξ_2, ..., ξ_n be uncertain sets, and let η_1, η_2, ..., η_n be arbitrarily chosen uncertain sets from {ξ_i, ξ_i^c}, i = 1, 2, ..., n, respectively. Then ξ_1, ξ_2, ..., ξ_n are independent if and only if η_1, η_2, ..., η_n are independent.

Proof: Let η_i^* be arbitrarily chosen uncertain sets from {η_i, η_i^c}, i = 1, 2, ..., n, respectively. Then ξ_1^*, ξ_2^*, ..., ξ_n^* and η_1^*, η_2^*, ..., η_n^* represent the same 2^n combinations. This fact implies that (9.54) and (9.55) are equivalent to

M{ ∩_{i=1}^n (η_i^* ⊂ B_i) } = ∧_{i=1}^n M{η_i^* ⊂ B_i},   (9.56)

M{ ∪_{i=1}^n (η_i^* ⊂ B_i) } = ∨_{i=1}^n M{η_i^* ⊂ B_i}.   (9.57)

Hence ξ_1, ξ_2, ..., ξ_n are independent if and only if η_1, η_2, ..., η_n are independent.

Exercise 9.1: Show that the following four statements are equivalent: (i) ξ_1 and ξ_2 are independent; (ii) ξ_1^c and ξ_2 are independent; (iii) ξ_1 and ξ_2^c are independent; and (iv) ξ_1^c and ξ_2^c are independent.
Theorem 9.19 The uncertain sets ξ_1, ξ_2, ..., ξ_n are independent if and only if for any Borel sets B_1, B_2, ..., B_n, we have

M{ ∩_{i=1}^n (ξ_i^* ⊄ B_i) } = ∧_{i=1}^n M{ξ_i^* ⊄ B_i}   (9.58)

and

M{ ∪_{i=1}^n (ξ_i^* ⊄ B_i) } = ∨_{i=1}^n M{ξ_i^* ⊄ B_i}   (9.59)

where ξ_i^* are arbitrarily chosen from {ξ_i, ξ_i^c}, i = 1, 2, ..., n, respectively.

Proof: Since {ξ_i^* ⊄ B_i}^c = {ξ_i^* ⊂ B_i} for i = 1, 2, ..., n, it follows from the duality of uncertain measure that

M{ ∩_{i=1}^n (ξ_i^* ⊄ B_i) } = 1 − M{ ∪_{i=1}^n (ξ_i^* ⊂ B_i) },   (9.60)

∧_{i=1}^n M{ξ_i^* ⊄ B_i} = 1 − ∨_{i=1}^n M{ξ_i^* ⊂ B_i},   (9.61)

M{ ∪_{i=1}^n (ξ_i^* ⊄ B_i) } = 1 − M{ ∩_{i=1}^n (ξ_i^* ⊂ B_i) },   (9.62)

∨_{i=1}^n M{ξ_i^* ⊄ B_i} = 1 − ∧_{i=1}^n M{ξ_i^* ⊂ B_i}.   (9.63)

It follows from (9.60), (9.61), (9.62) and (9.63) that (9.58) and (9.59) are valid if and only if

M{ ∩_{i=1}^n (ξ_i^* ⊂ B_i) } = ∧_{i=1}^n M{ξ_i^* ⊂ B_i},   (9.64)

M{ ∪_{i=1}^n (ξ_i^* ⊂ B_i) } = ∨_{i=1}^n M{ξ_i^* ⊂ B_i}.   (9.65)

The above two equations are also equivalent to the independence of the uncertain sets ξ_1, ξ_2, ..., ξ_n. The theorem is thus proved.
Theorem 9.20 The uncertain sets ξ_1, ξ_2, ..., ξ_n are independent if and only if for any Borel sets B_1, B_2, ..., B_n, we have

M{ ∩_{i=1}^n (B_i ⊂ ξ_i^*) } = ∧_{i=1}^n M{B_i ⊂ ξ_i^*}   (9.66)

and

M{ ∪_{i=1}^n (B_i ⊂ ξ_i^*) } = ∨_{i=1}^n M{B_i ⊂ ξ_i^*}   (9.67)

where ξ_i^* are arbitrarily chosen from {ξ_i, ξ_i^c}, i = 1, 2, ..., n, respectively.

Proof: Since {B_i ⊂ ξ_i^*} = {ξ_i^{*c} ⊂ B_i^c} for i = 1, 2, ..., n, we immediately have

M{ ∩_{i=1}^n (B_i ⊂ ξ_i^*) } = M{ ∩_{i=1}^n (ξ_i^{*c} ⊂ B_i^c) },   (9.68)

∧_{i=1}^n M{B_i ⊂ ξ_i^*} = ∧_{i=1}^n M{ξ_i^{*c} ⊂ B_i^c},   (9.69)

M{ ∪_{i=1}^n (B_i ⊂ ξ_i^*) } = M{ ∪_{i=1}^n (ξ_i^{*c} ⊂ B_i^c) },   (9.70)

∨_{i=1}^n M{B_i ⊂ ξ_i^*} = ∨_{i=1}^n M{ξ_i^{*c} ⊂ B_i^c}.   (9.71)

It follows from (9.68), (9.69), (9.70) and (9.71) that (9.66) and (9.67) are valid if and only if

M{ ∩_{i=1}^n (ξ_i^{*c} ⊂ B_i^c) } = ∧_{i=1}^n M{ξ_i^{*c} ⊂ B_i^c},   (9.72)

M{ ∪_{i=1}^n (ξ_i^{*c} ⊂ B_i^c) } = ∨_{i=1}^n M{ξ_i^{*c} ⊂ B_i^c}.   (9.73)

The above two equations are also equivalent to the independence of the uncertain sets ξ_1, ξ_2, ..., ξ_n. The theorem is thus proved.
Theorem 9.21 The uncertain sets ξ_1, ξ_2, ..., ξ_n are independent if and only if for any Borel sets B_1, B_2, ..., B_n, we have

M{ ∩_{i=1}^n (B_i ⊄ ξ_i^*) } = ∧_{i=1}^n M{B_i ⊄ ξ_i^*}   (9.74)

and

M{ ∪_{i=1}^n (B_i ⊄ ξ_i^*) } = ∨_{i=1}^n M{B_i ⊄ ξ_i^*}   (9.75)

where ξ_i^* are arbitrarily chosen from {ξ_i, ξ_i^c}, i = 1, 2, ..., n, respectively.

Proof: Since {B_i ⊄ ξ_i^*} = {B_i ⊂ ξ_i^*}^c for i = 1, 2, ..., n, it follows from the duality of uncertain measure that

M{ ∩_{i=1}^n (B_i ⊄ ξ_i^*) } = 1 − M{ ∪_{i=1}^n (B_i ⊂ ξ_i^*) },   (9.76)

∧_{i=1}^n M{B_i ⊄ ξ_i^*} = 1 − ∨_{i=1}^n M{B_i ⊂ ξ_i^*},   (9.77)

M{ ∪_{i=1}^n (B_i ⊄ ξ_i^*) } = 1 − M{ ∩_{i=1}^n (B_i ⊂ ξ_i^*) },   (9.78)

∨_{i=1}^n M{B_i ⊄ ξ_i^*} = 1 − ∧_{i=1}^n M{B_i ⊂ ξ_i^*}.   (9.79)

It follows from (9.76), (9.77), (9.78) and (9.79) that (9.74) and (9.75) are valid if and only if

M{ ∩_{i=1}^n (B_i ⊂ ξ_i^*) } = ∧_{i=1}^n M{B_i ⊂ ξ_i^*},   (9.80)

M{ ∪_{i=1}^n (B_i ⊂ ξ_i^*) } = ∨_{i=1}^n M{B_i ⊂ ξ_i^*}.   (9.81)

The above two equations are also equivalent to the independence of the uncertain sets ξ_1, ξ_2, ..., ξ_n. The theorem is thus proved.

9.4  Set Operational Law

This section will discuss the union, intersection and complement of independent uncertain sets via membership functions.
Union of Uncertain Sets
Theorem 9.22 (Liu [124]) Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. Then their union ξ ∪ η has a membership function

λ(x) = μ(x) ∨ ν(x).   (9.82)

Proof: In order to prove λ is a membership function of ξ ∪ η, we must verify the two measure inversion formulas. Let B be any Borel set, and write

α = inf_{x∈B} μ(x) ∨ ν(x).

Then B ⊂ μ^{-1}(α) ∪ ν^{-1}(α). By the independence of ξ and η, we have

M{B ⊂ (ξ ∪ η)} ≥ M{(μ^{-1}(α) ∪ ν^{-1}(α)) ⊂ (ξ ∪ η)}
  ≥ M{(μ^{-1}(α) ⊂ ξ) ∩ (ν^{-1}(α) ⊂ η)}
  = M{μ^{-1}(α) ⊂ ξ} ∧ M{ν^{-1}(α) ⊂ η}
  ≥ α.

Thus

M{B ⊂ (ξ ∪ η)} ≥ inf_{x∈B} μ(x) ∨ ν(x).   (9.83)

On the other hand, for any x ∈ B, we have

M{B ⊂ (ξ ∪ η)} ≤ M{x ∈ (ξ ∪ η)} = M{(x ∈ ξ) ∪ (x ∈ η)}
  = M{x ∈ ξ} ∨ M{x ∈ η} = μ(x) ∨ ν(x).

Thus

M{B ⊂ (ξ ∪ η)} ≤ inf_{x∈B} μ(x) ∨ ν(x).   (9.84)

It follows from (9.83) and (9.84) that

M{B ⊂ (ξ ∪ η)} = inf_{x∈B} μ(x) ∨ ν(x).   (9.85)

The first measure inversion formula is verified. Next we prove the second measure inversion formula. By the independence of ξ and η, we have

M{(ξ ∪ η) ⊂ B} = M{(ξ ⊂ B) ∩ (η ⊂ B)} = M{ξ ⊂ B} ∧ M{η ⊂ B}
  = (1 − sup_{x∈B^c} μ(x)) ∧ (1 − sup_{x∈B^c} ν(x))
  = 1 − sup_{x∈B^c} μ(x) ∨ ν(x).

That is,

M{(ξ ∪ η) ⊂ B} = 1 − sup_{x∈B^c} μ(x) ∨ ν(x).   (9.86)

The second measure inversion formula is verified. Therefore, the union ξ ∪ η is proved to have the membership function λ by the measure inversion formulas (9.85) and (9.86).
Figure 9.9: Membership Function of Union of Uncertain Sets

Intersection of Uncertain Sets


Theorem 9.23 (Liu [124]) Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. Then their intersection ξ ∩ η has a membership function

λ(x) = μ(x) ∧ ν(x).   (9.87)

Proof: In order to prove λ is a membership function of ξ ∩ η, we must verify the two measure inversion formulas. Let B be any Borel set. By the independence of ξ and η, we have

M{B ⊂ (ξ ∩ η)} = M{(B ⊂ ξ) ∩ (B ⊂ η)} = M{B ⊂ ξ} ∧ M{B ⊂ η}
  = inf_{x∈B} μ(x) ∧ inf_{x∈B} ν(x) = inf_{x∈B} μ(x) ∧ ν(x).

That is,

M{B ⊂ (ξ ∩ η)} = inf_{x∈B} μ(x) ∧ ν(x).   (9.88)

The first measure inversion formula is verified. In order to prove the second measure inversion formula, we write

α = sup_{x∈B^c} μ(x) ∧ ν(x).

Then for any given number ε > 0, we have μ^{-1}(α + ε) ∩ ν^{-1}(α + ε) ⊂ B. By the independence of ξ and η, we obtain

M{(ξ ∩ η) ⊂ B} ≥ M{(ξ ∩ η) ⊂ (μ^{-1}(α + ε) ∩ ν^{-1}(α + ε))}
  ≥ M{(ξ ⊂ μ^{-1}(α + ε)) ∩ (η ⊂ ν^{-1}(α + ε))}
  = M{ξ ⊂ μ^{-1}(α + ε)} ∧ M{η ⊂ ν^{-1}(α + ε)}
  ≥ (1 − α − ε) ∧ (1 − α − ε) = 1 − α − ε.

Letting ε → 0, we get

M{(ξ ∩ η) ⊂ B} ≥ 1 − sup_{x∈B^c} μ(x) ∧ ν(x).   (9.89)

On the other hand, for any x ∈ B^c, we have

M{(ξ ∩ η) ⊂ B} ≤ M{x ∉ (ξ ∩ η)} = M{(x ∉ ξ) ∪ (x ∉ η)}
  = M{x ∉ ξ} ∨ M{x ∉ η} = (1 − μ(x)) ∨ (1 − ν(x)) = 1 − μ(x) ∧ ν(x).

Thus

M{(ξ ∩ η) ⊂ B} ≤ 1 − sup_{x∈B^c} μ(x) ∧ ν(x).   (9.90)

It follows from (9.89) and (9.90) that

M{(ξ ∩ η) ⊂ B} = 1 − sup_{x∈B^c} μ(x) ∧ ν(x).   (9.91)

The second measure inversion formula is verified. Therefore, the intersection ξ ∩ η is proved to have the membership function λ by the measure inversion formulas (9.88) and (9.91).
Figure 9.10: Membership Function of Intersection of Uncertain Sets


Complement of Uncertain Set


Theorem 9.24 (Liu [124]) Let ξ be an uncertain set with membership function μ. Then its complement ξ^c has a membership function

λ(x) = 1 − μ(x).   (9.92)

Proof: In order to prove 1 − μ is a membership function of ξ^c, we must verify the two measure inversion formulas. Let B be any Borel set. It follows from the definition of membership function that

M{B ⊂ ξ^c} = M{ξ ⊂ B^c} = 1 − sup_{x∈(B^c)^c} μ(x) = inf_{x∈B} (1 − μ(x)),

M{ξ^c ⊂ B} = M{B^c ⊂ ξ} = inf_{x∈B^c} μ(x) = 1 − sup_{x∈B^c} (1 − μ(x)).

Thus ξ^c has a membership function 1 − μ.


Figure 9.11: Membership Function of Complement of Uncertain Set

9.5  Arithmetic Operational Law

This section will present an arithmetic operational law of independent uncertain sets via inverse membership functions, including addition, subtraction,
multiplication and division.
Theorem 9.25 (Liu [124]) Let ξ_1, ξ_2, ..., ξ_n be independent uncertain sets with inverse membership functions μ_1^{-1}, μ_2^{-1}, ..., μ_n^{-1}, respectively. If f is a measurable function, then

ξ = f(ξ_1, ξ_2, ..., ξ_n)   (9.93)

is an uncertain set with inverse membership function

λ^{-1}(α) = f(μ_1^{-1}(α), μ_2^{-1}(α), ..., μ_n^{-1}(α)).   (9.94)
Proof: For simplicity, we only prove the case n = 2. Let B be any Borel set, and write

α = inf_{x∈B} λ(x).

Then B ⊂ λ^{-1}(α). Since λ^{-1}(α) = f(μ_1^{-1}(α), μ_2^{-1}(α)), by the independence of ξ_1 and ξ_2, we have

M{B ⊂ ξ} ≥ M{λ^{-1}(α) ⊂ ξ} = M{f(μ_1^{-1}(α), μ_2^{-1}(α)) ⊂ ξ}
  ≥ M{(μ_1^{-1}(α) ⊂ ξ_1) ∩ (μ_2^{-1}(α) ⊂ ξ_2)}
  = M{μ_1^{-1}(α) ⊂ ξ_1} ∧ M{μ_2^{-1}(α) ⊂ ξ_2}
  ≥ α.

Thus

M{B ⊂ ξ} ≥ inf_{x∈B} λ(x).   (9.95)

On the other hand, for any given number ε > 0, we have B ⊄ λ^{-1}(α + ε). Since λ^{-1}(α + ε) = f(μ_1^{-1}(α + ε), μ_2^{-1}(α + ε)), we obtain

M{B ⊄ ξ} ≥ M{ξ ⊂ λ^{-1}(α + ε)} = M{ξ ⊂ f(μ_1^{-1}(α + ε), μ_2^{-1}(α + ε))}
  ≥ M{(ξ_1 ⊂ μ_1^{-1}(α + ε)) ∩ (ξ_2 ⊂ μ_2^{-1}(α + ε))}
  = M{ξ_1 ⊂ μ_1^{-1}(α + ε)} ∧ M{ξ_2 ⊂ μ_2^{-1}(α + ε)}
  ≥ (1 − α − ε) ∧ (1 − α − ε) = 1 − α − ε

and then

M{B ⊂ ξ} = 1 − M{B ⊄ ξ} ≤ α + ε.

Letting ε → 0, we get

M{B ⊂ ξ} ≤ inf_{x∈B} λ(x).   (9.96)

It follows from (9.95) and (9.96) that

M{B ⊂ ξ} = inf_{x∈B} λ(x).   (9.97)

The first measure inversion formula is verified. In order to prove the second measure inversion formula, we write

α = sup_{x∈B^c} λ(x).

Then for any given number ε > 0, we have λ^{-1}(α + ε) ⊂ B. Please note that λ^{-1}(α + ε) = f(μ_1^{-1}(α + ε), μ_2^{-1}(α + ε)). By the independence of ξ_1 and ξ_2, we obtain

M{ξ ⊂ B} ≥ M{ξ ⊂ λ^{-1}(α + ε)} = M{ξ ⊂ f(μ_1^{-1}(α + ε), μ_2^{-1}(α + ε))}
  ≥ M{(ξ_1 ⊂ μ_1^{-1}(α + ε)) ∩ (ξ_2 ⊂ μ_2^{-1}(α + ε))}
  = M{ξ_1 ⊂ μ_1^{-1}(α + ε)} ∧ M{ξ_2 ⊂ μ_2^{-1}(α + ε)}
  ≥ (1 − α − ε) ∧ (1 − α − ε) = 1 − α − ε.

Letting ε → 0, we get

M{ξ ⊂ B} ≥ 1 − sup_{x∈B^c} λ(x).   (9.98)

On the other hand, for any given number ε > 0, we have λ^{-1}(α − ε) ⊄ B. Since λ^{-1}(α − ε) = f(μ_1^{-1}(α − ε), μ_2^{-1}(α − ε)), we obtain

M{ξ ⊄ B} ≥ M{λ^{-1}(α − ε) ⊂ ξ} = M{f(μ_1^{-1}(α − ε), μ_2^{-1}(α − ε)) ⊂ ξ}
  ≥ M{(μ_1^{-1}(α − ε) ⊂ ξ_1) ∩ (μ_2^{-1}(α − ε) ⊂ ξ_2)}
  = M{μ_1^{-1}(α − ε) ⊂ ξ_1} ∧ M{μ_2^{-1}(α − ε) ⊂ ξ_2}
  ≥ (α − ε) ∧ (α − ε) = α − ε

and then

M{ξ ⊂ B} = 1 − M{ξ ⊄ B} ≤ 1 − α + ε.

Letting ε → 0, we get

M{ξ ⊂ B} ≤ 1 − sup_{x∈B^c} λ(x).   (9.99)

It follows from (9.98) and (9.99) that

M{ξ ⊂ B} = 1 − sup_{x∈B^c} λ(x).   (9.100)

The second measure inversion formula is verified. Therefore, ξ is proved to have the membership function λ by the measure inversion formulas (9.97) and (9.100).
Example 9.17: Let ξ = (a_1, a_2, a_3) and η = (b_1, b_2, b_3) be two independent triangular uncertain sets. At first, ξ has an inverse membership function

μ^{-1}(α) = [(1 − α)a_1 + αa_2, αa_2 + (1 − α)a_3],   (9.101)

and η has an inverse membership function

ν^{-1}(α) = [(1 − α)b_1 + αb_2, αb_2 + (1 − α)b_3].   (9.102)

It follows from the operational law that the sum ξ + η has an inverse membership function

λ^{-1}(α) = [(1 − α)(a_1 + b_1) + α(a_2 + b_2), α(a_2 + b_2) + (1 − α)(a_3 + b_3)].   (9.103)

In other words, the sum ξ + η is also a triangular uncertain set, and

ξ + η = (a_1 + b_1, a_2 + b_2, a_3 + b_3).   (9.104)

Example 9.18: Let ξ = (a_1, a_2, a_3) and η = (b_1, b_2, b_3) be two independent triangular uncertain sets. It follows from the operational law that the difference ξ − η has an inverse membership function

λ^{-1}(α) = [(1 − α)(a_1 − b_3) + α(a_2 − b_2), α(a_2 − b_2) + (1 − α)(a_3 − b_1)].   (9.105)

In other words, the difference ξ − η is also a triangular uncertain set, and

ξ − η = (a_1 − b_3, a_2 − b_2, a_3 − b_1).   (9.106)

Example 9.19: Let ξ = (a_1, a_2, a_3) be a triangular uncertain set, and k a real number. When k ≥ 0, the product kξ has an inverse membership function

λ^{-1}(α) = [(1 − α)(ka_1) + α(ka_2), α(ka_2) + (1 − α)(ka_3)].   (9.107)

That is, the product kξ is a triangular uncertain set (ka_1, ka_2, ka_3). When k < 0, the product kξ has an inverse membership function

λ^{-1}(α) = [(1 − α)(ka_3) + α(ka_2), α(ka_2) + (1 − α)(ka_1)].   (9.108)

That is, the product kξ is a triangular uncertain set (ka_3, ka_2, ka_1). In summary, we have

kξ =
  (ka_1, ka_2, ka_3), if k ≥ 0
  (ka_3, ka_2, ka_1), if k < 0.   (9.109)

Exercise 9.2: Let ξ = (a_1, a_2, a_3, a_4) and η = (b_1, b_2, b_3, b_4) be two independent trapezoidal uncertain sets, and k a real number. Show that

ξ + η = (a_1 + b_1, a_2 + b_2, a_3 + b_3, a_4 + b_4),   (9.110)

ξ − η = (a_1 − b_4, a_2 − b_3, a_3 − b_2, a_4 − b_1),   (9.111)

kξ =
  (ka_1, ka_2, ka_3, ka_4), if k ≥ 0
  (ka_4, ka_3, ka_2, ka_1), if k < 0.   (9.112)

Monotone Function of Regular Uncertain Sets

In practice, it is usually required to deal with monotone functions of regular uncertain sets. In this case, we have the following shortcut.

Theorem 9.26 (Liu [124]) Let ξ_1, ξ_2, ..., ξ_n be independent uncertain sets with regular membership functions μ_1, μ_2, ..., μ_n, respectively. If the function f(x_1, x_2, ..., x_n) is strictly increasing with respect to x_1, x_2, ..., x_m and strictly decreasing with respect to x_{m+1}, x_{m+2}, ..., x_n, then

ξ = f(ξ_1, ξ_2, ..., ξ_n)   (9.113)

is an uncertain set with regular membership function λ, and

λ_l^{-1}(α) = f(μ_{1l}^{-1}(α), ..., μ_{ml}^{-1}(α), μ_{m+1,r}^{-1}(α), ..., μ_{nr}^{-1}(α)),   (9.114)

λ_r^{-1}(α) = f(μ_{1r}^{-1}(α), ..., μ_{mr}^{-1}(α), μ_{m+1,l}^{-1}(α), ..., μ_{nl}^{-1}(α)),   (9.115)

where λ_l^{-1}, μ_{1l}^{-1}, μ_{2l}^{-1}, ..., μ_{nl}^{-1} are left inverse membership functions, and λ_r^{-1}, μ_{1r}^{-1}, μ_{2r}^{-1}, ..., μ_{nr}^{-1} are right inverse membership functions of ξ, ξ_1, ξ_2, ..., ξ_n, respectively.

Proof: Note that μ_1^{-1}(α), μ_2^{-1}(α), ..., μ_n^{-1}(α) are intervals for each α. Since f(x_1, x_2, ..., x_n) is strictly increasing with respect to x_1, x_2, ..., x_m and strictly decreasing with respect to x_{m+1}, x_{m+2}, ..., x_n, the value

λ^{-1}(α) = f(μ_1^{-1}(α), ..., μ_m^{-1}(α), μ_{m+1}^{-1}(α), ..., μ_n^{-1}(α))

is also an interval. Thus ξ has a regular membership function, and its left and right inverse membership functions are determined by (9.114) and (9.115), respectively.
Exercise 9.3: Let ξ and η be independent uncertain sets with left inverse membership functions μ_l^{-1} and ν_l^{-1} and right inverse membership functions μ_r^{-1} and ν_r^{-1}, respectively. Show that the sum ξ + η is an uncertain set with left and right inverse membership functions

λ_l^{-1}(α) = μ_l^{-1}(α) + ν_l^{-1}(α),   (9.116)

λ_r^{-1}(α) = μ_r^{-1}(α) + ν_r^{-1}(α).   (9.117)

Exercise 9.4: Let ξ and η be independent uncertain sets with left inverse membership functions μ_l^{-1} and ν_l^{-1} and right inverse membership functions μ_r^{-1} and ν_r^{-1}, respectively. Show that the difference ξ − η is an uncertain set with left and right inverse membership functions

λ_l^{-1}(α) = μ_l^{-1}(α) − ν_r^{-1}(α),   (9.118)

λ_r^{-1}(α) = μ_r^{-1}(α) − ν_l^{-1}(α).   (9.119)

Exercise 9.5: Let ξ and η be independent and positive uncertain sets with left inverse membership functions μ_l^{-1} and ν_l^{-1} and right inverse membership functions μ_r^{-1} and ν_r^{-1}, respectively. Show that

τ = ξ / (ξ + η)   (9.120)

is an uncertain set with left and right inverse membership functions

λ_l^{-1}(α) = μ_l^{-1}(α) / (μ_l^{-1}(α) + ν_r^{-1}(α)),   (9.121)

λ_r^{-1}(α) = μ_r^{-1}(α) / (μ_r^{-1}(α) + ν_l^{-1}(α)).   (9.122)
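
These exercises amount to interval arithmetic on the left and right inverse membership functions. A small Python sketch (helper names are mine) checks the sum rule (9.116)-(9.117) and the difference rule (9.118)-(9.119) on triangular sets:

    def tri_inverse(a, b, c):
        # left/right inverse membership functions of the triangular set (a, b, c)
        left = lambda alpha: (1 - alpha) * a + alpha * b
        right = lambda alpha: alpha * b + (1 - alpha) * c
        return left, right

    xl, xr = tri_inverse(1, 2, 3)   # xi  = (1, 2, 3)
    yl, yr = tri_inverse(2, 3, 4)   # eta = (2, 3, 4)

    alpha = 0.5
    print(xl(alpha) + yl(alpha), xr(alpha) + yr(alpha))  # 4.0 6.0, cut of xi+eta = (3, 5, 7)
    print(xl(alpha) - yr(alpha), xr(alpha) - yl(alpha))  # -2.0 0.0, cut of xi-eta = (-3, -1, 1)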

9.6  Expected Value

Recall that an uncertain set ξ is nonempty if ξ(γ) ≠ ∅ for almost all γ ∈ Γ. This section will introduce a concept of expected value for nonempty uncertain sets.

Definition 9.8 (Liu [118]) Let ξ be a nonempty uncertain set. Then the expected value of ξ is defined by

E[ξ] = ∫_0^{+∞} M{ξ ⪰ r} dr − ∫_{−∞}^0 M{ξ ⪯ r} dr   (9.123)

provided that at least one of the two integrals is finite.

Please note that ξ ⪰ r represents "ξ is imaginarily included in [r, +∞)", and ξ ⪯ r represents "ξ is imaginarily included in (−∞, r]". What are the appropriate values of M{ξ ⪰ r} and M{ξ ⪯ r}? Unfortunately, this problem is not as simple as you think.
is not as simple as you think.
...................................................................................
................
............
............
..........
..........
........
........ ....... ....... ....... ....... ....... .......
.......
.
.
.
.
.
.
.......
.......
.... . .......
.
.
.
.
......
.
.......
..... ......
.....
.
.
.
.
.....
......
..............................................................................
.
.
.....
.
.
........
...
.
.
.......................
.
.
.
.
....
......
...
..........
.....
.
...
....
...
..........
...
.
...
..
...
.....
...
...
..
..
..
....
..
.
.
.
.
.
..
.......
.
.
.
.
.
.
..
.............
.
.
.
.
.
.
.
.
.
.
.
.... .......
.
...
..
........ ..........
.......
.
....
........ ...............
.........
......
.....
....
..... .....
................
.....
...... .. . ..........................
.....
......
. .
.
...... ......
.
.
.
.
.
.
.
.
.
.
.
....... ....... ..
.......
..... ....... ....... ....... ....... .......
........
.......
.........
.........
...........
...........
...............
..............
.
.............................
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.........................

r

6< r

Figure 9.12: { r} {  r} { 6< r}


Intuitively, for M{ξ ⪰ r}, it is too conservative if we take the value M{ξ ≥ r}, and it is too adventurous if we take the value 1 − M{ξ < r}. Thus we assign M{ξ ⪰ r} the middle value between M{ξ ≥ r} and 1 − M{ξ < r}. That is,

M{ξ ⪰ r} = (M{ξ ≥ r} + 1 − M{ξ < r}) / 2.   (9.124)

Similarly, we also define

M{ξ ⪯ r} = (M{ξ ≤ r} + 1 − M{ξ > r}) / 2.   (9.125)

Example 9.20: In order to illustrate the expected value operator, let us consider an uncertain set

ξ =
  [1, 2] with uncertain measure 0.6
  [2, 3] with uncertain measure 0.3
  [3, 4] with uncertain measure 0.2.

It follows from the definition of M{ξ ⪰ r} and M{ξ ⪯ r} that

M{ξ ⪰ r} =
  1,   if r ≤ 1
  0.7, if 1 < r ≤ 2
  0.3, if 2 < r ≤ 3
  0.1, if 3 < r ≤ 4
  0,   if r > 4,

M{ξ ⪯ r} ≡ 0,  ∀r ≤ 0.

Thus

E[ξ] = ∫_0^1 1 dr + ∫_1^2 0.7 dr + ∫_2^3 0.3 dr + ∫_3^4 0.1 dr = 2.1.
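
The integral above is also easy to check numerically, e.g., with a midpoint Riemann sum over the step function M{ξ ⪰ r} (the helper name is mine):

    def m_succ(r):
        # M{xi >= r} for the set in Example 9.20, read off the case list above
        if r <= 1: return 1.0
        if r <= 2: return 0.7
        if r <= 3: return 0.3
        if r <= 4: return 0.1
        return 0.0

    step = 0.001
    print(round(sum(m_succ((k + 0.5) * step) * step for k in range(5000)), 2))  # 2.1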

How to Obtain Expected Value from Membership Function?


Let ξ be an uncertain set with membership function μ. In order to calculate its expected value via (9.123), we must determine the values of M{ξ ⪰ x} and M{ξ ⪯ x} from the membership function μ.

Theorem 9.27 Let ξ be an uncertain set with membership function μ. Then for any number x, we have

M{ξ ⪰ x} = (sup_{y≥x} μ(y) + 1 − sup_{y<x} μ(y)) / 2,   (9.126)

M{ξ ⪯ x} = (sup_{y≤x} μ(y) + 1 − sup_{y>x} μ(y)) / 2.   (9.127)

Proof: Since the uncertain set ξ has a membership function μ, the second measure inversion formula tells us that

M{ξ ≥ x} = M{ξ ⊂ [x, +∞)} = 1 − sup_{y<x} μ(y),
M{ξ < x} = M{ξ ⊂ (−∞, x)} = 1 − sup_{y≥x} μ(y).

Thus (9.126) follows from (9.124) immediately. We may also prove (9.127) similarly.
Theorem 9.28 (Liu [120]) Let ξ be an uncertain set with regular membership function μ. If the expected value exists, then

E[ξ] = x_0 + (1/2) ∫_{x_0}^{+∞} μ(x) dx − (1/2) ∫_{−∞}^{x_0} μ(x) dx   (9.128)

where x_0 is a point such that μ(x_0) = 1.

Proof: Since μ is increasing on (−∞, x_0] and decreasing on [x_0, +∞), it follows from Theorem 9.27 that for almost all x, we have

M{ξ ⪰ x} =
  1 − μ(x)/2, if x ≤ x_0
  μ(x)/2,     if x ≥ x_0   (9.129)

and

M{ξ ⪯ x} =
  μ(x)/2,     if x ≤ x_0
  1 − μ(x)/2, if x ≥ x_0   (9.130)

for any real number x. If x_0 ≥ 0, then

E[ξ] = ∫_0^{+∞} M{ξ ⪰ x} dx − ∫_{−∞}^0 M{ξ ⪯ x} dx
     = ∫_0^{x_0} (1 − μ(x)/2) dx + ∫_{x_0}^{+∞} (μ(x)/2) dx − ∫_{−∞}^0 (μ(x)/2) dx
     = x_0 + (1/2) ∫_{x_0}^{+∞} μ(x) dx − (1/2) ∫_{−∞}^{x_0} μ(x) dx.

If x_0 < 0, then

E[ξ] = ∫_0^{+∞} M{ξ ⪰ x} dx − ∫_{−∞}^0 M{ξ ⪯ x} dx
     = ∫_0^{+∞} (μ(x)/2) dx − ∫_{x_0}^0 (1 − μ(x)/2) dx − ∫_{−∞}^{x_0} (μ(x)/2) dx
     = x_0 + (1/2) ∫_{x_0}^{+∞} μ(x) dx − (1/2) ∫_{−∞}^{x_0} μ(x) dx.

The theorem is thus proved.


Remark 9.9: If the membership function μ of the uncertain set ξ is not assumed to be regular, then

E[ξ] = x_0 + (1/2) ∫_{x_0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x_0} sup_{y≤x} μ(y) dx.   (9.131)

Exercise 9.6: Show that the rectangular uncertain set ξ = (a, b) has an expected value

E[ξ] = (a + b)/2.   (9.132)

Exercise 9.7: Show that the triangular uncertain set ξ = (a, b, c) has an expected value

E[ξ] = (a + 2b + c)/4.   (9.133)

Exercise 9.8: Show that the trapezoidal uncertain set ξ = (a, b, c, d) has an expected value

E[ξ] = (a + b + c + d)/4.   (9.134)
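
These closed forms can be verified numerically against (9.128); a minimal Python sketch for the triangular case (the helper name is mine, using the midpoint rule for the two integrals):

    def expected_triangular(a, b, c, n=100000):
        # formula (9.128) with x0 = b: E = b + (1/2) int_b^c mu dx - (1/2) int_a^b mu dx
        h1, h2 = (b - a) / n, (c - b) / n
        up = sum((k + 0.5) * h1 / (b - a) * h1 for k in range(n))               # int_a^b (x-a)/(b-a) dx
        down = sum((b + (k + 0.5) * h2 - c) / (b - c) * h2 for k in range(n))   # int_b^c (x-c)/(b-c) dx
        return b + down / 2 - up / 2

    print(expected_triangular(1, 2, 4))   # about 2.25
    print((1 + 2 * 2 + 4) / 4)            # (a + 2b + c)/4 = 2.25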
Theorem 9.29 (Liu [124]) Let ξ be a nonempty uncertain set with membership function μ. If the expected value exists, then

E[ξ] = (1/2) ∫_0^1 (inf μ^{-1}(α) + sup μ^{-1}(α)) dα   (9.135)

where inf μ^{-1}(α) and sup μ^{-1}(α) are the infimum and supremum of the set μ^{-1}(α) for each α, respectively.

Proof: Since ξ is a nonempty uncertain set and has a finite expected value, we may assume that there exists a point x_0 such that μ(x_0) = 1 (perhaps after a small perturbation). It is clear that the two integrals

∫_{x_0}^{+∞} sup_{y≥x} μ(y) dx  and  ∫_0^1 (sup μ^{-1}(α) − x_0) dα

make an identical acreage. Thus

∫_{x_0}^{+∞} sup_{y≥x} μ(y) dx = ∫_0^1 (sup μ^{-1}(α) − x_0) dα = ∫_0^1 sup μ^{-1}(α) dα − x_0.

Similarly, we may prove

∫_{−∞}^{x_0} sup_{y≤x} μ(y) dx = ∫_0^1 (x_0 − inf μ^{-1}(α)) dα = x_0 − ∫_0^1 inf μ^{-1}(α) dα.

It follows from (9.131) that

E[ξ] = x_0 + (1/2) ∫_{x_0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x_0} sup_{y≤x} μ(y) dx
     = x_0 + (1/2) (∫_0^1 sup μ^{-1}(α) dα − x_0) − (1/2) (x_0 − ∫_0^1 inf μ^{-1}(α) dα)
     = (1/2) ∫_0^1 (inf μ^{-1}(α) + sup μ^{-1}(α)) dα.

The theorem is thus verified.
Theorem 9.30 (Liu [124]) Let ξ_1, ξ_2, ..., ξ_n be independent uncertain sets with regular membership functions μ_1, μ_2, ..., μ_n, respectively. If the function f(x_1, x_2, ..., x_n) is strictly increasing with respect to x_1, x_2, ..., x_m and strictly decreasing with respect to x_{m+1}, x_{m+2}, ..., x_n, then the uncertain set ξ = f(ξ_1, ξ_2, ..., ξ_n) has an expected value

E[ξ] = (1/2) ∫_0^1 (λ_l^{-1}(α) + λ_r^{-1}(α)) dα   (9.136)

where λ_l^{-1}(α) and λ_r^{-1}(α) are determined by

λ_l^{-1}(α) = f(μ_{1l}^{-1}(α), ..., μ_{ml}^{-1}(α), μ_{m+1,r}^{-1}(α), ..., μ_{nr}^{-1}(α)),   (9.137)

λ_r^{-1}(α) = f(μ_{1r}^{-1}(α), ..., μ_{mr}^{-1}(α), μ_{m+1,l}^{-1}(α), ..., μ_{nl}^{-1}(α)).   (9.138)

Proof: It follows from Theorems 9.26 and 9.29 immediately.

Exercise 9.9: Let ξ and η be independent and nonnegative uncertain sets with regular membership functions μ and ν, respectively. Show that

E[ξη] = (1/2) ∫_0^1 (μ_l^{-1}(α) ν_l^{-1}(α) + μ_r^{-1}(α) ν_r^{-1}(α)) dα.   (9.139)

Exercise 9.10: Let ξ and η be independent and positive uncertain sets with regular membership functions μ and ν, respectively. Show that

E[ξ/η] = (1/2) ∫_0^1 (μ_l^{-1}(α)/ν_r^{-1}(α) + μ_r^{-1}(α)/ν_l^{-1}(α)) dα.   (9.140)

Exercise 9.11: Let ξ and η be independent and positive uncertain sets with regular membership functions μ and ν, respectively. Show that

E[ξ/(ξ + η)] = (1/2) ∫_0^1 (μ_l^{-1}(α)/(μ_l^{-1}(α) + ν_r^{-1}(α)) + μ_r^{-1}(α)/(μ_r^{-1}(α) + ν_l^{-1}(α))) dα.   (9.141)

Linearity of Expected Value Operator


Theorem 9.31 (Liu [124]) Let ξ and η be independent uncertain sets whose membership functions exist. If E[ξ] and E[η] are finite, then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η].   (9.142)

Proof: Denote the membership functions of ξ and η by μ and ν, respectively. Then

E[ξ] = (1/2) ∫_0^1 (inf μ^{-1}(α) + sup μ^{-1}(α)) dα,

E[η] = (1/2) ∫_0^1 (inf ν^{-1}(α) + sup ν^{-1}(α)) dα.

Step 1: We first prove E[aξ] = aE[ξ]. The product aξ has an inverse membership function

λ^{-1}(α) = aμ^{-1}(α).

(Note that inf λ^{-1}(α) + sup λ^{-1}(α) = a(inf μ^{-1}(α) + sup μ^{-1}(α)) for any real number a.) It follows from Theorem 9.29 that

E[aξ] = (1/2) ∫_0^1 (inf λ^{-1}(α) + sup λ^{-1}(α)) dα
      = (a/2) ∫_0^1 (inf μ^{-1}(α) + sup μ^{-1}(α)) dα = aE[ξ].

Step 2: We then prove E[ξ + η] = E[ξ] + E[η]. The sum ξ + η has an inverse membership function

λ^{-1}(α) = μ^{-1}(α) + ν^{-1}(α).

It follows from Theorem 9.29 that

E[ξ + η] = (1/2) ∫_0^1 (inf λ^{-1}(α) + sup λ^{-1}(α)) dα
         = (1/2) ∫_0^1 (inf μ^{-1}(α) + sup μ^{-1}(α)) dα + (1/2) ∫_0^1 (inf ν^{-1}(α) + sup ν^{-1}(α)) dα
         = E[ξ] + E[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that

E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].

The theorem is proved.

9.7 Variance

The variance of an uncertain set provides a degree of the spread of the membership function around its expected value.

Definition 9.9 (Liu [121]) Let ξ be an uncertain set with finite expected value e. Then the variance of ξ is defined by
$$V[\xi] = E\left[(\xi - e)^2\right]. \tag{9.143}$$

This definition says that the variance is just the expected value of (ξ − e)². Since (ξ − e)² is a nonnegative uncertain set, we also have
$$V[\xi] = \int_0^{+\infty}\mathcal{M}\{(\xi-e)^2 \succeq r\}\,dr. \tag{9.144}$$

Please note that (ξ − e)² ≽ r represents that (ξ − e)² is imaginarily included in [r, +∞). What are the appropriate values of M{(ξ − e)² ≽ r}? Intuitively, it is too conservative if we take the value M{(ξ − e)² ≥ r}, and it is too adventurous if we take the value 1 − M{(ξ − e)² < r}. Thus we assign M{(ξ − e)² ≽ r} the middle value between them. That is,
$$\mathcal{M}\{(\xi-e)^2 \succeq r\} = \frac{1}{2}\left(\mathcal{M}\{(\xi-e)^2 \ge r\} + 1 - \mathcal{M}\{(\xi-e)^2 < r\}\right). \tag{9.145}$$

Theorem 9.32 If ξ is an uncertain set with finite expected value, and a and b are real numbers, then
$$V[a\xi + b] = a^2 V[\xi]. \tag{9.146}$$
Proof: If ξ has an expected value e, then aξ + b has an expected value ae + b. It follows from the definition of variance that
$$V[a\xi+b] = E\left[(a\xi+b-ae-b)^2\right] = a^2 E\left[(\xi-e)^2\right] = a^2 V[\xi].$$

How to Obtain Variance from Membership Function?

Let ξ be an uncertain set with membership function μ. In order to calculate its variance by (9.144), we must determine the value of M{(ξ − e)² ≽ r} from the membership function μ.

Theorem 9.33 Let ξ be an uncertain set with membership function μ. Then for any numbers e and x, we have
$$\mathcal{M}\{(\xi-e)^2 \succeq x\} = \frac{1}{2}\left(\sup_{(y-e)^2\ge x}\mu(y) + 1 - \sup_{(y-e)^2<x}\mu(y)\right). \tag{9.147}$$
Proof: Since ξ is an uncertain set with membership function μ, it follows from the measure inversion formula that for any real numbers e and x, we have
$$\mathcal{M}\{(\xi-e)^2 \ge x\} = 1 - \sup_{(y-e)^2<x}\mu(y),$$
$$\mathcal{M}\{(\xi-e)^2 < x\} = 1 - \sup_{(y-e)^2\ge x}\mu(y).$$
The equation (9.147) is thus proved by (9.145).

Theorem 9.34 Let ξ be an uncertain set with membership function μ. If ξ has an expected value e, then its variance is
$$V[\xi] = \frac{1}{2}\int_0^{+\infty}\left(\sup_{(y-e)^2\ge x}\mu(y) + 1 - \sup_{(y-e)^2<x}\mu(y)\right)dx. \tag{9.148}$$
Proof: This theorem follows from (9.144) and Theorem 9.33 immediately.

Example 9.21: Let ξ be an uncertain set with expected value e whose membership function μ is unimodal and μ(e) = 1. Then
$$\sup_{(y-e)^2\ge x}\mu(y) = \mu(e+\sqrt{x}) \vee \mu(e-\sqrt{x}), \qquad \sup_{(y-e)^2<x}\mu(y) = 1$$
for any x > 0. It follows from the equation (9.148) that
$$V[\xi] = \frac{1}{2}\int_0^{+\infty}\mu(e+\sqrt{x}) \vee \mu(e-\sqrt{x})\,dx. \tag{9.149}$$

Exercise 9.12: Let ξ be a rectangular uncertain set (a, b). Show that its variance is
$$V[\xi] = \frac{(b-a)^2}{8}. \tag{9.150}$$

Exercise 9.13: Let ξ be a symmetric triangular uncertain set (a, b, c). Show that its variance is
$$V[\xi] = \frac{(c-a)^2}{24}. \tag{9.151}$$
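Formula (9.149) lends itself to a direct numerical check. The sketch below (hypothetical helper names, Python) evaluates it for a symmetric triangular uncertain set and reproduces Exercise 9.13.

```python
import numpy as np

def variance(mu, e, hi=10.0, n=200001):
    """V[xi] by formula (9.149) for a unimodal membership mu with mu(e) = 1."""
    x = np.linspace(0.0, hi, n)
    integrand = np.maximum(mu(e + np.sqrt(x)), mu(e - np.sqrt(x)))
    return 0.5 * np.trapz(integrand, x)

# symmetric triangular (a, b, c) with b = (a + c) / 2
a, b, c = 0.0, 1.0, 2.0
mu = lambda t: np.clip(np.minimum((t - a) / (b - a), (c - t) / (c - b)), 0.0, 1.0)
print(variance(mu, e=b))  # about (c - a)**2 / 24 = 1/6
```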

9.8 Entropy

This section provides a definition of entropy to characterize the uncertainty of uncertain sets.

Definition 9.10 (Liu [121]) Suppose that ξ is an uncertain set with membership function μ. Then its entropy is defined by
$$H[\xi] = \int_{-\infty}^{+\infty} S(\mu(x))\,dx \tag{9.152}$$
where $S(t) = -t\ln t - (1-t)\ln(1-t)$.

Remark 9.10: Note that the entropy (9.152) has the same form as De Luca and Termini's entropy for fuzzy sets [28].

Remark 9.11: If ξ is a discrete uncertain set taking values in {x1, x2, ...}, then the entropy becomes
$$H[\xi] = \sum_{i=1}^{\infty} S(\mu(x_i)). \tag{9.153}$$

Example 9.22: Let ξ be a classical set B (including the empty set ∅). In this case, the membership function is
$$\mu(x) = \begin{cases}1, & \text{if } x \in B\\ 0, & \text{if } x \notin B\end{cases}$$
and the entropy is
$$H[\xi] = \int_{-\infty}^{+\infty} S(\mu(x))\,dx = \int_{-\infty}^{+\infty} 0\,dx = 0.$$
Especially, the rectangular uncertain set has an entropy of zero.

Exercise 9.14: Let ξ = (a, b, c) be a triangular uncertain set. Show that its entropy is
$$H[\xi] = \frac{c-a}{2}. \tag{9.154}$$

Exercise 9.15: Let ξ = (a, b, c, d) be a trapezoidal uncertain set. Show that its entropy is
$$H[\xi] = \frac{b-a+d-c}{2}. \tag{9.155}$$

Theorem 9.35 Let ξ be an uncertain set. Then H[ξ] ≥ 0 and equality holds if ξ is essentially a classical set.
Proof: The nonnegativity is clear. In addition, when an uncertain set tends to a classical set, its entropy tends to the minimum value 0.

Theorem 9.36 Let ξ be an uncertain set on the interval [a, b]. Then
$$H[\xi] \le (b-a)\ln 2 \tag{9.156}$$
and equality holds if ξ has a membership function μ(x) ≡ 0.5 on [a, b].
Proof: The theorem follows from the fact that the function S(t) reaches its maximum value ln 2 at t = 0.5.

Theorem 9.37 Let ξ be an uncertain set, and let ξ^c be its complement. Then
$$H[\xi^c] = H[\xi]. \tag{9.157}$$
Proof: Write the membership function of ξ as μ. Then its complement ξ^c has a membership function 1 − μ(x). It follows from the definition of entropy that
$$H[\xi^c] = \int_{-\infty}^{+\infty} S(1-\mu(x))\,dx = \int_{-\infty}^{+\infty} S(\mu(x))\,dx = H[\xi].$$
The theorem is proved.


Theorem 9.38 (Yao [226]) Let ξ be an uncertain set with regular membership function μ. Then
$$H[\xi] = \int_0^1\left(\mu_l^{-1}(\alpha) - \mu_r^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha. \tag{9.158}$$

Proof: It is clear that $S(\alpha) = -\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha)$ is a differentiable function whose derivative is
$$S'(\alpha) = \ln\frac{1-\alpha}{\alpha}.$$
Let $x_0$ be a point such that $\mu(x_0)=1$. Then we have
$$H[\xi] = \int_{-\infty}^{+\infty}S(\mu(x))\,dx = \int_{-\infty}^{x_0}S(\mu(x))\,dx + \int_{x_0}^{+\infty}S(\mu(x))\,dx$$
$$= \int_{-\infty}^{x_0}\int_0^{\mu(x)}S'(\alpha)\,d\alpha\,dx + \int_{x_0}^{+\infty}\int_0^{\mu(x)}S'(\alpha)\,d\alpha\,dx.$$
It follows from the Fubini theorem that
$$H[\xi] = \int_0^1\int_{\mu_l^{-1}(\alpha)}^{x_0}S'(\alpha)\,dx\,d\alpha + \int_0^1\int_{x_0}^{\mu_r^{-1}(\alpha)}S'(\alpha)\,dx\,d\alpha$$
$$= \int_0^1\left(x_0-\mu_l^{-1}(\alpha)\right)S'(\alpha)\,d\alpha + \int_0^1\left(\mu_r^{-1}(\alpha)-x_0\right)S'(\alpha)\,d\alpha$$
$$= \int_0^1\left(\mu_r^{-1}(\alpha)-\mu_l^{-1}(\alpha)\right)\ln\frac{1-\alpha}{\alpha}\,d\alpha = \int_0^1\left(\mu_l^{-1}(\alpha)-\mu_r^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha.$$
The theorem is verified.


Positive Linearity of Entropy

Theorem 9.39 (Yao [226]) Let ξ and η be independent uncertain sets with regular membership functions. Then for any real numbers a and b, we have
$$H[a\xi + b\eta] = |a|H[\xi] + |b|H[\eta]. \tag{9.159}$$

Proof: Assume the uncertain sets ξ and η have membership functions μ and ν, respectively.

Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the left and right inverse membership functions of aξ are
$$\lambda_l^{-1}(\alpha) = a\mu_l^{-1}(\alpha), \qquad \lambda_r^{-1}(\alpha) = a\mu_r^{-1}(\alpha).$$
It follows from Theorem 9.38 that
$$H[a\xi] = \int_0^1\left(a\mu_l^{-1}(\alpha)-a\mu_r^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha = aH[\xi] = |a|H[\xi].$$
If a = 0, then we immediately have H[aξ] = 0 = |a|H[ξ]. If a < 0, then we have
$$\lambda_l^{-1}(\alpha) = a\mu_r^{-1}(\alpha), \qquad \lambda_r^{-1}(\alpha) = a\mu_l^{-1}(\alpha)$$
and
$$H[a\xi] = \int_0^1\left(a\mu_r^{-1}(\alpha)-a\mu_l^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha = (-a)H[\xi] = |a|H[\xi].$$
Thus we always have H[aξ] = |a|H[ξ].

Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the left and right inverse membership functions of ξ + η are
$$\lambda_l^{-1}(\alpha) = \mu_l^{-1}(\alpha)+\nu_l^{-1}(\alpha), \qquad \lambda_r^{-1}(\alpha) = \mu_r^{-1}(\alpha)+\nu_r^{-1}(\alpha).$$
It follows from Theorem 9.38 that
$$H[\xi+\eta] = \int_0^1\left(\lambda_l^{-1}(\alpha)-\lambda_r^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha$$
$$= \int_0^1\left(\mu_l^{-1}(\alpha)+\nu_l^{-1}(\alpha)-\mu_r^{-1}(\alpha)-\nu_r^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha$$
$$= H[\xi] + H[\eta].$$

Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that
$$H[a\xi + b\eta] = H[a\xi] + H[b\eta] = |a|H[\xi] + |b|H[\eta].$$
The theorem is proved.


Exercise 9.16: Let ξ be an uncertain set, and let A be a classical set. Show that
$$H[\xi + A] = H[\xi]. \tag{9.160}$$
That is, the entropy is invariant under arbitrary translations.
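Yao's formula (9.158) is convenient for computation because it only needs the inverse membership functions. The Python sketch below (our own hypothetical helper, with a small endpoint offset to avoid log(0)) evaluates it for a triangular uncertain set and agrees with Exercise 9.14.

```python
import numpy as np

def entropy(inv_left, inv_right, n=100001):
    """H[xi] by formula (9.158), given regular inverse membership functions."""
    alpha = np.linspace(1e-9, 1.0 - 1e-9, n)  # avoid log(0) at the endpoints
    integrand = (inv_left(alpha) - inv_right(alpha)) * np.log(alpha / (1.0 - alpha))
    return np.trapz(integrand, alpha)

# triangular (a, b, c): inverse membership functions
a, b, c = 1.0, 2.0, 4.0
print(entropy(lambda t: a + (b - a) * t, lambda t: c - (c - b) * t))
# about (c - a) / 2 = 1.5, matching Exercise 9.14
```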

9.9 Distance

Definition 9.11 (Liu [121]) The distance between uncertain sets ξ and η is defined as
$$d(\xi, \eta) = E[|\xi - \eta|]. \tag{9.161}$$

That is, the distance between ξ and η is just the expected value of |ξ − η|. Since |ξ − η| is a nonnegative uncertain set, we have
$$d(\xi, \eta) = \int_0^{+\infty}\mathcal{M}\{|\xi-\eta| \succeq r\}\,dr. \tag{9.162}$$

Please note that |ξ − η| ≽ r represents that |ξ − η| is imaginarily included in [r, +∞). What are the appropriate values of M{|ξ − η| ≽ r}? Intuitively, it is too conservative if we take the value M{|ξ − η| ≥ r}, and it is too adventurous if we take the value 1 − M{|ξ − η| < r}. Thus we assign M{|ξ − η| ≽ r} the middle value between them. That is,
$$\mathcal{M}\{|\xi-\eta| \succeq r\} = \frac{1}{2}\left(\mathcal{M}\{|\xi-\eta| \ge r\} + 1 - \mathcal{M}\{|\xi-\eta| < r\}\right). \tag{9.163}$$

Example 9.23: Let ξ be an uncertain set with membership function μ, and let b be a real number. It follows from the measure inversion formula that
$$\mathcal{M}\{|\xi-b| \succeq x\} = \frac{1}{2}\left(\sup_{|y-b|\ge x}\mu(y) + 1 - \sup_{|y-b|<x}\mu(y)\right). \tag{9.164}$$
Thus the distance between ξ and b is
$$d(\xi, b) = \frac{1}{2}\int_0^{+\infty}\left(\sup_{|y-b|\ge x}\mu(y) + 1 - \sup_{|y-b|<x}\mu(y)\right)dx. \tag{9.165}$$
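Formula (9.165) can also be evaluated numerically. The sketch below (hypothetical helper names, Python) takes the two suprema over a grid of y values for each x; for the symmetric triangular set (0, 1, 2) and b = 1 it returns approximately 0.25.

```python
import numpy as np

def distance_to_point(mu, b, hi=10.0, n=2001):
    """d(xi, b) by formula (9.165): for each x, take sups over a y-grid."""
    y = np.linspace(b - hi, b + hi, 20001)
    m = mu(y)
    xs = np.linspace(1e-9, hi, n)
    vals = []
    for x in xs:
        far = np.abs(y - b) >= x
        sup_far = m[far].max() if far.any() else 0.0
        sup_near = m[~far].max() if (~far).any() else 0.0
        vals.append(sup_far + 1.0 - sup_near)
    return 0.5 * np.trapz(vals, xs)

mu = lambda t: np.clip(np.minimum(t - 0.0, 2.0 - t), 0.0, 1.0)  # triangular (0,1,2)
print(distance_to_point(mu, b=1.0))  # about 0.25
```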

9.10 Conditional Membership Function

What is the conditional membership function of an uncertain set ξ after it has been learned that some event A has occurred? This section will answer this question. At first, it follows from the definition of conditional uncertain measure that
$$\mathcal{M}\{B\subset\xi \mid A\} = \begin{cases}\dfrac{\mathcal{M}\{(B\subset\xi)\cap A\}}{\mathcal{M}\{A\}}, & \text{if } \dfrac{\mathcal{M}\{(B\subset\xi)\cap A\}}{\mathcal{M}\{A\}} < 0.5\\[3mm] 1-\dfrac{\mathcal{M}\{(B\not\subset\xi)\cap A\}}{\mathcal{M}\{A\}}, & \text{if } \dfrac{\mathcal{M}\{(B\not\subset\xi)\cap A\}}{\mathcal{M}\{A\}} < 0.5\\[3mm] 0.5, & \text{otherwise,}\end{cases}$$
$$\mathcal{M}\{\xi\subset B \mid A\} = \begin{cases}\dfrac{\mathcal{M}\{(\xi\subset B)\cap A\}}{\mathcal{M}\{A\}}, & \text{if } \dfrac{\mathcal{M}\{(\xi\subset B)\cap A\}}{\mathcal{M}\{A\}} < 0.5\\[3mm] 1-\dfrac{\mathcal{M}\{(\xi\not\subset B)\cap A\}}{\mathcal{M}\{A\}}, & \text{if } \dfrac{\mathcal{M}\{(\xi\not\subset B)\cap A\}}{\mathcal{M}\{A\}} < 0.5\\[3mm] 0.5, & \text{otherwise.}\end{cases}$$

Definition 9.12 Let ξ be an uncertain set, and let A be an event with M{A} > 0. Then the conditional uncertain set ξ given A is said to have a membership function μ(x|A) if for any Borel set B, we have
$$\mathcal{M}\{B\subset\xi \mid A\} = \inf_{x\in B}\mu(x\mid A), \tag{9.166}$$
$$\mathcal{M}\{\xi\subset B \mid A\} = 1 - \sup_{x\in B^c}\mu(x\mid A). \tag{9.167}$$

9.11 Uncertain Statistics

In order to determine the membership function of an uncertain set, Liu [121] designed a questionnaire survey for collecting experts' experimental data, and introduced the empirical membership function (i.e., the linear interpolation method) and the principle of least squares.

Experts' Experimental Data

Experts' experimental data were suggested by Liu [121] to represent an expert's knowledge about the membership function to be determined. The first step is to ask the domain expert to choose a possible point x that the uncertain set ξ may contain, and then quiz him:
$$\text{``How likely does } x \text{ belong to } \xi\text{?''} \tag{9.168}$$
Assume the expert's belief degree is α in uncertain measure. Note that the expert's belief degree of x not belonging to ξ must be 1 − α due to the duality of uncertain measure. An expert's experimental data (x, α) is thus acquired from the domain expert. Repeating the above process, the following experts' experimental data are obtained by the questionnaire,
$$(x_1, \alpha_1), (x_2, \alpha_2), \cdots, (x_n, \alpha_n). \tag{9.169}$$

Empirical Membership Function

How do we determine the membership function for an uncertain set? The first method is the linear interpolation method developed by Liu [121]. Assume that we have obtained a set of experts' experimental data
$$(x_1, \alpha_1), (x_2, \alpha_2), \cdots, (x_n, \alpha_n). \tag{9.170}$$
Without loss of generality, we also assume x1 < x2 < ... < xn. Based on those experts' experimental data, an empirical membership function is determined as follows,
$$\mu(x) = \begin{cases}\alpha_i + \dfrac{(\alpha_{i+1}-\alpha_i)(x-x_i)}{x_{i+1}-x_i}, & \text{if } x_i \le x \le x_{i+1},\ 1 \le i < n\\[2mm] 0, & \text{otherwise.}\end{cases}$$

[Figure 9.13: Empirical Membership Function μ(x)]
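The empirical membership function is plain linear interpolation through the experts' experimental data. A minimal Python sketch (hypothetical helper name), here fed the "about 100km" data collected later in this section:

```python
import numpy as np

def empirical_membership(xs, alphas):
    """Empirical membership function from experts' experimental data
    (x1, a1), ..., (xn, an) with x1 < x2 < ... < xn; zero outside [x1, xn]."""
    xs, alphas = np.asarray(xs, float), np.asarray(alphas, float)
    return lambda x: np.where(
        (x >= xs[0]) & (x <= xs[-1]), np.interp(x, xs, alphas), 0.0)

mu = empirical_membership([80, 90, 95, 105, 110, 120], [0, 0.5, 1, 1, 0.5, 0])
print(mu(np.array([85.0, 100.0, 130.0])))  # [0.25, 1.0, 0.0]
```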

Principle of Least Squares

The principle of least squares was first employed to determine membership functions by Liu [121]. Assume that a membership function to be determined has a known functional form μ(x|θ) with an unknown parameter θ. In order to estimate the parameter θ, we may employ the principle of least squares, which minimizes the sum of the squares of the distances of the experts' experimental data to the membership function. If the experts' experimental data
$$(x_1, \alpha_1), (x_2, \alpha_2), \cdots, (x_n, \alpha_n) \tag{9.171}$$
are obtained, then we have
$$\min_{\theta}\ \sum_{i=1}^{n}\left(\mu(x_i \mid \theta) - \alpha_i\right)^2. \tag{9.172}$$
The optimal solution θ̂ of (9.172) is called the least squares estimate of θ, and then the least squares membership function is μ(x|θ̂).

Example 9.24: Assume that a membership function has a trapezoidal form (a, b, c, d). We also assume the following experts' experimental data,
$$(1, 0.15), (2, 0.45), (3, 0.90), (6, 0.85), (7, 0.60), (8, 0.20). \tag{9.173}$$
The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the least squares membership function has the trapezoidal form (0.6667, 3.3333, 5.6154, 8.6923).
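An equivalent fit can be sketched in Python with a general-purpose optimizer. This is not the book's toolbox; the helper names are ours, and Nelder-Mead on a nonsmooth objective may only reach a local minimum depending on the initial guess.

```python
import numpy as np
from scipy.optimize import minimize

xs = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
alphas = np.array([0.15, 0.45, 0.90, 0.85, 0.60, 0.20])

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function (a, b, c, d)."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def loss(theta):
    a, b, c, d = theta
    if not (a < b <= c < d):          # keep the trapezoid well-defined
        return 1e9
    return np.sum((trapezoid(xs, a, b, c, d) - alphas) ** 2)  # objective (9.172)

res = minimize(loss, x0=[0.0, 3.0, 6.0, 9.0], method="Nelder-Mead")
print(res.x)  # should land near (0.6667, 3.3333, 5.6154, 8.6923)
```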
What is "about 100km"?

Let us pay attention to the concept of "about 100km". When we are interested in what distances can be considered "about 100km", it is reasonable to regard such a concept as an uncertain set. In order to determine the membership function of "about 100km", a questionnaire survey was made for collecting experts' experimental data. The consultation process is as follows:

Q1: May I ask you what distances belong to "about 100km"? What do you think is the minimum distance?
A1: 80km. (an expert's experimental data (80, 0) is acquired)
Q2: What do you think is the maximum distance?
A2: 120km. (an expert's experimental data (120, 0) is acquired)
Q3: What distance do you think belongs to "about 100km"?
A3: 95km.
Q4: What is the belief degree that 95km belongs to "about 100km"?
A4: 1. (an expert's experimental data (95, 1) is acquired)
Q5: Is there another distance that belongs to "about 100km"?
A5: 105km.
Q6: What is the belief degree that 105km belongs to "about 100km"?
A6: 1. (an expert's experimental data (105, 1) is acquired)
Q7: Is there another distance that belongs to "about 100km"?
A7: 90km.
Q8: What is the belief degree that 90km belongs to "about 100km"?
A8: 0.5. (an expert's experimental data (90, 0.5) is acquired)
Q9: Is there another distance that belongs to "about 100km"?
A9: 110km.
Q10: What is the belief degree that 110km belongs to "about 100km"?
A10: 0.5. (an expert's experimental data (110, 0.5) is acquired)
Q11: Is there another distance that belongs to "about 100km"?
A11: No idea.

Until now six experts' experimental data (80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0) are acquired from the domain expert. Based on those experts' experimental data, an empirical membership function of "about 100km" is produced and shown by Figure 9.14.
[Figure 9.14: Empirical Membership Function of "about 100km", interpolating the data points (80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0)]

9.12 Bibliographic Notes

In order to model unsharp concepts like "young", "tall" and "most", the concept of uncertain set was first proposed by Liu [118] in 2010. As a key concept in uncertain set theory, the independence of uncertain sets was defined by Liu [127]. In addition, Liu [124] presented the concepts of membership function and inverse membership function so that a rigorous uncertain set theory was successfully founded. Liu [124] also provided a set operational law of uncertain sets via membership functions, and an arithmetic operational law via inverse membership functions.

The expected value of an uncertain set was defined by Liu [118]. Then Liu [120] gave a formula for calculating the expected value by the membership function, and Liu [124] provided a formula by the inverse membership function. Based on the expected value operator, Liu [121] presented the concepts of variance and distance between uncertain sets.

The concept of entropy was given by Liu [121], and the positive linearity of entropy was proved by Yao [226]. As an extension of entropy, Yao [226] proposed the concept of cross entropy for comparing a membership function against a reference membership function.

In order to determine membership functions, a questionnaire survey for collecting experts' experimental data was designed by Liu [121]. Based on experts' experimental data, Liu [121] also suggested the linear interpolation method and the principle of least squares to determine membership functions. When multiple domain experts are available, the Delphi method was introduced to uncertain statistics by Wang and Wang [208].

Chapter 10

Uncertain Logic
Uncertain logic is a methodology for calculating the truth values of uncertain
propositions via uncertain set theory. This chapter will introduce individual
feature data, uncertain quantifier, uncertain subject, uncertain predicate,
uncertain proposition, and truth value. Uncertain logic may provide a flexible
means for extracting linguistic summary from a collection of raw data.

10.1 Individual Feature Data

At first, we should have a universe A of individuals we are talking about. Without loss of generality, we may assume that A consists of n individuals and is represented by
$$A = \{a_1, a_2, \cdots, a_n\}. \tag{10.1}$$
In order to deal with the universe A, we should have feature data of all individuals a1, a2, ..., an. When we talk about "those days are warm", we should know the individual feature data of all days, for example,
$$A = \{22, 23, 25, 28, 30, 32, 36\} \tag{10.2}$$
whose elements are temperatures in centigrade. When we talk about "those students are young", we should know the individual feature data of all students, for example,
$$A = \{21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40\} \tag{10.3}$$
whose elements are ages in years. When we talk about "those sportsmen are tall", we should know the individual feature data of all sportsmen, for example,
$$A = \left\{\begin{matrix}175, 178, 178, 180, 183, 184, 186, 186\\ 188, 190, 192, 192, 193, 194, 195, 196\end{matrix}\right\} \tag{10.4}$$
whose elements are heights in centimeters.

Sometimes the individual feature data are represented by vectors rather than a single scalar. When we talk about "those young students are tall", we should know the individual feature data of all students, for example,
$$A = \left\{\begin{matrix}(24,185), (25,190), (26,184), (26,170), (27,187), (27,188)\\ (28,160), (30,190), (32,185), (33,176), (35,185), (36,188)\\ (38,164), (38,178), (39,182), (40,186), (42,165), (44,170)\end{matrix}\right\} \tag{10.5}$$
whose elements are ages and heights in years and centimeters, respectively.

10.2 Uncertain Quantifier

If we want to represent all individuals in the universe A, we use the universal quantifier (∀) and
$$\forall = \text{``for all''}. \tag{10.6}$$
If we want to represent some (at least one) individuals, we use the existential quantifier (∃) and
$$\exists = \text{``there exists at least one''}. \tag{10.7}$$
In addition to these two quantifiers, there are numerous imprecise quantifiers in human language, for example, almost all, almost none, many, several, some, most, a few, about half. This section will model them by the concept of uncertain quantifier.

Definition 10.1 (Liu [121]) An uncertain quantifier is an uncertain set representing the number of individuals.

Example 10.1: The universal quantifier (∀) on the universe A is a special uncertain quantifier,
$$\forall = \{n\} \tag{10.8}$$
whose membership function is
$$\lambda(x) = \begin{cases}1, & \text{if } x = n\\ 0, & \text{otherwise.}\end{cases} \tag{10.9}$$

Example 10.2: The existential quantifier (∃) on the universe A is a special uncertain quantifier,
$$\exists = \{1, 2, \cdots, n\} \tag{10.10}$$
whose membership function is
$$\lambda(x) = \begin{cases}0, & \text{if } x = 0\\ 1, & \text{otherwise.}\end{cases} \tag{10.11}$$

Example 10.3: The quantifier "there does not exist one" on the universe A is a special uncertain quantifier
$$Q = \{0\} \tag{10.12}$$
whose membership function is
$$\lambda(x) = \begin{cases}1, & \text{if } x = 0\\ 0, & \text{otherwise.}\end{cases} \tag{10.13}$$

Example 10.4: The quantifier "there exist exactly m" on the universe A is a special uncertain quantifier
$$Q = \{m\} \tag{10.14}$$
whose membership function is
$$\lambda(x) = \begin{cases}1, & \text{if } x = m\\ 0, & \text{otherwise.}\end{cases} \tag{10.15}$$

Example 10.5: The quantifier "there exist at least m" on the universe A is a special uncertain quantifier
$$Q = \{m, m+1, \cdots, n\} \tag{10.16}$$
whose membership function is
$$\lambda(x) = \begin{cases}1, & \text{if } m \le x \le n\\ 0, & \text{if } 0 \le x < m.\end{cases} \tag{10.17}$$

Example 10.6: The quantifier "there exist at most m" on the universe A is a special uncertain quantifier
$$Q = \{0, 1, 2, \cdots, m\} \tag{10.18}$$
whose membership function is
$$\lambda(x) = \begin{cases}1, & \text{if } 0 \le x \le m\\ 0, & \text{if } m < x \le n.\end{cases} \tag{10.19}$$

Example 10.7: The uncertain quantifier Q of "almost all" on the universe A may have a membership function
$$\lambda(x) = \begin{cases}0, & \text{if } 0 \le x \le n-5\\ (x-n+5)/3, & \text{if } n-5 \le x \le n-2\\ 1, & \text{if } n-2 \le x \le n.\end{cases} \tag{10.20}$$

[Figure 10.1: Membership Function of Quantifier "almost all"]

Example 10.8: The uncertain quantifier Q of "almost none" on the universe A may have a membership function
$$\lambda(x) = \begin{cases}1, & \text{if } 0 \le x \le 2\\ (5-x)/3, & \text{if } 2 \le x \le 5\\ 0, & \text{if } 5 \le x \le n.\end{cases} \tag{10.21}$$

[Figure 10.2: Membership Function of Quantifier "almost none"]

Example 10.9: The uncertain quantifier Q of "about 10" on the universe A may have a membership function
$$\lambda(x) = \begin{cases}0, & \text{if } 0 \le x \le 7\\ (x-7)/2, & \text{if } 7 \le x \le 9\\ 1, & \text{if } 9 \le x \le 11\\ (13-x)/2, & \text{if } 11 \le x \le 13\\ 0, & \text{if } 13 \le x \le n.\end{cases} \tag{10.22}$$

[Figure 10.3: Membership Function of Quantifier "about 10"]

Example 10.10: In many cases, it is more convenient for us to use a percentage than an absolute quantity. For example, we may use the uncertain quantifier Q of "about 70%". In this case, a possible membership function of Q is
$$\lambda(x) = \begin{cases}0, & \text{if } 0 \le x \le 0.6\\ 20(x-0.6), & \text{if } 0.6 \le x \le 0.65\\ 1, & \text{if } 0.65 \le x \le 0.75\\ 20(0.8-x), & \text{if } 0.75 \le x \le 0.8\\ 0, & \text{if } 0.8 \le x \le 1.\end{cases} \tag{10.23}$$

[Figure 10.4: Membership Function of Quantifier "about 70%"]


Definition 10.2 An uncertain quantifier is said to be unimodal if its membership function is unimodal.

Example 10.11: The uncertain quantifiers "almost all", "almost none", "about 10" and "about 70%" are unimodal.

Definition 10.3 An uncertain quantifier is said to be monotone if its membership function is monotone. Especially, an uncertain quantifier is said to be increasing if its membership function is increasing; and an uncertain quantifier is said to be decreasing if its membership function is decreasing.

The uncertain quantifiers "almost all" and "almost none" are monotone, but "about 10" and "about 70%" are not monotone. Note that both increasing and decreasing uncertain quantifiers are monotone. In addition, any monotone uncertain quantifier is unimodal.
Negated Quantifier

What is the negation of an uncertain quantifier? The following definition gives a formal answer.

Definition 10.4 Let Q be an uncertain quantifier. Then the negated quantifier ¬Q is the complement of Q in the sense of uncertain set, i.e.,
$$\neg Q = Q^c. \tag{10.24}$$

Example 10.12: Let ∀ = {n} be the universal quantifier. Then its negated quantifier is
$$\neg\forall = \{0, 1, 2, \cdots, n-1\}. \tag{10.25}$$

Example 10.13: Let ∃ = {1, 2, ..., n} be the existential quantifier. Then its negated quantifier is
$$\neg\exists = \{0\}. \tag{10.26}$$

Theorem 10.1 Let Q be an uncertain quantifier whose membership function is λ. Then the negated quantifier ¬Q has a membership function
$$\neg\lambda(x) = 1 - \lambda(x). \tag{10.27}$$
Proof: This theorem follows from the operational law of uncertain set immediately.

Example 10.14: Let Q be the uncertain quantifier "almost all" defined by (10.20). Then its negated quantifier ¬Q has a membership function
$$\neg\lambda(x) = \begin{cases}1, & \text{if } 0 \le x \le n-5\\ (n-x-2)/3, & \text{if } n-5 \le x \le n-2\\ 0, & \text{if } n-2 \le x \le n.\end{cases} \tag{10.28}$$

Example 10.15: Let Q be the uncertain quantifier "about 70%" defined by (10.23). Then its negated quantifier ¬Q has a membership function
$$\neg\lambda(x) = \begin{cases}1, & \text{if } 0 \le x \le 0.6\\ 20(0.65-x), & \text{if } 0.6 \le x \le 0.65\\ 0, & \text{if } 0.65 \le x \le 0.75\\ 20(x-0.75), & \text{if } 0.75 \le x \le 0.8\\ 1, & \text{if } 0.8 \le x \le 1.\end{cases} \tag{10.29}$$

[Figure 10.5: Membership Function of Negated Quantifier of "almost all"]

[Figure 10.6: Membership Function of Negated Quantifier of "about 70%"]


Theorem 10.2 Let Q be an uncertain quantifier. Then we have ¬¬Q = Q.
Proof: This theorem follows from ¬¬Q = (Q^c)^c = Q.

Theorem 10.3 If Q is a monotone uncertain quantifier, then ¬Q is also monotone. Especially, if Q is increasing, then ¬Q is decreasing; if Q is decreasing, then ¬Q is increasing.
Proof: Assume λ is the membership function of Q. Then ¬Q has a membership function 1 − λ(x). The theorem follows from this fact immediately.
Dual Quantifier

Definition 10.5 Let Q be an uncertain quantifier. Then the dual quantifier of Q is
$$Q^* = n - Q \tag{10.30}$$
where n is the cardinality of the universe A.

Remark 10.1: Note that Q and Q* are dependent uncertain sets such that
$$Q + Q^* \equiv n. \tag{10.31}$$

Example 10.16: Since ∀ = {n}, we immediately have ∀* = {0} = ¬∃. That is,
$$\forall^* = \neg\exists. \tag{10.32}$$

Example 10.17: Since ¬∀ = {0, 1, 2, ..., n−1}, we immediately have (¬∀)* = {1, 2, ..., n} = ∃. That is,
$$(\neg\forall)^* = \exists. \tag{10.33}$$

Example 10.18: Since ∃ = {1, 2, ..., n}, we have ∃* = {0, 1, 2, ..., n−1} = ¬∀. That is,
$$\exists^* = \neg\forall. \tag{10.34}$$

Example 10.19: Since ¬∃ = {0}, we immediately have (¬∃)* = {n} = ∀. That is,
$$(\neg\exists)^* = \forall. \tag{10.35}$$

Theorem 10.4 Let Q be an uncertain quantifier whose membership function is λ. Then the dual quantifier Q* has a membership function
$$\lambda^*(x) = \lambda(n-x) \tag{10.36}$$
where n is the cardinality of the universe A.
Proof: This theorem follows from the operational law of uncertain set immediately.

Example 10.20: Let Q be the uncertain quantifier "almost all" defined by (10.20). Then its dual quantifier Q* has a membership function
$$\lambda^*(x) = \begin{cases}1, & \text{if } 0 \le x \le 2\\ (5-x)/3, & \text{if } 2 \le x \le 5\\ 0, & \text{if } 5 \le x \le n.\end{cases} \tag{10.37}$$

Example 10.21: Let Q be the uncertain quantifier "about 70%" defined by (10.23). Since Q is a percentage, n is replaced by 1 in (10.36), and the dual quantifier Q* has a membership function
$$\lambda^*(x) = \begin{cases}0, & \text{if } 0 \le x \le 0.2\\ 20(x-0.2), & \text{if } 0.2 \le x \le 0.25\\ 1, & \text{if } 0.25 \le x \le 0.35\\ 20(0.4-x), & \text{if } 0.35 \le x \le 0.4\\ 0, & \text{if } 0.4 \le x \le 1.\end{cases} \tag{10.38}$$

[Figure 10.7: Membership Function of Dual Quantifier of "almost all"]

[Figure 10.8: Membership Function of Dual Quantifier of "about 70%"]


Theorem 10.5 Let Q be an uncertain quantifier. Then we have Q** = Q.
Proof: The theorem follows from Q** = n − Q* = n − (n − Q) = Q.

Theorem 10.6 If Q is a unimodal uncertain quantifier, then Q* is also unimodal. Especially, if Q is monotone, then Q* is monotone; if Q is increasing, then Q* is decreasing; if Q is decreasing, then Q* is increasing.
Proof: Assume λ is the membership function of Q. Then Q* has a membership function λ(n − x). The theorem follows from this fact immediately.

10.3 Uncertain Subject

Sometimes we are interested in a subset of the universe of individuals, for example, "warm days", "young students" and "tall sportsmen". This section will model them by the concept of uncertain subject.

Definition 10.6 (Liu [121]) An uncertain subject is an uncertain set containing some specified individuals in the universe.

Example 10.22: "Warm days are here again" is a statement in which "warm days" is an uncertain subject that is an uncertain set on the universe of all days, whose membership function may be defined by
$$\nu(x) = \begin{cases}0, & \text{if } x \le 15\\ (x-15)/3, & \text{if } 15 \le x \le 18\\ 1, & \text{if } 18 \le x \le 24\\ (28-x)/4, & \text{if } 24 \le x \le 28\\ 0, & \text{if } 28 \le x.\end{cases} \tag{10.39}$$

[Figure 10.9: Membership Function of Subject "warm days"]


Example 10.23: "Young students are tall" is a statement in which "young students" is an uncertain subject that is an uncertain set on the universe of all students, whose membership function may be defined by
$$\nu(x) = \begin{cases}0, & \text{if } x \le 15\\ (x-15)/5, & \text{if } 15 \le x \le 20\\ 1, & \text{if } 20 \le x \le 35\\ (45-x)/10, & \text{if } 35 \le x \le 45\\ 0, & \text{if } x \ge 45.\end{cases} \tag{10.40}$$

Example 10.24: "Tall students are heavy" is a statement in which "tall students" is an uncertain subject that is an uncertain set on the universe of all students, whose membership function may be defined by
$$\nu(x) = \begin{cases}0, & \text{if } x \le 180\\ (x-180)/5, & \text{if } 180 \le x \le 185\\ 1, & \text{if } 185 \le x \le 195\\ (200-x)/5, & \text{if } 195 \le x \le 200\\ 0, & \text{if } x \ge 200.\end{cases} \tag{10.41}$$
[Figure 10.10: Membership Function of Subject "young students"]

[Figure 10.11: Membership Function of Subject "tall students"]

Let S be an uncertain subject with membership function ν on the universe A = {a1, a2, ..., an} of individuals. Then S is an uncertain set on A such that
$$\mathcal{M}\{a_i \in S\} = \nu(a_i), \quad i = 1, 2, \cdots, n. \tag{10.42}$$

In many cases, we are interested only in those individuals a with ν(a) ≥ β, where β is a confidence level. Thus we have a subuniverse
$$S_\beta = \{a \in A \mid \nu(a) \ge \beta\} \tag{10.43}$$
that will serve as a new universe of individuals we are talking about; the individuals outside S_β will be ignored at the confidence level β.

Theorem 10.7 Let β1 and β2 be confidence levels with β1 > β2, and let S_{β1} and S_{β2} be the subuniverses with confidence levels β1 and β2, respectively. Then
$$S_{\beta_1} \subset S_{\beta_2}. \tag{10.44}$$
That is, S_β is a decreasing sequence of sets with respect to β.

Proof: If a ∈ S_{β1}, then ν(a) ≥ β1 > β2. Thus a ∈ S_{β2}. It follows that S_{β1} ⊂ S_{β2}. Note that S_{β1} and S_{β2} may be empty.

10.4 Uncertain Predicate

There are numerous imprecise predicates in human language, for example, warm, cold, hot, young, old, tall, small, and big. This section will model them by the concept of uncertain predicate.

Definition 10.7 (Liu [121]) An uncertain predicate is an uncertain set representing a property that the individuals have in common.

Example 10.25: "Today is warm" is a statement in which "warm" is an uncertain predicate that may be represented by a membership function
$$\mu(x) = \begin{cases}0, & \text{if } x \le 15\\ (x-15)/3, & \text{if } 15 \le x \le 18\\ 1, & \text{if } 18 \le x \le 24\\ (28-x)/4, & \text{if } 24 \le x \le 28\\ 0, & \text{if } 28 \le x.\end{cases} \tag{10.45}$$

[Figure 10.12: Membership Function of Predicate "warm"]

Example 10.26: "John is young" is a statement in which "young" is an uncertain predicate that may be represented by a membership function
$$\mu(x) = \begin{cases}0, & \text{if } x \le 15\\ (x-15)/5, & \text{if } 15 \le x \le 20\\ 1, & \text{if } 20 \le x \le 35\\ (45-x)/10, & \text{if } 35 \le x \le 45\\ 0, & \text{if } x \ge 45.\end{cases} \tag{10.46}$$

[Figure 10.13: Membership Function of Predicate "young"]


Example 10.27: "Tom is tall" is a statement in which "tall" is an uncertain predicate that may be represented by a membership function
$$\mu(x) = \begin{cases}0, & \text{if } x \le 180\\ (x-180)/5, & \text{if } 180 \le x \le 185\\ 1, & \text{if } 185 \le x \le 195\\ (200-x)/5, & \text{if } 195 \le x \le 200\\ 0, & \text{if } x \ge 200.\end{cases} \tag{10.47}$$
[Figure 10.14: Membership Function of Predicate "tall"]

Negated Predicate

Definition 10.8 Let P be an uncertain predicate. Then its negated predicate ¬P is the complement of P in the sense of uncertain set, i.e.,
$$\neg P = P^c. \tag{10.48}$$

Theorem 10.8 Let P be an uncertain predicate with membership function μ. Then its negated predicate ¬P has a membership function
$$\neg\mu(x) = 1 - \mu(x). \tag{10.49}$$

Proof: The theorem follows from the definition of negated predicate and the operational law of uncertain set immediately.

Example 10.28: Let P be the uncertain predicate "warm" defined by (10.45). Then its negated predicate ¬P has a membership function
$$\neg\mu(x) = \begin{cases}1, & \text{if } x \le 15\\ (18-x)/3, & \text{if } 15 \le x \le 18\\ 0, & \text{if } 18 \le x \le 24\\ (x-24)/4, & \text{if } 24 \le x \le 28\\ 1, & \text{if } 28 \le x.\end{cases} \tag{10.50}$$
[Figure 10.15: Membership Function of Negated Predicate of "warm"]


Example 10.29: Let P be the uncertain predicate "young" defined by (10.46). Then its negated predicate ¬P has a membership function
$$\neg\mu(x) = \begin{cases}1, & \text{if } x \le 15\\ (20-x)/5, & \text{if } 15 \le x \le 20\\ 0, & \text{if } 20 \le x \le 35\\ (x-35)/10, & \text{if } 35 \le x \le 45\\ 1, & \text{if } x \ge 45.\end{cases} \tag{10.51}$$

Example 10.30: Let P be the uncertain predicate "tall" defined by (10.47). Then its negated predicate ¬P has a membership function
$$\neg\mu(x) = \begin{cases}1, & \text{if } x \le 180\\ (185-x)/5, & \text{if } 180 \le x \le 185\\ 0, & \text{if } 185 \le x \le 195\\ (x-195)/5, & \text{if } 195 \le x \le 200\\ 1, & \text{if } x \ge 200.\end{cases} \tag{10.52}$$

[Figure 10.16: Membership Function of Negated Predicate of "young"]

[Figure 10.17: Membership Function of Negated Predicate of "tall"]


Theorem 10.9 Let P be an uncertain predicate. Then we have ¬¬P = P.
Proof: The theorem follows from ¬¬P = (P^c)^c = P.

10.5 Uncertain Proposition

Definition 10.9 (Liu [121]) Assume that Q is an uncertain quantifier, S is an uncertain subject, and P is an uncertain predicate. Then the triplet
$$(Q, S, P) = \text{``}Q\text{ of }S\text{ are }P\text{''} \tag{10.53}$$
is called an uncertain proposition.

Remark 10.2: Let A be the universe of individuals. Then (Q, A, P) is a special uncertain proposition because A itself is a special uncertain subject.

Remark 10.3: Let ∀ be the universal quantifier. Then (∀, A, P) is an uncertain proposition representing "all of A are P".

Remark 10.4: Let ∃ be the existential quantifier. Then (∃, A, P) is an uncertain proposition representing "at least one of A is P".

Example 10.31: "Almost all students are young" is an uncertain proposition in which the uncertain quantifier Q is "almost all", the uncertain subject S is "students" (the universe itself) and the uncertain predicate P is "young".

Example 10.32: "Most young students are tall" is an uncertain proposition in which the uncertain quantifier Q is "most", the uncertain subject S is "young students" and the uncertain predicate P is "tall".

Theorem 10.10 (Liu [121], Logical Equivalence Theorem) Let (Q, S, P) be an uncertain proposition. Then
$$(Q^*, S, \neg P) = (Q, S, P) \tag{10.54}$$
where Q* is the dual quantifier of Q and ¬P is the negated predicate of P.

Proof: Note that (Q*, S, ¬P) represents "Q* of S are ¬P". In fact, the statement "Q* of S are ¬P" implies "Q** of S are P". Since Q** = Q, we obtain (Q, S, P). Conversely, the statement "Q of S are P" implies "Q* of S are not P", i.e., (Q*, S, ¬P). Thus (10.54) is verified.

Example 10.33: When Q = ∀, we have Q* = ¬∃. If S = A, then (10.54) becomes the classical equivalence
$$(\neg\exists, A, \neg P) = (\forall, A, P). \tag{10.55}$$

Example 10.34: When Q = ∃, we have Q* = ¬∀. If S = A, then (10.54) becomes the classical equivalence
$$(\neg\forall, A, \neg P) = (\exists, A, P). \tag{10.56}$$

10.6 Truth Value

Let (Q, S, P) be an uncertain proposition. The truth value of (Q, S, P) should be the uncertain measure that "Q of S are P". That is,
$$T(Q, S, P) = \mathcal{M}\{Q \text{ of } S \text{ are } P\}. \tag{10.57}$$
However, it is impossible for us to deduce the value of M{Q of S are P} from the information of Q, S and P within the framework of uncertain set theory. Thus we need an additional formula to compose Q, S and P.

Definition 10.10 (Liu [121]) Let (Q, S, P) be an uncertain proposition in which Q is a unimodal uncertain quantifier with membership function λ, S is an uncertain subject with membership function ν, and P is an uncertain predicate with membership function μ. Then the truth value of (Q, S, P) with respect to the universe A is
$$T(Q, S, P) = \sup_{0\le\beta\le 1}\left(\beta \wedge \sup_{K\in\mathcal{K}_\beta}\inf_{a\in K}\mu(a) \wedge \sup_{K\in\mathcal{K}^*_\beta}\inf_{a\in K}\neg\mu(a)\right) \tag{10.58}$$
where
$$\mathcal{K}_\beta = \{K \subset S_\beta \mid \lambda(|K|) \ge \beta\}, \tag{10.59}$$
$$\mathcal{K}^*_\beta = \{K \subset S_\beta \mid \lambda(|S_\beta| - |K|) \ge \beta\}, \tag{10.60}$$
$$S_\beta = \{a \in A \mid \nu(a) \ge \beta\}. \tag{10.61}$$

Remark 10.5: Keep in mind that the truth value formula (10.58) is vacuous if the individual feature data of the universe A are not available.

Remark 10.6: The symbol |K| represents the cardinality of the set K. For example, |∅| = 0 and |{2, 5, 6}| = 3.

Remark 10.7: Note that ¬μ is the membership function of the negated predicate of P, and
$$\neg\mu(a) = 1 - \mu(a). \tag{10.62}$$

Remark 10.8: When the subset K of individuals becomes an empty set ∅, we will define
$$\inf_{a\in\emptyset}\mu(a) = \inf_{a\in\emptyset}\neg\mu(a) = 1. \tag{10.63}$$

Remark 10.9: If Q is an uncertain percentage rather than an absolute quantity, then 𝒦_β and 𝒦*_β are defined by
$$\mathcal{K}_\beta = \left\{K \subset S_\beta \,\middle|\, \lambda\!\left(\frac{|K|}{|S_\beta|}\right) \ge \beta\right\}, \tag{10.64}$$
$$\mathcal{K}^*_\beta = \left\{K \subset S_\beta \,\middle|\, \lambda\!\left(1 - \frac{|K|}{|S_\beta|}\right) \ge \beta\right\}. \tag{10.65}$$

Remark 10.10: If the uncertain subject S degenerates to the universe A, then the truth value of (Q, A, P) is
$$T(Q, A, P) = \sup_{0\le\beta\le 1}\left(\beta \wedge \sup_{K\in\mathcal{K}_\beta}\inf_{a\in K}\mu(a) \wedge \sup_{K\in\mathcal{K}^*_\beta}\inf_{a\in K}\neg\mu(a)\right) \tag{10.66}$$
where
$$\mathcal{K}_\beta = \{K \subset A \mid \lambda(|K|) \ge \beta\}, \tag{10.67}$$
$$\mathcal{K}^*_\beta = \{K \subset A \mid \lambda(|A| - |K|) \ge \beta\}. \tag{10.68}$$

Exercise 10.1: If the uncertain quantifier Q = ∀ and the uncertain subject S = A, then for any β > 0, we have
$$\mathcal{K}_\beta = \{A\}, \qquad \mathcal{K}^*_\beta = \{\emptyset\}. \tag{10.69}$$
Show that
$$T(\forall, A, P) = \inf_{a\in A}\mu(a). \tag{10.70}$$

Exercise 10.2: If the uncertain quantifier Q = ∃ and the uncertain subject S = A, then for any β > 0, we have
$$\mathcal{K}_\beta = \{\text{any nonempty subsets of } A\}, \tag{10.71}$$
$$\mathcal{K}^*_\beta = \{\text{any proper subsets of } A\}. \tag{10.72}$$
Note that 𝒦_β contains A but 𝒦*_β does not. Show that
$$T(\exists, A, P) = \sup_{a\in A}\mu(a). \tag{10.73}$$

Exercise 10.3: If the uncertain quantifier Q = ¬∀ and the uncertain subject S = A, then for any β > 0, we have
$$\mathcal{K}_\beta = \{\text{any proper subsets of } A\}, \tag{10.74}$$
$$\mathcal{K}^*_\beta = \{\text{any nonempty subsets of } A\}. \tag{10.75}$$
Show that
$$T(\neg\forall, A, P) = 1 - \inf_{a\in A}\mu(a). \tag{10.76}$$

Exercise 10.4: If the uncertain quantifier Q = ¬∃ and the uncertain subject S = A, then for any β > 0, we have
$$\mathcal{K}_\beta = \{\emptyset\}, \qquad \mathcal{K}^*_\beta = \{A\}. \tag{10.77}$$
Show that
$$T(\neg\exists, A, P) = 1 - \sup_{a\in A}\mu(a). \tag{10.78}$$

Theorem 10.11 (Liu [121], Truth Value Theorem) Let (Q, S, P) be an uncertain proposition in which Q is a unimodal uncertain quantifier with membership function λ, S is an uncertain subject with membership function ν, and P is an uncertain predicate with membership function μ. Then the truth value of (Q, S, P) is
$$T(Q, S, P) = \sup_{0\le\beta\le 1}\left(\beta \wedge \mu(k_\beta) \wedge \neg\mu(k^*_\beta)\right) \tag{10.79}$$
where
$$k_\beta = \min\{x \mid \lambda(x) \ge \beta\}, \tag{10.80}$$
$$\mu(k_\beta) = \text{the } k_\beta\text{-th largest value of } \{\mu(a_i) \mid a_i \in S_\beta\}, \tag{10.81}$$
$$k^*_\beta = |S_\beta| - \max\{x \mid \lambda(x) \ge \beta\}, \tag{10.82}$$
$$\neg\mu(k^*_\beta) = \text{the } k^*_\beta\text{-th largest value of } \{1-\mu(a_i) \mid a_i \in S_\beta\}. \tag{10.83}$$

Proof: Since the supremum is achieved at a subset with minimum cardinality, we have
$$\sup_{K\in\mathcal{K}_\beta}\inf_{a\in K}\mu(a) = \sup_{K\subset S_\beta,\,|K|=k_\beta}\inf_{a\in K}\mu(a) = \mu(k_\beta),$$
$$\sup_{K\in\mathcal{K}^*_\beta}\inf_{a\in K}\neg\mu(a) = \sup_{K\subset S_\beta,\,|K|=k^*_\beta}\inf_{a\in K}\neg\mu(a) = \neg\mu(k^*_\beta).$$
The theorem is thus verified. Please note that μ(0) = ¬μ(0) = 1.

Remark 10.11: If Q is an uncertain percentage, then k_β and k*_β are defined by
$$k_\beta = \min\left\{x \,\middle|\, \lambda\!\left(\frac{x}{|S_\beta|}\right) \ge \beta\right\}, \tag{10.84}$$
$$k^*_\beta = |S_\beta| - \max\left\{x \,\middle|\, \lambda\!\left(\frac{x}{|S_\beta|}\right) \ge \beta\right\}. \tag{10.85}$$

Remark 10.12: If the uncertain subject S degenerates to the universe of individuals A = {a1, a2, ..., an}, then the truth value of (Q, A, P) is
$$T(Q, A, P) = \sup_{0\le\beta\le 1}\left(\beta \wedge \mu(k_\beta) \wedge \neg\mu(k^*_\beta)\right) \tag{10.86}$$
where
$$k_\beta = \min\{x \mid \lambda(x) \ge \beta\}, \tag{10.87}$$
$$\mu(k_\beta) = \text{the } k_\beta\text{-th largest value of } \mu(a_1), \mu(a_2), \cdots, \mu(a_n), \tag{10.88}$$
$$k^*_\beta = n - \max\{x \mid \lambda(x) \ge \beta\}, \tag{10.89}$$
$$\neg\mu(k^*_\beta) = \text{the } k^*_\beta\text{-th largest value of } 1-\mu(a_1), \cdots, 1-\mu(a_n). \tag{10.90}$$

Exercise 10.5: If the uncertain quantifier Q = {m, m+1, ..., n} (i.e., "there exist at least m") with m ≥ 1, then we have k_β ≡ m and k*_β ≡ 0. Show that
$$T(Q, A, P) = \text{the } m\text{-th largest value of } \mu(a_1), \mu(a_2), \cdots, \mu(a_n). \tag{10.91}$$

Exercise 10.6: If the uncertain quantifier Q = {0, 1, 2, ..., m} (i.e., "there exist at most m") with m < n, then we have k_β ≡ 0 and k*_β ≡ n − m. Show that
$$T(Q, A, P) = \text{the } (n-m)\text{-th largest value of } 1-\mu(a_1), 1-\mu(a_2), \cdots, 1-\mu(a_n).$$

10.7 Algorithm

In order to calculate T(Q, S, P) based on the truth value formula (10.58), a truth value algorithm is given as follows:

Step 1. Set β = 1 and Δ = 0.01 (a predetermined precision).
Step 2. Calculate S_β = {a ∈ A | ν(a) ≥ β} and k_β = min{x | λ(x) ≥ β} as well as k*_β = |S_β| − max{x | λ(x) ≥ β}.
Step 3. If μ(k_β) ∧ ¬μ(k*_β) < β, then set β ← β − Δ and go to Step 2. Otherwise, output the truth value T = β and stop.

Remark 10.13: If Q is an uncertain percentage, then k_β and k*_β in the truth value algorithm are replaced with (10.84) and (10.85), respectively.
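The algorithm transcribes almost directly into Python. The sketch below is ours (hypothetical helper names, not the toolbox referenced in the examples); it handles absolute quantifiers with λ evaluated on the integers 0, 1, ..., |S_β|, and it returns 1 for an empty selection as required by Remark 10.8. Percentage quantifiers would need the (10.84)/(10.85) variant of Step 2.

```python
import numpy as np

def kth_largest(values, k):
    """k-th largest element of values; returns 1.0 when k == 0 (Remark 10.8)."""
    if k == 0:
        return 1.0
    return sorted(values, reverse=True)[k - 1]

def truth_value(lam, nu, mu, universe, delta=0.01):
    """Truth value T(Q, S, P) by the algorithm of Section 10.7.
    lam: quantifier membership on {0, 1, ..., n}; nu, mu: subject and
    predicate membership functions on individuals."""
    beta = 1.0
    while beta > 0:
        s_beta = [a for a in universe if nu(a) >= beta]
        qualified = [x for x in range(len(s_beta) + 1) if lam(x) >= beta]
        if qualified:
            k = min(qualified)
            k_star = len(s_beta) - max(qualified)
            mus = [mu(a) for a in s_beta]
            t = min(kth_largest(mus, k), kth_largest([1 - m for m in mus], k_star))
            if t >= beta:
                return beta
        beta -= delta
    return 0.0

# Example 10.35 below: "two or three days are warm", Q = {2, 3}
days = [22, 23, 25, 28, 30, 32, 36]
warm = lambda x: float(np.interp(x, [15, 18, 24, 28], [0, 1, 1, 0]))
lam = lambda x: 1.0 if x in (2, 3) else 0.0
print(truth_value(lam, lambda a: 1.0, warm, days))  # 1.0, matching (10.95)
```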
Example 10.35: Assume that the daily temperatures of some week from Monday to Sunday are
$$22, 23, 25, 28, 30, 32, 36 \tag{10.92}$$
in centigrade, respectively. Consider an uncertain proposition
$$(Q, A, P) = \text{``two or three days are warm''}. \tag{10.93}$$
Note that the uncertain quantifier is Q = {2, 3}. We also suppose the uncertain predicate P = "warm" has a membership function
$$\mu(x) = \begin{cases}0, & \text{if } x \le 15\\ (x-15)/3, & \text{if } 15 \le x \le 18\\ 1, & \text{if } 18 \le x \le 24\\ (28-x)/4, & \text{if } 24 \le x \le 28\\ 0, & \text{if } 28 \le x.\end{cases} \tag{10.94}$$
It is clear that Monday and Tuesday are warm with truth value 1, and Wednesday is warm with truth value 0.75. But Thursday to Sunday are not warm at all (in fact, they are hot). Intuitively, the uncertain proposition "two or three days are warm" should be completely true. The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that the truth value is
$$T(\text{``two or three days are warm''}) = 1. \tag{10.95}$$
This is an intuitively expected result. In addition, we also have
$$T(\text{``two days are warm''}) = 0.25, \tag{10.96}$$
$$T(\text{``three days are warm''}) = 0.75. \tag{10.97}$$


Example 10.36: Assume that in a class there are 15 students whose ages are
$$21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40 \tag{10.98}$$
in years. Consider an uncertain proposition
$$(Q, A, P) = \text{``almost all students are young''}. \tag{10.99}$$
Suppose the uncertain quantifier Q = "almost all" has a membership function
$$\lambda(x) = \begin{cases}0, & \text{if } 0 \le x \le 10\\ (x-10)/3, & \text{if } 10 \le x \le 13\\ 1, & \text{if } 13 \le x \le 15,\end{cases} \tag{10.100}$$
and the uncertain predicate P = "young" has a membership function
$$\mu(x) = \begin{cases}0, & \text{if } x \le 15\\ (x-15)/5, & \text{if } 15 \le x \le 20\\ 1, & \text{if } 20 \le x \le 35\\ (45-x)/10, & \text{if } 35 \le x \le 45\\ 0, & \text{if } x \ge 45.\end{cases} \tag{10.101}$$
The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that the uncertain proposition has a truth value
$$T(\text{``almost all students are young''}) = 0.9. \tag{10.102}$$

Example 10.37: Assume that in a team there are 16 sportsmen whose heights are
$$\begin{matrix}175, 178, 178, 180, 183, 184, 186, 186\\ 188, 190, 192, 192, 193, 194, 195, 196\end{matrix} \tag{10.103}$$
in centimeters. Consider an uncertain proposition
$$(Q, A, P) = \text{``about 70\% of sportsmen are tall''}. \tag{10.104}$$
Suppose the uncertain quantifier Q = "about 70%" has a membership function
$$\lambda(x) = \begin{cases}0, & \text{if } 0 \le x \le 0.6\\ 20(x-0.6), & \text{if } 0.6 \le x \le 0.65\\ 1, & \text{if } 0.65 \le x \le 0.75\\ 20(0.8-x), & \text{if } 0.75 \le x \le 0.8\\ 0, & \text{if } 0.8 \le x \le 1\end{cases} \tag{10.105}$$
and the uncertain predicate P = "tall" has a membership function
$$\mu(x) = \begin{cases}0, & \text{if } x \le 180\\ (x-180)/5, & \text{if } 180 \le x \le 185\\ 1, & \text{if } 185 \le x \le 195\\ (200-x)/5, & \text{if } 195 \le x \le 200\\ 0, & \text{if } x \ge 200.\end{cases} \tag{10.106}$$
The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that the uncertain proposition has a truth value
$$T(\text{``about 70\% of sportsmen are tall''}) = 0.8. \tag{10.107}$$

Example 10.38: Assume that in a class there are 18 students whose ages and heights are
$$\begin{matrix}(24,185), (25,190), (26,184), (26,170), (27,187), (27,188)\\ (28,160), (30,190), (32,185), (33,176), (35,185), (36,188)\\ (38,164), (38,178), (39,182), (40,186), (42,165), (44,170)\end{matrix} \tag{10.108}$$
in years and centimeters. Consider an uncertain proposition
$$(Q, S, P) = \text{``most young students are tall''}. \tag{10.109}$$
Suppose the uncertain quantifier (percentage) Q = "most" has a membership function
$$\lambda(x) = \begin{cases}0, & \text{if } 0 \le x \le 0.7\\ 20(x-0.7), & \text{if } 0.7 \le x \le 0.75\\ 1, & \text{if } 0.75 \le x \le 0.85\\ 20(0.9-x), & \text{if } 0.85 \le x \le 0.9\\ 0, & \text{if } 0.9 \le x \le 1.\end{cases} \tag{10.110}$$
Note that each individual a is described by a feature data (y, z), where y represents ages and z represents heights. For this case, the uncertain subject S = "young students" has a membership function
$$\nu(a) = \nu(y, z) = \begin{cases}0, & \text{if } y \le 15\\ (y-15)/5, & \text{if } 15 \le y \le 20\\ 1, & \text{if } 20 \le y \le 35\\ (45-y)/10, & \text{if } 35 \le y \le 45\\ 0, & \text{if } y \ge 45\end{cases} \tag{10.111}$$
and the uncertain predicate P = "tall" has a membership function
$$\mu(a) = \mu(y, z) = \begin{cases}0, & \text{if } z \le 180\\ (z-180)/5, & \text{if } 180 \le z \le 185\\ 1, & \text{if } 185 \le z \le 195\\ (200-z)/5, & \text{if } 195 \le z \le 200\\ 0, & \text{if } z \ge 200.\end{cases} \tag{10.112}$$
The truth value algorithm yields that the uncertain proposition has a truth value
$$T(\text{``most young students are tall''}) = 0.8. \tag{10.113}$$

10.8  Linguistic Summarizer

A linguistic summary is a human-language statement that is concise and easy to understand. For example, "most young students are tall" is a linguistic summary of students' ages and heights. Thus a linguistic summary is a special uncertain proposition whose uncertain quantifier, uncertain subject and uncertain predicate are linguistic terms. Uncertain logic provides a flexible means for extracting a linguistic summary from a collection of raw data.

What inputs does uncertain logic need? First, we should have some raw data (i.e., the individual feature data),

$$A = \{a_1, a_2, \ldots, a_n\}. \tag{10.114}$$

Next, we should have some linguistic terms to represent quantifiers, for example, "most" and "all". Denote them by a collection of uncertain quantifiers,

$$\mathcal{Q} = \{Q_1, Q_2, \ldots, Q_m\}. \tag{10.115}$$

Then, we should have some linguistic terms to represent subjects, for example, "young students" and "old students". Denote them by a collection of uncertain subjects,

$$\mathcal{S} = \{S_1, S_2, \ldots, S_n\}. \tag{10.116}$$

Last, we should have some linguistic terms to represent predicates, for example, "short" and "tall". Denote them by a collection of uncertain predicates,

$$\mathcal{P} = \{P_1, P_2, \ldots, P_k\}. \tag{10.117}$$

One problem of data mining is to choose an uncertain quantifier $Q\in\mathcal{Q}$, an uncertain subject $S\in\mathcal{S}$ and an uncertain predicate $P\in\mathcal{P}$ such that the truth value of the linguistic summary "Q of S are P" to be extracted is at least $\alpha$, i.e.,

$$T(Q, S, P)\ge\alpha \tag{10.118}$$

for the universe $A = \{a_1, a_2, \ldots, a_n\}$, where $\alpha$ is a confidence level. In order to solve this problem, Liu [121] proposed the following linguistic summarizer,

$$\begin{cases} \text{Find } Q, S \text{ and } P\\ \text{subject to:}\\ \quad Q\in\mathcal{Q}\\ \quad S\in\mathcal{S}\\ \quad P\in\mathcal{P}\\ \quad T(Q, S, P)\ge\alpha. \end{cases} \tag{10.119}$$

Each solution $(Q^*, S^*, P^*)$ of the linguistic summarizer (10.119) produces a linguistic summary "$Q^*$ of $S^*$ are $P^*$".
Example 10.39: Assume that in a class there are 18 students whose ages and heights are

(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)   (10.120)

in years and centimeters. Suppose we have three linguistic terms "about half", "most" and "all" as uncertain quantifiers whose membership functions are

$$\lambda_{half}(x)=\begin{cases} 0, & \text{if } 0\le x\le 0.4\\ 20(x-0.4), & \text{if } 0.4\le x\le 0.45\\ 1, & \text{if } 0.45\le x\le 0.55\\ 20(0.6-x), & \text{if } 0.55\le x\le 0.6\\ 0, & \text{if } 0.6\le x\le 1, \end{cases} \tag{10.121}$$

$$\lambda_{most}(x)=\begin{cases} 0, & \text{if } 0\le x\le 0.7\\ 20(x-0.7), & \text{if } 0.7\le x\le 0.75\\ 1, & \text{if } 0.75\le x\le 0.85\\ 20(0.9-x), & \text{if } 0.85\le x\le 0.9\\ 0, & \text{if } 0.9\le x\le 1, \end{cases} \tag{10.122}$$

$$\lambda_{all}(x)=\begin{cases} 1, & \text{if } x=1\\ 0, & \text{if } 0\le x<1, \end{cases} \tag{10.123}$$

respectively. Denote the collection of uncertain quantifiers by

$$\mathcal{Q} = \{\text{about half}, \text{most}, \text{all}\}. \tag{10.124}$$

We also have three linguistic terms "young students", "middle-aged students" and "old students" as uncertain subjects whose membership functions are

$$\mu_{young}(a)=\mu_{young}(y,z)=\begin{cases} 0, & \text{if } y\le 15\\ (y-15)/5, & \text{if } 15\le y\le 20\\ 1, & \text{if } 20\le y\le 35\\ (45-y)/10, & \text{if } 35\le y\le 45\\ 0, & \text{if } y\ge 45, \end{cases} \tag{10.125}$$

$$\mu_{middle}(a)=\mu_{middle}(y,z)=\begin{cases} 0, & \text{if } y\le 40\\ (y-40)/5, & \text{if } 40\le y\le 45\\ 1, & \text{if } 45\le y\le 55\\ (60-y)/5, & \text{if } 55\le y\le 60\\ 0, & \text{if } y\ge 60, \end{cases} \tag{10.126}$$

$$\mu_{old}(a)=\mu_{old}(y,z)=\begin{cases} 0, & \text{if } y\le 55\\ (y-55)/5, & \text{if } 55\le y\le 60\\ 1, & \text{if } 60\le y\le 80\\ (85-y)/5, & \text{if } 80\le y\le 85\\ 0, & \text{if } y\ge 85, \end{cases} \tag{10.127}$$

respectively. Denote the collection of uncertain subjects by

$$\mathcal{S} = \{\text{young students}, \text{middle-aged students}, \text{old students}\}. \tag{10.128}$$
Finally, we suppose that there are two linguistic terms "short" and "tall" as uncertain predicates whose membership functions are

$$\nu_{short}(a)=\nu_{short}(y,z)=\begin{cases} 0, & \text{if } z\le 145\\ (z-145)/5, & \text{if } 145\le z\le 150\\ 1, & \text{if } 150\le z\le 155\\ (160-z)/5, & \text{if } 155\le z\le 160\\ 0, & \text{if } z\ge 160, \end{cases} \tag{10.129}$$

$$\nu_{tall}(a)=\nu_{tall}(y,z)=\begin{cases} 0, & \text{if } z\le 180\\ (z-180)/5, & \text{if } 180\le z\le 185\\ 1, & \text{if } 185\le z\le 195\\ (200-z)/5, & \text{if } 195\le z\le 200\\ 0, & \text{if } z\ge 200, \end{cases} \tag{10.130}$$

respectively. Denote the collection of uncertain predicates by

$$\mathcal{P} = \{\text{short}, \text{tall}\}. \tag{10.131}$$

We would like to extract an uncertain quantifier $Q\in\mathcal{Q}$, an uncertain subject $S\in\mathcal{S}$ and an uncertain predicate $P\in\mathcal{P}$ such that the truth value of the linguistic summary "Q of S are P" to be extracted is at least 0.8, i.e.,

$$T(Q, S, P)\ge 0.8 \tag{10.132}$$

where 0.8 is a predetermined confidence level. The linguistic summarizer (10.119) yields

$$Q^* = \text{most},\quad S^* = \text{young students},\quad P^* = \text{tall}$$

and then extracts the linguistic summary "most young students are tall".

10.9  Bibliographic Notes

Based on uncertain set theory, uncertain logic was designed by Liu [121] in 2011 for dealing with human language by using the truth value formula for uncertain propositions. As an application of uncertain logic, Liu [121] also proposed a linguistic summarizer that provides a means for extracting linguistic summaries from collections of raw data.

Chapter 11

Uncertain Inference

Uncertain inference is a process of deriving consequences from human knowledge via uncertain set theory. This chapter will introduce a family of uncertain inference rules, uncertain systems, and uncertain control with an application to an inverted pendulum system.

11.1  Uncertain Inference Rule

Let X and Y be two concepts. It is assumed that we only have a single if-then rule,

"if X is $\xi$ then Y is $\eta$"  (11.1)

where $\xi$ and $\eta$ are two uncertain sets. We first introduce the following inference rule.

Inference Rule 11.1 (Liu [118]) Let X and Y be two concepts. Assume a rule "if X is an uncertain set $\xi$ then Y is an uncertain set $\eta$". From X is a constant a we infer that Y is an uncertain set

$$\eta^* = \eta|_{a\in\xi} \tag{11.2}$$

which is the conditional uncertain set of $\eta$ given $a\in\xi$. The inference rule is represented by

$$\begin{aligned} &\text{Rule: If } X \text{ is } \xi \text{ then } Y \text{ is } \eta\\ &\text{From: } X \text{ is a constant } a\\ &\text{Infer: } Y \text{ is } \eta^* = \eta|_{a\in\xi} \end{aligned} \tag{11.3}$$
Theorem 11.1 Let $\xi$ and $\eta$ be independent uncertain sets with membership functions $\mu$ and $\nu$, respectively. If X is a constant a, then the inference rule 11.1 yields that $\eta^*$ has a membership function

$$\nu^*(y)=\begin{cases} \dfrac{\nu(y)}{\mu(a)}, & \text{if } \nu(y)<\mu(a)/2\\[1ex] \dfrac{\nu(y)+\mu(a)-1}{\mu(a)}, & \text{if } \nu(y)>1-\mu(a)/2\\[1ex] 0.5, & \text{otherwise}. \end{cases} \tag{11.4}$$

Proof: It follows from the inference rule 11.1 that $\eta^*$ has a membership function

$$\nu^*(y)=M\{y\in\eta \mid a\in\xi\}.$$

By using the definition of conditional uncertainty, we have

$$M\{y\in\eta \mid a\in\xi\}=\begin{cases} \dfrac{M\{y\in\eta\}}{M\{a\in\xi\}}, & \text{if } \dfrac{M\{y\in\eta\}}{M\{a\in\xi\}}<0.5\\[1ex] 1-\dfrac{M\{y\notin\eta\}}{M\{a\in\xi\}}, & \text{if } \dfrac{M\{y\notin\eta\}}{M\{a\in\xi\}}<0.5\\[1ex] 0.5, & \text{otherwise}. \end{cases}$$

The equation (11.4) follows from $M\{y\in\eta\}=\nu(y)$, $M\{y\notin\eta\}=1-\nu(y)$ and $M\{a\in\xi\}=\mu(a)$ immediately. The theorem is proved.
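Formula (11.4) is easy to evaluate pointwise. The following Python sketch (all names are illustrative) builds $\nu^*$ from the degree $\mu(a)$ of the observed input and the consequent membership function $\nu$.

```python
# Pointwise evaluation of formula (11.4): given mu_a = mu(a) and the
# consequent membership function nu, build the membership function
# nu* of the inferred uncertain set eta* = eta | (a in xi).

def inferred_membership(mu_a, nu):
    def nu_star(y):
        v = nu(y)
        if v < mu_a / 2:
            return v / mu_a
        if v > 1 - mu_a / 2:
            return (v + mu_a - 1) / mu_a
        return 0.5
    return nu_star

# Example: a triangular nu on [0, 2] and an input with mu(a) = 0.8.
nu = lambda y: max(0.0, 1.0 - abs(y - 1.0))
nu_star = inferred_membership(0.8, nu)
print(nu_star(1.0), nu_star(1.9))  # 1.0 near the peak, 0.125 in the tail
```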
[Figure 11.1: Graphical Illustration of Inference Rule (plot of the membership functions $\mu$, $\nu$ and the inferred $\nu^*$; figure omitted)]


Inference Rule 11.2 (Gao, Gao and Ralescu [43]) Let X, Y and Z be three concepts. Assume a rule "if X is an uncertain set $\xi$ and Y is an uncertain set $\tau$ then Z is an uncertain set $\eta$". From X is a constant a and Y is a constant b we infer that Z is an uncertain set

$$\eta^* = \eta|_{(a\in\xi)\cap(b\in\tau)} \tag{11.5}$$

which is the conditional uncertain set of $\eta$ given $a\in\xi$ and $b\in\tau$. The inference rule is represented by

$$\begin{aligned} &\text{Rule: If } X \text{ is } \xi \text{ and } Y \text{ is } \tau \text{ then } Z \text{ is } \eta\\ &\text{From: } X \text{ is } a \text{ and } Y \text{ is } b\\ &\text{Infer: } Z \text{ is } \eta^* = \eta|_{(a\in\xi)\cap(b\in\tau)} \end{aligned} \tag{11.6}$$

Theorem 11.2 Let $\xi$, $\tau$, $\eta$ be independent uncertain sets with membership functions $\mu$, $\lambda$, $\nu$, respectively. If X is a constant a and Y is a constant b, then the inference rule 11.2 yields that $\eta^*$ has a membership function

$$\nu^*(z)=\begin{cases} \dfrac{\nu(z)}{\mu(a)\wedge\lambda(b)}, & \text{if } \nu(z)<\dfrac{\mu(a)\wedge\lambda(b)}{2}\\[1ex] \dfrac{\nu(z)+\mu(a)\wedge\lambda(b)-1}{\mu(a)\wedge\lambda(b)}, & \text{if } \nu(z)>1-\dfrac{\mu(a)\wedge\lambda(b)}{2}\\[1ex] 0.5, & \text{otherwise}. \end{cases} \tag{11.7}$$

Proof: It follows from the inference rule 11.2 that $\eta^*$ has a membership function

$$\nu^*(z)=M\{z\in\eta \mid (a\in\xi)\cap(b\in\tau)\}.$$

By using the definition of conditional uncertainty, $M\{z\in\eta \mid (a\in\xi)\cap(b\in\tau)\}$ is

$$\begin{cases} \dfrac{M\{z\in\eta\}}{M\{(a\in\xi)\cap(b\in\tau)\}}, & \text{if } \dfrac{M\{z\in\eta\}}{M\{(a\in\xi)\cap(b\in\tau)\}}<0.5\\[1ex] 1-\dfrac{M\{z\notin\eta\}}{M\{(a\in\xi)\cap(b\in\tau)\}}, & \text{if } \dfrac{M\{z\notin\eta\}}{M\{(a\in\xi)\cap(b\in\tau)\}}<0.5\\[1ex] 0.5, & \text{otherwise}. \end{cases}$$

The theorem follows from $M\{z\in\eta\}=\nu(z)$, $M\{z\notin\eta\}=1-\nu(z)$ and $M\{(a\in\xi)\cap(b\in\tau)\}=\mu(a)\wedge\lambda(b)$ immediately.
Inference Rule 11.3 (Gao, Gao and Ralescu [43]) Let X and Y be two concepts. Assume two rules "if X is an uncertain set $\xi_1$ then Y is an uncertain set $\eta_1$" and "if X is an uncertain set $\xi_2$ then Y is an uncertain set $\eta_2$". From X is a constant a we infer that Y is an uncertain set

$$\eta^*=\frac{M\{a\in\xi_1\}\cdot\eta_1|_{a\in\xi_1}}{M\{a\in\xi_1\}+M\{a\in\xi_2\}}+\frac{M\{a\in\xi_2\}\cdot\eta_2|_{a\in\xi_2}}{M\{a\in\xi_1\}+M\{a\in\xi_2\}}. \tag{11.8}$$

The inference rule is represented by

$$\begin{aligned} &\text{Rule 1: If } X \text{ is } \xi_1 \text{ then } Y \text{ is } \eta_1\\ &\text{Rule 2: If } X \text{ is } \xi_2 \text{ then } Y \text{ is } \eta_2\\ &\text{From: } X \text{ is a constant } a\\ &\text{Infer: } Y \text{ is determined by (11.8)} \end{aligned} \tag{11.9}$$

Theorem 11.3 Let $\xi_1$, $\xi_2$, $\eta_1$, $\eta_2$ be independent uncertain sets with membership functions $\mu_1$, $\mu_2$, $\nu_1$, $\nu_2$, respectively. If X is a constant a, then the inference rule 11.3 yields

$$\eta^*=\frac{\mu_1(a)}{\mu_1(a)+\mu_2(a)}\,\eta_1^*+\frac{\mu_2(a)}{\mu_1(a)+\mu_2(a)}\,\eta_2^* \tag{11.10}$$

where $\eta_1^*$ and $\eta_2^*$ are uncertain sets whose membership functions are respectively given by

$$\nu_1^*(y)=\begin{cases} \dfrac{\nu_1(y)}{\mu_1(a)}, & \text{if } \nu_1(y)<\mu_1(a)/2\\[1ex] \dfrac{\nu_1(y)+\mu_1(a)-1}{\mu_1(a)}, & \text{if } \nu_1(y)>1-\mu_1(a)/2\\[1ex] 0.5, & \text{otherwise}, \end{cases} \tag{11.11}$$

$$\nu_2^*(y)=\begin{cases} \dfrac{\nu_2(y)}{\mu_2(a)}, & \text{if } \nu_2(y)<\mu_2(a)/2\\[1ex] \dfrac{\nu_2(y)+\mu_2(a)-1}{\mu_2(a)}, & \text{if } \nu_2(y)>1-\mu_2(a)/2\\[1ex] 0.5, & \text{otherwise}. \end{cases} \tag{11.12}$$

Proof: It follows from the inference rule 11.3 that the uncertain set $\eta^*$ is just

$$\eta^*=\frac{M\{a\in\xi_1\}\cdot\eta_1|_{a\in\xi_1}}{M\{a\in\xi_1\}+M\{a\in\xi_2\}}+\frac{M\{a\in\xi_2\}\cdot\eta_2|_{a\in\xi_2}}{M\{a\in\xi_1\}+M\{a\in\xi_2\}}.$$

The theorem follows from $M\{a\in\xi_1\}=\mu_1(a)$ and $M\{a\in\xi_2\}=\mu_2(a)$ immediately.
Inference Rule 11.4 Let $X_1, X_2, \ldots, X_m$ be concepts. Assume rules "if $X_1$ is $\xi_{i1}$ and $\cdots$ and $X_m$ is $\xi_{im}$ then Y is $\eta_i$" for $i=1,2,\ldots,k$. From $X_1$ is $a_1$ and $\cdots$ and $X_m$ is $a_m$ we infer that Y is an uncertain set

$$\eta^*=\frac{\displaystyle\sum_{i=1}^{k} c_i\cdot\eta_i|_{(a_1\in\xi_{i1})\cap(a_2\in\xi_{i2})\cap\cdots\cap(a_m\in\xi_{im})}}{c_1+c_2+\cdots+c_k} \tag{11.13}$$

where the coefficients are determined by

$$c_i=M\{(a_1\in\xi_{i1})\cap(a_2\in\xi_{i2})\cap\cdots\cap(a_m\in\xi_{im})\} \tag{11.14}$$

for $i=1,2,\ldots,k$. The inference rule is represented by

$$\begin{aligned} &\text{Rule 1: If } X_1 \text{ is } \xi_{11} \text{ and } \cdots \text{ and } X_m \text{ is } \xi_{1m} \text{ then } Y \text{ is } \eta_1\\ &\text{Rule 2: If } X_1 \text{ is } \xi_{21} \text{ and } \cdots \text{ and } X_m \text{ is } \xi_{2m} \text{ then } Y \text{ is } \eta_2\\ &\quad\vdots\\ &\text{Rule } k\text{: If } X_1 \text{ is } \xi_{k1} \text{ and } \cdots \text{ and } X_m \text{ is } \xi_{km} \text{ then } Y \text{ is } \eta_k\\ &\text{From: } X_1 \text{ is } a_1 \text{ and } \cdots \text{ and } X_m \text{ is } a_m\\ &\text{Infer: } Y \text{ is determined by (11.13)} \end{aligned} \tag{11.15}$$

Theorem 11.4 Assume $\xi_{i1}, \xi_{i2}, \ldots, \xi_{im}, \eta_i$ are independent uncertain sets with membership functions $\mu_{i1}, \mu_{i2}, \ldots, \mu_{im}, \nu_i$, $i=1,2,\ldots,k$, respectively. If $X_1, X_2, \ldots, X_m$ are constants $a_1, a_2, \ldots, a_m$, respectively, then the inference rule 11.4 yields

$$\eta^*=\sum_{i=1}^{k}\frac{c_i\,\eta_i^*}{c_1+c_2+\cdots+c_k} \tag{11.16}$$

where $\eta_i^*$ are uncertain sets whose membership functions are given by

$$\nu_i^*(y)=\begin{cases} \dfrac{\nu_i(y)}{c_i}, & \text{if } \nu_i(y)<c_i/2\\[1ex] \dfrac{\nu_i(y)+c_i-1}{c_i}, & \text{if } \nu_i(y)>1-c_i/2\\[1ex] 0.5, & \text{otherwise} \end{cases} \tag{11.17}$$

and $c_i$ are constants determined by

$$c_i=\min_{1\le l\le m}\mu_{il}(a_l) \tag{11.18}$$

for $i=1,2,\ldots,k$, respectively.

Proof: For each i, since $a_1\in\xi_{i1}, a_2\in\xi_{i2}, \ldots, a_m\in\xi_{im}$ are independent events, we immediately have

$$M\left\{\bigcap_{j=1}^{m}(a_j\in\xi_{ij})\right\}=\min_{1\le j\le m}M\{a_j\in\xi_{ij}\}=\min_{1\le l\le m}\mu_{il}(a_l)$$

for $i=1,2,\ldots,k$. From those equations, we may prove the theorem by the inference rule 11.4 immediately.

11.2  Uncertain System

An uncertain system, proposed by Liu [118], is a function from its inputs to its outputs based on the uncertain inference rule. Usually, an uncertain system consists of 5 parts:

1. inputs that are crisp data to be fed into the uncertain system;
2. a rule-base that contains a set of if-then rules provided by the experts;
3. an uncertain inference rule that infers uncertain consequents from the uncertain antecedents;
4. an expected value operator that converts the uncertain consequents to crisp values;
5. outputs that are crisp data yielded from the expected value operator.

Now let us consider an uncertain system in which there are m crisp inputs $\alpha_1, \alpha_2, \ldots, \alpha_m$ and n crisp outputs $\beta_1, \beta_2, \ldots, \beta_n$. At first, we infer n uncertain sets $\eta_1^*, \eta_2^*, \ldots, \eta_n^*$ from the m crisp inputs by the rule-base (i.e., a set of if-then rules),

$$\begin{aligned} &\text{If } \xi_{11} \text{ and } \xi_{12} \text{ and } \cdots \text{ and } \xi_{1m} \text{ then } \eta_{11} \text{ and } \eta_{12} \text{ and } \cdots \text{ and } \eta_{1n}\\ &\text{If } \xi_{21} \text{ and } \xi_{22} \text{ and } \cdots \text{ and } \xi_{2m} \text{ then } \eta_{21} \text{ and } \eta_{22} \text{ and } \cdots \text{ and } \eta_{2n}\\ &\quad\vdots\\ &\text{If } \xi_{k1} \text{ and } \xi_{k2} \text{ and } \cdots \text{ and } \xi_{km} \text{ then } \eta_{k1} \text{ and } \eta_{k2} \text{ and } \cdots \text{ and } \eta_{kn} \end{aligned} \tag{11.19}$$

and the uncertain inference rule

$$\eta_j^*=\frac{\displaystyle\sum_{i=1}^{k} c_i\cdot\eta_{ij}|_{(\alpha_1\in\xi_{i1})\cap(\alpha_2\in\xi_{i2})\cap\cdots\cap(\alpha_m\in\xi_{im})}}{c_1+c_2+\cdots+c_k} \tag{11.20}$$

for $j=1,2,\ldots,n$, where the coefficients are determined by

$$c_i=M\{(\alpha_1\in\xi_{i1})\cap(\alpha_2\in\xi_{i2})\cap\cdots\cap(\alpha_m\in\xi_{im})\} \tag{11.21}$$

for $i=1,2,\ldots,k$. Thus by using the expected value operator, we obtain

$$\beta_j=E[\eta_j^*] \tag{11.22}$$

for $j=1,2,\ldots,n$. Until now we have constructed a function from inputs $\alpha_1, \alpha_2, \ldots, \alpha_m$ to outputs $\beta_1, \beta_2, \ldots, \beta_n$. Write this function by f, i.e.,

$$(\beta_1, \beta_2, \ldots, \beta_n)=f(\alpha_1, \alpha_2, \ldots, \alpha_m). \tag{11.23}$$

Then we get an uncertain system f.


[Figure 11.2: An Uncertain System (block diagram: inputs $\alpha_1, \ldots, \alpha_m$ feed the rule base and inference rule, which yield $\eta_1^*, \ldots, \eta_n^*$; the expected value operator returns the outputs $\beta_j = E[\eta_j^*]$; figure omitted)]

Theorem 11.5 Assume $\xi_{i1}, \xi_{i2}, \ldots, \xi_{im}, \eta_{i1}, \eta_{i2}, \ldots, \eta_{in}$ are independent uncertain sets with membership functions $\mu_{i1}, \mu_{i2}, \ldots, \mu_{im}, \nu_{i1}, \nu_{i2}, \ldots, \nu_{in}$, $i=1,2,\ldots,k$, respectively. Then the uncertain system from $(\alpha_1, \alpha_2, \ldots, \alpha_m)$ to $(\beta_1, \beta_2, \ldots, \beta_n)$ is

$$\beta_j=\sum_{i=1}^{k}\frac{c_i\,E[\eta_{ij}^*]}{c_1+c_2+\cdots+c_k} \tag{11.24}$$

for $j=1,2,\ldots,n$, where $\eta_{ij}^*$ are uncertain sets whose membership functions are given by

$$\nu_{ij}^*(y)=\begin{cases} \dfrac{\nu_{ij}(y)}{c_i}, & \text{if } \nu_{ij}(y)<c_i/2\\[1ex] \dfrac{\nu_{ij}(y)+c_i-1}{c_i}, & \text{if } \nu_{ij}(y)>1-c_i/2\\[1ex] 0.5, & \text{otherwise} \end{cases} \tag{11.25}$$

and $c_i$ are constants determined by

$$c_i=\min_{1\le l\le m}\mu_{il}(\alpha_l) \tag{11.26}$$

for $i=1,2,\ldots,k$, $j=1,2,\ldots,n$, respectively.

Proof: It follows from the inference rule 11.4 that the uncertain sets $\eta_j^*$ are

$$\eta_j^*=\sum_{i=1}^{k}\frac{c_i\,\eta_{ij}^*}{c_1+c_2+\cdots+c_k}$$

for $j=1,2,\ldots,n$. Since $\eta_{ij}^*$, $i=1,2,\ldots,k$, $j=1,2,\ldots,n$ are independent uncertain sets, we get the theorem immediately by the linearity of the expected value operator.

Remark 11.1: The uncertain system allows the uncertain sets $\eta_{ij}$ in the rule-base (11.19) to become constants $b_{ij}$, i.e.,

$$\eta_{ij}=b_{ij} \tag{11.27}$$

for $i=1,2,\ldots,k$ and $j=1,2,\ldots,n$. In this case, the uncertain system (11.24) becomes

$$\beta_j=\sum_{i=1}^{k}\frac{c_i\,b_{ij}}{c_1+c_2+\cdots+c_k} \tag{11.28}$$

for $j=1,2,\ldots,n$.

Remark 11.2: The uncertain system allows the uncertain sets $\eta_{ij}$ in the rule-base (11.19) to become functions $h_{ij}$ of the inputs $\alpha_1, \alpha_2, \ldots, \alpha_m$, i.e.,

$$\eta_{ij}=h_{ij}(\alpha_1, \alpha_2, \ldots, \alpha_m) \tag{11.29}$$

for $i=1,2,\ldots,k$ and $j=1,2,\ldots,n$. In this case, the uncertain system (11.24) becomes

$$\beta_j=\sum_{i=1}^{k}\frac{c_i\,h_{ij}(\alpha_1, \alpha_2, \ldots, \alpha_m)}{c_1+c_2+\cdots+c_k} \tag{11.30}$$

for $j=1,2,\ldots,n$.
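In the special case of Remark 11.2, the whole system reduces to the weighted average (11.30) with weights (11.26), which is straightforward to code. The sketch below assumes each rule is given as its antecedent membership functions paired with its crisp consequent functions; all names are illustrative.

```python
# A sketch of the uncertain system in the case of Remark 11.2:
# c_i = min_l mu_il(alpha_l) by (11.26), and each output beta_j is the
# weighted average (11.30) of the crisp consequent functions h_ij.

def uncertain_system(rules, alphas):
    """rules: list of (mus, hs) where mus = [mu_i1, ..., mu_im] and
    hs = [h_i1, ..., h_in]; alphas: crisp inputs (alpha_1, ..., alpha_m)."""
    cs = [min(mu(a) for mu, a in zip(mus, alphas)) for mus, _ in rules]
    total = sum(cs)  # assumed positive: at least one rule must fire
    n_out = len(rules[0][1])
    return tuple(
        sum(c * hs[j](*alphas) for c, (_, hs) in zip(cs, rules)) / total
        for j in range(n_out)
    )
```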

Uncertain Systems are Universal Approximators

Uncertain systems are capable of approximating any continuous function on a compact set (i.e., a bounded and closed set) to arbitrary accuracy. This is the reason why uncertain systems may serve as controllers. The following theorem shows this fact.

Theorem 11.6 (Peng and Chen [174]) For any given continuous function g on a compact set $D\subset\Re^m$ and any given $\varepsilon>0$, there exists an uncertain system f such that

$$\|f(\alpha_1, \alpha_2, \ldots, \alpha_m)-g(\alpha_1, \alpha_2, \ldots, \alpha_m)\|<\varepsilon \tag{11.31}$$

for any $(\alpha_1, \alpha_2, \ldots, \alpha_m)\in D$.

Proof: Without loss of generality, we assume that the function g is a real-valued function with only two variables $\alpha_1$ and $\alpha_2$, and the compact set is the unit rectangle $D=[0,1]\times[0,1]$. Since g is continuous on D and hence uniformly continuous, for any given number $\varepsilon>0$, there is a number $\delta>0$ such that

$$|g(\alpha_1, \alpha_2)-g(\alpha_1', \alpha_2')|<\varepsilon \tag{11.32}$$

whenever $\|(\alpha_1, \alpha_2)-(\alpha_1', \alpha_2')\|<\delta$. Let k be an integer larger than $\sqrt{2}/\delta$, and write

$$D_{ij}=\left\{(\alpha_1, \alpha_2)\ \middle|\ \frac{i-1}{k}<\alpha_1\le\frac{i}{k},\ \frac{j-1}{k}<\alpha_2\le\frac{j}{k}\right\} \tag{11.33}$$

for $i,j=1,2,\ldots,k$. Note that $\{D_{ij}\}$ is a sequence of disjoint rectangles whose diameter is less than $\delta$. Define rectangular uncertain sets

$$\xi_i=\left[\frac{i-1}{k},\frac{i}{k}\right],\quad i=1,2,\ldots,k, \tag{11.34}$$

$$\eta_j=\left[\frac{j-1}{k},\frac{j}{k}\right],\quad j=1,2,\ldots,k. \tag{11.35}$$

Then we assume a rule-base with $k\times k$ if-then rules,

$$\text{Rule } ij\text{: If } \xi_i \text{ and } \eta_j \text{ then } g(i/k, j/k),\quad i,j=1,2,\ldots,k. \tag{11.36}$$

According to the uncertain inference rule, the corresponding uncertain system from D to $\Re$ is

$$f(\alpha_1, \alpha_2)=g(i/k, j/k),\quad\text{if }(\alpha_1, \alpha_2)\in D_{ij},\ i,j=1,2,\ldots,k. \tag{11.37}$$

It follows from (11.32) that for any $(\alpha_1, \alpha_2)\in D_{ij}\subset D$, we have

$$|f(\alpha_1, \alpha_2)-g(\alpha_1, \alpha_2)|=|g(i/k, j/k)-g(\alpha_1, \alpha_2)|<\varepsilon. \tag{11.38}$$

The theorem is thus verified. Hence uncertain systems are universal approximators!
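The proof is constructive, and the construction is tiny in code. The sketch below builds the piecewise-constant system (11.37) on a $k\times k$ grid; for k large enough (larger than $\sqrt{2}/\delta$), the error stays below $\varepsilon$ on the unit square.

```python
import math

# The system f of (11.37): on each cell D_ij of the k x k grid, output
# the rule consequent g(i/k, j/k). Since (alpha1, alpha2) lies in D_ij
# with i = ceil(k*alpha1), j = ceil(k*alpha2), the lookup is direct.

def grid_uncertain_system(g, k):
    def f(a1, a2):
        i = min(max(math.ceil(k * a1), 1), k)  # clamp handles a1 = 0
        j = min(max(math.ceil(k * a2), 1), k)
        return g(i / k, j / k)
    return f

# Example: approximate g(x, y) = x*y on [0,1]^2 with a 100 x 100 rule base.
f = grid_uncertain_system(lambda x, y: x * y, 100)
print(abs(f(0.314, 0.271) - 0.314 * 0.271))  # small approximation error
```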

11.3  Uncertain Control

An uncertain controller, designed by Liu [118], is a special uncertain system that maps the state variables of a process under control to the action variables. Thus an uncertain controller consists of the same 5 parts as an uncertain system: inputs, a rule-base, an uncertain inference rule, an expected value operator, and outputs. The distinguishing point is that the inputs of the uncertain controller are the state variables of the process under control, and the outputs are the action variables.

Figure 11.3 shows an uncertain control system consisting of an uncertain controller and a process. Note that t represents time, $\alpha_1(t), \alpha_2(t), \ldots, \alpha_m(t)$ are not only the inputs of the uncertain controller but also the outputs of the process, and $\beta_1(t), \beta_2(t), \ldots, \beta_n(t)$ are not only the outputs of the uncertain controller but also the inputs of the process.
[Figure 11.3: An Uncertain Control System (feedback loop: the process outputs $\alpha_1(t), \ldots, \alpha_m(t)$ are the controller inputs; the controller outputs $\beta_j(t)=E[\eta_j^*(t)]$, produced by the rule base and inference rule, are the process inputs; figure omitted)]

11.4  Inverted Pendulum

The inverted pendulum system is a nonlinear unstable system that is widely used as a benchmark for testing control algorithms. Many good techniques already exist for balancing an inverted pendulum. Especially, Gao [46] successfully balanced an inverted pendulum by the uncertain controller with $5\times 5$ if-then rules.

The uncertain controller has two inputs (angle and angular velocity) and one output (force). All three will be represented by uncertain sets labeled by

negative large   NL
negative small   NS
zero             Z
positive small   PS
positive large   PL

The membership functions of those uncertain sets are shown in Figures 11.5, 11.6 and 11.7.


[Figure 11.4: An Inverted Pendulum, in which A(t) represents the angular position and F(t) represents the force that moves the cart at time t (figure omitted)]
[Figure 11.5: Membership Functions of Angle (labels NL, NS, Z, PS, PL on $[-\pi/2, \pi/2]$ rad; figure omitted)]


Intuitively, when the inverted pendulum has a large clockwise angle and
a large clockwise angular velocity, we should give it a large force to the right.
Thus we have an if-then rule,
If the angle is negative large
and the angular velocity is negative large,
then the force is positive large.
[Figure 11.6: Membership Functions of Angular Velocity (labels NL, NS, Z, PS, PL on $[-\pi/4, \pi/4]$ rad/sec; figure omitted)]


[Figure 11.7: Membership Functions of Force (labels NL, NS, Z, PS, PL on $[-60, 60]$ N; figure omitted)]


Similarly, when the inverted pendulum has a large counterclockwise angle and a large counterclockwise angular velocity, we should give it a large force to the left. Thus we have an if-then rule,

If the angle is positive large
and the angular velocity is positive large,
then the force is negative large.

Note that each input or output has 5 states and each state is represented by an uncertain set. This implies that the rule-base contains $5\times 5$ if-then rules. In order to balance the inverted pendulum, the 25 if-then rules in Table 11.1 are accepted.

Table 11.1: Rule Base with 5 x 5 If-Then Rules

angle \ velocity |  NL  |  NS  |  Z   |  PS  |  PL
-----------------+------+------+------+------+------
       NL        |  PL  |  PL  |  PL  |  PS  |  Z
       NS        |  PL  |  PL  |  PS  |  Z   |  NS
       Z         |  PL  |  PS  |  Z   |  NS  |  NL
       PS        |  PS  |  Z   |  NS  |  NL  |  NL
       PL        |  Z   |  NS  |  NL  |  NL  |  NL

A lot of simulation results show that the uncertain controller may balance the inverted pendulum successfully.
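For illustration, the controller of Table 11.1 can be sketched in a few lines if each force label is replaced by a crisp representative value, in the spirit of Remark 11.1. The membership shapes and representative values below only mimic Figures 11.5, 11.6 and 11.7 and are assumptions of this sketch, not the exact functions used by Gao [46].

```python
import math

# A simplified sketch of the 5 x 5 rule controller: each force label is
# replaced by a crisp representative value, so the output is the weighted
# average (11.28) with weights c_i equal to the minimum of the two
# antecedent membership degrees, as in (11.26).

def tri(center, half_width):
    """Triangular membership function (an assumed shape)."""
    return lambda x: max(0.0, 1.0 - abs(x - center) / half_width)

LABELS = ("NL", "NS", "Z", "PS", "PL")
ANGLE = {l: tri(c, math.pi / 4) for l, c in zip(
    LABELS, (-math.pi / 2, -math.pi / 4, 0.0, math.pi / 4, math.pi / 2))}
VELOCITY = {l: tri(c, math.pi / 8) for l, c in zip(
    LABELS, (-math.pi / 4, -math.pi / 8, 0.0, math.pi / 8, math.pi / 4))}
FORCE = {"NL": -40.0, "NS": -20.0, "Z": 0.0, "PS": 20.0, "PL": 40.0}

# Table 11.1: rows indexed by the angle label, columns by the velocity
# label in the order NL, NS, Z, PS, PL.
RULES = {
    "NL": ("PL", "PL", "PL", "PS", "Z"),
    "NS": ("PL", "PL", "PS", "Z",  "NS"),
    "Z":  ("PL", "PS", "Z",  "NS", "NL"),
    "PS": ("PS", "Z",  "NS", "NL", "NL"),
    "PL": ("Z",  "NS", "NL", "NL", "NL"),
}

def control_force(angle, velocity):
    """Crisp force output for crisp (angle, velocity) inputs."""
    num = den = 0.0
    for a_label, row in RULES.items():
        for v_label, f_label in zip(LABELS, row):
            c = min(ANGLE[a_label](angle), VELOCITY[v_label](velocity))
            num += c * FORCE[f_label]
            den += c
    return num / den if den > 0 else 0.0

print(control_force(-0.5, -0.3))  # large positive force, pushing right
```

Calling `control_force(angle, velocity)` inside a simulation loop then plays the role of the uncertain controller in Figure 11.3.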

11.5  Bibliographic Notes

The basic uncertain inference rule was initialized by Liu [118] in 2010 by the tool of conditional uncertain set. After that, Gao, Gao and Ralescu [43] extended the uncertain inference rule to the case with multiple antecedents and multiple if-then rules.

Based on the uncertain inference rules, Liu [118] suggested the concept of uncertain system, and then presented the tool of uncertain controller. As an important contribution, Peng and Chen [174] proved that uncertain systems are universal approximators and then demonstrated that the uncertain controller is a reasonable tool. As a successful application, Gao [46] balanced an inverted pendulum by using the uncertain controller.

Chapter 12

Uncertain Process

An uncertain process is essentially a collection of uncertain variables. This chapter will give the concept of uncertain process, and introduce sample path, uncertainty distribution, independent increments, stationary increments, extreme values, first hitting time, and time integral of uncertain process.

12.1  Uncertain Process

An uncertain process is a sequence of uncertain variables indexed by time. A formal definition is given below.

Definition 12.1 (Liu [114]) Let $(\Gamma, \mathcal{L}, M)$ be an uncertainty space and let T be a totally ordered set (e.g. time). An uncertain process is a function $X_t(\gamma)$ from $T\times(\Gamma, \mathcal{L}, M)$ to the set of real numbers such that $\{X_t\in B\}$ is an event for any Borel set B at each time t.
Remark 12.1: If $X_t$ is an uncertain process, then $X_t$ is an uncertain variable at each time t.

Example 12.1: Let $\xi_1, \xi_2, \ldots$ be a sequence of (not necessarily independent) uncertain variables. Then

$$X_n=\xi_1+\xi_2+\cdots+\xi_n \tag{12.1}$$

is an uncertain process indexed by the discrete time n. Sometimes, we call $X_n$ a discrete-time uncertain process.

Example 12.2: Let a and b be real numbers with $a<b$. Assume $X_t$ is a linear uncertain variable, i.e.,

$$X_t\sim\mathcal{L}(at, bt) \tag{12.2}$$

at each time t. Then $X_t$ is an uncertain process indexed by the time t. Sometimes, we call $X_t$ a continuous-time uncertain process.


Sample Path

Definition 12.2 (Liu [114]) Let $X_t$ be an uncertain process. Then for each $\gamma\in\Gamma$, the function $X_t(\gamma)$ is called a sample path of $X_t$.

Note that each sample path is a real-valued function of time t. In addition, an uncertain process may also be regarded as a function from an uncertainty space to a collection of sample paths.
[Figure 12.1: A Sample Path of Uncertain Process (figure omitted)]

Definition 12.3 An uncertain process $X_t$ is said to be sample-continuous if almost all sample paths are continuous functions with respect to time t.

Uncertain Field

An uncertain field is a generalization of uncertain process in which the index set T becomes a partially ordered set (e.g. time, space, or a surface).

Definition 12.4 (Liu [133]) Let $(\Gamma, \mathcal{L}, M)$ be an uncertainty space and let T be a partially ordered set. An uncertain field is a function $X_t(\gamma)$ from $T\times(\Gamma, \mathcal{L}, M)$ to the set of real numbers such that $\{X_t\in B\}$ is an event for any Borel set B at each time t.

12.2  Uncertainty Distribution

An uncertainty distribution of an uncertain process is a sequence of uncertainty distributions of uncertain variables indexed by time. Thus an uncertainty distribution of an uncertain process is a surface rather than a curve. A formal definition is given below.

Definition 12.5 (Liu [133]) An uncertain process $X_t$ is said to have an uncertainty distribution $\Phi_t(x)$ if at each time t, the uncertain variable $X_t$ has the uncertainty distribution $\Phi_t(x)$.

Example 12.3: The linear uncertain process $X_t\sim\mathcal{L}(at, bt)$ has an uncertainty distribution,

$$\Phi_t(x)=\begin{cases} 0, & \text{if } x\le at\\[0.5ex] \dfrac{x-at}{(b-a)t}, & \text{if } at\le x\le bt\\[1ex] 1, & \text{if } x\ge bt. \end{cases} \tag{12.3}$$

Example 12.4: The zigzag uncertain process $X_t\sim\mathcal{Z}(at, bt, ct)$ has an uncertainty distribution,

$$\Phi_t(x)=\begin{cases} 0, & \text{if } x\le at\\[0.5ex] \dfrac{x-at}{2(b-a)t}, & \text{if } at\le x\le bt\\[1ex] \dfrac{x+ct-2bt}{2(c-b)t}, & \text{if } bt\le x\le ct\\[1ex] 1, & \text{if } x\ge ct. \end{cases} \tag{12.4}$$

Example 12.5: The normal uncertain process $X_t\sim\mathcal{N}(et, \sigma t)$ has an uncertainty distribution,

$$\Phi_t(x)=\left(1+\exp\left(\frac{\pi(et-x)}{\sqrt{3}\,\sigma t}\right)\right)^{-1}. \tag{12.5}$$

Example 12.6: The lognormal uncertain process $X_t\sim\mathcal{LOGN}(et, \sigma t)$ has an uncertainty distribution,

$$\Phi_t(x)=\left(1+\exp\left(\frac{\pi(et-\ln x)}{\sqrt{3}\,\sigma t}\right)\right)^{-1}. \tag{12.6}$$
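For concreteness, here are (12.3) and (12.5) as plain numeric functions; $t>0$, $a<b$ and $\sigma>0$ are assumed, and the names are illustrative.

```python
import math

# The uncertainty distributions (12.3) and (12.5) evaluated numerically.

def linear_process_dist(a, b, t, x):
    """Phi_t(x) of the linear uncertain process X_t ~ L(at, bt)."""
    if x <= a * t:
        return 0.0
    if x >= b * t:
        return 1.0
    return (x - a * t) / ((b - a) * t)

def normal_process_dist(e, sigma, t, x):
    """Phi_t(x) of the normal uncertain process X_t ~ N(et, sigma*t)."""
    return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) /
                                 (math.sqrt(3.0) * sigma * t)))

print(linear_process_dist(1.0, 3.0, 2.0, 4.0))  # 0.5: midpoint of [2, 6]
print(normal_process_dist(1.0, 2.0, 2.0, 2.0))  # 0.5: x equals e*t
```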
Theorem 12.1 (Liu [133], Sufficient and Necessary Condition) A function $\Phi_t(x): T\times\Re\to[0,1]$ is an uncertainty distribution of an uncertain process if and only if at each time t, it is a monotone increasing function except $\Phi_t(x)\equiv 0$ and $\Phi_t(x)\equiv 1$.

Proof: If $\Phi_t(x)$ is an uncertainty distribution of some uncertain process $X_t$, then at each time t, $\Phi_t(x)$ is the uncertainty distribution of the uncertain variable $X_t$. It follows from the Peng-Iwamura theorem that $\Phi_t(x)$ is a monotone increasing function with respect to x and $\Phi_t(x)\not\equiv 0$, $\Phi_t(x)\not\equiv 1$. Conversely, if at each time t, $\Phi_t(x)$ is a monotone increasing function except $\Phi_t(x)\equiv 0$ and $\Phi_t(x)\equiv 1$, it follows from the Peng-Iwamura theorem that there exists an uncertain variable $\xi_t$ whose uncertainty distribution is just $\Phi_t(x)$. Define

$$X_t=\xi_t,\quad t\in T.$$

Then $X_t$ is an uncertain process and has the uncertainty distribution $\Phi_t(x)$. The theorem is verified.
Theorem 12.2 Let $X_t$ be an uncertain process with uncertainty distribution $\Phi_t(x)$, and let $f(x)$ be a measurable function. Then $f(X_t)$ is also an uncertain process. Furthermore, (i) if $f(x)$ is a strictly increasing function, then $f(X_t)$ has an uncertainty distribution

$$\Psi_t(x)=\Phi_t(f^{-1}(x)); \tag{12.7}$$

and (ii) if $f(x)$ is a strictly decreasing function and $\Phi_t(x)$ is continuous with respect to x, then $f(X_t)$ has an uncertainty distribution

$$\Psi_t(x)=1-\Phi_t(f^{-1}(x)). \tag{12.8}$$

Proof: At each time t, since $X_t$ is an uncertain variable, it follows from Theorem 2.1 that $f(X_t)$ is also an uncertain variable. Thus $f(X_t)$ is an uncertain process. The equations (12.7) and (12.8) may be verified by the operational law of uncertain variables immediately.

Example 12.7: Let $X_t$ be an uncertain process with uncertainty distribution $\Phi_t(x)$. Show that the uncertain process $aX_t+b$ has an uncertainty distribution,

$$\Psi_t(x)=\begin{cases} \Phi_t((x-b)/a), & \text{if } a>0\\ 1-\Phi_t((x-b)/a), & \text{if } a<0. \end{cases} \tag{12.9}$$
Regular Uncertainty Distribution

Definition 12.6 (Liu [133]) An uncertainty distribution $\Phi_t(x)$ is said to be regular if at each time t, it is a continuous and strictly increasing function with respect to x at which $0<\Phi_t(x)<1$, and

$$\lim_{x\to-\infty}\Phi_t(x)=0,\quad\lim_{x\to+\infty}\Phi_t(x)=1. \tag{12.10}$$

It is clear that the linear uncertainty distribution, zigzag uncertainty distribution, normal uncertainty distribution and lognormal uncertainty distribution of uncertain processes are all regular.

Note that we have stipulated that a crisp initial value $X_0$ has a regular uncertainty distribution. That is, we allow the initial value of a regular uncertain process to be a constant whose uncertainty distribution is

$$\Phi_0(x)=\begin{cases} 1, & \text{if } x\ge X_0\\ 0, & \text{if } x<X_0 \end{cases} \tag{12.11}$$

and say $\Phi_0(x)$ is a continuous and strictly increasing function with respect to x at which $0<\Phi_0(x)<1$ even though it is discontinuous at $X_0$.

We may verify that at each time t, the inverse function $\Phi_t^{-1}(\alpha)$ is continuous and strictly increasing with respect to $\alpha\in(0,1)$. Note that we have also stipulated that a crisp initial value $X_0$ has an inverse uncertainty distribution

$$\Phi_0^{-1}(\alpha)\equiv X_0. \tag{12.12}$$

That is, we will say $\Phi_0^{-1}(\alpha)$ is a continuous and strictly increasing function with respect to $\alpha\in(0,1)$ even though it is not.
Inverse Uncertainty Distribution

Definition 12.7 (Liu [133]) Let $X_t$ be an uncertain process with regular uncertainty distribution $\Phi_t(x)$. Then the inverse function $\Phi_t^{-1}(\alpha)$ is called the inverse uncertainty distribution of $X_t$.

Note that at each time t, the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$ is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via

$$\Phi_t^{-1}(0)=\lim_{\alpha\downarrow 0}\Phi_t^{-1}(\alpha),\quad\Phi_t^{-1}(1)=\lim_{\alpha\uparrow 1}\Phi_t^{-1}(\alpha). \tag{12.13}$$

[Figure 12.2: Inverse Uncertainty Distribution of Uncertain Process (curves $\Phi_t^{-1}(\alpha)$ for $\alpha=0.1, 0.2, \ldots, 0.9$; figure omitted)]


Example 12.8: The linear uncertain process $X_t\sim\mathcal{L}(at, bt)$ has an inverse uncertainty distribution,

$$\Phi_t^{-1}(\alpha)=(1-\alpha)at+\alpha bt. \tag{12.14}$$

Example 12.9: The zigzag uncertain process $X_t\sim\mathcal{Z}(at, bt, ct)$ has an inverse uncertainty distribution,

$$\Phi_t^{-1}(\alpha)=\begin{cases} (1-2\alpha)at+2\alpha bt, & \text{if } \alpha<0.5\\ (2-2\alpha)bt+(2\alpha-1)ct, & \text{if } \alpha\ge 0.5. \end{cases} \tag{12.15}$$

Example 12.10: The normal uncertain process $X_t\sim\mathcal{N}(et, \sigma t)$ has an inverse uncertainty distribution,

$$\Phi_t^{-1}(\alpha)=et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}. \tag{12.16}$$

Example 12.11: The lognormal uncertain process $X_t\sim\mathcal{LOGN}(et, \sigma t)$ has an inverse uncertainty distribution,

$$\Phi_t^{-1}(\alpha)=\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right). \tag{12.17}$$
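The inverse distributions (12.14) and (12.16) in code, for a quick numeric check; since $\Phi_t$ is regular, $\Phi_t(\Phi_t^{-1}(\alpha))=\alpha$ must hold for every $t>0$ and $\alpha\in(0,1)$. Names are illustrative.

```python
import math

# Inverse uncertainty distributions (12.14) and (12.16).

def linear_process_inv(a, b, t, alpha):
    """Phi_t^{-1}(alpha) of X_t ~ L(at, bt), formula (12.14)."""
    return (1 - alpha) * a * t + alpha * b * t

def normal_process_inv(e, sigma, t, alpha):
    """Phi_t^{-1}(alpha) of X_t ~ N(et, sigma*t), formula (12.16)."""
    return e * t + (sigma * t * math.sqrt(3.0) / math.pi) * \
        math.log(alpha / (1 - alpha))

# Consistency check against (12.5): Phi_t(Phi_t^{-1}(alpha)) == alpha.
x = normal_process_inv(1.0, 2.0, 2.0, 0.9)
phi = 1.0 / (1.0 + math.exp(math.pi * (1.0 * 2.0 - x) /
                            (math.sqrt(3.0) * 2.0 * 2.0)))
print(round(phi, 6))  # 0.9
```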
Theorem 12.3 (Liu [133], Sufficient and Necessary Condition) A function $\Phi_t^{-1}(\alpha): T\times(0,1)\to\Re$ is an inverse uncertainty distribution of an uncertain process if and only if at each time t, it is a continuous and strictly increasing function with respect to $\alpha$.

Proof: Suppose $\Phi_t^{-1}(\alpha)$ is an inverse uncertainty distribution of an uncertain process $X_t$. Then at each time t, $\Phi_t^{-1}(\alpha)$ is an inverse uncertainty distribution of the uncertain variable $X_t$. It follows from Theorem 2.6 that $\Phi_t^{-1}(\alpha)$ is a continuous and strictly increasing function with respect to $\alpha\in(0,1)$. Conversely, if $\Phi_t^{-1}(\alpha)$ is a continuous and strictly increasing function with respect to $\alpha\in(0,1)$, it follows from Theorem 2.6 that there exists an uncertain variable $\xi_t$ whose inverse uncertainty distribution is just $\Phi_t^{-1}(\alpha)$. Define

$$X_t=\xi_t,\quad t\in T.$$

Then $X_t$ is an uncertain process and has the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. The theorem is proved.

12.3  Independence

Definition 12.8 (Liu [133]) Uncertain processes $X_{1t}, X_{2t}, \ldots, X_{nt}$ are said to be independent if for any positive integer k and any times $t_1, t_2, \ldots, t_k$, the uncertain vectors

$$\xi_i=(X_{it_1}, X_{it_2}, \ldots, X_{it_k}),\quad i=1,2,\ldots,n \tag{12.18}$$

are independent, i.e., for any k-dimensional Borel sets $B_1, B_2, \ldots, B_n$, we have

$$M\left\{\bigcap_{i=1}^{n}(\xi_i\in B_i)\right\}=\bigwedge_{i=1}^{n}M\{\xi_i\in B_i\}. \tag{12.19}$$

For any independent uncertain processes $X_{1t}, X_{2t}, \ldots, X_{nt}$ and any times $t_1, t_2, \ldots, t_n$, it is clear that

$$X_{1t_1}, X_{2t_2}, \ldots, X_{nt_n} \tag{12.20}$$

are independent uncertain variables.

Theorem 12.4 (Liu [133]) Uncertain processes $X_{1t}, X_{2t}, \ldots, X_{nt}$ are independent if and only if for any positive integer k, any times $t_1, t_2, \ldots, t_k$, and any k-dimensional Borel sets $B_1, B_2, \ldots, B_n$, we have

$$M\left\{\bigcup_{i=1}^{n}(\xi_i\in B_i)\right\}=\bigvee_{i=1}^{n}M\{\xi_i\in B_i\} \tag{12.21}$$

where $\xi_i=(X_{it_1}, X_{it_2}, \ldots, X_{it_k})$ for $i=1,2,\ldots,n$.

Proof: It follows from Theorem 2.67 that $\xi_1, \xi_2, \ldots, \xi_n$ are independent uncertain vectors if and only if (12.21) holds. The theorem is thus verified.
Theorem 12.5 (Liu [133]) Let $X_{1t}, X_{2t}, \ldots, X_{nt}$ be independent uncertain processes with regular uncertainty distributions $\Phi_{1t}, \Phi_{2t}, \ldots, \Phi_{nt}$, respectively. If the function $f(x_1, x_2, \ldots, x_n)$ is strictly increasing with respect to $x_1, x_2, \ldots, x_m$ and strictly decreasing with respect to $x_{m+1}, x_{m+2}, \ldots, x_n$, then

$$Y_t=f(X_{1t}, X_{2t}, \ldots, X_{nt}) \tag{12.22}$$

is an uncertain process with inverse uncertainty distribution

$$\Psi_t^{-1}(\alpha)=f(\Phi_{1t}^{-1}(\alpha), \ldots, \Phi_{mt}^{-1}(\alpha), \Phi_{m+1,t}^{-1}(1-\alpha), \ldots, \Phi_{nt}^{-1}(1-\alpha)). \tag{12.23}$$

Proof: At any time t, it is clear that $X_{1t}, X_{2t}, \ldots, X_{nt}$ are independent uncertain variables with inverse uncertainty distributions $\Phi_{1t}^{-1}(\alpha), \Phi_{2t}^{-1}(\alpha), \ldots, \Phi_{nt}^{-1}(\alpha)$, respectively. The theorem follows from the operational law of uncertain variables immediately.

12.4  Independent Increment Process

An independent increment process is an uncertain process that has independent increments. A formal definition is given below.

Definition 12.9 (Liu [114]) An uncertain process $X_t$ is said to have independent increments if

$$X_{t_0}, X_{t_1}-X_{t_0}, X_{t_2}-X_{t_1}, \ldots, X_{t_k}-X_{t_{k-1}} \tag{12.24}$$

are independent uncertain variables where $t_0$ is the initial time and $t_1, t_2, \ldots, t_k$ are any times with $t_0<t_1<\cdots<t_k$.

That is, an independent increment process means that its increments are independent uncertain variables whenever the time intervals do not overlap. Please note that the increments are also independent of the initial state.

Example 12.12: Any crisp function $F(t)$ is a special instance of independent increment process.


Example 12.13: Let $\xi_1, \xi_2, \ldots$ be a sequence of independent uncertain variables. Then

$$X_n=\xi_1+\xi_2+\cdots+\xi_n \tag{12.25}$$

is an independent increment process with respect to the discrete time n.

Example 12.14: There exists an independent increment process $X_t$ such that every increment is a linear uncertain variable, i.e.,

$$X_{t+\Delta t}-X_t\sim\mathcal{L}(a\Delta t, b\Delta t). \tag{12.26}$$

Furthermore, the uncertain process $X_t$ may have a linear uncertainty distribution, i.e.,

$$X_t\sim\mathcal{L}(at, bt). \tag{12.27}$$
Theorem 12.6 Let $X_t$ be an independent increment process. Then for any real numbers a and b, the uncertain process

$$Y_t=aX_t+b \tag{12.28}$$

is also an independent increment process.

Proof: Since $X_t$ is an independent increment process, the uncertain variables

$$X_{t_0}, X_{t_1}-X_{t_0}, X_{t_2}-X_{t_1}, \ldots, X_{t_k}-X_{t_{k-1}}$$

are independent. It follows from $Y_t=aX_t+b$ and Theorem 2.8 that

$$Y_{t_0}, Y_{t_1}-Y_{t_0}, Y_{t_2}-Y_{t_1}, \ldots, Y_{t_k}-Y_{t_{k-1}}$$

are also independent. That is, $Y_t$ is an independent increment process.

Remark 12.2: Generally speaking, a nonlinear function of an independent increment process does not necessarily have independent increments. A typical example is the square of an independent increment process.

Theorem 12.7 Let $X_t$ be an independent increment process. Then for any times $s<t$, the uncertain variables $X_s$ and $X_t-X_s$ are independent.

Proof: Since $X_t$ is an independent increment process, the initial value and increments

$$X_0, X_s-X_0, X_t-X_s$$

are independent. It follows from $X_s=X_0+(X_s-X_0)$ that $X_s$ and $X_t-X_s$ are independent uncertain variables.
Theorem 12.8 (Liu [133], Sufficient and Necessary Condition) A function $\Phi_t^{-1}(\alpha): T\times(0,1)\to\Re$ is an inverse uncertainty distribution of an independent increment process if and only if (i) at each time t, $\Phi_t^{-1}(\alpha)$ is a continuous and strictly increasing function; and (ii) for any times $s<t$, $\Phi_t^{-1}(\alpha)-\Phi_s^{-1}(\alpha)$ is a strictly increasing function with respect to $\alpha$.


Proof: Let $X_t$ be an independent increment process with inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. First, it follows from Theorem 12.3 that $\Phi_t^{-1}(\alpha)$ is a continuous and strictly increasing function with respect to $\alpha$. Next, the uncertain variables $X_s$ and $X_t-X_s$ are independent. Note that $X_s$ has an inverse uncertainty distribution $\Phi_s^{-1}(\alpha)$. Denoting the inverse uncertainty distribution of $X_t-X_s$ by $\Upsilon^{-1}(\alpha)$, the uncertain variable $X_t=X_s+(X_t-X_s)$ has an inverse uncertainty distribution,

$$\Phi_t^{-1}(\alpha)=\Phi_s^{-1}(\alpha)+\Upsilon^{-1}(\alpha).$$

For any $\alpha<\beta$, we immediately have

$$\Phi_t^{-1}(\beta)-\Phi_t^{-1}(\alpha)=\Phi_s^{-1}(\beta)-\Phi_s^{-1}(\alpha)+\Upsilon^{-1}(\beta)-\Upsilon^{-1}(\alpha).$$

Since $\Upsilon^{-1}(\beta)>\Upsilon^{-1}(\alpha)$, we obtain

$$\Phi_t^{-1}(\beta)-\Phi_t^{-1}(\alpha)>\Phi_s^{-1}(\beta)-\Phi_s^{-1}(\alpha).$$

That is,

$$\Phi_t^{-1}(\beta)-\Phi_s^{-1}(\beta)>\Phi_t^{-1}(\alpha)-\Phi_s^{-1}(\alpha).$$

Hence $\Phi_t^{-1}(\alpha)-\Phi_s^{-1}(\alpha)$ is a strictly increasing function with respect to $\alpha$.

Conversely, let us prove that there exists an independent increment process whose inverse uncertainty distribution is just $\Phi_t^{-1}(\alpha)$. Without loss of generality, we only consider the range of $t\in[0,1]$. For each positive integer n, since $\Phi_t^{-1}(\alpha)$ and $\Phi_t^{-1}(\alpha)-\Phi_s^{-1}(\alpha)$ are continuous and strictly increasing functions with respect to $\alpha$, there exist independent uncertain variables $\xi_{0n}, \xi_{1n}, \ldots, \xi_{nn}$ such that $\xi_{0n}$ has an inverse uncertainty distribution

$$\Phi_{0n}^{-1}(\alpha)=\Phi_0^{-1}(\alpha)$$

and $\xi_{in}$ have inverse uncertainty distributions

$$\Phi_{in}^{-1}(\alpha)=\Phi_{i/n}^{-1}(\alpha)-\Phi_{(i-1)/n}^{-1}(\alpha),$$

$i=1,2,\ldots,n$, respectively. Define an uncertain process

$$X_t^n=\begin{cases} \displaystyle\sum_{i=0}^{k}\xi_{in}, & \text{if } t=\dfrac{k}{n}\ (k=0,1,\ldots,n)\\[1.5ex] \text{linear}, & \text{otherwise}. \end{cases}$$

It may be proved that $X_t^n$ converges in distribution as $n\to\infty$. Furthermore, we may verify that the limit is indeed an independent increment process and has the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. The theorem is verified.
Remark 12.3: It follows from Theorem 12.8 that the uncertainty distribution of an independent increment process has a horn-like shape, i.e., for any $s<t$ and $\alpha<\beta$, we have

$$\Phi_s^{-1}(\beta)-\Phi_s^{-1}(\alpha)<\Phi_t^{-1}(\beta)-\Phi_t^{-1}(\alpha). \tag{12.29}$$


[Figure 12.3: Inverse Uncertainty Distribution of Independent Increment Process: a horn-like family of functions of t indexed by $\alpha=0.1, \ldots, 0.9$ (figure omitted)]

12.5  Stationary Independent Increment Process

An uncertain process $X_t$ is said to have stationary increments if its increments are identically distributed uncertain variables whenever the time intervals have the same length, i.e., for any given $\Delta t>0$, the increments $X_{s+\Delta t}-X_s$ are identically distributed uncertain variables for all $s>0$.

Definition 12.10 (Liu [114]) An uncertain process is said to be a stationary independent increment process if it has not only stationary increments but also independent increments.

Example 12.15: Let $\xi_1, \xi_2, \ldots$ be a sequence of iid uncertain variables. Then

$$X_n=\xi_1+\xi_2+\cdots+\xi_n \tag{12.30}$$

is a stationary independent increment process with respect to the discrete time n.

Example 12.16: There exists a stationary independent increment process $X_t$ such that every increment is a linear uncertain variable, i.e.,

$$X_{t+\Delta t}-X_t\sim\mathcal{L}(a\Delta t, b\Delta t). \tag{12.31}$$

Furthermore, $X_t$ may have a linear uncertainty distribution, i.e.,

$$X_t\sim\mathcal{L}(at, bt). \tag{12.32}$$


Theorem 12.9 Let $X_t$ be a stationary independent increment process. Then for any real numbers a and b, the uncertain process

$$Y_t=aX_t+b \tag{12.33}$$

is also a stationary independent increment process.

Proof: Since $X_t$ is an independent increment process, it follows from Theorem 12.6 that $Y_t$ is also an independent increment process. On the other hand, since $X_t$ is a stationary increment process, the increments $X_{s+\Delta t}-X_s$ are identically distributed uncertain variables for all $s>0$. Thus

$$Y_{s+\Delta t}-Y_s=a(X_{s+\Delta t}-X_s)$$

are also identically distributed uncertain variables for all $s>0$, and $Y_t$ is a stationary increment process. Hence $Y_t$ is a stationary independent increment process.
Theorem 12.10 (Chen [14]) Suppose $X_t$ is a stationary independent increment process. Then $X_t$ and $(1-t)X_0+tX_1$ are identically distributed uncertain variables for any time $t\ge 0$.

Proof: We first prove the theorem when t is a rational number. Assume $t=q/p$ where p and q are irreducible integers. Let $\Phi$ be the common uncertainty distribution of the increments

$$X_{1/p}-X_{0/p},\ X_{2/p}-X_{1/p},\ X_{3/p}-X_{2/p},\ \ldots$$

Then

$$X_t-X_0=(X_{1/p}-X_{0/p})+(X_{2/p}-X_{1/p})+\cdots+(X_{q/p}-X_{(q-1)/p})$$

has an uncertainty distribution $\Phi(x/q)$. In addition,

$$t(X_1-X_0)=t\left((X_{1/p}-X_{0/p})+(X_{2/p}-X_{1/p})+\cdots+(X_{p/p}-X_{(p-1)/p})\right)$$

also has the uncertainty distribution

$$\Phi\left(\frac{x}{pt}\right)=\Phi\left(\frac{x}{p\,(q/p)}\right)=\Phi(x/q).$$

Thus $X_t-X_0$ and $t(X_1-X_0)$ are identically distributed, and so are $X_t$ and $(1-t)X_0+tX_1$.
Remark 12.4: If $X_t$ is a stationary independent increment process with $X_0=0$, then $X_t/t$ and $X_1$ are identically distributed uncertain variables. In other words, there is an uncertainty distribution $\Phi$ such that

$$\frac{X_t}{t}\sim\Phi(x) \tag{12.34}$$

or equivalently,

$$X_t\sim\Phi\left(\frac{x}{t}\right) \tag{12.35}$$

for any time $t>0$. Note that $\Phi$ is just the uncertainty distribution of $X_1$.


Theorem 12.11 (Liu [133], Sufficient and Necessary Condition) A function $\Phi_t^{-1}(\alpha): T\times(0,1)\to\Re$ is an inverse uncertainty distribution of a stationary independent increment process if and only if there exist two continuous and strictly increasing functions $\Upsilon(\alpha)$ and $\Psi(\alpha)$ such that

$$\Phi_t^{-1}(\alpha)=\Upsilon(\alpha)+\Psi(\alpha)t. \tag{12.36}$$

Proof: Let $X_t$ be a stationary independent increment process. Then $X_0$ and $X_1-X_0$ are independent uncertain variables. We denote the inverse uncertainty distributions of $X_0$ and $X_1-X_0$ by $\Upsilon(\alpha)$ and $\Psi(\alpha)$, respectively. Then $\Upsilon(\alpha)$ and $\Psi(\alpha)$ are continuous and strictly increasing functions. Furthermore, it follows from Theorem 12.10 that $X_t$ and $X_0+(X_1-X_0)t$ are identically distributed uncertain variables. Hence $X_t$ has the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)=\Upsilon(\alpha)+\Psi(\alpha)t$.

Conversely, let us prove that there exists a stationary independent increment process whose inverse uncertainty distribution is just $\Phi_t^{-1}(\alpha)$. Without loss of generality, we only consider the range of $t\in[0,1]$. Let

$$\{\xi(r)\mid r \text{ represents rational numbers in } [0,1]\}$$

be a countable sequence of independent uncertain variables, where $\xi(0)$ has an inverse uncertainty distribution $\Upsilon(\alpha)$ and $\xi(r)$ have a common inverse uncertainty distribution $\Psi(\alpha)$ for $r\ne 0$. For each positive integer n, we define an uncertain process

$$X_t^n=\begin{cases} \displaystyle\xi(0)+\frac{1}{n}\sum_{i=1}^{k}\xi\left(\frac{i}{n}\right), & \text{if } t=\dfrac{k}{n}\ (k=1,2,\ldots,n)\\[1.5ex] \text{linear}, & \text{otherwise}. \end{cases}$$

It may be proved that $X_t^n$ converges in distribution as $n\to\infty$. Furthermore, we may verify that the limit is a stationary independent increment process and has the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. The theorem is verified.
Example 12.17: The linear uncertain process $X_t\sim\mathcal{L}(at, bt)$ has an inverse uncertainty distribution $\Phi_t^{-1}(\alpha)=\Psi(\alpha)t$ where

$$\Psi(\alpha)=(1-\alpha)a+\alpha b. \tag{12.37}$$

It follows from Theorem 12.11 that there exists a stationary independent increment process with linear uncertainty distribution because $\Psi(\alpha)$ is a strictly increasing function of $\alpha$.

Example 12.18: The zigzag uncertain process $X_t\sim\mathcal{Z}(at, bt, ct)$ has an inverse uncertainty distribution $\Phi_t^{-1}(\alpha)=\Psi(\alpha)t$ where

$$\Psi(\alpha)=\begin{cases} (1-2\alpha)a+2\alpha b, & \text{if } \alpha<0.5\\ (2-2\alpha)b+(2\alpha-1)c, & \text{if } \alpha\ge 0.5. \end{cases} \tag{12.38}$$

It follows from Theorem 12.11 that there exists a stationary independent increment process with zigzag uncertainty distribution because $\Psi(\alpha)$ is a strictly increasing function of $\alpha$.

[Figure 12.4: Inverse Uncertainty Distribution of Stationary Independent Increment Process: a family of linear functions of t indexed by $\alpha=0.1, \ldots, 0.9$ (figure omitted)]
Example 12.19: The normal uncertain process $X_t\sim\mathcal{N}(et, \sigma t)$ has an inverse uncertainty distribution $\Phi_t^{-1}(\alpha)=\Psi(\alpha)t$ where

$$\Psi(\alpha)=e+\frac{\sigma\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}. \tag{12.39}$$

It follows from Theorem 12.11 that there exists a stationary independent increment process with normal uncertainty distribution because $\Psi(\alpha)$ is a strictly increasing function of $\alpha$.

Example 12.20: The lognormal uncertain process $X_t\sim\mathcal{LOGN}(et, \sigma t)$ has an inverse uncertainty distribution,

$$\Phi_t^{-1}(\alpha)=\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right). \tag{12.40}$$

It is clear that $\Phi_t^{-1}(\alpha)$ is not a family of linear functions of t indexed by $\alpha$. Hence there does not exist any stationary independent increment process whose uncertainty distribution is lognormal.
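The contrast between Examples 12.19 and 12.20 can be seen numerically: for fixed $\alpha$, the second difference in t vanishes for a family that is linear in t and does not for (12.40). The parameters below are arbitrary illustration values.

```python
import math

# For fixed alpha, test linearity in t via second differences:
# zero for the normal family (12.16), nonzero for the lognormal (12.40).

e, sigma, alpha = 1.0, 2.0, 0.9
k = math.sqrt(3.0) / math.pi * math.log(alpha / (1 - alpha))

normal_inv = lambda t: e * t + sigma * t * k        # linear in t
lognormal_inv = lambda t: math.exp(e * t + sigma * t * k)

for t in (1.0, 2.0, 3.0):
    d2_normal = normal_inv(t + 1) - 2 * normal_inv(t) + normal_inv(t - 1)
    d2_logn = lognormal_inv(t + 1) - 2 * lognormal_inv(t) + lognormal_inv(t - 1)
    print(t, round(d2_normal, 12), round(d2_logn, 3))  # 0.0 vs. growing
```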
Theorem 12.12 (Liu [120]) Let $X_t$ be a stationary independent increment process. Then there exist two real numbers a and b such that

$$E[X_t]=a+bt \tag{12.41}$$

for any time $t\ge 0$.

Proof: It follows from Theorem 12.10 that $X_t$ and $X_0+(X_1-X_0)t$ are identically distributed uncertain variables. Thus we have

$$E[X_t]=E[X_0+(X_1-X_0)t].$$

Since $X_0$ and $X_1-X_0$ are independent uncertain variables, we obtain

$$E[X_t]=E[X_0]+E[X_1-X_0]t.$$

Hence (12.41) holds for $a=E[X_0]$ and $b=E[X_1-X_0]$.
Theorem 12.13 (Liu [120]) Let $X_t$ be a stationary independent increment process with an initial value 0. Then for any times s and t, we have

$$E[X_{s+t}]=E[X_s]+E[X_t]. \tag{12.42}$$

Proof: It follows from Theorem 12.12 that there exists a real number b such that $E[X_t]=bt$ for any time $t\ge 0$. Hence

$$E[X_{s+t}]=b(s+t)=bs+bt=E[X_s]+E[X_t].$$
Theorem 12.14 (Chen [14]) Let $X_t$ be a stationary independent increment process with a crisp initial value $X_0$. Then there exists a real number b such that

$$V[X_t]=bt^2 \tag{12.43}$$

for any time $t\ge 0$.

Proof: It follows from Theorem 12.10 that $X_t$ and $(1-t)X_0+tX_1$ are identically distributed uncertain variables. Since $X_0$ is a constant, we have

$$V[X_t]=V[(1-t)X_0+tX_1]=t^2V[X_1].$$

Hence (12.43) holds for $b=V[X_1]$.

Theorem 12.15 (Chen [14]) Let $X_t$ be a stationary independent increment process with a crisp initial value $X_0$. Then for any times s and t, we have

$$\sqrt{V[X_{s+t}]}=\sqrt{V[X_s]}+\sqrt{V[X_t]}. \tag{12.44}$$

Proof: It follows from Theorem 12.14 that there exists a real number b such that $V[X_t]=bt^2$ for any time $t\ge 0$. Hence

$$\sqrt{V[X_{s+t}]}=\sqrt{b}\,(s+t)=\sqrt{b}\,s+\sqrt{b}\,t=\sqrt{V[X_s]}+\sqrt{V[X_t]}.$$

12.6 Extreme Value Theorem

This section will present a series of extreme value theorems for sample-continuous independent increment processes. Note that a discrete-time uncertain process will be considered sample-continuous in this section.
Theorem 12.16 (Liu [126], Extreme Value Theorem) Let $X_t$ be a sample-continuous independent increment process with uncertainty distribution $\Phi_t(x)$. Then the supremum

$$ \sup_{0 \le t \le s} X_t \qquad (12.45) $$

has an uncertainty distribution

$$ \Psi(x) = \inf_{0 \le t \le s} \Phi_t(x); \qquad (12.46) $$

and the infimum

$$ \inf_{0 \le t \le s} X_t \qquad (12.47) $$

has an uncertainty distribution

$$ \Psi(x) = \sup_{0 \le t \le s} \Phi_t(x). \qquad (12.48) $$

Proof: Let $0 = t_1 < t_2 < \cdots < t_n = s$ be a partition of the closed interval $[0, s]$. It is clear that

$$ X_{t_i} = X_{t_1} + (X_{t_2} - X_{t_1}) + \cdots + (X_{t_i} - X_{t_{i-1}}) $$

for $i = 1, 2, \cdots, n$. Since the increments

$$ X_{t_1},\ X_{t_2} - X_{t_1},\ \cdots,\ X_{t_n} - X_{t_{n-1}} $$

are independent uncertain variables, it follows from Theorem 2.16 that the maximum

$$ \max_{1 \le i \le n} X_{t_i} $$

has an uncertainty distribution

$$ \min_{1 \le i \le n} \Phi_{t_i}(x). $$

Since $X_t$ is sample-continuous, we have

$$ \max_{1 \le i \le n} X_{t_i} \to \sup_{0 \le t \le s} X_t \quad \text{and} \quad \min_{1 \le i \le n} \Phi_{t_i}(x) \to \inf_{0 \le t \le s} \Phi_t(x) $$

as $n \to \infty$. Thus (12.46) is proved. Similarly, it follows from Theorem 2.16 that the minimum

$$ \min_{1 \le i \le n} X_{t_i} $$

has an uncertainty distribution

$$ \max_{1 \le i \le n} \Phi_{t_i}(x). $$

Since $X_t$ is sample-continuous, we have

$$ \min_{1 \le i \le n} X_{t_i} \to \inf_{0 \le t \le s} X_t \quad \text{and} \quad \max_{1 \le i \le n} \Phi_{t_i}(x) \to \sup_{0 \le t \le s} \Phi_t(x) $$

as $n \to \infty$. Thus (12.48) is verified.
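To make the theorem concrete, here is a minimal Python sketch that evaluates (12.46) and (12.48) on a time grid. It assumes, purely for illustration, a stationary independent increment process with the normal uncertainty distribution of Example 12.19; the drift $e = 1$, diffusion $\sigma = 2$, horizon $s$ and grid resolution are hypothetical choices, and the grid minimum only approximates the infimum over $[0, s]$.

```python
import numpy as np

def phi(x, t, e=1.0, sigma=2.0):
    # normal uncertainty distribution of X_t with expected value e*t
    # and second parameter sigma*t, as in Example 12.19; X_0 = 0
    if t == 0:
        return 1.0 if x >= 0 else 0.0
    return 1.0 / (1.0 + np.exp(np.pi * (e * t - x) / (np.sqrt(3) * sigma * t)))

def sup_distribution(x, s, n=1000):
    # Psi(x) = inf_{0<=t<=s} Phi_t(x), distribution of sup X_t by (12.46)
    return min(phi(x, t) for t in np.linspace(0.0, s, n + 1))

def inf_distribution(x, s, n=1000):
    # Psi(x) = sup_{0<=t<=s} Phi_t(x), distribution of inf X_t by (12.48)
    return max(phi(x, t) for t in np.linspace(0.0, s, n + 1))

print(sup_distribution(2.0, s=1.0))   # belief degree that the supremum is <= 2
print(inf_distribution(-0.5, s=1.0))  # belief degree that the infimum is <= -0.5
```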


Theorem 12.17 (Liu [126]) Let $X_t$ be a sample-continuous independent increment process with uncertainty distribution $\Phi_t(x)$. If $f$ is a strictly increasing function, then the supremum

$$ \sup_{0 \le t \le s} f(X_t) \qquad (12.49) $$

has an uncertainty distribution

$$ \Psi(x) = \inf_{0 \le t \le s} \Phi_t(f^{-1}(x)); \qquad (12.50) $$

and the infimum

$$ \inf_{0 \le t \le s} f(X_t) \qquad (12.51) $$

has an uncertainty distribution

$$ \Psi(x) = \sup_{0 \le t \le s} \Phi_t(f^{-1}(x)). \qquad (12.52) $$

Proof: Since $f$ is a strictly increasing function, $f(X_t) \le x$ if and only if $X_t \le f^{-1}(x)$. It follows from the extreme value theorem that

$$ \Psi(x) = M\left\{ \sup_{0 \le t \le s} f(X_t) \le x \right\} = M\left\{ \sup_{0 \le t \le s} X_t \le f^{-1}(x) \right\} = \inf_{0 \le t \le s} \Phi_t(f^{-1}(x)). $$

Similarly, we have

$$ \Psi(x) = M\left\{ \inf_{0 \le t \le s} f(X_t) \le x \right\} = M\left\{ \inf_{0 \le t \le s} X_t \le f^{-1}(x) \right\} = \sup_{0 \le t \le s} \Phi_t(f^{-1}(x)). $$

The theorem is proved.


Exercise 12.1: Let $X_t$ be a sample-continuous independent increment process with uncertainty distribution $\Phi_t(x)$. Show that the supremum

$$ \sup_{0 \le t \le s} \exp(X_t) \qquad (12.53) $$

has an uncertainty distribution

$$ \Psi(x) = \inf_{0 \le t \le s} \Phi_t(\ln x); \qquad (12.54) $$

and the infimum

$$ \inf_{0 \le t \le s} \exp(X_t) \qquad (12.55) $$

has an uncertainty distribution

$$ \Psi(x) = \sup_{0 \le t \le s} \Phi_t(\ln x). \qquad (12.56) $$

Exercise 12.2: Let $X_t$ be a sample-continuous and positive independent increment process with uncertainty distribution $\Phi_t(x)$. Show that the supremum

$$ \sup_{0 \le t \le s} \ln X_t \qquad (12.57) $$

has an uncertainty distribution

$$ \Psi(x) = \inf_{0 \le t \le s} \Phi_t(\exp(x)); \qquad (12.58) $$

and the infimum

$$ \inf_{0 \le t \le s} \ln X_t \qquad (12.59) $$

has an uncertainty distribution

$$ \Psi(x) = \sup_{0 \le t \le s} \Phi_t(\exp(x)). \qquad (12.60) $$

Exercise 12.3: Let $X_t$ be a sample-continuous and nonnegative independent increment process with uncertainty distribution $\Phi_t(x)$. Show that the supremum

$$ \sup_{0 \le t \le s} X_t^2 \qquad (12.61) $$

has an uncertainty distribution

$$ \Psi(x) = \inf_{0 \le t \le s} \Phi_t(\sqrt{x}); \qquad (12.62) $$

and the infimum

$$ \inf_{0 \le t \le s} X_t^2 \qquad (12.63) $$

has an uncertainty distribution

$$ \Psi(x) = \sup_{0 \le t \le s} \Phi_t(\sqrt{x}). \qquad (12.64) $$

Theorem 12.18 (Liu [126]) Let $X_t$ be a sample-continuous independent increment process with continuous uncertainty distribution $\Phi_t(x)$. If $f$ is a strictly decreasing function, then the supremum

$$ \sup_{0 \le t \le s} f(X_t) \qquad (12.65) $$

has an uncertainty distribution

$$ \Psi(x) = 1 - \sup_{0 \le t \le s} \Phi_t(f^{-1}(x)); \qquad (12.66) $$

and the infimum

$$ \inf_{0 \le t \le s} f(X_t) \qquad (12.67) $$

has an uncertainty distribution

$$ \Psi(x) = 1 - \inf_{0 \le t \le s} \Phi_t(f^{-1}(x)). \qquad (12.68) $$

Proof: Since $f$ is a strictly decreasing function, $f(X_t) \le x$ if and only if $X_t \ge f^{-1}(x)$. It follows from the extreme value theorem that

$$ \Psi(x) = M\left\{ \sup_{0 \le t \le s} f(X_t) \le x \right\} = M\left\{ \inf_{0 \le t \le s} X_t \ge f^{-1}(x) \right\} = 1 - M\left\{ \inf_{0 \le t \le s} X_t < f^{-1}(x) \right\} = 1 - \sup_{0 \le t \le s} \Phi_t(f^{-1}(x)). $$

Similarly, we have

$$ \Psi(x) = M\left\{ \inf_{0 \le t \le s} f(X_t) \le x \right\} = M\left\{ \sup_{0 \le t \le s} X_t \ge f^{-1}(x) \right\} = 1 - M\left\{ \sup_{0 \le t \le s} X_t < f^{-1}(x) \right\} = 1 - \inf_{0 \le t \le s} \Phi_t(f^{-1}(x)). $$

The theorem is proved.


Exercise 12.4: Let $X_t$ be a sample-continuous independent increment process with continuous uncertainty distribution $\Phi_t(x)$. Show that the supremum

$$ \sup_{0 \le t \le s} \exp(-X_t) \qquad (12.69) $$

has an uncertainty distribution

$$ \Psi(x) = 1 - \sup_{0 \le t \le s} \Phi_t(-\ln x); \qquad (12.70) $$

and the infimum

$$ \inf_{0 \le t \le s} \exp(-X_t) \qquad (12.71) $$

has an uncertainty distribution

$$ \Psi(x) = 1 - \inf_{0 \le t \le s} \Phi_t(-\ln x). \qquad (12.72) $$

Exercise 12.5: Let $X_t$ be a sample-continuous and positive independent increment process with continuous uncertainty distribution $\Phi_t(x)$. Show that the supremum

$$ \sup_{0 \le t \le s} \frac{1}{X_t} \qquad (12.73) $$

has an uncertainty distribution

$$ \Psi(x) = 1 - \sup_{0 \le t \le s} \Phi_t\left(\frac{1}{x}\right); \qquad (12.74) $$

and the infimum

$$ \inf_{0 \le t \le s} \frac{1}{X_t} \qquad (12.75) $$

has an uncertainty distribution

$$ \Psi(x) = 1 - \inf_{0 \le t \le s} \Phi_t\left(\frac{1}{x}\right). \qquad (12.76) $$

12.7 First Hitting Time

Definition 12.11 Let $X_t$ be an uncertain process and let $z$ be a given level. Then the uncertain variable

$$ \tau_z = \inf\left\{ t \ge 0 \mid X_t = z \right\} \qquad (12.77) $$

is called the first hitting time that $X_t$ reaches the level $z$.


[Figure 12.5: First Hitting Time]


Theorem 12.19 Let $X_t$ be an uncertain process and let $z$ be a given level. Then the first hitting time $\tau_z$ that $X_t$ reaches the level $z$ has an uncertainty distribution

$$ \Upsilon(s) = \begin{cases} M\left\{ \displaystyle\sup_{0 \le t \le s} X_t \ge z \right\}, & \text{if } X_0 < z \\[2mm] M\left\{ \displaystyle\inf_{0 \le t \le s} X_t \le z \right\}, & \text{if } X_0 > z. \end{cases} \qquad (12.78) $$
Proof: When $X_0 < z$, it follows from the definition of first hitting time that

$$ \tau_z \le s \quad \text{if and only if} \quad \sup_{0 \le t \le s} X_t \ge z. $$

Thus the uncertainty distribution of $\tau_z$ is

$$ \Upsilon(s) = M\{\tau_z \le s\} = M\left\{ \sup_{0 \le t \le s} X_t \ge z \right\}. $$

When $X_0 > z$, it follows from the definition of first hitting time that

$$ \tau_z \le s \quad \text{if and only if} \quad \inf_{0 \le t \le s} X_t \le z. $$

Thus the uncertainty distribution of $\tau_z$ is

$$ \Upsilon(s) = M\{\tau_z \le s\} = M\left\{ \inf_{0 \le t \le s} X_t \le z \right\}. $$

The theorem is verified.


Theorem 12.20 (Liu [126]) Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution t (x). If f is a

273

Section 12.8 - Time Integral

strictly increasing function and z is a given level, then the first hitting time
z that f (Xt ) reaches the level z has an uncertainty distribution,

(s) =

inf t (f 1 (z)), if z > f (X0 )

1 0ts

sup t (f 1 (z)),

if z < f (X0 ).

(12.79)

0ts

Proof: Note that $X_t$ is a sample-continuous independent increment process and $f$ is a strictly increasing function. When $z > f(X_0)$, it follows from the extreme value theorem that

$$ \Upsilon(s) = M\{\tau_z \le s\} = M\left\{ \sup_{0 \le t \le s} f(X_t) \ge z \right\} = 1 - \inf_{0 \le t \le s} \Phi_t(f^{-1}(z)). $$

When $z < f(X_0)$, it follows from the extreme value theorem that

$$ \Upsilon(s) = M\{\tau_z \le s\} = M\left\{ \inf_{0 \le t \le s} f(X_t) \le z \right\} = \sup_{0 \le t \le s} \Phi_t(f^{-1}(z)). $$

The theorem is verified.
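As a small numerical companion, the following sketch evaluates (12.79) for the identity function $f(x) = x$ (so $f^{-1}(z) = z$) under the same illustrative normal distribution $\Phi_t$ used earlier; the parameters and the grid approximation of the infimum and supremum are assumptions of the example only.

```python
import numpy as np

def phi(x, t, e=1.0, sigma=2.0):
    # illustrative normal uncertainty distribution of X_t, with X_0 = 0
    if t == 0:
        return 1.0 if x >= 0 else 0.0
    return 1.0 / (1.0 + np.exp(np.pi * (e * t - x) / (np.sqrt(3) * sigma * t)))

def hitting_time_distribution(s, z, n=1000):
    # Upsilon(s) from (12.79) with f the identity, hence f(X_0) = X_0 = 0
    ts = np.linspace(0.0, s, n + 1)
    if z > 0:
        return 1.0 - min(phi(z, t) for t in ts)
    if z < 0:
        return max(phi(z, t) for t in ts)
    raise ValueError("z must differ from f(X_0)")

print(hitting_time_distribution(s=2.0, z=3.0))   # M{tau_3 <= 2}, approximately
```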


Theorem 12.21 (Liu [126]) Let $X_t$ be a sample-continuous independent increment process with continuous uncertainty distribution $\Phi_t(x)$. If $f$ is a strictly decreasing function and $z$ is a given level, then the first hitting time $\tau_z$ that $f(X_t)$ reaches the level $z$ has an uncertainty distribution

$$ \Upsilon(s) = \begin{cases} \displaystyle\sup_{0 \le t \le s} \Phi_t(f^{-1}(z)), & \text{if } z > f(X_0) \\[2mm] 1 - \displaystyle\inf_{0 \le t \le s} \Phi_t(f^{-1}(z)), & \text{if } z < f(X_0). \end{cases} \qquad (12.80) $$

Proof: Note that $X_t$ is an independent increment process and $f$ is a strictly decreasing function. When $z > f(X_0)$, it follows from the extreme value theorem that

$$ \Upsilon(s) = M\{\tau_z \le s\} = M\left\{ \sup_{0 \le t \le s} f(X_t) \ge z \right\} = \sup_{0 \le t \le s} \Phi_t(f^{-1}(z)). $$

When $z < f(X_0)$, it follows from the extreme value theorem that

$$ \Upsilon(s) = M\{\tau_z \le s\} = M\left\{ \inf_{0 \le t \le s} f(X_t) \le z \right\} = 1 - \inf_{0 \le t \le s} \Phi_t(f^{-1}(z)). $$

The theorem is verified.

12.8 Time Integral

This section will give a definition of time integral, which is the integral of an uncertain process with respect to time.
Definition 12.12 (Liu [114]) Let $X_t$ be an uncertain process. For any partition of closed interval $[a, b]$ with $a = t_1 < t_2 < \cdots < t_{k+1} = b$, the mesh is written as

$$ \Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|. \qquad (12.81) $$

Then the time integral of $X_t$ with respect to $t$ is

$$ \int_a^b X_t \, \mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (t_{i+1} - t_i) \qquad (12.82) $$

provided that the limit exists almost surely and is finite. In this case, the uncertain process $X_t$ is said to be time integrable.
Since Xt is an uncertain variable at each time t, the limit in (12.82) is
also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is time integrable if and only if the
limit in (12.82) is an uncertain variable.
Theorem 12.22 If Xt is a sample-continuous uncertain process on [a, b],
then it is time integrable on [a, b].
Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a, b]$. Since the uncertain process $X_t$ is sample-continuous, almost all sample paths are continuous functions with respect to $t$. Hence the limit

$$ \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (t_{i+1} - t_i) $$

exists almost surely and is finite. On the other hand, since $X_t$ is an uncertain variable at each time $t$, the above limit is also a measurable function. Hence the limit is an uncertain variable and then $X_t$ is time integrable.
Theorem 12.23 If $X_t$ is a time integrable uncertain process on $[a, b]$, then it is time integrable on each subinterval of $[a, b]$. Moreover, if $c \in [a, b]$, then

$$ \int_a^b X_t \, \mathrm{d}t = \int_a^c X_t \, \mathrm{d}t + \int_c^b X_t \, \mathrm{d}t. \qquad (12.83) $$

Proof: Let $[a', b']$ be a subinterval of $[a, b]$. Since $X_t$ is a time integrable uncertain process on $[a, b]$, for any partition

$$ a = t_1 < \cdots < t_m = a' < t_{m+1} < \cdots < t_n = b' < t_{n+1} < \cdots < t_{k+1} = b, $$

the limit

$$ \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i) $$

exists almost surely and is finite. Thus the limit

$$ \lim_{\Delta \to 0} \sum_{i=m}^{n-1} X_{t_i}(t_{i+1} - t_i) $$

exists almost surely and is finite. Hence $X_t$ is time integrable on the subinterval $[a', b']$. Next, for the partition

$$ a = t_1 < \cdots < t_m = c < t_{m+1} < \cdots < t_{k+1} = b, $$

we have

$$ \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i) = \sum_{i=1}^{m-1} X_{t_i}(t_{i+1} - t_i) + \sum_{i=m}^{k} X_{t_i}(t_{i+1} - t_i). $$

Note that

$$ \int_a^b X_t \, \mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i), \quad \int_a^c X_t \, \mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{m-1} X_{t_i}(t_{i+1} - t_i), \quad \int_c^b X_t \, \mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=m}^{k} X_{t_i}(t_{i+1} - t_i). $$

Hence the equation (12.83) is proved.


Theorem 12.24 (Linearity of Time Integral) Let $X_t$ and $Y_t$ be time integrable uncertain processes on $[a, b]$, and let $\alpha$ and $\beta$ be real numbers. Then

$$ \int_a^b (\alpha X_t + \beta Y_t) \, \mathrm{d}t = \alpha \int_a^b X_t \, \mathrm{d}t + \beta \int_a^b Y_t \, \mathrm{d}t. \qquad (12.84) $$

Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a, b]$. It follows from the definition of time integral that

$$ \int_a^b (\alpha X_t + \beta Y_t) \mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{k} (\alpha X_{t_i} + \beta Y_{t_i})(t_{i+1} - t_i) = \alpha \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i) + \beta \lim_{\Delta \to 0} \sum_{i=1}^{k} Y_{t_i}(t_{i+1} - t_i) = \alpha \int_a^b X_t \mathrm{d}t + \beta \int_a^b Y_t \mathrm{d}t. $$

Hence the equation (12.84) is proved.

Theorem 12.25 (Yao [230]) Let $X_t$ be a sample-continuous independent increment process with inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. Then the time integral

$$ Y_s = \int_0^s X_t \, \mathrm{d}t \qquad (12.85) $$

has an inverse uncertainty distribution

$$ \Psi_s^{-1}(\alpha) = \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t. \qquad (12.86) $$

Example 12.21: Let $X_t$ be a stationary independent increment process with an inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. Then there exist two strictly increasing functions $\mu(\alpha)$ and $\nu(\alpha)$ such that $\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha)t$. It follows from Theorem 12.25 that the time integral of $X_t$ on $[0, s]$ has an inverse uncertainty distribution

$$ \Psi_s^{-1}(\alpha) = \int_0^s (\mu(\alpha) + \nu(\alpha)t) \, \mathrm{d}t = \mu(\alpha)s + \frac{1}{2}\nu(\alpha)s^2. $$
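The next sketch checks (12.86) numerically against the closed form of Example 12.21, taking $\mu(\alpha) = 0$ and $\nu(\alpha) = e + (\sigma\sqrt{3}/\pi)\ln(\alpha/(1-\alpha))$ as an illustrative stationary family; all numerical values are assumptions for the demonstration.

```python
import numpy as np

E, SIGMA = 1.0, 2.0   # illustrative drift and diffusion

def inv_phi(alpha, t):
    # inverse uncertainty distribution Phi_t^{-1}(alpha) = nu(alpha) * t
    return (E + SIGMA * np.sqrt(3) / np.pi * np.log(alpha / (1 - alpha))) * t

def inv_psi(alpha, s, n=10000):
    # Psi_s^{-1}(alpha) = integral_0^s Phi_t^{-1}(alpha) dt via trapezoidal rule
    ts = np.linspace(0.0, s, n + 1)
    vals = inv_phi(alpha, ts)
    return float(np.sum((vals[1:] + vals[:-1]) * (ts[1] - ts[0])) / 2.0)

alpha, s = 0.8, 2.0
nu = E + SIGMA * np.sqrt(3) / np.pi * np.log(alpha / (1 - alpha))
print(inv_psi(alpha, s), nu * s**2 / 2)   # numerical value vs closed form
```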

12.9 Bibliographic Notes

The study of uncertain process was started by Liu [114] in 2008 for modeling
the evolution of uncertain phenomena. Uncertainty distribution is an important concept for describing uncertain process, and a sufficient and necessary
condition for it was proved by Liu [133]. In addition, independence concept
of uncertain processes was also discussed by Liu [133].
Independent increment process was initialized by Liu [114], and a sufficient and necessary condition was proved by Liu [133] for its inverse uncertainty distribution. In addition, Liu [126] presented an extreme value
theorem and obtained the uncertainty distribution of first hitting time, and
Yao [230] provided a formula for calculating the uncertainty distribution of
time integral of independent increment process.
Stationary independent increment process was initialized by Liu [114], and
a sufficient and necessary condition was proved by Liu [133] for its inverse
uncertainty distribution. Furthermore, Liu [120] showed that the expected
value is a linear function of time, and Chen [14] verified that the variance is
proportional to the square of time.

Chapter 13

Uncertain Renewal Process

As an important type of uncertain process, an uncertain renewal process is an uncertain process in which events occur continuously and independently of one another at uncertain times. This chapter will introduce uncertain renewal process, delayed renewal process, renewal reward process, and alternating renewal process. This chapter will also provide an uncertain insurance model.

13.1 Uncertain Renewal Process

Definition 13.1 (Liu [114]) Let $\xi_1, \xi_2, \cdots$ be iid positive uncertain interarrival times. Define $S_0 = 0$ and $S_n = \xi_1 + \xi_2 + \cdots + \xi_n$ for $n \ge 1$. Then the uncertain process

$$ N_t = \max_{n \ge 0} \left\{ n \mid S_n \le t \right\} \qquad (13.1) $$

is called an uncertain renewal process.


If $\xi_1, \xi_2, \cdots$ denote the interarrival times of successive events, then $S_n$ is a stationary independent increment process with respect to $n$, and can be regarded as the waiting time until the occurrence of the $n$th event. In this case, the renewal process $N_t$ is the number of renewals in $(0, t]$. Note that $N_t$ is not sample-continuous, but each sample path of $N_t$ is a right-continuous and increasing step function taking only nonnegative integer values. Furthermore, the size of each jump of $N_t$ is always 1. In other words, $N_t$ has at most one renewal at each time. In particular, $N_t$ does not jump at time 0.
Theorem 13.1 (Fundamental Relationship) Let $N_t$ be a renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$, and $S_n = \xi_1 + \xi_2 + \cdots + \xi_n$. Then we have

$$ N_t \ge n \quad \Longleftrightarrow \quad S_n \le t \qquad (13.2) $$


[Figure 13.1: A Sample Path of Renewal Process]


for any time $t$ and integer $n$. Furthermore, we also have

$$ N_t \le n \quad \Longleftrightarrow \quad S_{n+1} > t. \qquad (13.3) $$

It follows from the fundamental relationship that $N_t \ge n$ is equivalent to $S_n \le t$. Thus we immediately have

$$ M\{N_t \ge n\} = M\{S_n \le t\}. \qquad (13.4) $$

Since $N_t \le n$ is equivalent to $S_{n+1} > t$, by using the duality axiom, we also have

$$ M\{N_t \le n\} = 1 - M\{S_{n+1} \le t\}. \qquad (13.5) $$
Theorem 13.2 (Liu [120]) Let $N_t$ be a renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ If those interarrival times have a common uncertainty distribution $\Phi$, then $N_t$ has an uncertainty distribution

$$ \Upsilon_t(x) = 1 - \Phi\left( \frac{t}{\lfloor x \rfloor + 1} \right), \quad x \ge 0 \qquad (13.6) $$

where $\lfloor x \rfloor$ represents the maximal integer less than or equal to $x$.

Proof: Note that $S_{n+1}$ has an uncertainty distribution $\Phi(x/(n+1))$. It follows from (13.5) that

$$ M\{N_t \le n\} = 1 - M\{S_{n+1} \le t\} = 1 - \Phi\left( \frac{t}{n+1} \right). $$

Since $N_t$ takes integer values, for any $x \ge 0$, we have

$$ \Upsilon_t(x) = M\{N_t \le x\} = M\{N_t \le \lfloor x \rfloor\} = 1 - \Phi\left( \frac{t}{\lfloor x \rfloor + 1} \right). $$

The theorem is verified.
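For a quick feel of (13.6), the sketch below tabulates the uncertainty distribution of $N_t$ for linear interarrival times; the distribution $\mathcal{L}(1, 3)$ and the time $t = 5$ are illustrative assumptions.

```python
import math

def renewal_distribution(x, t, phi):
    # Upsilon_t(x) = 1 - Phi(t / (floor(x) + 1)) from (13.6), for x >= 0
    return 1.0 - phi(t / (math.floor(x) + 1))

# linear uncertainty distribution L(1, 3)
phi = lambda x: min(max((x - 1.0) / 2.0, 0.0), 1.0)

for n in range(6):
    print(n, renewal_distribution(n, t=5.0, phi=phi))   # M{N_5 <= n}
```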

[Figure 13.2: Uncertainty Distribution $\Upsilon_t(x)$ of Renewal Process $N_t$]


Theorem 13.3 (Liu [120]) Let $N_t$ be a renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ Then the average renewal number

$$ \frac{N_t}{t} \to \frac{1}{\xi_1} \qquad (13.7) $$

in the sense of convergence in distribution as $t \to \infty$.

Proof: The uncertainty distribution $\Upsilon_t$ of $N_t$ has been given by Theorem 13.2 as follows,

$$ \Upsilon_t(x) = 1 - \Phi\left( \frac{t}{\lfloor x \rfloor + 1} \right) $$

where $\Phi$ is the uncertainty distribution of $\xi_1$. It follows from the operational law that the uncertainty distribution of $N_t/t$ is

$$ \Psi_t(x) = 1 - \Phi\left( \frac{t}{\lfloor tx \rfloor + 1} \right) $$

where $\lfloor tx \rfloor$ represents the maximal integer less than or equal to $tx$. Thus

$$ \lim_{t \to \infty} \Psi_t(x) = 1 - \Phi\left( \frac{1}{x} \right) $$

which is just the uncertainty distribution of $1/\xi_1$. Hence $N_t/t$ converges in distribution to $1/\xi_1$ as $t \to \infty$.
Theorem 13.4 (Liu [120], Elementary Renewal Theorem) Let $N_t$ be a renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ If $E[1/\xi_1]$ exists, then

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = E\left[ \frac{1}{\xi_1} \right]. \qquad (13.8) $$

If those interarrival times have a common uncertainty distribution $\Phi$, then

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = \int_0^{+\infty} \Phi\left( \frac{1}{x} \right) \mathrm{d}x. \qquad (13.9) $$

If the uncertainty distribution $\Phi$ is regular, then

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = \int_0^1 \frac{1}{\Phi^{-1}(\alpha)} \, \mathrm{d}\alpha. \qquad (13.10) $$

Proof: It follows from Theorem 13.2 that $N_t/t$ has an uncertainty distribution

$$ \Psi_t(x) = 1 - \Phi\left( \frac{t}{\lfloor tx \rfloor + 1} \right) $$

and $1/\xi_1$ has an uncertainty distribution

$$ G(x) = 1 - \Phi\left( \frac{1}{x} \right). $$

Note that $\Psi_t(x) \to G(x)$ and $\Psi_t(x) \ge G(x)$. It follows from the Lebesgue dominated convergence theorem and the existence of $E[1/\xi_1]$ that

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = \lim_{t \to \infty} \int_0^{+\infty} (1 - \Psi_t(x)) \, \mathrm{d}x = \int_0^{+\infty} (1 - G(x)) \, \mathrm{d}x = E\left[ \frac{1}{\xi_1} \right]. $$

Furthermore, since $1/\xi_1$ has an inverse uncertainty distribution $1/\Phi^{-1}(1-\alpha)$, we get

$$ E\left[ \frac{1}{\xi_1} \right] = \int_0^1 \frac{1}{\Phi^{-1}(1-\alpha)} \, \mathrm{d}\alpha = \int_0^1 \frac{1}{\Phi^{-1}(\alpha)} \, \mathrm{d}\alpha. $$

The theorem is proved.
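The following sketch evaluates the right-hand side of (13.10) by a trapezoidal rule for linear interarrival times $\mathcal{L}(a, b)$ and compares it with the closed form $(\ln b - \ln a)/(b - a)$ derived in Exercise 13.1 below; $a = 1$ and $b = 3$ are illustrative values.

```python
import numpy as np

a, b = 1.0, 3.0                                  # illustrative L(a, b), a > 0
inv_phi = lambda alpha: a + (b - a) * alpha      # inverse uncertainty distribution

alphas = np.linspace(0.0, 1.0, 100001)
vals = 1.0 / inv_phi(alphas)
numeric = float(np.sum((vals[1:] + vals[:-1]) * (alphas[1] - alphas[0])) / 2.0)

closed = (np.log(b) - np.log(a)) / (b - a)       # Exercise 13.1 below
print(numeric, closed)                           # the two values should agree
```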
Exercise 13.1: A renewal process $N_t$ is called linear if $\xi_1, \xi_2, \cdots$ are iid linear uncertain variables $\mathcal{L}(a, b)$ with $a > 0$. Show that

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = \frac{\ln b - \ln a}{b - a}. \qquad (13.11) $$
Exercise 13.2: A renewal process $N_t$ is called zigzag if $\xi_1, \xi_2, \cdots$ are iid zigzag uncertain variables $\mathcal{Z}(a, b, c)$ with $a > 0$. Show that

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = \frac{1}{2}\left( \frac{\ln b - \ln a}{b - a} + \frac{\ln c - \ln b}{c - b} \right). \qquad (13.12) $$
Exercise 13.3: A renewal process $N_t$ is called lognormal if $\xi_1, \xi_2, \cdots$ are iid lognormal uncertain variables $\mathcal{LOGN}(e, \sigma)$. Show that

$$ \lim_{t \to \infty} \frac{E[N_t]}{t} = \begin{cases} \sqrt{3}\,\sigma \exp(-e) \csc(\sqrt{3}\,\sigma), & \text{if } \sigma < \pi/\sqrt{3} \\ +\infty, & \text{if } \sigma \ge \pi/\sqrt{3}. \end{cases} \qquad (13.13) $$


Example 13.1: (Yao [225]) Block replacement policy means that an element is always replaced at failure or periodically at time $s$. Assume that the lifetimes of the elements are iid uncertain variables $\xi_1, \xi_2, \cdots$ with a common uncertainty distribution $\Phi$. Then the replacement times before the given time $s$ form an uncertain renewal process $N_t$. Let $a$ denote the failure replacement cost of replacing an element when it fails earlier than $s$, and $b$ the planned replacement cost of replacing an element at the planned time $s$. It is clear that the cost of one period is $aN_s + b$ and the average cost is

$$ \frac{aN_s + b}{s}. \qquad (13.14) $$

In addition, it follows from Theorem 13.2 that

$$ E[N_s] = \sum_{n=1}^{\infty} \Phi\left( \frac{s}{n} \right) $$

and then

$$ E\left[ \frac{aN_s + b}{s} \right] = \frac{1}{s}\left( a \sum_{n=1}^{\infty} \Phi\left( \frac{s}{n} \right) + b \right). \qquad (13.15) $$

When the block replacement policy is accepted, one problem is concerned with finding an optimal time $s$ in order to minimize the average cost, i.e.,

$$ \min_{s > 0} \; \frac{1}{s}\left( a \sum_{n=1}^{\infty} \Phi\left( \frac{s}{n} \right) + b \right). \qquad (13.16) $$
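A direct way to use (13.16) is to scan a grid of candidate periods $s$. The sketch below does this for illustrative lifetimes $\mathcal{L}(1, 3)$ with failure cost $a = 5$ and planned cost $b = 1$; the grid and the truncation of the infinite series are assumptions of the demonstration (the series terminates once $s/n$ falls below the left endpoint of the lifetime distribution).

```python
import numpy as np

a_cost, b_cost = 5.0, 1.0                               # illustrative costs
phi = lambda x: np.clip((x - 1.0) / 2.0, 0.0, 1.0)      # lifetimes L(1, 3)

def average_cost(s, n_max=1000):
    # objective of (13.16): (a * sum_n Phi(s/n) + b) / s
    n = np.arange(1, n_max + 1)
    return (a_cost * float(np.sum(phi(s / n))) + b_cost) / s

grid = np.linspace(0.5, 3.0, 251)
costs = [average_cost(s) for s in grid]
print("approximately optimal s:", grid[int(np.argmin(costs))])
```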

13.2 Delayed Renewal Process

A delayed renewal process is a generalized renewal process in which the first interarrival time is allowed to have a different uncertainty distribution from the remaining ones.

Definition 13.2 (Zhang, Ning and Meng [240]) Let $\xi_1, \xi_2, \cdots$ be a sequence of independent positive uncertain interarrival times. Assume $\xi_2, \xi_3, \cdots$ are identically distributed but $\xi_1$ is allowed to have a different uncertainty distribution. Define $S_0 = 0$ and $S_n = \xi_1 + \xi_2 + \cdots + \xi_n$ for $n \ge 1$. Then the uncertain process

$$ D_t = \max_{n \ge 0} \left\{ n \mid S_n \le t \right\} \qquad (13.17) $$

is called a delayed renewal process.


Note that if the first interarrival time $\xi_1$ has the same uncertainty distribution as the others, then $D_t$ is identical with an uncertain renewal process.


Theorem 13.5 (Zhang, Ning and Meng [240]) Let $D_t$ be a delayed renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ If $\xi_1$ has an uncertainty distribution $\Psi$ and $\xi_2, \xi_3, \cdots$ have a common uncertainty distribution $\Phi$, then $D_t$ has an uncertainty distribution

$$ \Upsilon_t(x) = 1 - \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{\lfloor x \rfloor} \right), \quad x \ge 0 \qquad (13.18) $$

where $\lfloor x \rfloor$ represents the maximal integer less than or equal to $x$. Here we set $(t-s)/\lfloor x \rfloor = +\infty$ and $\Phi((t-s)/\lfloor x \rfloor) = 1$ when $\lfloor x \rfloor = 0$.

Proof: It follows from the definition of uncertain delayed renewal process that the uncertainty distribution of $D_t$ meets

$$ \Upsilon_t(n) = M\{D_t \le n\} = 1 - M\{S_{n+1} \le t\} = 1 - M\{\xi_1 + (\xi_2 + \cdots + \xi_{n+1}) \le t\} $$

for any nonnegative integer $n$. By using the independence of uncertain interarrival times, we have

$$ \Upsilon_t(n) = 1 - \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{n} \right). $$

Since an uncertain delayed renewal process can only take integer values, we have

$$ \Upsilon_t(x) = \Upsilon_t(\lfloor x \rfloor) = 1 - \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{\lfloor x \rfloor} \right). $$

The theorem is verified.
Theorem 13.6 (Zhang, Ning and Meng [240]) Let $D_t$ be a delayed renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ Then the average renewal number

$$ \frac{D_t}{t} \to \frac{1}{\xi_2} \qquad (13.19) $$

in the sense of convergence in distribution as $t \to \infty$.
Proof: It follows from the equation (13.18) that $D_t/t$ has an uncertainty distribution

$$ F_t(x) = M\{D_t \le tx\} = 1 - \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{\lfloor tx \rfloor} \right). $$

It is easy to verify that

$$ \lim_{t \to \infty} F_t(x) = 1 - \Phi\left( \frac{1}{x} \right). $$

On the other hand, the uncertain variable $1/\xi_2$ has an uncertainty distribution

$$ G(x) = M\left\{ \frac{1}{\xi_2} \le x \right\} = M\left\{ \xi_2 \ge \frac{1}{x} \right\} = 1 - \Phi\left( \frac{1}{x} \right) $$

at every continuous point $x$ of $\Phi$. Thus $D_t/t$ converges in distribution to $1/\xi_2$ as $t \to \infty$.
Theorem 13.7 (Zhang, Ning and Meng [240]) Let $D_t$ be a delayed renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ If $E[1/\xi_2]$ exists, then

$$ \lim_{t \to \infty} \frac{E[D_t]}{t} = E\left[ \frac{1}{\xi_2} \right]. \qquad (13.20) $$

Proof: Let $\Psi$ and $\Phi$ denote the uncertainty distributions of $\xi_1$ and $\xi_2$, respectively. Since $E[1/\xi_2]$ exists, we immediately have

$$ \int_0^{+\infty} \Phi\left( \frac{2}{x} \right) \mathrm{d}x < +\infty. $$

For any time $t \ge 1$, it is easy to verify that

$$ \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{\lfloor tx \rfloor} \right) \le \begin{cases} 1, & \text{if } 0 \le x \le 1 \\ \Phi(2/x), & \text{if } 1 \le x < \infty. \end{cases} $$

That is, the above sequence of functions indexed by $t$ is dominated by an integrable function of $x$. Note that

$$ \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{\lfloor tx \rfloor} \right) \to \Phi\left( \frac{1}{x} \right) $$

as $t \to \infty$. It follows from the Lebesgue dominated convergence theorem that

$$ \lim_{t \to \infty} \frac{E[D_t]}{t} = \lim_{t \to \infty} \int_0^{+\infty} \sup_{0 \le s \le t} \Psi(s) \wedge \Phi\left( \frac{t-s}{\lfloor tx \rfloor} \right) \mathrm{d}x = \int_0^{+\infty} \Phi\left( \frac{1}{x} \right) \mathrm{d}x = E\left[ \frac{1}{\xi_2} \right]. $$

The theorem is proved.

13.3 Renewal Reward Process

Let $(\xi_1, \eta_1), (\xi_2, \eta_2), \cdots$ be a sequence of pairs of uncertain variables. We shall interpret $\eta_i$ as the rewards (or costs) associated with the $i$-th interarrival times $\xi_i$ for $i = 1, 2, \cdots$, respectively.
Definition 13.3 (Liu [120]) Let $\xi_1, \xi_2, \cdots$ be iid uncertain interarrival times, and let $\eta_1, \eta_2, \cdots$ be iid uncertain rewards. Assume that $(\xi_1, \xi_2, \cdots)$ and $(\eta_1, \eta_2, \cdots)$ are independent uncertain vectors. Then

$$ R_t = \sum_{i=1}^{N_t} \eta_i \qquad (13.21) $$

is called a renewal reward process, where $N_t$ is the renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$

A renewal reward process $R_t$ denotes the total reward earned by time $t$. In addition, if $\eta_i \equiv 1$, then $R_t$ degenerates to a renewal process $N_t$. Please also note that $R_t = 0$ whenever $N_t = 0$.
Theorem 13.8 (Liu [120]) Let $R_t$ be a renewal reward process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ and uncertain rewards $\eta_1, \eta_2, \cdots$ Assume those interarrival times and rewards have uncertainty distributions $\Phi$ and $\Psi$, respectively. Then $R_t$ has an uncertainty distribution

$$ \Upsilon_t(x) = \max_{k \ge 0} \left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{x}{k} \right). \qquad (13.22) $$

Here we set $x/k = +\infty$ and $\Psi(x/k) = 1$ when $k = 0$.
Proof: It follows from the definition of renewal reward process that the renewal process $N_t$ is independent of the uncertain rewards $\eta_1, \eta_2, \cdots$, and $R_t$ has an uncertainty distribution

$$ \Upsilon_t(x) = M\left\{ \sum_{i=1}^{N_t} \eta_i \le x \right\} = M\left\{ \bigcup_{k=0}^{\infty} (N_t = k) \cap \left( \sum_{i=1}^{k} \eta_i \le x \right) \right\} \quad \text{(this is a polyrectangle)} $$

$$ = \max_{k \ge 0} M\left\{ (N_t \le k) \cap \left( \sum_{i=1}^{k} \eta_i \le x \right) \right\} \quad \text{(polyrectangular theorem)} $$

$$ = \max_{k \ge 0} M\{N_t \le k\} \wedge M\left\{ \sum_{i=1}^{k} \eta_i \le x \right\} \quad \text{(independence)} $$

$$ = \max_{k \ge 0} \left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{x}{k} \right). $$

The theorem is proved.
Theorem 13.9 (Liu [120]) Assume that $R_t$ is a renewal reward process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ and uncertain rewards $\eta_1, \eta_2, \cdots$ Then the reward rate

$$ \frac{R_t}{t} \to \frac{\eta_1}{\xi_1} \qquad (13.23) $$

in the sense of convergence in distribution as $t \to \infty$.
Proof: It follows from Theorem 13.8 that the uncertainty distribution of $R_t$ is

$$ \Upsilon_t(x) = \max_{k \ge 0} \left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{x}{k} \right). $$

[Figure 13.3: Uncertainty Distribution $\Upsilon_t(x)$ of Renewal Reward Process $R_t$, in which the dashed horizontal lines are $1 - \Phi(t/(k+1))$ and the dashed curves are $\Psi(x/k)$ for $k = 0, 1, 2, \cdots$]

Then $R_t/t$ has an uncertainty distribution

$$ F_t(x) = \max_{k \ge 0} \left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{tx}{k} \right). $$

When $t \to \infty$, we have

$$ F_t(x) \to \sup_{y \ge 0} (1 - \Phi(y)) \wedge \Psi(xy) $$

which is just the uncertainty distribution of $\eta_1/\xi_1$. Hence $R_t/t$ converges in distribution to $\eta_1/\xi_1$ as $t \to \infty$.
Theorem 13.10 (Liu [120], Renewal Reward Theorem) Assume that $R_t$ is a renewal reward process with uncertain interarrival times $\xi_1, \xi_2, \cdots$ and uncertain rewards $\eta_1, \eta_2, \cdots$ If $E[\eta_1/\xi_1]$ exists, then

$$ \lim_{t \to \infty} \frac{E[R_t]}{t} = E\left[ \frac{\eta_1}{\xi_1} \right]. \qquad (13.24) $$

If those interarrival times and rewards have regular uncertainty distributions $\Phi$ and $\Psi$, respectively, then

$$ \lim_{t \to \infty} \frac{E[R_t]}{t} = \int_0^1 \frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1-\alpha)} \, \mathrm{d}\alpha. \qquad (13.25) $$

Proof: It follows from Theorem 13.8 that $R_t/t$ has an uncertainty distribution

$$ F_t(x) = \max_{k \ge 0} \left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{tx}{k} \right) $$

and $\eta_1/\xi_1$ has an uncertainty distribution

$$ G(x) = \sup_{y \ge 0} (1 - \Phi(y)) \wedge \Psi(xy). $$

Note that $F_t(x) \to G(x)$ and $F_t(x) \ge G(x)$. It follows from the Lebesgue dominated convergence theorem and the existence of $E[\eta_1/\xi_1]$ that

$$ \lim_{t \to \infty} \frac{E[R_t]}{t} = \lim_{t \to \infty} \int_0^{+\infty} (1 - F_t(x)) \, \mathrm{d}x = \int_0^{+\infty} (1 - G(x)) \, \mathrm{d}x = E\left[ \frac{\eta_1}{\xi_1} \right]. $$

Finally, since $\eta_1/\xi_1$ has an inverse uncertainty distribution $\Psi^{-1}(\alpha)/\Phi^{-1}(1-\alpha)$, the equation (13.25) is verified.
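To evaluate (13.25) in practice one just needs the two inverse distributions. The sketch below does so by the trapezoidal rule for illustrative linear interarrival times $\mathcal{L}(1, 3)$ and rewards $\mathcal{L}(2, 5)$, both of which have bounded inverse distributions, so no special care is needed at the endpoints.

```python
import numpy as np

inv_phi = lambda alpha: 1.0 + 2.0 * alpha   # interarrival times L(1, 3)
inv_psi = lambda alpha: 2.0 + 3.0 * alpha   # rewards L(2, 5)

# right-hand side of (13.25): integral of Psi^{-1}(alpha) / Phi^{-1}(1 - alpha)
alphas = np.linspace(0.0, 1.0, 100001)
vals = inv_psi(alphas) / inv_phi(1.0 - alphas)
rate = float(np.sum((vals[1:] + vals[:-1]) * (alphas[1] - alphas[0])) / 2.0)
print("long-run reward rate:", rate)
```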

13.4 Alternating Renewal Process

Let $(\xi_1, \eta_1), (\xi_2, \eta_2), \cdots$ be a sequence of pairs of uncertain variables. We shall interpret $\xi_i$ as the on-times and $\eta_i$ as the off-times for $i = 1, 2, \cdots$, respectively. In this case, the $i$-th cycle consists of an on-time $\xi_i$ followed by an off-time $\eta_i$.
Definition 13.4 (Yao and Li [218]) Let $\xi_1, \xi_2, \cdots$ be iid uncertain on-times, and let $\eta_1, \eta_2, \cdots$ be iid uncertain off-times. Assume that $(\xi_1, \xi_2, \cdots)$ and $(\eta_1, \eta_2, \cdots)$ are independent uncertain vectors. Then

$$ A_t = \begin{cases} t - \displaystyle\sum_{i=1}^{N_t} \eta_i, & \text{if } \displaystyle\sum_{i=1}^{N_t} (\xi_i + \eta_i) \le t < \sum_{i=1}^{N_t} (\xi_i + \eta_i) + \xi_{N_t+1} \\[3mm] \displaystyle\sum_{i=1}^{N_t+1} \xi_i, & \text{if } \displaystyle\sum_{i=1}^{N_t} (\xi_i + \eta_i) + \xi_{N_t+1} \le t < \sum_{i=1}^{N_t+1} (\xi_i + \eta_i) \end{cases} \qquad (13.26) $$

is called an alternating renewal process, where $N_t$ is the renewal process with uncertain interarrival times $\xi_1 + \eta_1, \xi_2 + \eta_2, \cdots$
Note that the alternating renewal process $A_t$ is just the total time at which the system is on up to time $t$. It is clear that

$$ \sum_{i=1}^{N_t} \xi_i \le A_t \le \sum_{i=1}^{N_t+1} \xi_i \qquad (13.27) $$

for each time $t$. We are interested in the limit property of the rate at which the system is on, i.e., $A_t/t$.
Theorem 13.11 (Yao and Li [218]) Assume that $A_t$ is an alternating renewal process with uncertain on-times $\xi_1, \xi_2, \cdots$ and uncertain off-times $\eta_1, \eta_2, \cdots$ Then the availability rate

$$ \frac{A_t}{t} \to \frac{\xi_1}{\xi_1 + \eta_1} \qquad (13.28) $$

in the sense of convergence in distribution as $t \to \infty$.

Proof: Write the uncertainty distributions of $\xi_1$ and $\eta_1$ by $\Phi$ and $\Psi$, respectively. Then the uncertainty distribution of $\xi_1/(\xi_1 + \eta_1)$ is

$$ \Upsilon(x) = \sup_{y \ge 0} \Phi(xy) \wedge (1 - \Psi(y - xy)). $$

On the other hand, we may prove

$$ \lim_{t \to \infty} M\left\{ \frac{1}{t} \sum_{i=1}^{N_t} \xi_i \le x \right\} = \lim_{t \to \infty} M\left\{ \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i \le x \right\} = \Upsilon(x). $$

It follows from (13.27) that $A_t/t$ converges in distribution to $\xi_1/(\xi_1 + \eta_1)$.
Theorem 13.12 (Yao and Li [218], Alternating Renewal Theorem) Assume that $A_t$ is an alternating renewal process with uncertain on-times $\xi_1, \xi_2, \cdots$ and uncertain off-times $\eta_1, \eta_2, \cdots$ If $E[\xi_1/(\xi_1 + \eta_1)]$ exists, then

$$ \lim_{t \to \infty} \frac{E[A_t]}{t} = E\left[ \frac{\xi_1}{\xi_1 + \eta_1} \right]. \qquad (13.29) $$

If those on-times and off-times have regular uncertainty distributions $\Phi$ and $\Psi$, respectively, then

$$ \lim_{t \to \infty} \frac{E[A_t]}{t} = \int_0^1 \frac{\Phi^{-1}(\alpha)}{\Phi^{-1}(\alpha) + \Psi^{-1}(1-\alpha)} \, \mathrm{d}\alpha. \qquad (13.30) $$

Proof: Write the uncertainty distributions of $A_t/t$ and $\xi_1/(\xi_1 + \eta_1)$ by $F_t(x)$ and $G(x)$, respectively. Since $A_t/t$ converges in distribution to $\xi_1/(\xi_1 + \eta_1)$, we have $F_t(x) \to G(x)$ as $t \to \infty$. It follows from the Lebesgue dominated convergence theorem that

$$ \lim_{t \to \infty} \frac{E[A_t]}{t} = \lim_{t \to \infty} \int_0^1 (1 - F_t(x)) \, \mathrm{d}x = \int_0^1 (1 - G(x)) \, \mathrm{d}x = E\left[ \frac{\xi_1}{\xi_1 + \eta_1} \right]. $$

Finally, since the uncertain variable $\xi_1/(\xi_1 + \eta_1)$ is strictly increasing with respect to $\xi_1$ and strictly decreasing with respect to $\eta_1$, it has an inverse uncertainty distribution $\Phi^{-1}(\alpha)/(\Phi^{-1}(\alpha) + \Psi^{-1}(1-\alpha))$. The equation (13.30) is thus verified.
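The availability rate (13.30) can be computed the same way as the reward rate; the on-time and off-time distributions below are illustrative linear choices.

```python
import numpy as np

inv_phi = lambda alpha: 1.0 + 2.0 * alpha   # on-times  L(1, 3)
inv_psi = lambda alpha: 0.5 + 1.0 * alpha   # off-times L(0.5, 1.5)

# right-hand side of (13.30)
alphas = np.linspace(0.0, 1.0, 100001)
on = inv_phi(alphas)
vals = on / (on + inv_psi(1.0 - alphas))
avail = float(np.sum((vals[1:] + vals[:-1]) * (alphas[1] - alphas[0])) / 2.0)
print("long-run availability rate:", avail)
```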

13.5 Uncertain Insurance Model

Liu [126] assumed that $a$ is the initial capital of an insurance company, $b$ is the premium rate, $bt$ is the total income up to time $t$, and the uncertain claim process is a renewal reward process

$$ R_t = \sum_{i=1}^{N_t} \eta_i \qquad (13.31) $$

with iid uncertain interarrival times $\xi_1, \xi_2, \cdots$ and iid uncertain claim amounts $\eta_1, \eta_2, \cdots$ Then the capital of the insurance company at time $t$ is

$$ Z_t = a + bt - R_t \qquad (13.32) $$

and $Z_t$ is called an insurance risk process.


[Figure 13.4: An Insurance Risk Process]

Ruin Index

Ruin index is the uncertain measure that the capital of the insurance company becomes negative.

Definition 13.5 (Liu [126]) Let $Z_t$ be an insurance risk process. Then the ruin index is defined as the uncertain measure that $Z_t$ eventually becomes negative, i.e.,

$$ \text{Ruin} = M\left\{ \inf_{t \ge 0} Z_t < 0 \right\}. \qquad (13.33) $$

It is clear that the ruin index is a special case of the risk index in the
sense of Liu [119].
Theorem 13.13 (Liu [126], Ruin Index Theorem) Let $Z_t = a + bt - R_t$ be an insurance risk process where $a$ and $b$ are positive numbers, and $R_t$ is a renewal reward process with iid uncertain interarrival times $\xi_1, \xi_2, \cdots$ and iid uncertain claim amounts $\eta_1, \eta_2, \cdots$ If $\xi_1$ and $\eta_1$ have continuous uncertainty distributions $\Phi$ and $\Psi$, respectively, then the ruin index is

$$ \text{Ruin} = \max_{k \ge 1} \sup_{x \ge 0} \Phi\left( \frac{x-a}{kb} \right) \wedge \left( 1 - \Psi\left( \frac{x}{k} \right) \right). \qquad (13.34) $$


Proof: For each positive integer $k$, it is clear that the arrival time of the $k$th claim is

$$ S_k = \xi_1 + \xi_2 + \cdots + \xi_k $$

whose uncertainty distribution is $\Phi(s/k)$. Define an uncertain process indexed by $k$ as follows,

$$ Y_k = a + bS_k - (\eta_1 + \eta_2 + \cdots + \eta_k). $$

It is easy to verify that $Y_k$ is an independent increment process with respect to $k$. In addition, $Y_k$ is just the capital at the arrival time $S_k$ and has an uncertainty distribution

$$ F_k(z) = \sup_{x \ge 0} \Phi\left( \frac{z+x-a}{kb} \right) \wedge \left( 1 - \Psi\left( \frac{x}{k} \right) \right). $$

Since a ruin occurs only at the arrival times, we have

$$ \text{Ruin} = M\left\{ \inf_{t \ge 0} Z_t < 0 \right\} = M\left\{ \min_{k \ge 1} Y_k < 0 \right\}. $$

It follows from the extreme value theorem that

$$ \text{Ruin} = \max_{k \ge 1} F_k(0) = \max_{k \ge 1} \sup_{x \ge 0} \Phi\left( \frac{x-a}{kb} \right) \wedge \left( 1 - \Psi\left( \frac{x}{k} \right) \right). $$

The theorem is proved.
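Since (13.34) is a max–sup over a discrete index and a half line, it can be approximated by truncating both. The sketch below uses illustrative linear distributions for interarrival times and claim amounts; the caps on $k$ and the per-$k$ grid for $x$ are assumptions of the demonstration (with claims bounded by 5, the inner supremum is attained below $x = 5k$).

```python
import numpy as np

a, b = 10.0, 2.0                                        # initial capital, premium rate
phi = lambda x: np.clip((x - 1.0) / 2.0, 0.0, 1.0)      # interarrival times L(1, 3)
psi = lambda x: np.clip((x - 2.0) / 3.0, 0.0, 1.0)      # claim amounts    L(2, 5)

def ruin_index(k_max=200, n=4001):
    # evaluate (13.34), truncating k and discretizing x (both are assumptions)
    best = 0.0
    for k in range(1, k_max + 1):
        xs = np.linspace(0.0, 6.0 * k + a, n)
        inner = np.minimum(phi((xs - a) / (k * b)), 1.0 - psi(xs / k))
        best = max(best, float(inner.max()))
    return best

print("ruin index:", ruin_index())
```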
Ruin Time

Definition 13.6 (Liu [126]) Let $Z_t$ be an insurance risk process. Then the ruin time is determined by

$$ \tau = \inf\left\{ t \ge 0 \mid Z_t < 0 \right\}. \qquad (13.35) $$

If $Z_t \ge 0$ for all $t \ge 0$, then we define $\tau = +\infty$. Note that the ruin time is just the first hitting time that the total capital $Z_t$ becomes negative. Since $\inf_{t \ge 0} Z_t < 0$ if and only if $\tau < +\infty$, the relation between ruin index and ruin time is

$$ \text{Ruin} = M\left\{ \inf_{t \ge 0} Z_t < 0 \right\} = M\{\tau < +\infty\}. $$

13.6 Age Replacement Policy

Age replacement means that an element is always replaced at failure or at an age $s$. Assume that the lifetimes of the elements are iid uncertain variables $\xi_1, \xi_2, \cdots$ with a common uncertainty distribution $\Phi$. Then the actual lifetimes of the elements are iid uncertain variables

$$ \xi_1 \wedge s,\ \xi_2 \wedge s,\ \cdots \qquad (13.36) $$

which may generate an uncertain renewal process

$$ N_t = \max_{n \ge 0} \left\{ n \;\middle|\; \sum_{i=1}^{n} (\xi_i \wedge s) \le t \right\}. \qquad (13.37) $$

Let $a$ denote the failure replacement cost of replacing an element when it fails earlier than $s$, and $b$ the planned replacement cost of replacing an element at the age $s$. Define

$$ f(x) = \begin{cases} a, & \text{if } x < s \\ b, & \text{if } x = s. \end{cases} \qquad (13.38) $$

Then $f(\xi_i \wedge s)$ is just the cost of replacing the $i$th element, and the average replacement cost before the time $t$ is

$$ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s). \qquad (13.39) $$

Theorem 13.14 (Yao and Ralescu [221]) Assume $\xi_1, \xi_2, \cdots$ are iid uncertain variables and $s$ is a positive number. Then

$$ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \to \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \qquad (13.40) $$

in the sense of convergence in distribution as $t \to \infty$.


Theorem 13.15 (Yao and Ralescu [221]) Assume $\xi_1, \xi_2, \cdots$ are iid uncertain variables with a common uncertainty distribution $\Phi$, and $s$ is a positive number. Then

$$ \lim_{t \to \infty} E\left[ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \right] = \frac{b}{s} + \frac{a-b}{s}\,\Phi(s) + a \int_0^s \frac{\Phi(x)}{x^2} \, \mathrm{d}x. \qquad (13.41) $$
When the age replacement policy is accepted, the problem is to find the optimal time $s$ such that the average replacement cost is minimized. That is, the optimal time $s$ should solve

$$ \min_{s > 0} \left\{ \frac{b}{s} + \frac{a-b}{s}\,\Phi(s) + a \int_0^s \frac{\Phi(x)}{x^2} \, \mathrm{d}x \right\}. \qquad (13.42) $$
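Numerically, (13.42) is again a one-dimensional scan once the integral is discretized. The sketch below assumes illustrative lifetimes $\mathcal{L}(1, 3)$ and costs $a = 5$, $b = 1$; since $\Phi$ vanishes below 1, the integrand $\Phi(x)/x^2$ is harmless near 0.

```python
import numpy as np

a_cost, b_cost = 5.0, 1.0
phi = lambda x: np.clip((x - 1.0) / 2.0, 0.0, 1.0)      # lifetimes L(1, 3)

def average_cost(s, n=20001):
    # objective of (13.42)
    xs = np.linspace(1e-9, s, n)
    vals = phi(xs) / xs**2
    integral = float(np.sum((vals[1:] + vals[:-1]) * (xs[1] - xs[0])) / 2.0)
    return b_cost / s + (a_cost - b_cost) / s * phi(s) + a_cost * integral

grid = np.linspace(1.0, 3.0, 201)
costs = [average_cost(s) for s in grid]
print("approximately optimal age:", grid[int(np.argmin(costs))])
```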

13.7 Bibliographic Notes

The concept of uncertain renewal process was first proposed by Liu [114] in 2008. Two years later, Liu [120] proved an uncertain elementary renewal theorem for determining the average renewal number. Liu [120] also provided the concept of uncertain renewal reward process and verified an uncertain renewal reward theorem for determining the long-run reward rate. In addition, Zhang, Ning and Meng [240] introduced the concept of uncertain delayed renewal process and showed an uncertain elementary delayed renewal theorem. Furthermore, Yao and Li [218] presented the concept of uncertain alternating renewal process and proved an uncertain alternating renewal theorem for determining the availability rate.

Based on the theory of uncertain renewal process, Liu [126] presented an uncertain insurance model by assuming the claim process is an uncertain renewal reward process, and proved a formula for calculating the ruin index. In addition, Yao [225] discussed the uncertain block replacement policy, and Yao and Ralescu [221] investigated the uncertain age replacement policy and obtained the long-run average replacement cost.

Chapter 14

Uncertain Calculus

Uncertain calculus is a branch of mathematics that deals with differentiation and integration of uncertain processes. This chapter will introduce Liu process, Liu integral, the fundamental theorem, chain rule, change of variables, and integration by parts.

14.1 Liu Process

In 1827 Robert Brown observed the irregular movement of pollen grains suspended in liquid. This movement is now known as Brownian motion. In 1923 Norbert Wiener modeled Brownian motion by what is called the Wiener process. In 2009 Liu [116] modeled Brownian motion by what is called Liu process.
Roughly speaking, a Liu process is a stationary independent increment
process whose increments are normal uncertain variables. A formal definition
is given below.
Definition 14.1 (Liu [116]) An uncertain process $C_t$ is said to be a canonical Liu process if
(i) $C_0 = 0$ and almost all sample paths are Lipschitz continuous,
(ii) $C_t$ has stationary and independent increments,
(iii) every increment $C_{s+t} - C_s$ is a normal uncertain variable with expected value 0 and variance $t^2$.
It is clear that a canonical Liu process $C_t$ is a stationary independent increment process and has a normal uncertainty distribution with expected value 0 and variance $t^2$. The uncertainty distribution of $C_t$ is

$$ \Phi_t(x) = \left( 1 + \exp\left( -\frac{\pi x}{\sqrt{3}\,t} \right) \right)^{-1} \qquad (14.1) $$

and the inverse uncertainty distribution is

$$ \Phi_t^{-1}(\alpha) = \frac{t\sqrt{3}}{\pi} \ln\frac{\alpha}{1-\alpha} \qquad (14.2) $$

that are homogeneous linear functions of time $t$ for any given $\alpha$. See Figure 14.1.

[Figure 14.1: Inverse Uncertainty Distribution of Canonical Liu Process, shown for $\alpha = 0.1, 0.2, \cdots, 0.9$]


A canonical Liu process is defined by three properties in the above definition. Does such an uncertain process exist? The following theorem answers
this question.
Theorem 14.1 (Liu [120], Existence Theorem) There exists a canonical Liu
process.
Proof: Without loss of generality, we only prove that there is a canonical Liu process on the range of $t \in [0, 1]$. Let

$$ \left\{ \xi(r) \mid r \text{ represents rational numbers in } [0, 1] \right\} $$

be a countable sequence of iid normal uncertain variables with expected value 0 and variance 1. For each positive integer $n$, we define an uncertain process

$$ X_t^n = \begin{cases} \dfrac{1}{n+1} \displaystyle\sum_{i=0}^{k} \xi\left( \frac{i}{n} \right), & \text{if } t = \dfrac{k}{n} \ (k = 0, 1, \cdots, n) \\[3mm] \text{linear}, & \text{otherwise}. \end{cases} $$

We may prove that $X_t^n$ converges in distribution as $n \to \infty$ and the limit meets the conditions of canonical Liu process. Hence there exists a canonical Liu process.
Theorem 14.2 Let $C_t$ be a canonical Liu process. Then for each time $t > 0$, the ratio $C_t/t$ is a normal uncertain variable with expected value 0 and variance 1. That is,

$$ \frac{C_t}{t} \sim \mathcal{N}(0, 1) \qquad (14.3) $$

for any $t > 0$.

Proof: Since $C_t$ is a normal uncertain variable $\mathcal{N}(0, t)$, the operational law tells us that $C_t/t$ has an uncertainty distribution

$$ \Psi(x) = \Phi_t(tx) = \left( 1 + \exp\left( -\frac{\pi x}{\sqrt{3}} \right) \right)^{-1}. $$

Hence $C_t/t$ is a normal uncertain variable with expected value 0 and variance 1. The theorem is verified.
Theorem 14.3 (Liu [120]) Let $C_t$ be a canonical Liu process. Then for each time $t$, we have

$$ \frac{t^2}{2} \le E[C_t^2] \le t^2. \qquad (14.4) $$

Proof: Note that $C_t$ is a normal uncertain variable and has an uncertainty distribution $\Phi_t(x)$ in (14.1). It follows from the definition of expected value that

$$ E[C_t^2] = \int_0^{+\infty} M\{C_t^2 \ge x\} \, \mathrm{d}x = \int_0^{+\infty} M\{(C_t \ge \sqrt{x}) \cup (C_t \le -\sqrt{x})\} \, \mathrm{d}x. $$

On the one hand, we have

$$ E[C_t^2] \le \int_0^{+\infty} \left( M\{C_t \ge \sqrt{x}\} + M\{C_t \le -\sqrt{x}\} \right) \mathrm{d}x = \int_0^{+\infty} \left( 1 - \Phi_t(\sqrt{x}) + \Phi_t(-\sqrt{x}) \right) \mathrm{d}x = t^2. $$

On the other hand, we have

$$ E[C_t^2] \ge \int_0^{+\infty} M\{C_t \ge \sqrt{x}\} \, \mathrm{d}x = \int_0^{+\infty} \left( 1 - \Phi_t(\sqrt{x}) \right) \mathrm{d}x = \frac{t^2}{2}. $$

Hence (14.4) is proved.


Theorem 14.4 (Iwamura and Xu [63]) Let $C_t$ be a canonical Liu process. Then for each time $t$, we have

$$ 1.24\,t^4 < V[C_t^2] < 4.31\,t^4. \qquad (14.5) $$

Proof: For the proof of (14.5), please consult Iwamura and Xu [63]. An open problem is to improve the bounds of the variance of the square of canonical Liu process.
Theorem 14.5 (Yao, Gao and Gao [219]) Let $C_t$ be a canonical Liu process. Then there exists a nonnegative uncertain variable $K$ such that $K(\gamma)$ is a Lipschitz constant of the sample path $C_t(\gamma)$ for each $\gamma$, and

$$ \lim_{x \to +\infty} M\{K \le x\} = 1. \qquad (14.6) $$

Definition 14.2 Let $C_t$ be a canonical Liu process. Then for any real numbers $e$ and $\sigma > 0$, the uncertain process

$$ A_t = et + \sigma C_t \qquad (14.7) $$

is called an arithmetic Liu process, where $e$ is called the drift and $\sigma$ is called the diffusion.

It is clear that the arithmetic Liu process $A_t$ is a type of stationary independent increment process. In addition, the arithmetic Liu process $A_t$ has a normal uncertainty distribution with expected value $et$ and variance $\sigma^2 t^2$, i.e.,

$$ A_t \sim \mathcal{N}(et, \sigma t) \qquad (14.8) $$

whose uncertainty distribution is

$$ \Phi_t(x) = \left( 1 + \exp\left( \frac{\pi (et - x)}{\sqrt{3}\,\sigma t} \right) \right)^{-1} \qquad (14.9) $$

and inverse uncertainty distribution is

$$ \Phi_t^{-1}(\alpha) = et + \frac{\sigma t \sqrt{3}}{\pi} \ln\frac{\alpha}{1-\alpha}. \qquad (14.10) $$

Definition 14.3 Let $C_t$ be a canonical Liu process. Then for any real numbers $e$ and $\sigma > 0$, the uncertain process

$$ G_t = \exp(et + \sigma C_t) \qquad (14.11) $$

is called a geometric Liu process, where $e$ is called the log-drift and $\sigma$ is called the log-diffusion.

Note that the geometric Liu process $G_t$ has a lognormal uncertainty distribution, i.e.,

$$ G_t \sim \mathcal{LOGN}(et, \sigma t) \qquad (14.12) $$

whose uncertainty distribution is

$$ \Phi_t(x) = \left( 1 + \exp\left( \frac{\pi (et - \ln x)}{\sqrt{3}\,\sigma t} \right) \right)^{-1} \qquad (14.13) $$

and inverse uncertainty distribution is

$$ \Phi_t^{-1}(\alpha) = \exp\left( et + \frac{\sigma t \sqrt{3}}{\pi} \ln\frac{\alpha}{1-\alpha} \right). \qquad (14.14) $$

Furthermore, the geometric Liu process $G_t$ has an expected value,

$$ E[G_t] = \begin{cases} \sqrt{3}\,\sigma t \exp(et) \csc(\sqrt{3}\,\sigma t), & \text{if } \sigma t < \pi/\sqrt{3} \\ +\infty, & \text{if } \sigma t \ge \pi/\sqrt{3}. \end{cases} \qquad (14.15) $$
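The expected value (14.15) is easy to tabulate, with the divergence guard $\sigma t < \pi/\sqrt{3}$ made explicit; the log-drift and log-diffusion below are illustrative.

```python
import numpy as np

def expected_geometric_liu(t, e=0.5, sigma=0.3):
    # E[G_t] from (14.15); finite only while sigma * t < pi / sqrt(3)
    if sigma * t >= np.pi / np.sqrt(3):
        return np.inf
    x = np.sqrt(3) * sigma * t
    return x * np.exp(e * t) / np.sin(x)        # csc(x) = 1 / sin(x)

for t in (1.0, 3.0, 7.0):
    print(t, expected_geometric_liu(t))         # the last value is infinite
```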

14.2 Liu Integral

As the most popular topic of uncertain integral, the Liu integral allows us to integrate an uncertain process (the integrand) with respect to a Liu process (the integrator). The result of the Liu integral is another uncertain process.
(the integrator). The result of the Liu integral is another uncertain process.
Definition 14.4 (Liu [116]) Let $X_t$ be an uncertain process and let $C_t$ be a canonical Liu process. For any partition of closed interval $[a, b]$ with $a = t_1 < t_2 < \cdots < t_{k+1} = b$, the mesh is written as

$$ \Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|. \qquad (14.16) $$

Then the Liu integral of $X_t$ with respect to $C_t$ is defined as

$$ \int_a^b X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}) \qquad (14.17) $$

provided that the limit exists almost surely and is finite. In this case, the uncertain process $X_t$ is said to be integrable.
Since Xt and Ct are uncertain variables at each time t, the limit in (14.17)
is also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is integrable with respect to Ct if
and only if the limit in (14.17) is an uncertain variable.
Example 14.1: For any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from (14.17) that

$$ \int_0^s \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} (C_{t_{i+1}} - C_{t_i}) \equiv C_s - C_0 = C_s. $$

That is,

$$ \int_0^s \mathrm{d}C_t = C_s. \qquad (14.18) $$

Example 14.2: For any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from (14.17) that

$$ C_s^2 = \sum_{i=1}^{k} \left( C_{t_{i+1}}^2 - C_{t_i}^2 \right) = \sum_{i=1}^{k} \left( C_{t_{i+1}} - C_{t_i} \right)^2 + 2 \sum_{i=1}^{k} C_{t_i} \left( C_{t_{i+1}} - C_{t_i} \right) \to 0 + 2 \int_0^s C_t \, \mathrm{d}C_t $$

as $\Delta \to 0$. That is,

$$ \int_0^s C_t \, \mathrm{d}C_t = \frac{1}{2} C_s^2. \qquad (14.19) $$

Example 14.3: For any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from (14.17) that

$$ sC_s = \sum_{i=1}^{k} \left( t_{i+1} C_{t_{i+1}} - t_i C_{t_i} \right) = \sum_{i=1}^{k} C_{t_{i+1}} (t_{i+1} - t_i) + \sum_{i=1}^{k} t_i (C_{t_{i+1}} - C_{t_i}) \to \int_0^s C_t \, \mathrm{d}t + \int_0^s t \, \mathrm{d}C_t $$

as $\Delta \to 0$. That is,

$$ \int_0^s C_t \, \mathrm{d}t + \int_0^s t \, \mathrm{d}C_t = sC_s. \qquad (14.20) $$

Theorem 14.6 If $X_t$ is a sample-continuous uncertain process on $[a, b]$, then it is integrable with respect to $C_t$ on $[a, b]$.

Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a, b]$. Since the uncertain process $X_t$ is sample-continuous, almost all sample paths are continuous functions with respect to $t$. Hence the limit

$$ \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}) $$

exists almost surely and is finite. On the other hand, since $X_t$ and $C_t$ are uncertain variables at each time $t$, the above limit is also a measurable function. Hence the limit is an uncertain variable and then $X_t$ is integrable with respect to $C_t$.
Theorem 14.7 If $X_t$ is an integrable uncertain process on $[a, b]$, then it is integrable on each subinterval of $[a, b]$. Moreover, if $c \in [a, b]$, then

$$ \int_a^b X_t \, \mathrm{d}C_t = \int_a^c X_t \, \mathrm{d}C_t + \int_c^b X_t \, \mathrm{d}C_t. \qquad (14.21) $$

Proof: Let $[a', b']$ be a subinterval of $[a, b]$. Since $X_t$ is an integrable uncertain process on $[a, b]$, for any partition

$$ a = t_1 < \cdots < t_m = a' < t_{m+1} < \cdots < t_n = b' < t_{n+1} < \cdots < t_{k+1} = b, $$

the limit

$$ \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}) $$

exists almost surely and is finite. Thus the limit

$$ \lim_{\Delta \to 0} \sum_{i=m}^{n-1} X_{t_i} (C_{t_{i+1}} - C_{t_i}) $$

exists almost surely and is finite. Hence $X_t$ is integrable on the subinterval $[a', b']$. Next, for the partition

$$ a = t_1 < \cdots < t_m = c < t_{m+1} < \cdots < t_{k+1} = b, $$

we have

$$ \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}) = \sum_{i=1}^{m-1} X_{t_i} (C_{t_{i+1}} - C_{t_i}) + \sum_{i=m}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}). $$

Note that

$$ \int_a^b X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}), \quad \int_a^c X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{m-1} X_{t_i} (C_{t_{i+1}} - C_{t_i}), \quad \int_c^b X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=m}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}). $$

Hence the equation (14.21) is proved.


Theorem 14.8 (Linearity of Liu Integral) Let $X_t$ and $Y_t$ be integrable uncertain processes on $[a, b]$, and let $\alpha$ and $\beta$ be real numbers. Then

$$ \int_a^b (\alpha X_t + \beta Y_t) \, \mathrm{d}C_t = \alpha \int_a^b X_t \, \mathrm{d}C_t + \beta \int_a^b Y_t \, \mathrm{d}C_t. \qquad (14.22) $$

Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a, b]$. It follows from the definition of Liu integral that

$$ \int_a^b (\alpha X_t + \beta Y_t) \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} (\alpha X_{t_i} + \beta Y_{t_i})(C_{t_{i+1}} - C_{t_i}) = \alpha \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(C_{t_{i+1}} - C_{t_i}) + \beta \lim_{\Delta \to 0} \sum_{i=1}^{k} Y_{t_i}(C_{t_{i+1}} - C_{t_i}) = \alpha \int_a^b X_t \mathrm{d}C_t + \beta \int_a^b Y_t \mathrm{d}C_t. $$

Hence the equation (14.22) is proved.

Theorem 14.9 Let $f(t)$ be an integrable function with respect to $t$. Then the Liu integral

$$ \int_0^s f(t) \, \mathrm{d}C_t \qquad (14.23) $$

is a normal uncertain variable at each time $s$, and

$$ \int_0^s f(t) \, \mathrm{d}C_t \sim \mathcal{N}\left( 0, \int_0^s |f(t)| \, \mathrm{d}t \right). \qquad (14.24) $$

Proof: Since the increments of $C_t$ are stationary and independent normal uncertain variables, for any partition of closed interval $[0, s]$ with $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from Theorem 2.13 that

$$ \sum_{i=1}^{k} f(t_i)(C_{t_{i+1}} - C_{t_i}) \sim \mathcal{N}\left( 0, \sum_{i=1}^{k} |f(t_i)|(t_{i+1} - t_i) \right). $$

That is, the sum is also a normal uncertain variable. Since $f$ is an integrable function, we have

$$ \sum_{i=1}^{k} |f(t_i)|(t_{i+1} - t_i) \to \int_0^s |f(t)| \, \mathrm{d}t $$

as the mesh $\Delta \to 0$. Hence we obtain

$$ \int_0^s f(t) \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} f(t_i)(C_{t_{i+1}} - C_{t_i}) \sim \mathcal{N}\left( 0, \int_0^s |f(t)| \, \mathrm{d}t \right). $$

The theorem is proved.
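In other words, a deterministic integrand only contributes through $\int_0^s |f(t)|\,\mathrm{d}t$. The sketch below computes that quantity by the trapezoidal rule; with $f(t) = t$ it reproduces the $s^2/2$ of Exercise 14.1 that follows.

```python
import numpy as np

def normal_parameter(f, s, n=100001):
    # second parameter of the normal variable in (14.24): integral_0^s |f(t)| dt
    ts = np.linspace(0.0, s, n)
    vals = np.abs(f(ts))
    return float(np.sum((vals[1:] + vals[:-1]) * (ts[1] - ts[0])) / 2.0)

s = 2.0
print(normal_parameter(lambda t: t, s), s**2 / 2)   # the two values agree
```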


Exercise 14.1: Let $s$ be a given time with $s > 0$. Show that the Liu integral

$$ \int_0^s t \, \mathrm{d}C_t \qquad (14.25) $$

is a normal uncertain variable $\mathcal{N}(0, s^2/2)$ and has an uncertainty distribution

$$ \Upsilon_s(x) = \left( 1 + \exp\left( -\frac{2\pi x}{\sqrt{3}\,s^2} \right) \right)^{-1}. \qquad (14.26) $$
Exercise 14.2: For any real number $\alpha$ with $0 < \alpha < 1$, the uncertain process

$$ F_s = \int_0^s (s-t)^{-\alpha} \, \mathrm{d}C_t \qquad (14.27) $$

is called a fractional Liu process with index $\alpha$. Show that $F_s$ is a normal uncertain variable and

$$ F_s \sim \mathcal{N}\left( 0, \frac{s^{1-\alpha}}{1-\alpha} \right) \qquad (14.28) $$

whose uncertainty distribution is

$$ \Upsilon_s(x) = \left( 1 + \exp\left( -\frac{\pi (1-\alpha) x}{\sqrt{3}\,s^{1-\alpha}} \right) \right)^{-1}. \qquad (14.29) $$

Definition 14.5 (Chen and Ralescu [17]) Let $C_t$ be a canonical Liu process and let $Z_t$ be an uncertain process. If there exist uncertain processes $\mu_t$ and $\sigma_t$ such that

$$ Z_t = Z_0 + \int_0^t \mu_s \, \mathrm{d}s + \int_0^t \sigma_s \, \mathrm{d}C_s \qquad (14.30) $$

for any $t \ge 0$, then $Z_t$ is called a Liu process with drift $\mu_t$ and diffusion $\sigma_t$. Furthermore, $Z_t$ has an uncertain differential

$$ \mathrm{d}Z_t = \mu_t \, \mathrm{d}t + \sigma_t \, \mathrm{d}C_t. \qquad (14.31) $$

Example 14.4: It follows from the equation (14.18) that the canonical Liu process $C_t$ can be written as

$$ C_t = \int_0^t \mathrm{d}C_s. $$

Thus $C_t$ is a Liu process with drift 0 and diffusion 1, and has an uncertain differential $\mathrm{d}C_t$.
Example 14.5: It follows from the equation (14.19) that $C_t^2$ can be written as

$$ C_t^2 = 2 \int_0^t C_s \, \mathrm{d}C_s. $$

Thus $C_t^2$ is a Liu process with drift 0 and diffusion $2C_t$, and has an uncertain differential

$$ \mathrm{d}(C_t^2) = 2C_t \, \mathrm{d}C_t. $$
Example 14.6: It follows from the equation (14.20) that $tC_t$ can be written as

$$ tC_t = \int_0^t C_s \, \mathrm{d}s + \int_0^t s \, \mathrm{d}C_s. $$

Thus $tC_t$ is a Liu process with drift $C_t$ and diffusion $t$, and has an uncertain differential

$$ \mathrm{d}(tC_t) = C_t \, \mathrm{d}t + t \, \mathrm{d}C_t. $$
Theorem 14.10 (Chen and Ralescu [17]) Liu process is a sample-continuous uncertain process.

Proof: Let $Z_t$ be a Liu process. Then there exist two uncertain processes $\mu_t$ and $\sigma_t$ such that

$$ Z_t = Z_0 + \int_0^t \mu_s \, \mathrm{d}s + \int_0^t \sigma_s \, \mathrm{d}C_s. $$

For each $\gamma$, we have

$$ |Z_t(\gamma) - Z_r(\gamma)| = \left| \int_r^t \mu_s(\gamma) \, \mathrm{d}s + \int_r^t \sigma_s(\gamma) \, \mathrm{d}C_s(\gamma) \right| \to 0 $$

as $r \to t$. Thus $Z_t$ is sample-continuous and the theorem is proved.

14.3 Fundamental Theorem

Theorem 14.11 (Liu [116], Fundamental Theorem of Uncertain Calculus) Let $h(t, c)$ be a continuously differentiable function. Then $Z_t = h(t, C_t)$ is a Liu process and has an uncertain differential

$$ \mathrm{d}Z_t = \frac{\partial h}{\partial t}(t, C_t) \, \mathrm{d}t + \frac{\partial h}{\partial c}(t, C_t) \, \mathrm{d}C_t. \qquad (14.32) $$

Proof: Write $\Delta C_t = C_{t+\Delta t} - C_t$. It follows from Theorems 14.3 and 14.4 that $\Delta t$ and $\Delta C_t$ are infinitesimals with the same order. Since the function $h$ is continuously differentiable, by using Taylor series expansion, the infinitesimal increment of $Z_t$ has a first-order approximation,

$$ \Delta Z_t = \frac{\partial h}{\partial t}(t, C_t) \, \Delta t + \frac{\partial h}{\partial c}(t, C_t) \, \Delta C_t. $$

Hence we obtain the uncertain differential (14.32) because it makes

$$ Z_s = Z_0 + \int_0^s \frac{\partial h}{\partial t}(t, C_t) \, \mathrm{d}t + \int_0^s \frac{\partial h}{\partial c}(t, C_t) \, \mathrm{d}C_t. \qquad (14.33) $$

This formula is an integral form of the fundamental theorem.
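Because (14.32) is just a pair of partial derivatives, it can be automated symbolically. The sketch below uses sympy (an assumption of the example, not part of the text) to return the $\mathrm{d}t$- and $\mathrm{d}C_t$-coefficients of $\mathrm{d}Z_t$ for a given $h(t, c)$; the two calls reproduce (14.34) and (14.36).

```python
import sympy as sp

t, c = sp.symbols('t c')

def uncertain_differential(h):
    # coefficients of dt and dC_t in (14.32): dZ_t = h_t dt + h_c dC_t
    return sp.diff(h, t), sp.diff(h, c)

print(uncertain_differential(t * c))                  # (c, t), i.e. (14.34)

e, sigma = sp.symbols('e sigma', positive=True)
G = sp.exp(e * t + sigma * c)
drift, diffusion = uncertain_differential(G)
print(sp.simplify(drift / G), sp.simplify(diffusion / G))   # (e, sigma), i.e. (14.36)
```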
Example 14.7: Let us calculate the uncertain differential of $tC_t$. In this case, we have $h(t, c) = tc$ whose partial derivatives are

$$ \frac{\partial h}{\partial t}(t, c) = c, \quad \frac{\partial h}{\partial c}(t, c) = t. $$

It follows from the fundamental theorem of uncertain calculus that

$$ \mathrm{d}(tC_t) = C_t \, \mathrm{d}t + t \, \mathrm{d}C_t. \qquad (14.34) $$

Thus $tC_t$ is a Liu process with drift $C_t$ and diffusion $t$.

Example 14.8: Let us calculate the uncertain differential of the arithmetic Liu process $A_t = et + \sigma C_t$. In this case, we have $h(t, c) = et + \sigma c$ whose partial derivatives are

$$ \frac{\partial h}{\partial t}(t, c) = e, \quad \frac{\partial h}{\partial c}(t, c) = \sigma. $$

It follows from the fundamental theorem of uncertain calculus that

$$ \mathrm{d}A_t = e \, \mathrm{d}t + \sigma \, \mathrm{d}C_t. \qquad (14.35) $$

Thus $A_t$ is a Liu process with drift $e$ and diffusion $\sigma$.


Example 14.9: Let us calculate the uncertain differential of the geometric Liu process $G_t = \exp(et + \sigma C_t)$. In this case, we have $h(t, c) = \exp(et + \sigma c)$ whose partial derivatives are

$$ \frac{\partial h}{\partial t}(t, c) = e\,h(t, c), \quad \frac{\partial h}{\partial c}(t, c) = \sigma\,h(t, c). $$

It follows from the fundamental theorem of uncertain calculus that

$$ \mathrm{d}G_t = e G_t \, \mathrm{d}t + \sigma G_t \, \mathrm{d}C_t. \qquad (14.36) $$

Thus $G_t$ is a Liu process with drift $eG_t$ and diffusion $\sigma G_t$.

14.4 Chain Rule

Chain rule is a special case of the fundamental theorem of uncertain calculus.


Theorem 14.12 (Liu [116], Chain Rule) Let $f(c)$ be a continuously differentiable function. Then $f(C_t)$ has an uncertain differential

$$ \mathrm{d}f(C_t) = f'(C_t) \, \mathrm{d}C_t. \qquad (14.37) $$

Proof: Since $f(c)$ is a continuously differentiable function, we immediately have

$$ \frac{\partial}{\partial t} f(c) = 0, \quad \frac{\partial}{\partial c} f(c) = f'(c). $$

It follows from the fundamental theorem of uncertain calculus that the equation (14.37) holds.
Example 14.10: Let us calculate the uncertain differential of $C_t^2$. In this case, we have $f(c) = c^2$ and $f'(c) = 2c$. It follows from the chain rule that

$$ \mathrm{d}C_t^2 = 2C_t \, \mathrm{d}C_t. \qquad (14.38) $$

Example 14.11: Let us calculate the uncertain differential of $\sin(C_t)$. In this case, we have $f(c) = \sin(c)$ and $f'(c) = \cos(c)$. It follows from the chain rule that

$$ \mathrm{d}\sin(C_t) = \cos(C_t) \, \mathrm{d}C_t. \qquad (14.39) $$

Example 14.12: Let us calculate the uncertain differential of $\exp(C_t)$. In this case, we have $f(c) = \exp(c)$ and $f'(c) = \exp(c)$. It follows from the chain rule that

$$ \mathrm{d}\exp(C_t) = \exp(C_t) \, \mathrm{d}C_t. \qquad (14.40) $$

14.5 Change of Variables

Theorem 14.13 (Liu [116], Change of Variables) Let $f$ be a continuously differentiable function. Then for any $s > 0$, we have

$$ \int_0^s f'(C_t) \, \mathrm{d}C_t = \int_{C_0}^{C_s} f'(c) \, \mathrm{d}c. \qquad (14.41) $$

That is,

$$ \int_0^s f'(C_t) \, \mathrm{d}C_t = f(C_s) - f(C_0). \qquad (14.42) $$

Proof: Since $f$ is a continuously differentiable function, it follows from the chain rule that

$$ \mathrm{d}f(C_t) = f'(C_t) \, \mathrm{d}C_t. $$

By using the fundamental theorem of uncertain calculus, we get

$$ f(C_s) = f(C_0) + \int_0^s f'(C_t) \, \mathrm{d}C_t. $$

Hence the theorem is verified.


Example 14.13: Since the function $f(c) = c$ has an antiderivative $c^2/2$, it follows from the change of variables of integral that

$$ \int_0^s C_t \, \mathrm{d}C_t = \frac{1}{2}C_s^2 - \frac{1}{2}C_0^2 = \frac{1}{2}C_s^2. $$

Example 14.14: Since the function $f(c) = c^2$ has an antiderivative $c^3/3$, it follows from the change of variables of integral that

$$ \int_0^s C_t^2 \, \mathrm{d}C_t = \frac{1}{3}C_s^3 - \frac{1}{3}C_0^3 = \frac{1}{3}C_s^3. $$

Example 14.15: Since the function $f(c) = \exp(c)$ has an antiderivative $\exp(c)$, it follows from the change of variables of integral that

$$ \int_0^s \exp(C_t) \, \mathrm{d}C_t = \exp(C_s) - \exp(C_0) = \exp(C_s) - 1. $$

14.6 Integration by Parts

Theorem 14.14 (Liu [116], Integration by Parts) Suppose $X_t$ and $Y_t$ are Liu processes. Then

$$ \mathrm{d}(X_t Y_t) = Y_t \, \mathrm{d}X_t + X_t \, \mathrm{d}Y_t. \qquad (14.43) $$

Proof: Note that $\Delta X_t$ and $\Delta Y_t$ are infinitesimals with the same order. Since the function $xy$ is a continuously differentiable function with respect to $x$ and $y$, by using Taylor series expansion, the infinitesimal increment of $X_t Y_t$ has a first-order approximation,

$$ \Delta(X_t Y_t) = Y_t \, \Delta X_t + X_t \, \Delta Y_t. $$

Hence we obtain the uncertain differential (14.43) because it makes

$$ X_s Y_s = X_0 Y_0 + \int_0^s Y_t \, \mathrm{d}X_t + \int_0^s X_t \, \mathrm{d}Y_t. \qquad (14.44) $$

The theorem is thus proved.


Example 14.16: In order to illustrate the integration by parts, let us calculate the uncertain differential of

$$ Z_t = \exp(t) C_t^2. $$

In this case, we define

$$ X_t = \exp(t), \quad Y_t = C_t^2. $$

Then

$$ \mathrm{d}X_t = \exp(t) \, \mathrm{d}t, \quad \mathrm{d}Y_t = 2C_t \, \mathrm{d}C_t. $$

It follows from the integration by parts that

$$ \mathrm{d}Z_t = \exp(t) C_t^2 \, \mathrm{d}t + 2\exp(t) C_t \, \mathrm{d}C_t. $$
Example 14.17: The integration by parts may also calculate the uncertain differential of

$$ Z_t = \sin(t+1) \int_0^t s \, \mathrm{d}C_s. $$

In this case, we define

$$ X_t = \sin(t+1), \quad Y_t = \int_0^t s \, \mathrm{d}C_s. $$

Then

$$ \mathrm{d}X_t = \cos(t+1) \, \mathrm{d}t, \quad \mathrm{d}Y_t = t \, \mathrm{d}C_t. $$

It follows from the integration by parts that

$$ \mathrm{d}Z_t = \left( \int_0^t s \, \mathrm{d}C_s \right) \cos(t+1) \, \mathrm{d}t + \sin(t+1)\,t \, \mathrm{d}C_t. $$

Example 14.18: Let f and g be continuously differentiable functions. It is clear that

    Zt = f(t)g(Ct)

is an uncertain process. In order to calculate the uncertain differential of Zt, we define

    Xt = f(t),   Yt = g(Ct).

Then

    dXt = f'(t)dt,   dYt = g'(Ct)dCt.

It follows from the integration by parts that

    dZt = f'(t)g(Ct)dt + f(t)g'(Ct)dCt.
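Example 14.18 likewise reduces the computation to ordinary derivatives of f and g. A short symbolic sketch, added here as an illustration (the choices f(t) = exp(t) and g(c) = c² reproduce Example 14.16):

    # A minimal sketch of dZt = f'(t)g(c)dt + f(t)g'(c)dCt from Example 14.18,
    # instantiated with f(t) = exp(t), g(c) = c^2 as in Example 14.16.
    import sympy as sp

    t, c = sp.symbols('t c')
    f, g = sp.exp(t), c**2          # any continuously differentiable choices work
    dt_coeff = sp.diff(f, t) * g    # coefficient of dt
    dC_coeff = f * sp.diff(g, c)    # coefficient of dCt
    print(dt_coeff, dC_coeff)       # c**2*exp(t)   2*c*exp(t)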

14.7 Bibliographic Notes

The concept of uncertain integral was first proposed by Liu [114] in 2008 in order to integrate uncertain processes with respect to a Liu process. One year later, Liu [116] recast his work via the fundamental theorem of uncertain calculus, from which the techniques of chain rule, change of variables, and integration by parts were derived.
Note that uncertain integral may also be defined with respect to other integrators. For example, Liu and Yao [123] suggested an uncertain integral with respect to multiple Liu processes. In addition, Chen and Ralescu [17] presented an uncertain integral with respect to general Liu process. In order to deal with uncertain processes with jumps, Yao integral [217] was defined as a type of uncertain integral with respect to an uncertain renewal process. Since then, the theory of uncertain calculus has been well developed. For further explorations on the development of uncertain calculus, the interested reader may consult Chen's book [19].

Chapter 15
Uncertain Differential Equation
Uncertain differential equation is a type of differential equation involving uncertain processes. This chapter will discuss the existence, uniqueness and stability of solutions of uncertain differential equations, and introduce the Yao-Chen formula that represents the solution of an uncertain differential equation by a family of solutions of ordinary differential equations. On the basis of this formula, a numerical method for solving uncertain differential equations is designed. In addition, extreme values, first hitting times and time integrals of solutions are provided.

15.1 Uncertain Differential Equation

Definition 15.1 (Liu [114]) Suppose Ct is a canonical Liu process, and f and g are two functions. Then

    dXt = f(t, Xt)dt + g(t, Xt)dCt                                       (15.1)

is called an uncertain differential equation. A solution is a Liu process Xt that satisfies (15.1) identically in t.
Remark 15.1: The uncertain differential equation (15.1) is equivalent to the uncertain integral equation

    Xs = X0 + ∫_0^s f(t, Xt)dt + ∫_0^s g(t, Xt)dCt.                      (15.2)

Theorem 15.1 Let ut and vt be two integrable uncertain processes. Then the uncertain differential equation

    dXt = ut dt + vt dCt                                                 (15.3)

has a solution

    Xt = X0 + ∫_0^t us ds + ∫_0^t vs dCs.                                (15.4)

Proof: This theorem is essentially the definition of uncertain differential or a direct deduction of the fundamental theorem of uncertain calculus.
Example 15.1: Let a and b be real numbers. Consider the uncertain differential equation

    dXt = a dt + b dCt.                                                  (15.5)

It follows from Theorem 15.1 that the solution is

    Xt = X0 + ∫_0^t a ds + ∫_0^t b dCs.

That is,

    Xt = X0 + at + bCt.                                                  (15.6)

Theorem 15.2 Let ut and vt be two integrable uncertain processes. Then the uncertain differential equation

    dXt = ut Xt dt + vt Xt dCt                                           (15.7)

has a solution

    Xt = X0 exp(∫_0^t us ds + ∫_0^t vs dCs).                             (15.8)

Proof: At first, the original uncertain differential equation is equivalent to

    dXt / Xt = ut dt + vt dCt.

It follows from the fundamental theorem of uncertain calculus that

    d ln Xt = dXt / Xt = ut dt + vt dCt

and then

    ln Xt = ln X0 + ∫_0^t us ds + ∫_0^t vs dCs.

Therefore the uncertain differential equation has a solution (15.8).


Example 15.2: Let a and b be real numbers. Consider the uncertain differential equation

    dXt = aXt dt + bXt dCt.                                              (15.9)

It follows from Theorem 15.2 that the solution is

    Xt = X0 exp(∫_0^t a ds + ∫_0^t b dCs).

That is,

    Xt = X0 exp(at + bCt).                                               (15.10)
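Since Ct is a normal uncertain variable N(0, t) with inverse uncertainty distribution (t√3/π) ln(α/(1 − α)), the solution (15.10) can be tabulated pointwise in α (see also Example 15.12 in Section 15.5). A minimal Python sketch, with arbitrary illustrative parameter values:

    # A minimal sketch (assumed parameters): inverse uncertainty distribution
    # of Xt = X0 exp(at + bCt), using the inverse distribution of Ct ~ N(0, t).
    from math import sqrt, pi, log, exp

    def inv_dist(alpha, t, X0=1.0, a=0.5, b=0.3):    # hypothetical values
        C_inv = t * sqrt(3) / pi * log(alpha / (1 - alpha))
        return X0 * exp(a * t + abs(b) * C_inv)

    for alpha in (0.1, 0.5, 0.9):
        print(alpha, inv_dist(alpha, t=1.0))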


Linear Uncertain Differential Equation


Theorem 15.3 (Chen and Liu [9]) Let u1t, u2t, v1t, v2t be integrable uncertain processes. Then the linear uncertain differential equation

    dXt = (u1t Xt + u2t)dt + (v1t Xt + v2t)dCt                           (15.11)

has a solution

    Xt = Ut (X0 + ∫_0^t (u2s/Us) ds + ∫_0^t (v2s/Us) dCs)                (15.12)

where

    Ut = exp(∫_0^t u1s ds + ∫_0^t v1s dCs).                              (15.13)

Proof: At first, we define two uncertain processes Ut and Vt via uncertain differential equations,

    dUt = u1t Ut dt + v1t Ut dCt,   dVt = (u2t/Ut) dt + (v2t/Ut) dCt.

It follows from the integration by parts that

    d(Ut Vt) = Vt dUt + Ut dVt = (u1t Ut Vt + u2t)dt + (v1t Ut Vt + v2t)dCt.

That is, the uncertain process Xt = Ut Vt is a solution of the uncertain differential equation (15.11). Note that

    Ut = U0 exp(∫_0^t u1s ds + ∫_0^t v1s dCs),
    Vt = V0 + ∫_0^t (u2s/Us) ds + ∫_0^t (v2s/Us) dCs.

Taking U0 = 1 and V0 = X0, we get the solution (15.12). The theorem is proved.
Example 15.3: Let m, a, σ be real numbers. Consider a linear uncertain differential equation

    dXt = (m − aXt)dt + σdCt.                                            (15.14)

At first, we have

    Ut = exp(∫_0^t (−a) ds + ∫_0^t 0 dCs) = exp(−at).

It follows from Theorem 15.3 that the solution is

    Xt = exp(−at)(X0 + ∫_0^t m exp(as)ds + ∫_0^t σ exp(as)dCs).

That is,

    Xt = m/a + exp(−at)(X0 − m/a) + σ exp(−at) ∫_0^t exp(as)dCs         (15.15)

provided that a ≠ 0. Note that Xt is a normal uncertain variable, i.e.,

    Xt ~ N(m/a + exp(−at)(X0 − m/a), σ/a − exp(−at)σ/a).                 (15.16)
Example 15.4: Let m and σ be real numbers. Consider a linear uncertain differential equation

    dXt = m dt + σXt dCt.                                                (15.17)

At first, we have

    Ut = exp(∫_0^t 0 ds + ∫_0^t σ dCs) = exp(σCt).

It follows from Theorem 15.3 that the solution is

    Xt = exp(σCt)(X0 + ∫_0^t m exp(−σCs)ds + ∫_0^t 0 dCs).

That is,

    Xt = exp(σCt)(X0 + m ∫_0^t exp(−σCs)ds).                             (15.18)

First Analytic Method

This subsection will introduce an analytic method for solving nonlinear uncertain differential equations like

    dXt = f(t, Xt)dt + σt Xt dCt                                         (15.19)

and

    dXt = αt Xt dt + g(t, Xt)dCt.                                        (15.20)
Theorem 15.4 (Liu [139]) Let f be a function of two variables and let σt be an integrable uncertain process. Then the uncertain differential equation

    dXt = f(t, Xt)dt + σt Xt dCt                                         (15.21)

has a solution

    Xt = Yt^{-1} Zt                                                      (15.22)

where

    Yt = exp(−∫_0^t σs dCs)                                              (15.23)

and Zt is the solution of the uncertain differential equation

    dZt = Yt f(t, Yt^{-1} Zt)dt                                          (15.24)

with initial value Z0 = X0.


Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential

    dYt = exp(−∫_0^t σs dCs)(−σt)dCt = −Yt σt dCt.

It follows from the integration by parts that

    d(Xt Yt) = Xt dYt + Yt dXt = −Xt Yt σt dCt + Yt f(t, Xt)dt + Yt σt Xt dCt.

That is,

    d(Xt Yt) = Yt f(t, Xt)dt.

Defining Zt = Xt Yt, we obtain Xt = Yt^{-1} Zt and dZt = Yt f(t, Yt^{-1} Zt)dt. Furthermore, since Y0 = 1, the initial value Z0 is just X0. The theorem is thus verified.
Example 15.5: Let α and σ be real numbers with α ≠ 1. Consider the uncertain differential equation

    dXt = Xt^α dt + σXt dCt.                                             (15.25)

At first, we have Yt = exp(−σCt) and Zt satisfies the uncertain differential equation,

    dZt = exp(−σCt)(exp(σCt)Zt)^α dt = exp((α − 1)σCt) Zt^α dt.

Since α ≠ 1, we have

    dZt^{1−α} = (1 − α) exp((α − 1)σCt)dt.

It follows from the fundamental theorem of uncertain calculus that

    Zt^{1−α} = Z0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σCs)ds.

Since the initial value Z0 is just X0, we have

    Zt = (X0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σCs)ds)^{1/(1−α)}.

Theorem 15.4 says the uncertain differential equation (15.25) has a solution Xt = Yt^{-1} Zt, i.e.,

    Xt = exp(σCt)(X0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σCs)ds)^{1/(1−α)}.


Theorem 15.5 (Liu [139]) Let g be a function of two variables and let αt be an integrable uncertain process. Then the uncertain differential equation

    dXt = αt Xt dt + g(t, Xt)dCt                                         (15.26)

has a solution

    Xt = Yt^{-1} Zt                                                      (15.27)

where

    Yt = exp(−∫_0^t αs ds)                                               (15.28)

and Zt is the solution of the uncertain differential equation

    dZt = Yt g(t, Yt^{-1} Zt)dCt                                         (15.29)

with initial value Z0 = X0.


Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential

    dYt = exp(−∫_0^t αs ds)(−αt)dt = −Yt αt dt.

It follows from the integration by parts that

    d(Xt Yt) = Xt dYt + Yt dXt = −Xt Yt αt dt + Yt αt Xt dt + Yt g(t, Xt)dCt.

That is,

    d(Xt Yt) = Yt g(t, Xt)dCt.

Defining Zt = Xt Yt, we obtain Xt = Yt^{-1} Zt and dZt = Yt g(t, Yt^{-1} Zt)dCt. Furthermore, since Y0 = 1, the initial value Z0 is just X0. The theorem is thus verified.
Example 15.6: Let α and σ be real numbers with α ≠ 1. Consider the uncertain differential equation

    dXt = σXt dt + Xt^α dCt.                                             (15.30)

At first, we have Yt = exp(−σt) and Zt satisfies the uncertain differential equation,

    dZt = exp(−σt)(exp(σt)Zt)^α dCt = exp((α − 1)σt) Zt^α dCt.

Since α ≠ 1, we have

    dZt^{1−α} = (1 − α) exp((α − 1)σt)dCt.

It follows from the fundamental theorem of uncertain calculus that

    Zt^{1−α} = Z0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σs)dCs.

Since the initial value Z0 is just X0, we have

    Zt = (X0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σs)dCs)^{1/(1−α)}.

Theorem 15.5 says the uncertain differential equation (15.30) has a solution Xt = Yt^{-1} Zt, i.e.,

    Xt = exp(σt)(X0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σs)dCs)^{1/(1−α)}.

Second Analytic Method

This subsection will introduce an analytic method for solving nonlinear uncertain differential equations like

    dXt = f(t, Xt)dt + σt dCt                                            (15.31)

and

    dXt = αt dt + g(t, Xt)dCt.                                           (15.32)

Theorem 15.6 (Yao [223]) Let f be a function of two variables and let σt be an integrable uncertain process. Then the uncertain differential equation

    dXt = f(t, Xt)dt + σt dCt                                            (15.33)

has a solution

    Xt = Yt + Zt                                                         (15.34)

where

    Yt = ∫_0^t σs dCs                                                    (15.35)

and Zt is the solution of the uncertain differential equation

    dZt = f(t, Yt + Zt)dt                                                (15.36)

with initial value Z0 = X0.


Proof: At first, Yt has an uncertain differential dYt = σt dCt. It follows from the integration by parts that

    d(Xt − Yt) = dXt − dYt = f(t, Xt)dt + σt dCt − σt dCt.

That is,

    d(Xt − Yt) = f(t, Xt)dt.

Defining Zt = Xt − Yt, we obtain Xt = Yt + Zt and dZt = f(t, Yt + Zt)dt. Furthermore, since Y0 = 0, the initial value Z0 is just X0. The theorem is proved.
Example 15.7: Let α and σ be real numbers with α ≠ 0. Consider the uncertain differential equation

    dXt = α exp(Xt)dt + σdCt.                                            (15.37)

At first, we have Yt = σCt and Zt satisfies the uncertain differential equation,

    dZt = α exp(σCt + Zt)dt.

Since α ≠ 0, we have

    d exp(−Zt) = −α exp(σCt)dt.

It follows from the fundamental theorem of uncertain calculus that

    exp(−Zt) = exp(−Z0) − α ∫_0^t exp(σCs)ds.

Since the initial value Z0 is just X0, we have

    Zt = X0 − ln(1 − α ∫_0^t exp(X0 + σCs)ds).

Hence

    Xt = X0 + σCt − ln(1 − α ∫_0^t exp(X0 + σCs)ds).
Theorem 15.7 (Yao [223]) Let g be a function of two variables and let αt be an integrable uncertain process. Then the uncertain differential equation

    dXt = αt dt + g(t, Xt)dCt                                            (15.38)

has a solution

    Xt = Yt + Zt                                                         (15.39)

where

    Yt = ∫_0^t αs ds                                                     (15.40)

and Zt is the solution of the uncertain differential equation

    dZt = g(t, Yt + Zt)dCt                                               (15.41)

with initial value Z0 = X0.


Proof: The uncertain process Yt has an uncertain differential dYt = αt dt. It follows from the integration by parts that

    d(Xt − Yt) = dXt − dYt = αt dt + g(t, Xt)dCt − αt dt.

That is,

    d(Xt − Yt) = g(t, Xt)dCt.

Defining Zt = Xt − Yt, we obtain Xt = Yt + Zt and dZt = g(t, Yt + Zt)dCt. Furthermore, since Y0 = 0, the initial value Z0 is just X0. The theorem is proved.
Example 15.8: Let α and σ be real numbers with σ ≠ 0. Consider the uncertain differential equation

    dXt = αdt + σ exp(Xt)dCt.                                            (15.42)

At first, we have Yt = αt and Zt satisfies the uncertain differential equation,

    dZt = σ exp(αt + Zt)dCt.

Since σ ≠ 0, we have

    d exp(−Zt) = −σ exp(αt)dCt.

It follows from the fundamental theorem of uncertain calculus that

    exp(−Zt) = exp(−Z0) − σ ∫_0^t exp(αs)dCs.

Since the initial value Z0 is just X0, we have

    Zt = X0 − ln(1 − σ ∫_0^t exp(X0 + αs)dCs).

Hence

    Xt = X0 + αt − ln(1 − σ ∫_0^t exp(X0 + αs)dCs).

15.2 Existence and Uniqueness

Theorem 15.8 (Chen and Liu [9], Existence and Uniqueness Theorem) The uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt                                       (15.43)

has a unique solution if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition

    |f(t, x)| + |g(t, x)| ≤ L(1 + |x|),   ∀x ∈ ℜ, t ≥ 0                  (15.44)

and the Lipschitz condition

    |f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L|x − y|,   ∀x, y ∈ ℜ, t ≥ 0   (15.45)

for some constant L. Moreover, the solution is sample-continuous.


Proof: We first prove the existence of solution by a successive approximation method. Define Xt^(0) = X0, and

    Xt^(n) = X0 + ∫_0^t f(s, Xs^(n−1))ds + ∫_0^t g(s, Xs^(n−1))dCs

for n = 1, 2, ... and write

    Dt^(n)(γ) = max_{0≤s≤t} |Xs^(n+1)(γ) − Xs^(n)(γ)|

for each γ. It follows from the linear growth condition and Lipschitz condition that

    Dt^(0)(γ) = max_{0≤s≤t} |∫_0^s f(v, X0)dv + ∫_0^s g(v, X0)dCv(γ)|
              ≤ ∫_0^t |f(v, X0)|dv + K(γ) ∫_0^t |g(v, X0)|dv
              ≤ (1 + |X0|)L(1 + K(γ))t

where K(γ) is the Lipschitz constant to the sample path Ct(γ) in Theorem 14.5. In fact, by using the induction method, we may verify

    Dt^(n)(γ) ≤ (1 + |X0|) L^{n+1}(1 + K(γ))^{n+1} t^{n+1} / (n + 1)!

for each n. This means that, for each γ, the sample paths Xt^(k)(γ) converge uniformly on any given time interval. Write the limit by Xt(γ) that is just a solution of the uncertain differential equation because

    Xt = X0 + ∫_0^t f(s, Xs)ds + ∫_0^t g(s, Xs)dCs.

Next we prove that the solution is unique. Assume that both Xt and Xt* are solutions of the uncertain differential equation. Then for each γ, it follows from the linear growth condition and Lipschitz condition that

    |Xt(γ) − Xt*(γ)| ≤ L(1 + K(γ)) ∫_0^t |Xv(γ) − Xv*(γ)|dv.

By using the Gronwall inequality, we obtain

    |Xt(γ) − Xt*(γ)| ≤ 0 · exp(L(1 + K(γ))t) = 0.

Hence Xt = Xt*. The uniqueness is verified. Finally, for each γ, we have

    |Xt(γ) − Xr(γ)| = |∫_r^t f(s, Xs(γ))ds + ∫_r^t g(s, Xs(γ))dCs(γ)| → 0

as r → t. Thus Xt is sample-continuous and the theorem is proved.

15.3 Stability

Definition 15.2 (Liu [116]) An uncertain differential equation is said to be stable if for any two solutions Xt and Yt, we have

    lim_{|X0−Y0|→0} M{|Xt − Yt| > ε} = 0,   ∀t > 0                       (15.46)

for any given number ε > 0.


Example 15.9: In order to illustrate the concept of stability, let us consider the uncertain differential equation

    dXt = a dt + b dCt.                                                  (15.47)

It is clear that the two solutions with initial values X0 and Y0 are

    Xt = X0 + at + bCt,   Yt = Y0 + at + bCt.

Then for any given number ε > 0 and any time t > 0, we have

    lim_{|X0−Y0|→0} M{|Xt − Yt| > ε} = lim_{|X0−Y0|→0} M{|X0 − Y0| > ε} = 0.

Hence the uncertain differential equation (15.47) is stable.


Example 15.10: Some uncertain differential equations are not stable. For example, consider

    dXt = Xt dt + b dCt.                                                 (15.48)

It is clear that the two solutions with different initial values X0 and Y0 are

    Xt = exp(t)X0 + b exp(t) ∫_0^t exp(−s)dCs,
    Yt = exp(t)Y0 + b exp(t) ∫_0^t exp(−s)dCs.

Then for any given number ε > 0, we have

    M{|Xt − Yt| > ε} = M{exp(t)|X0 − Y0| > ε} = 1

provided that X0 ≠ Y0 and t is sufficiently large. Hence the uncertain differential equation (15.48) is unstable.


Theorem 15.9 (Yao, Gao and Gao [219], Stability Theorem) The uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt                                       (15.49)

is stable if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition

    |f(t, x)| + |g(t, x)| ≤ K(1 + |x|),   ∀x ∈ ℜ, t ≥ 0                  (15.50)

for some constant K and the strong Lipschitz condition

    |f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L(t)|x − y|,   ∀x, y ∈ ℜ, t ≥ 0   (15.51)

for some bounded and integrable function L(t) on [0, +∞).


Proof: Since L(t) is bounded on [0, +∞), there is a constant R such that L(t) ≤ R for any t. Then the strong Lipschitz condition (15.51) implies the following Lipschitz condition,

    |f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ R|x − y|,   ∀x, y ∈ ℜ, t ≥ 0.   (15.52)

It follows from the linear growth condition (15.50), the Lipschitz condition (15.52) and the existence and uniqueness theorem that the uncertain differential equation (15.49) has a unique solution. Let Xt and Yt be two solutions with initial values X0 and Y0, respectively. Then for each γ, we have

    d|Xt(γ) − Yt(γ)| ≤ |f(t, Xt(γ)) − f(t, Yt(γ))|dt + |g(t, Xt(γ)) − g(t, Yt(γ))||dCt(γ)|
                     ≤ L(t)|Xt(γ) − Yt(γ)|dt + L(t)K(γ)|Xt(γ) − Yt(γ)|dt
                     = L(t)(1 + K(γ))|Xt(γ) − Yt(γ)|dt

where K(γ) is the Lipschitz constant of the sample path Ct(γ) in Theorem 14.5. It follows that

    |Xt(γ) − Yt(γ)| ≤ |X0 − Y0| exp((1 + K(γ)) ∫_0^t L(s)ds).

Thus for any given ε > 0, we always have

    M{|Xt − Yt| > ε} ≤ M{(1 + K(γ)) ∫_0^{+∞} L(s)ds > ln(ε/|X0 − Y0|)}.

Since

    M{(1 + K(γ)) ∫_0^{+∞} L(s)ds > ln(ε/|X0 − Y0|)} → 0

as |X0 − Y0| → 0, we obtain

    lim_{|X0−Y0|→0} M{|Xt − Yt| > ε} = 0.

Hence the uncertain differential equation is stable.


Exercise 15.1: Suppose u1t, u2t, v1t, v2t are bounded functions with respect to t such that

    ∫_0^{+∞} |u1t|dt < +∞,   ∫_0^{+∞} |v1t|dt < +∞.                      (15.53)

Show that the linear uncertain differential equation

    dXt = (u1t Xt + u2t)dt + (v1t Xt + v2t)dCt                           (15.54)

is stable since its coefficients satisfy the linear growth condition and the strong Lipschitz condition.

15.4 Yao-Chen Formula

The Yao-Chen formula relates uncertain differential equations and ordinary differential equations, just as the Feynman-Kac formula relates stochastic differential equations and partial differential equations.

Definition 15.3 (Yao and Chen [222]) Let α be a number with 0 < α < 1. An uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt                                       (15.55)

is said to have an α-path Xt^α if it solves the corresponding ordinary differential equation

    dXt^α = f(t, Xt^α)dt + |g(t, Xt^α)|Φ^{-1}(α)dt                       (15.56)

where Φ^{-1}(α) is the inverse standard normal uncertainty distribution, i.e.,

    Φ^{-1}(α) = (√3/π) ln(α/(1 − α)).                                    (15.57)

Remark 15.2: Note that each α-path Xt^α is a real-valued function of time t, but is not necessarily one of the sample paths. Furthermore, almost all α-paths are continuous functions with respect to time t.
Example 15.11: The uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an α-path

    Xt^α = exp(at + |b|Φ^{-1}(α)t)                                       (15.58)

where Φ^{-1} is the inverse standard normal uncertainty distribution.

Figure 15.1: A Spectrum of α-Paths of dXt = aXt dt + bXt dCt (the curves Xt^α for α = 0.1, 0.2, ..., 0.9)
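A figure of this kind can be regenerated directly from (15.58). The sketch below is an added illustration (a, b and the α grid are arbitrary choices):

    # A minimal sketch: plot the alpha-paths Xt^alpha = exp(a*t + |b|*Phi^{-1}(alpha)*t)
    # of dXt = aXt dt + bXt dCt with X0 = 1, for alpha = 0.1, ..., 0.9.
    import numpy as np
    import matplotlib.pyplot as plt

    def phi_inv(alpha):
        # inverse standard normal uncertainty distribution (15.57)
        return np.sqrt(3) / np.pi * np.log(alpha / (1 - alpha))

    a, b = 1.0, 1.0                       # hypothetical parameters
    t = np.linspace(0, 1, 200)
    for alpha in np.arange(0.1, 1.0, 0.1):
        plt.plot(t, np.exp(a * t + abs(b) * phi_inv(alpha) * t),
                 label=f'alpha = {alpha:.1f}')
    plt.xlabel('t'); plt.ylabel('Xt'); plt.legend(); plt.show()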


Theorem 15.10 (Yao-Chen Formula [222]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.59)

respectively. Then

    M{Xt ≤ Xt^α, ∀t} = α,                                                (15.60)

    M{Xt > Xt^α, ∀t} = 1 − α.                                            (15.61)

Proof: At first, for each α-path Xt^α, we divide the time interval into two parts,

    T+ = {t | g(t, Xt^α) ≥ 0},   T− = {t | g(t, Xt^α) < 0}.

It is obvious that T+ ∩ T− = ∅ and T+ ∪ T− = [0, +∞). Write

    Λ1+ = {γ | dCt(γ)/dt ≤ Φ^{-1}(α) for any t ∈ T+},
    Λ1− = {γ | dCt(γ)/dt ≥ Φ^{-1}(1 − α) for any t ∈ T−}

where Φ^{-1} is the inverse standard normal uncertainty distribution. Since T+ and T− are disjoint sets and Ct has independent increments, we get

    M{Λ1+} = α,   M{Λ1−} = α,   M{Λ1+ ∩ Λ1−} = α.

For any γ ∈ Λ1+ ∩ Λ1−, we always have

    g(t, Xt^α) dCt(γ)/dt ≤ |g(t, Xt^α)|Φ^{-1}(α),   ∀t.

Hence Xt(γ) ≤ Xt^α for all t and

    M{Xt ≤ Xt^α, ∀t} ≥ M{Λ1+ ∩ Λ1−} = α.                                 (15.62)

On the other hand, let us define

    Λ2+ = {γ | dCt(γ)/dt > Φ^{-1}(α) for any t ∈ T+},
    Λ2− = {γ | dCt(γ)/dt < Φ^{-1}(1 − α) for any t ∈ T−}.

Since T+ and T− are disjoint sets and Ct has independent increments, we obtain

    M{Λ2+} = 1 − α,   M{Λ2−} = 1 − α,   M{Λ2+ ∩ Λ2−} = 1 − α.

For any γ ∈ Λ2+ ∩ Λ2−, we always have

    g(t, Xt^α) dCt(γ)/dt > |g(t, Xt^α)|Φ^{-1}(α),   ∀t.

Hence Xt(γ) > Xt^α for all t and

    M{Xt > Xt^α, ∀t} ≥ M{Λ2+ ∩ Λ2−} = 1 − α.                             (15.63)

Note that {Xt ≤ Xt^α, ∀t} and its complement are opposite events with each other. By using the duality axiom, we obtain

    M{Xt ≤ Xt^α, ∀t} + M{{Xt ≤ Xt^α, ∀t}^c} = 1.

It follows from {Xt > Xt^α, ∀t} ⊂ {Xt ≤ Xt^α, ∀t}^c and the monotonicity theorem that

    M{Xt ≤ Xt^α, ∀t} + M{Xt > Xt^α, ∀t} ≤ 1.                             (15.64)

Thus (15.60) and (15.61) follow from (15.62), (15.63) and (15.64) immediately.
Remark 15.3: It may also be shown that the Yao-Chen formula can be written as

    M{Xt < Xt^α, ∀t} = α,                                                (15.65)

    M{Xt ≥ Xt^α, ∀t} = 1 − α.                                            (15.66)

Please mention that {Xt < Xt^α, ∀t} and {Xt ≥ Xt^α, ∀t} are disjoint events but not opposite. Generally speaking, their union is not the universal set, and it is possible that

    M{(Xt < Xt^α, ∀t) ∪ (Xt ≥ Xt^α, ∀t)} < 1.                            (15.67)

However, for any α, it is always true that

    M{Xt < Xt^α, ∀t} + M{Xt ≥ Xt^α, ∀t} = 1.                             (15.68)

15.5 Uncertainty Distribution of Solution

Theorem 15.11 (Yao and Chen [222]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.69)

respectively. Then the solution Xt has an inverse uncertainty distribution

    Ψt^{-1}(α) = Xt^α.                                                   (15.70)

Proof: Note that {Xt ≤ Xt^α} ⊃ {Xs ≤ Xs^α, ∀s} holds. By using the monotonicity theorem and Yao-Chen formula, we obtain

    M{Xt ≤ Xt^α} ≥ M{Xs ≤ Xs^α, ∀s} = α.                                 (15.71)

Similarly, we also have

    M{Xt > Xt^α} ≥ M{Xs > Xs^α, ∀s} = 1 − α.                             (15.72)

In addition, since {Xt ≤ Xt^α} and {Xt > Xt^α} are opposite events, the duality axiom makes

    M{Xt ≤ Xt^α} + M{Xt > Xt^α} = 1.                                     (15.73)

It follows from (15.71), (15.72) and (15.73) that M{Xt ≤ Xt^α} = α. The theorem is thus verified.
Example 15.12: The uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an α-path Xt^α = exp(at + |b|Φ^{-1}(α)t). Thus its solution Xt has an inverse uncertainty distribution

    Ψt^{-1}(α) = exp(at + |b|Φ^{-1}(α)t)                                 (15.74)

where Φ^{-1} is the inverse standard normal uncertainty distribution.
Theorem 15.12 (Yao and Chen [222]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.75)

respectively. Then for any monotone (increasing or decreasing) function J, we have

    E[J(Xt)] = ∫_0^1 J(Xt^α)dα.                                          (15.76)

Proof: At first, it follows from the Yao-Chen formula that Xt has an inverse uncertainty distribution Ψt^{-1}(α) = Xt^α. Next, we may have a monotone function become a strictly monotone function by a small perturbation. When J is a strictly increasing function, it follows from Theorem 2.10 that J(Xt) has an inverse uncertainty distribution

    Υt^{-1}(α) = J(Xt^α).

Thus we have

    E[J(Xt)] = ∫_0^1 Υt^{-1}(α)dα = ∫_0^1 J(Xt^α)dα.

When J is a strictly decreasing function, it follows from Theorem 2.17 that J(Xt) has an inverse uncertainty distribution

    Υt^{-1}(α) = J(Xt^{1−α}).

Thus we have

    E[J(Xt)] = ∫_0^1 Υt^{-1}(α)dα = ∫_0^1 J(Xt^{1−α})dα = ∫_0^1 J(Xt^α)dα.

The theorem is thus proved.


Exercise 15.2: Let Xt and Xt^α be the solution and α-path of some uncertain differential equation. Show that

    E[Xt] = ∫_0^1 Xt^α dα,                                               (15.77)

    E[(Xt − K)+] = ∫_0^1 (Xt^α − K)+ dα,                                 (15.78)

    E[(K − Xt)+] = ∫_0^1 (K − Xt^α)+ dα.                                 (15.79)

Numerical Method for Uncertainty Distribution

It is almost impossible to find analytic solutions for general uncertain differential equations. This fact provides a motivation to design a numerical method to solve the general uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt.                                      (15.80)

In order to do so, a key point is to obtain an inverse uncertainty distribution Ψt^{-1}(α) of its solution Xt. For this purpose, Yao and Chen [222] designed the following algorithm:

Step 1. Fix α on (0, 1).

Step 2. Solve dXt^α = f(t, Xt^α)dt + |g(t, Xt^α)|Φ^{-1}(α)dt by any method of ordinary differential equation and obtain the α-path Xt^α, for example, by using the recursion formula

    X_{i+1}^α = X_i^α + f(ti, X_i^α)h + |g(ti, X_i^α)|Φ^{-1}(α)h         (15.81)

where Φ^{-1} is the inverse standard normal uncertainty distribution and h is the step length.

Step 3. The inverse uncertainty distribution of the solution Xt is determined by

    Ψt^{-1}(α) = Xt^α.                                                   (15.82)
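As an added illustration of Steps 1-3 (independent of the Matlab toolbox cited below), the recursion (15.81) is a one-line Euler step, and Theorem 15.12 turns the α-paths into expected values by averaging over a grid of α. The coefficients at the end correspond to Example 15.13 below, as reconstructed here:

    # A minimal sketch of the Yao-Chen algorithm: Euler recursion (15.81) for
    # the alpha-path, then E[Xs] = int_0^1 Xs^alpha dalpha (Theorem 15.12).
    import numpy as np

    def phi_inv(alpha):
        # inverse standard normal uncertainty distribution (15.57)
        return np.sqrt(3) / np.pi * np.log(alpha / (1 - alpha))

    def alpha_path_end(f, g, x0, s, alpha, n=1000):
        h, x = s / n, x0
        for i in range(n):
            t = i * h
            x += f(t, x) * h + abs(g(t, x)) * phi_inv(alpha) * h  # (15.81)
        return x

    def expected_value(f, g, x0, s, m=99):
        alphas = np.arange(1, m + 1) / (m + 1)       # grid of alpha in (0,1)
        return np.mean([alpha_path_end(f, g, x0, s, a) for a in alphas])

    # Coefficients of Example 15.13 (as reconstructed here):
    f = lambda t, x: t - x
    g = lambda t, x: np.sqrt(1 + x)
    print(expected_value(f, g, x0=1.0, s=1.0))       # estimate of E[X1]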
Example 15.13: In order to illustrate the numerical method, let us consider an uncertain differential equation

    dXt = (t − Xt)dt + √(1 + Xt) dCt,   X0 = 1.                          (15.83)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this equation successfully and obtain an inverse uncertainty distribution of the solution Xt. Furthermore, we may get

    E[X1] ≈ 0.868.                                                       (15.84)
Example 15.14: Now we consider a nonlinear uncertain differential equation

    dXt = √Xt dt + (1 − t)Xt dCt,   X0 = 1.                              (15.85)

Note that (1 − t)Xt takes not only positive values but also negative values. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may obtain the inverse uncertainty distribution of the solution Xt. Furthermore, we may get

    E[(X2 − 3)+] ≈ 2.99.                                                 (15.86)

15.6 Extreme Value of Solution

Theorem 15.13 (Yao [220]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.87)

respectively. Then for any time s > 0 and strictly increasing function J(x), the supremum

    sup_{0≤t≤s} J(Xt)                                                    (15.88)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = sup_{0≤t≤s} J(Xt^α);                                    (15.89)

and the infimum

    inf_{0≤t≤s} J(Xt)                                                    (15.90)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = inf_{0≤t≤s} J(Xt^α).                                    (15.91)

Proof: Since J(x) is a strictly increasing function with respect to x, it is always true that

    {sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^α)} ⊃ {Xt ≤ Xt^α, ∀t}.

By using the Yao-Chen formula, we obtain

    M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^α)} ≥ M{Xt ≤ Xt^α, ∀t} = α.   (15.92)

Similarly, we have

    M{sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xt^α)} ≥ M{Xt > Xt^α, ∀t} = 1 − α.   (15.93)

It follows from (15.92), (15.93) and the duality axiom that

    M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^α)} = α                       (15.94)

which proves (15.89). Next, it is easy to verify that

    {inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^α)} ⊃ {Xt ≤ Xt^α, ∀t}.

By using the Yao-Chen formula, we obtain

    M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^α)} ≥ M{Xt ≤ Xt^α, ∀t} = α.   (15.95)

Similarly, we have

    M{inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xt^α)} ≥ M{Xt > Xt^α, ∀t} = 1 − α.   (15.96)

It follows from (15.95), (15.96) and the duality axiom that

    M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^α)} = α                       (15.97)

which proves (15.91). The theorem is thus verified.

Exercise 15.3: Let r and K be real numbers. Show that the supremum

    sup_{0≤t≤s} exp(−rt)(Xt − K)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = sup_{0≤t≤s} exp(−rt)(Xt^α − K)

for any given time s > 0.


Theorem 15.14 (Yao [220]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.98)

respectively. Then for any time s > 0 and strictly decreasing function J(x), the supremum

    sup_{0≤t≤s} J(Xt)                                                    (15.99)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = sup_{0≤t≤s} J(Xt^{1−α});                                (15.100)

and the infimum

    inf_{0≤t≤s} J(Xt)                                                    (15.101)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = inf_{0≤t≤s} J(Xt^{1−α}).                                (15.102)

Proof: Since J(x) is a strictly decreasing function with respect to x, it is always true that

    {sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^{1−α})} ⊃ {Xt ≥ Xt^{1−α}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt ≥ Xt^{1−α}, ∀t} = α.   (15.103)

Similarly, we have

    M{sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α.   (15.104)

It follows from (15.103), (15.104) and the duality axiom that

    M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^{1−α})} = α                   (15.105)

which proves (15.100). Next, it is easy to verify that

    {inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^{1−α})} ⊃ {Xt ≥ Xt^{1−α}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt ≥ Xt^{1−α}, ∀t} = α.   (15.106)

Similarly, we have

    M{inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α.   (15.107)

It follows from (15.106), (15.107) and the duality axiom that

    M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^{1−α})} = α                   (15.108)

which proves (15.102). The theorem is thus verified.


Exercise 15.4: Let r and K be real numbers. Show that the supremum

    sup_{0≤t≤s} exp(−rt)(K − Xt)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = sup_{0≤t≤s} exp(−rt)(K − Xt^{1−α})

for any given time s > 0.

15.7 First Hitting Time of Solution

Theorem 15.15 (Yao [220]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt                                       (15.109)

with an initial value X0, respectively. Then for any given level z and strictly increasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution

    Υ(s) = 1 − inf{α | sup_{0≤t≤s} J(Xt^α) ≥ z},   if z > J(X0),
    Υ(s) = sup{α | inf_{0≤t≤s} J(Xt^α) ≤ z},       if z < J(X0).         (15.110)
Proof: At first, assume z > J(X0) and write

    α0 = inf{α | sup_{0≤t≤s} J(Xt^α) ≥ z}.

Then we have

    sup_{0≤t≤s} J(Xt^{α0}) = z,

    {τz ≤ s} = {sup_{0≤t≤s} J(Xt) ≥ z} ⊃ {Xt ≥ Xt^{α0}, ∀t},

    {τz > s} = {sup_{0≤t≤s} J(Xt) < z} ⊃ {Xt < Xt^{α0}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{τz ≤ s} ≥ M{Xt ≥ Xt^{α0}, ∀t} = 1 − α0,
    M{τz > s} ≥ M{Xt < Xt^{α0}, ∀t} = α0.

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = 1 − α0. Hence the first hitting time τz has an uncertainty distribution

    Υ(s) = M{τz ≤ s} = 1 − inf{α | sup_{0≤t≤s} J(Xt^α) ≥ z}.

Similarly, assume z < J(X0) and write

    α0 = sup{α | inf_{0≤t≤s} J(Xt^α) ≤ z}.

Then we have

    inf_{0≤t≤s} J(Xt^{α0}) = z,

    {τz ≤ s} = {inf_{0≤t≤s} J(Xt) ≤ z} ⊃ {Xt ≤ Xt^{α0}, ∀t},

    {τz > s} = {inf_{0≤t≤s} J(Xt) > z} ⊃ {Xt > Xt^{α0}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{τz ≤ s} ≥ M{Xt ≤ Xt^{α0}, ∀t} = α0,
    M{τz > s} ≥ M{Xt > Xt^{α0}, ∀t} = 1 − α0.

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = α0. Hence the first hitting time τz has an uncertainty distribution

    Υ(s) = M{τz ≤ s} = sup{α | inf_{0≤t≤s} J(Xt^α) ≤ z}.

The theorem is verified.


Theorem 15.16 (Yao [220]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt                                       (15.111)

with an initial value X0, respectively. Then for any given level z and strictly decreasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution

    Υ(s) = sup{α | sup_{0≤t≤s} J(Xt^α) ≥ z},       if z > J(X0),
    Υ(s) = 1 − inf{α | inf_{0≤t≤s} J(Xt^α) ≤ z},   if z < J(X0).         (15.112)

Proof: At first, assume z > J(X0) and write

    α0 = sup{α | sup_{0≤t≤s} J(Xt^α) ≥ z}.

Then we have

    sup_{0≤t≤s} J(Xt^{α0}) = z,

    {τz ≤ s} = {sup_{0≤t≤s} J(Xt) ≥ z} ⊃ {Xt ≤ Xt^{α0}, ∀t},

    {τz > s} = {sup_{0≤t≤s} J(Xt) < z} ⊃ {Xt > Xt^{α0}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{τz ≤ s} ≥ M{Xt ≤ Xt^{α0}, ∀t} = α0,
    M{τz > s} ≥ M{Xt > Xt^{α0}, ∀t} = 1 − α0.

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = α0. Hence the first hitting time τz has an uncertainty distribution

    Υ(s) = M{τz ≤ s} = sup{α | sup_{0≤t≤s} J(Xt^α) ≥ z}.

Similarly, assume z < J(X0) and write

    α0 = inf{α | inf_{0≤t≤s} J(Xt^α) ≤ z}.

Then we have

    inf_{0≤t≤s} J(Xt^{α0}) = z,

    {τz ≤ s} = {inf_{0≤t≤s} J(Xt) ≤ z} ⊃ {Xt ≥ Xt^{α0}, ∀t},

    {τz > s} = {inf_{0≤t≤s} J(Xt) > z} ⊃ {Xt < Xt^{α0}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{τz ≤ s} ≥ M{Xt ≥ Xt^{α0}, ∀t} = 1 − α0,
    M{τz > s} ≥ M{Xt < Xt^{α0}, ∀t} = α0.

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = 1 − α0. Hence the first hitting time τz has an uncertainty distribution

    Υ(s) = M{τz ≤ s} = 1 − inf{α | inf_{0≤t≤s} J(Xt^α) ≤ z}.

The theorem is verified.

15.8 Time Integral of Solution

Theorem 15.17 (Yao [220]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.113)

respectively. Then for any time s > 0 and strictly increasing function J(x), the time integral

    ∫_0^s J(Xt)dt                                                        (15.114)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = ∫_0^s J(Xt^α)dt.                                        (15.115)

Proof: Since J(x) is a strictly increasing function with respect to x, it is always true that

    {∫_0^s J(Xt)dt ≤ ∫_0^s J(Xt^α)dt} ⊃ {J(Xt) ≤ J(Xt^α), ∀t} ⊃ {Xt ≤ Xt^α, ∀t}.

By using the Yao-Chen formula, we obtain

    M{∫_0^s J(Xt)dt ≤ ∫_0^s J(Xt^α)dt} ≥ M{Xt ≤ Xt^α, ∀t} = α.           (15.116)

Similarly, we have

    M{∫_0^s J(Xt)dt > ∫_0^s J(Xt^α)dt} ≥ M{Xt > Xt^α, ∀t} = 1 − α.       (15.117)

It follows from (15.116), (15.117) and the duality axiom that

    M{∫_0^s J(Xt)dt ≤ ∫_0^s J(Xt^α)dt} = α.                              (15.118)

The theorem is thus verified.


Exercise 15.5: Let r and K be real numbers. Show that the time integral

    ∫_0^s exp(−rt)(Xt − K)dt

has an inverse uncertainty distribution

    Ψs^{-1}(α) = ∫_0^s exp(−rt)(Xt^α − K)dt

for any given time s > 0.


Theorem 15.18 (Yao [220]) Let Xt and Xt^α be the solution and α-path of the uncertain differential equation

    dXt = f(t, Xt)dt + g(t, Xt)dCt,                                      (15.119)

respectively. Then for any time s > 0 and strictly decreasing function J(x), the time integral

    ∫_0^s J(Xt)dt                                                        (15.120)

has an inverse uncertainty distribution

    Ψs^{-1}(α) = ∫_0^s J(Xt^{1−α})dt.                                    (15.121)

Proof: Since J(x) is a strictly decreasing function with respect to x, it is always true that

    {∫_0^s J(Xt)dt ≤ ∫_0^s J(Xt^{1−α})dt} ⊃ {Xt ≥ Xt^{1−α}, ∀t}.

By using the Yao-Chen formula, we obtain

    M{∫_0^s J(Xt)dt ≤ ∫_0^s J(Xt^{1−α})dt} ≥ M{Xt ≥ Xt^{1−α}, ∀t} = α.   (15.122)

Similarly, we have

    M{∫_0^s J(Xt)dt > ∫_0^s J(Xt^{1−α})dt} ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α.   (15.123)

It follows from (15.122), (15.123) and the duality axiom that

    M{∫_0^s J(Xt)dt ≤ ∫_0^s J(Xt^{1−α})dt} = α.                          (15.124)

The theorem is thus verified.


Exercise 15.6: Let r and K be real numbers. Show that the time integral

    ∫_0^s exp(−rt)(K − Xt)dt

has an inverse uncertainty distribution

    Ψs^{-1}(α) = ∫_0^s exp(−rt)(K − Xt^{1−α})dt

for any given time s > 0.

15.9 Bibliographic Notes

The study of uncertain differential equation was pioneered by Liu [114] in 2008. This work was immediately followed by many researchers. Nowadays, the uncertain differential equation has achieved fruitful results in both theory and practice.
The existence and uniqueness theorem of solution of uncertain differential equation was first proved by Chen and Liu [9] under linear growth condition and Lipschitz continuous condition. The theorem was verified again by Gao [47] under local linear growth condition and local Lipschitz continuous condition.
The first concept of stability of uncertain differential equation was presented by Liu [116], and some stability theorems were proved by Yao, Gao and Gao [219]. Following that, Yao and Sheng [229] discussed stability in mean, Sheng [192] explored stability in moment, and Sheng [193] investigated exponential stability of uncertain differential equations.
In order to solve uncertain differential equations, Chen and Liu [9] obtained an analytic solution to linear uncertain differential equations. In addition, Liu [139] and Yao [223] presented a spectrum of analytic methods to solve some special classes of nonlinear uncertain differential equations.
More importantly, Yao and Chen [222] showed that the solution of an uncertain differential equation can be represented by a family of solutions of ordinary differential equations, thus relating uncertain differential equations and ordinary differential equations. On the basis of the Yao-Chen formula, a numerical method was also designed by Yao and Chen [222] for solving general uncertain differential equations. Furthermore, Yao [220] presented some formulas to calculate the extreme value, first hitting time, and time integral of the solution of an uncertain differential equation.


Uncertain differential equation has been extended by many researchers.


For example, uncertain delay differential equation was studied among others
by Barbacioru [1], Ge and Zhu [48], and Liu and Fei [134]. In addition,
uncertain differential equation with jumps was suggested by Yao [217], and
backward uncertain differential equation was discussed by Ge and Zhu [49].

Chapter 16
Uncertain Finance

This chapter will introduce uncertain stock model, uncertain interest rate model, and uncertain currency model by using the tool of uncertain differential equation.

16.1 Uncertain Stock Model

Liu [116] supposed that the stock price follows an uncertain differential equation and presented an uncertain stock model in which the bond price Xt and the stock price Yt are determined by

    dXt = rXt dt,
    dYt = eYt dt + σYt dCt                                               (16.1)

where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion, and Ct is a canonical Liu process. Note that the bond price is Xt = X0 exp(rt) and the stock price is

    Yt = Y0 exp(et + σCt)                                                (16.2)

whose inverse uncertainty distribution is

    Φt^{-1}(α) = Y0 exp(et + (σt√3/π) ln(α/(1 − α))).                    (16.3)
European Option

Definition 16.1 A European call option is a contract that gives the holder the right to buy a stock at an expiration time s for a strike price K.

The payoff from a European call option is (Ys − K)+ since the option is rationally exercised if and only if Ys > K. Considering the time value of money resulted from the bond, the present value of the payoff is exp(−rs)(Ys − K)+.

Hence the European call option price should be the expected present value of the payoff.

Definition 16.2 Assume a European call option has a strike price K and an expiration time s. Then the European call option price is

    fc = exp(−rs)E[(Ys − K)+].                                           (16.4)

Figure 16.1: Payoff (Ys − K)+ from European Call Option

Theorem 16.1 (Liu [116]) Assume a European call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the European call option price is

    fc = exp(−rs) ∫_0^1 (Y0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+ dα.   (16.5)

Proof: Since (Ys − K)+ is an increasing function with respect to Ys, it has an inverse uncertainty distribution

    Ψs^{-1}(α) = (Y0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+.

It follows from Definition 16.2 that the European call option price formula is just (16.5).
Remark 16.1: It is clear that the European call option price is a decreasing
function of interest rate r. That is, the European call option will devaluate
if the interest rate is raised; and the European call option will appreciate in
value if the interest rate is reduced. In addition, the European call option
price is also a decreasing function of the strike price K.


Example 16.1: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 20, the strike price K = 25 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European call option price fc = 6.91.
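The price (16.5) is a one-dimensional integral in α, so it is easy to evaluate without the toolbox. A minimal sketch, using the parameters of Example 16.1 (the grid size is an arbitrary discretization choice):

    # A minimal sketch: European call price (16.5) for the stock model (16.1),
    # approximating the alpha integral by an average over a uniform grid.
    import numpy as np

    def european_call(r, e, sigma, Y0, K, s, m=9999):
        alpha = np.arange(1, m + 1) / (m + 1)
        Ys = Y0 * np.exp(e * s + sigma * s * np.sqrt(3) / np.pi
                         * np.log(alpha / (1 - alpha)))   # inverse dist. of Ys
        return np.exp(-r * s) * np.mean(np.maximum(Ys - K, 0))

    print(european_call(r=0.08, e=0.06, sigma=0.32, Y0=20, K=25, s=2))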
Definition 16.3 A European put option is a contract that gives the holder the right to sell a stock at an expiration time s for a strike price K.

The payoff from a European put option is (K − Ys)+ since the option is rationally exercised if and only if Ys < K. Considering the time value of money resulted from the bond, the present value of this payoff is exp(−rs)(K − Ys)+. Hence the European put option price should be the expected present value of the payoff.
Definition 16.4 Assume a European put option has a strike price K and an expiration time s. Then the European put option price is

    fp = exp(−rs)E[(K − Ys)+].                                           (16.6)

Theorem 16.2 (Liu [116]) Assume a European put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the European put option price is

    fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1 − α))))+ dα.   (16.7)

Proof: Since (K − Ys)+ is a decreasing function with respect to Ys, it has an inverse uncertainty distribution

    Ψs^{-1}(α) = (K − Y0 exp(es + (σs√3/π) ln((1 − α)/α)))+.

It follows from Definition 16.4 that the European put option price is

    fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln((1 − α)/α)))+ dα
       = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1 − α))))+ dα.

The European put option price formula is verified.
Remark 16.2: It is easy to verify that the option price is a decreasing
function of the interest rate r, and is an increasing function of the strike
price K.


Example 16.2: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 20, the strike price K = 25 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European put option price fp = 4.40.
American Option

Definition 16.5 An American call option is a contract that gives the holder the right to buy a stock at any time prior to an expiration time s for a strike price K.

It is clear that the payoff from an American call option is the supremum of (Yt − K)+ over the time interval [0, s]. Considering the time value of money resulted from the bond, the present value of this payoff is

    sup_{0≤t≤s} exp(−rt)(Yt − K)+.                                       (16.8)

Hence the American call option price should be the expected present value of the payoff.

Definition 16.6 Assume an American call option has a strike price K and an expiration time s. Then the American call option price is

    fc = E[sup_{0≤t≤s} exp(−rt)(Yt − K)+].                               (16.9)

Theorem 16.3 (Chen [10]) Assume an American call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the American call option price is

    fc = ∫_0^1 sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1 − α))) − K)+ dα.

Proof: It follows from Theorem 15.13 that sup_{0≤t≤s} exp(−rt)(Yt − K)+ has an inverse uncertainty distribution

    Ψs^{-1}(α) = sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1 − α))) − K)+.

Hence the American call option price formula follows from Definition 16.6 immediately.
Remark 16.3: It is easy to verify that the option price is a decreasing
function with respect to either the interest rate r or the strike price K.


Example 16.3: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 40, the strike price K = 38 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the American call option price fc = 19.8.
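Pricing the American call only adds a supremum over a discretized time grid inside the α integral. A minimal sketch with the parameters of Example 16.3 (grid sizes are arbitrary choices):

    # A minimal sketch: American call price (Theorem 16.3) by discretizing
    # both the alpha integral and the supremum over t in [0, s].
    import numpy as np

    def american_call(r, e, sigma, Y0, K, s, m=999, n=200):
        alpha = np.arange(1, m + 1) / (m + 1)            # alpha grid, (m,)
        t = np.linspace(0, s, n + 1)[:, None]            # time grid, (n+1, 1)
        Yt = Y0 * np.exp(e * t + sigma * t * np.sqrt(3) / np.pi
                         * np.log(alpha / (1 - alpha)))  # alpha-paths, (n+1, m)
        payoff = np.exp(-r * t) * np.maximum(Yt - K, 0)  # discounted payoff
        return np.mean(payoff.max(axis=0))               # sup over t, mean over alpha

    print(american_call(r=0.08, e=0.06, sigma=0.32, Y0=40, K=38, s=2))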
Definition 16.7 An American put option is a contract that gives the holder the right to sell a stock at any time prior to an expiration time s for a strike price K.

It is clear that the payoff from an American put option is the supremum of (K − Yt)+ over the time interval [0, s]. Considering the time value of money resulted from the bond, the present value of this payoff is

    sup_{0≤t≤s} exp(−rt)(K − Yt)+.                                       (16.10)

Hence the American put option price should be the expected present value of the payoff.

Definition 16.8 Assume an American put option has a strike price K and an expiration time s. Then the American put option price is

    fp = E[sup_{0≤t≤s} exp(−rt)(K − Yt)+].                               (16.11)
Theorem 16.4 (Chen [10]) Assume an American put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the American put option price is

    fp = ∫_0^1 sup_{0≤t≤s} exp(−rt)(K − Y0 exp(et + (σt√3/π) ln((1 − α)/α)))+ dα.

Proof: It follows from Theorem 15.14 that sup_{0≤t≤s} exp(−rt)(K − Yt)+ has an inverse uncertainty distribution

    Ψs^{-1}(α) = sup_{0≤t≤s} exp(−rt)(K − Y0 exp(et + (σt√3/π) ln((1 − α)/α)))+.

Hence the American put option price formula follows from Definition 16.8 immediately.
Remark 16.4: It is easy to verify that the option price is a decreasing
function of the interest rate r, and is an increasing function of the strike
price K.


Example 16.4: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 40, the strike price K = 38 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the American put option price fp = 3.90.
Asian Option

Definition 16.9 An Asian call option is a contract whose payoff at the expiration time s is

    ((1/s) ∫_0^s Yt dt − K)+                                             (16.12)

where K is a strike price.

Considering the time value of money resulted from the bond, the present value of the payoff from an Asian call option is

    exp(−rs)((1/s) ∫_0^s Yt dt − K)+.                                    (16.13)

Hence the Asian call option price should be the expected present value of the payoff.

Definition 16.10 Assume an Asian call option has a strike price K and an expiration time s. Then the Asian call option price is

    fc = exp(−rs)E[((1/s) ∫_0^s Yt dt − K)+].                            (16.14)
Theorem 16.5 (Sun and Chen [198]) Assume an Asian call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the Asian call option price is

    fc = exp(−rs) ∫_0^1 ((Y0/s) ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α)))dt − K)+ dα.

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

    ∫_0^s Yt dt

is

    Ψs^{-1}(α) = Y0 ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α)))dt.

Hence the Asian call option price formula follows from Definition 16.10 immediately.
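Numerically, the Asian call replaces the terminal α-path value by its time average. A minimal sketch (an added illustration; the parameter values are borrowed from Example 16.1 for illustration only):

    # A minimal sketch: Asian call price (Theorem 16.5). The time integral of
    # the alpha-path is approximated by the trapezoidal rule.
    import numpy as np

    def asian_call(r, e, sigma, Y0, K, s, m=999, n=400):
        alpha = np.arange(1, m + 1) / (m + 1)
        t = np.linspace(0, s, n + 1)[:, None]
        Yt = Y0 * np.exp(e * t + sigma * t * np.sqrt(3) / np.pi
                         * np.log(alpha / (1 - alpha)))
        avg = np.trapz(Yt, dx=s / n, axis=0) / s      # (1/s) * int_0^s Yt dt
        return np.exp(-r * s) * np.mean(np.maximum(avg - K, 0))

    print(asian_call(r=0.08, e=0.06, sigma=0.32, Y0=20, K=25, s=2))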


Definition 16.11 An Asian put option is a contract whose payoff at the expiration time s is

    (K − (1/s) ∫_0^s Yt dt)+                                             (16.15)

where K is a strike price.

Considering the time value of money resulted from the bond, the present value of the payoff from an Asian put option is

    exp(−rs)(K − (1/s) ∫_0^s Yt dt)+.                                    (16.16)

Hence the Asian put option price should be the expected present value of the payoff.

Definition 16.12 Assume an Asian put option has a strike price K and an expiration time s. Then the Asian put option price is

    fp = exp(−rs)E[(K − (1/s) ∫_0^s Yt dt)+].                            (16.17)
Theorem 16.6 (Sun and Chen [198]) Assume an Asian put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the Asian put option price is

    fp = exp(−rs) ∫_0^1 (K − (Y0/s) ∫_0^s exp(et + (σt√3/π) ln((1 − α)/α))dt)+ dα.

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

    ∫_0^s Yt dt

is

    Ψs^{-1}(α) = Y0 ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α)))dt.

Hence the Asian put option price formula follows from Definition 16.12 immediately.
General Stock Model

Generally, we may assume the stock price follows a general uncertain differential equation and obtain a general stock model in which the bond price Xt and the stock price Yt are determined by

    dXt = rXt dt,
    dYt = F(t, Yt)dt + G(t, Yt)dCt                                       (16.18)


where r is the riskless interest rate, F and G are two functions, and Ct is a canonical Liu process.
Note that the α-path Yt^α of the stock price Yt can be calculated by some numerical methods. Assume the strike price is K and the expiration time is s. It follows from Definition 16.2 and Theorem 15.12 that the European call option price is

    fc = exp(−rs) ∫_0^1 (Ys^α − K)+ dα.                                  (16.19)

It follows from Definition 16.4 and Theorem 15.12 that the European put option price is

    fp = exp(−rs) ∫_0^1 (K − Ys^α)+ dα.                                  (16.20)

It follows from Definition 16.6 and Theorem 15.13 that the American call option price is

    fc = ∫_0^1 sup_{0≤t≤s} exp(−rt)(Yt^α − K)+ dα.                       (16.21)

It follows from Definition 16.8 and Theorem 15.14 that the American put option price is

    fp = ∫_0^1 sup_{0≤t≤s} exp(−rt)(K − Yt^α)+ dα.                       (16.22)

It follows from Definition 16.9 and Theorem 15.17 that the Asian call option price is

    fc = exp(−rs) ∫_0^1 ((1/s) ∫_0^s Yt^α dt − K)+ dα.                   (16.23)

It follows from Definition 16.11 and Theorem 15.18 that the Asian put option price is

    fp = exp(−rs) ∫_0^1 (K − (1/s) ∫_0^s Yt^α dt)+ dα.                   (16.24)
Multifactor Stock Model

Now we assume that there are multiple stocks whose prices are determined by multiple Liu processes. In this case, we have a multifactor stock model in which the bond price Xt and the stock prices Yit are determined by

    dXt = rXt dt,
    dYit = ei Yit dt + Σ_{j=1}^n σij Yit dCjt,   i = 1, 2, ..., m        (16.25)

where r is the riskless interest rate, ei are the log-drifts, σij are the log-diffusions, and Cjt are independent Liu processes, i = 1, 2, ..., m, j = 1, 2, ..., n.


Portfolio Selection

For the multifactor stock model (16.25), we have the choice of m + 1 different investments. At each time t we may choose a portfolio (βt, β1t, ..., βmt) (i.e., the investment fractions meeting βt + β1t + ... + βmt = 1). Then the wealth Zt at time t should follow the uncertain differential equation

    dZt = rβt Zt dt + Σ_{i=1}^m ei βit Zt dt + Σ_{i=1}^m Σ_{j=1}^n σij βit Zt dCjt.   (16.26)

That is,

    Zt = Z0 exp(rt) exp(∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs).

The portfolio selection problem is to find an optimal portfolio (βt, β1t, ..., βmt) such that the wealth Zs is maximized in the sense of expected value.
No-Arbitrage

The stock model (16.25) is said to be no-arbitrage if there is no portfolio (βt, β1t, ..., βmt) such that for some time s > 0, we have

    M{exp(−rs)Zs ≥ Z0} = 1                                               (16.27)

and

    M{exp(−rs)Zs > Z0} > 0                                               (16.28)

where Zt is determined by (16.26) and represents the wealth at time t.
Theorem 16.7 (Yao's No-Arbitrage Theorem [224]) The multifactor stock model (16.25) is no-arbitrage if and only if the system of linear equations

    [σ11 σ12 ... σ1n] [x1]   [e1 − r]
    [σ21 σ22 ... σ2n] [x2] = [e2 − r]                                    (16.29)
    [ ...        ...] [...]  [ ...  ]
    [σm1 σm2 ... σmn] [xn]   [em − r]

has a solution, i.e., (e1 − r, e2 − r, ..., em − r) is a linear combination of the column vectors (σ11, σ21, ..., σm1), (σ12, σ22, ..., σm2), ..., (σ1n, σ2n, ..., σmn).
Proof: When the portfolio (βt, β1t, ..., βmt) is accepted, the wealth at each time t is

    Zt = Z0 exp(rt) exp(∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs).

Thus

    ln(exp(−rt)Zt) − ln Z0 = ∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs

is a normal uncertain variable with expected value

    ∫_0^t Σ_{i=1}^m (ei − r)βis ds

and variance

    (Σ_{j=1}^n ∫_0^t |Σ_{i=1}^m σij βis| ds)².

Assume the system (16.29) has a solution. The argument breaks down into two cases. Case I: for any given time t and portfolio (βt, β1t, ..., βmt), suppose

    Σ_{j=1}^n ∫_0^t |Σ_{i=1}^m σij βis| ds = 0.

Then

    Σ_{i=1}^m σij βis = 0,   j = 1, 2, ..., n,   ∀s ∈ (0, t].

Since the system (16.29) has a solution, we have

    Σ_{i=1}^m (ei − r)βis = 0,   ∀s ∈ (0, t]

and

    ∫_0^t Σ_{i=1}^m (ei − r)βis ds = 0.

This fact implies that

    ln(exp(−rt)Zt) − ln Z0 = 0

and

    M{exp(−rt)Zt > Z0} = 0.

That is, the stock model (16.25) is no-arbitrage. Case II: for any given time t and portfolio (βt, β1t, ..., βmt), suppose

    Σ_{j=1}^n ∫_0^t |Σ_{i=1}^m σij βis| ds ≠ 0.

Then ln(exp(−rt)Zt) − ln Z0 is a normal uncertain variable with nonzero variance and

    M{ln(exp(−rt)Zt) − ln Z0 ≥ 0} < 1.

That is,

    M{exp(−rt)Zt ≥ Z0} < 1

and the multifactor stock model (16.25) is no-arbitrage.
Conversely, assume the system (16.29) has no solution. Then there exist real numbers β1, β2, ..., βm such that

    Σ_{i=1}^m σij βi = 0,   j = 1, 2, ..., n

and

    Σ_{i=1}^m (ei − r)βi > 0.

Now we take a portfolio

    (βt, β1t, ..., βmt) ≡ (1 − (β1 + β2 + ... + βm), β1, β2, ..., βm).

Then

    ln(exp(−rt)Zt) − ln Z0 = ∫_0^t Σ_{i=1}^m (ei − r)βi ds > 0.

Thus we have

    M{exp(−rt)Zt > Z0} = 1.

Hence the multifactor stock model (16.25) is arbitrage. The theorem is thus proved.
Theorem 16.8 The multifactor stock model (16.25) is no-arbitrage if its log-diffusion matrix

    [σ11 σ12 ... σ1n]
    [σ21 σ22 ... σ2n]                                                    (16.30)
    [ ...        ...]
    [σm1 σm2 ... σmn]

has rank m, i.e., the row vectors are linearly independent.

Proof: If the log-diffusion matrix (16.30) has rank m, then the system of equations (16.29) has a solution. It follows from Theorem 16.7 that the multifactor stock model (16.25) is no-arbitrage.
Theorem 16.9 The multifactor stock model (16.25) is no-arbitrage if its log-drifts are all equal to the interest rate r, i.e.,

    ei = r,   i = 1, 2, ..., m.                                          (16.31)

Proof: Since the log-drifts ei = r for any i = 1, 2, ..., m, we immediately have

    (e1 − r, e2 − r, ..., em − r) ≡ (0, 0, ..., 0)

that is a linear combination of (σ11, σ21, ..., σm1), (σ12, σ22, ..., σm2), ..., (σ1n, σ2n, ..., σmn). It follows from Theorem 16.7 that the multifactor stock model (16.25) is no-arbitrage.

16.2 Uncertain Interest Rate Model

Real interest rates do not remain unchanged. Chen and Gao [18] assumed that the interest rate follows an uncertain differential equation and presented an uncertain interest rate model,

    dXt = (m − aXt)dt + σdCt                                             (16.32)

where m, a, σ are positive numbers. Chen and Gao [18] also investigated the uncertain interest rate model,

    dXt = (m − aXt)dt + σ√Xt dCt.                                        (16.33)

More generally, we may assume the interest rate Xt follows a general uncertain differential equation and obtain a general interest rate model,

    dXt = F(t, Xt)dt + G(t, Xt)dCt                                       (16.34)

where F and G are two functions, and Ct is a canonical Liu process.


Zero-Coupon Bond

A zero-coupon bond is a bond bought at a price lower than its face value, which is the amount it promises to pay at the maturity date. For simplicity, we assume the face value is always 1 dollar. One problem is how to price a zero-coupon bond.

Definition 16.13 Let Xt be the uncertain interest rate. Then the price of a zero-coupon bond with a maturity date s is

    f = E[exp(−∫_0^s Xt dt)].                                            (16.35)

Theorem 16.10 Let Xt^α be the α-path of the uncertain interest rate Xt. Then the price of a zero-coupon bond with maturity date s is

    f = ∫_0^1 exp(−∫_0^s Xt^α dt)dα.                                     (16.36)

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

    ∫_0^s Xt dt

is

    Ψs^{-1}(α) = ∫_0^s Xt^α dt.

Hence the price formula of zero-coupon bond follows from Definition 16.13 immediately.
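When the interest rate model admits no analytic α-path, (16.36) combines naturally with the Euler recursion (15.81). A minimal sketch for the model (16.32), with hypothetical parameter values:

    # A minimal sketch: zero-coupon bond price (16.36) for the model
    # dXt = (m - a*Xt)dt + sigma*dCt, using Euler alpha-paths and a
    # left-endpoint rule for the time integral.
    import numpy as np

    def phi_inv(alpha):
        return np.sqrt(3) / np.pi * np.log(alpha / (1 - alpha))

    def bond_price(m, a, sigma, X0, s, grid=999, n=400):
        alphas = np.arange(1, grid + 1) / (grid + 1)
        h, price = s / n, 0.0
        for alpha in alphas:
            x, integral = X0, 0.0
            for i in range(n):
                integral += x * h                  # int_0^s Xt^alpha dt
                x += (m - a * x) * h + abs(sigma) * phi_inv(alpha) * h
            price += np.exp(-integral)
        return price / len(alphas)

    print(bond_price(m=0.04, a=1.0, sigma=0.02, X0=0.03, s=5))  # hypothetical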

16.3 Uncertain Currency Model

Liu, Chen and Ralescu [142] assumed that the exchange rate follows an uncertain differential equation and proposed an uncertain currency model,

    dXt = uXt dt                (Domestic Currency)
    dYt = vYt dt                (Foreign Currency)                       (16.37)
    dZt = eZt dt + σZt dCt      (Exchange Rate)

where Xt represents the domestic currency with domestic interest rate u, Yt represents the foreign currency with foreign interest rate v, and Zt represents the exchange rate, that is, the domestic currency price of one unit of foreign currency at time t. Note that the domestic currency price is Xt = X0 exp(ut), the foreign currency price is Yt = Y0 exp(vt), and the exchange rate is

    Zt = Z0 exp(et + σCt)                                                (16.38)

whose inverse uncertainty distribution is

    Φt^{-1}(α) = Z0 exp(et + (σt√3/π) ln(α/(1 − α))).                    (16.39)

European Currency Option

Definition 16.14 A European currency option is a contract that gives the holder the right to exchange one unit of foreign currency at an expiration time $s$ for $K$ units of domestic currency.

Suppose that the price of this contract is $f$ in domestic currency. Then the investor pays $f$ for buying the contract at time 0, and receives $(Z_s - K)^+$ in domestic currency at the expiration time $s$. Thus the expected return of the investor at time 0 is
$$-f + \exp(-us)E[(Z_s - K)^+]. \tag{16.40}$$

On the other hand, the bank receives $f$ for selling the contract at time 0, and pays $(1 - K/Z_s)^+$ in foreign currency at the expiration time $s$. Thus the expected return of the bank at time 0 is
$$f - \exp(-vs)Z_0 E[(1 - K/Z_s)^+]. \tag{16.41}$$
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
$$-f + \exp(-us)E[(Z_s - K)^+] = f - \exp(-vs)Z_0 E[(1 - K/Z_s)^+]. \tag{16.42}$$
Thus the European currency option price is given by the definition below.

Definition 16.15 (Liu, Chen and Ralescu [142]) Assume a European currency option has a strike price $K$ and an expiration time $s$. Then the European currency option price is
$$f = \frac{1}{2}\exp(-us)E[(Z_s - K)^+] + \frac{1}{2}\exp(-vs)Z_0 E[(1 - K/Z_s)^+]. \tag{16.43}$$

Theorem 16.11 (Liu, Chen and Ralescu [142]) Assume a European currency option for the uncertain currency model (16.37) has a strike price $K$ and an expiration time $s$. Then the European currency option price is
$$f = \frac{1}{2}\exp(-us)\int_0^1\left(Z_0\exp\left(es + \frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right) - K\right)^+ d\alpha$$
$$+ \frac{1}{2}\exp(-vs)\int_0^1\left(Z_0 - K\Big/\exp\left(es + \frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+ d\alpha.$$
Proof: Since $(Z_s - K)^+$ and $Z_0(1 - K/Z_s)^+$ are increasing functions with respect to $Z_s$, they have inverse uncertainty distributions
$$\Psi_s^{-1}(\alpha) = \left(Z_0\exp\left(es + \frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right) - K\right)^+,$$
$$\Upsilon_s^{-1}(\alpha) = \left(Z_0 - K\Big/\exp\left(es + \frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$
respectively. Thus the European currency option price formula follows from Definition 16.15 immediately.
Remark 16.5: The European currency option price of the uncertain currency model (16.37) is a decreasing function of $K$, $u$ and $v$.

Example 16.5: Assume the domestic interest rate $u = 0.08$, the foreign interest rate $v = 0.07$, the log-drift $e = 0.06$, the log-diffusion $\sigma = 0.32$, the initial exchange rate $Z_0 = 5$, the strike price $K = 6$ and the expiration time $s = 2$. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European currency option price
$$f = 0.977.$$
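Theorem 16.11 reduces the price to two one-dimensional integrals over $\alpha$, which can be evaluated by the midpoint rule. The Python sketch below uses the parameters of Example 16.5; the result should come out near the reported value $f = 0.977$ (the tail of the $\alpha$ grid introduces a small discretization error).

```python
import numpy as np

def european_currency_price(u, v, e, sigma, z0, K, s, n=100000):
    """Midpoint-rule evaluation of the price formula in Theorem 16.11."""
    alpha = (np.arange(n) + 0.5) / n
    # inverse uncertainty distribution of Z_s, formula (16.39)
    zs = z0 * np.exp(e * s + sigma * s * np.sqrt(3) / np.pi
                     * np.log(alpha / (1 - alpha)))
    term1 = np.exp(-u * s) * np.mean(np.maximum(zs - K, 0.0))
    term2 = np.exp(-v * s) * np.mean(np.maximum(z0 * (1 - K / zs), 0.0))
    return 0.5 * (term1 + term2)

# parameters of Example 16.5
print(european_currency_price(u=0.08, v=0.07, e=0.06, sigma=0.32,
                              z0=5.0, K=6.0, s=2.0))
```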
American Currency Option

Definition 16.16 An American currency option is a contract that gives the holder the right to exchange one unit of foreign currency at any time prior to an expiration time $s$ for $K$ units of domestic currency.

Suppose that the price of this contract is $f$ in domestic currency. Then the investor pays $f$ for buying the contract, and receives
$$\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+ \tag{16.44}$$
in domestic currency. Thus the expected return of the investor at time 0 is
$$-f + E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+\right]. \tag{16.45}$$
On the other hand, the bank receives $f$ for selling the contract, and pays
$$\sup_{0\le t\le s}\exp(-vt)(1 - K/Z_t)^+ \tag{16.46}$$
in foreign currency. Thus the expected return of the bank at time 0 is
$$f - E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+\right]. \tag{16.47}$$
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
$$-f + E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+\right] = f - E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+\right]. \tag{16.48}$$
Thus the American currency option price is given by the definition below.

Definition 16.17 (Liu, Chen and Ralescu [142]) Assume an American currency option has a strike price $K$ and an expiration time $s$. Then the American currency option price is
$$f = \frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+\right] + \frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+\right].$$

Theorem 16.12 (Liu, Chen and Ralescu [142]) Assume an American currency option for the uncertain currency model (16.37) has a strike price $K$ and an expiration time $s$. Then the American currency option price is
$$f = \frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right) - K\right)^+ d\alpha$$
$$+ \frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-vt)\left(Z_0 - K\Big/\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+ d\alpha.$$
Proof: It follows from Theorem 15.13 that $\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+$ and $\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+$ have inverse uncertainty distributions
$$\Psi_s^{-1}(\alpha) = \sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right) - K\right)^+,$$
$$\Upsilon_s^{-1}(\alpha) = \sup_{0\le t\le s}\exp(-vt)\left(Z_0 - K\Big/\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$
respectively. Thus the American currency option price formula follows from Definition 16.17 immediately.
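Theorem 16.12 can be evaluated the same way as Theorem 16.11, with an inner supremum taken over a time grid. A minimal sketch (the grid sizes are arbitrary choices, and the discretized supremum slightly underestimates the true one):

```python
import numpy as np

def american_currency_price(u, v, e, sigma, z0, K, s,
                            n_alpha=2000, n_t=400):
    """Theorem 16.12: for each alpha, maximize the discounted payoff over a
    time grid on [0, s], then average over alpha by the midpoint rule."""
    t = np.linspace(0.0, s, n_t + 1)
    total = 0.0
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        # alpha-path of Z_t, formula (16.39); equals z0 at t = 0
        path = z0 * np.exp(e * t + sigma * t * np.sqrt(3) / np.pi
                           * np.log(alpha / (1 - alpha)))
        term1 = np.max(np.exp(-u * t) * np.maximum(path - K, 0.0))
        term2 = np.max(np.exp(-v * t) * np.maximum(z0 * (1 - K / path), 0.0))
        total += 0.5 * (term1 + term2)
    return total / n_alpha

print(american_currency_price(u=0.08, v=0.07, e=0.06, sigma=0.32,
                              z0=5.0, K=6.0, s=2.0))
```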
General Currency Model

If the exchange rate follows a general uncertain differential equation, then we have a general currency model,
$$\begin{cases} dX_t = uX_t\, dt & \text{(Domestic Currency)} \\ dY_t = vY_t\, dt & \text{(Foreign Currency)} \\ dZ_t = F(t, Z_t)dt + G(t, Z_t)dC_t & \text{(Exchange Rate)} \end{cases} \tag{16.49}$$
where $u$ and $v$ are interest rates, $F$ and $G$ are two functions, and $C_t$ is a canonical Liu process.

Note that the $\alpha$-path $Z_t^\alpha$ of the exchange rate $Z_t$ can be calculated by some numerical methods. Assume the strike price is $K$ and the expiration time is $s$. It follows from Definition 16.15 and Theorem 15.12 that the European currency option price is
$$f = \frac{1}{2}\int_0^1\left(\exp(-us)(Z_s^\alpha - K)^+ + \exp(-vs)Z_0(1 - K/Z_s^\alpha)^+\right) d\alpha.$$
It follows from Definition 16.17 and Theorem 15.13 that the American currency option price is
$$f = \frac{1}{2}\int_0^1\left(\sup_{0\le t\le s}\exp(-ut)(Z_t^\alpha - K)^+ + \sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t^\alpha)^+\right) d\alpha.$$

16.4 Bibliographic Notes

The classical finance theory assumed that stock prices, interest rates, and exchange rates follow stochastic differential equations. However, this presupposition was challenged, among others, by Liu [125], in which a convincing paradox was presented to show why a real stock price cannot follow any stochastic differential equation. As an alternative, Liu [125] suggested developing a theory of uncertain finance.

Uncertain differential equations were first introduced into finance by Liu [116] in 2009, in which an uncertain stock model was proposed and European option price formulas were provided. In addition, Chen [10] derived American option price formulas, Sun and Chen [198] verified Asian option price formulas, and Yao [224] proved a no-arbitrage theorem for this type of uncertain stock model. Other uncertain stock models were also actively investigated by Peng and Yao [170], Yu [233], and Chen, Liu and Ralescu [16], among others.

Uncertain differential equations were used to model the interest rate by Chen and Gao [18] in 2013, and an uncertain interest rate model was presented. On the basis of this model, the price of a zero-coupon bond was also derived.

Uncertain differential equations were employed to model the currency exchange rate by Liu, Chen and Ralescu [142], in which an uncertain currency model was proposed and some currency option price formulas were derived for uncertain currency markets.

Appendix A

Probability Theory
Probability theory (Kolmogorov [79]) is a branch of mathematics for studying the behavior of random phenomena. The emphasis in this appendix is
mainly on probability measure, random variable, probability distribution, independence, operational law, expected value, variance, law of large numbers,
conditional probability, stochastic process, stochastic calculus, and stochastic
differential equation.

A.1 Probability Measure

Let $\Omega$ be a nonempty set, and $\mathcal{A}$ a $\sigma$-algebra over $\Omega$. Each element in $\mathcal{A}$ is called an event. In order to present an axiomatic definition of probability, the following three axioms are assumed:

Axiom 1. (Normality Axiom) $\Pr\{\Omega\} = 1$ for the universal set $\Omega$.

Axiom 2. (Nonnegativity Axiom) $\Pr\{A\} \ge 0$ for any event $A$.

Axiom 3. (Additivity Axiom) For every countable sequence of mutually disjoint events $\{A_i\}$, we have
$$\Pr\left\{\bigcup_{i=1}^\infty A_i\right\} = \sum_{i=1}^\infty \Pr\{A_i\}. \tag{A.1}$$

Definition A.1 The set function $\Pr$ is called a probability measure if it satisfies the normality, nonnegativity, and additivity axioms.

Example A.1: Let $\Omega = \{\omega_1, \omega_2, \ldots\}$, and let $\mathcal{A}$ be the power set of $\Omega$. Assume that $p_1, p_2, \ldots$ are nonnegative numbers such that $p_1 + p_2 + \cdots = 1$. Define a set function on $\mathcal{A}$ as
$$\Pr\{A\} = \sum_{\omega_i \in A} p_i. \tag{A.2}$$
Then $\Pr$ is a probability measure.


Example A.2: Let $\pi$ be a nonnegative and integrable function on $\Re$ (the set of real numbers) such that
$$\int_\Re \pi(x)dx = 1. \tag{A.3}$$
Define a set function on the Borel algebra as
$$\Pr\{A\} = \int_A \pi(x)dx. \tag{A.4}$$
Then $\Pr$ is a probability measure.


Definition A.2 Let $\Omega$ be a nonempty set, $\mathcal{A}$ a $\sigma$-algebra over $\Omega$, and $\Pr$ a probability measure. Then the triplet $(\Omega, \mathcal{A}, \Pr)$ is called a probability space.

Product Probability

Let $(\Omega_k, \mathcal{A}_k, \Pr_k)$, $k = 1, 2, \ldots$ be a sequence of probability spaces. Now we write
$$\Omega = \Omega_1 \times \Omega_2 \times \cdots, \quad \mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times \cdots \tag{A.5}$$
It has been proved that there is a unique probability measure $\Pr$ on the product $\sigma$-algebra $\mathcal{A}$ such that
$$\Pr\left\{\prod_{k=1}^\infty A_k\right\} = \prod_{k=1}^\infty \Pr_k\{A_k\} \tag{A.6}$$
where $A_k$ are arbitrarily chosen events from $\mathcal{A}_k$ for $k = 1, 2, \ldots$, respectively. This conclusion is called the product probability theorem. Such a probability measure is called the product probability measure, denoted by
$$\Pr = \Pr_1 \times \Pr_2 \times \cdots \tag{A.7}$$

Definition A.3 Assume $(\Omega_i, \mathcal{A}_i, \Pr_i)$ are probability spaces for $i = 1, 2, \ldots$ Let $\Omega = \Omega_1 \times \Omega_2 \times \cdots$, $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times \cdots$ and $\Pr = \Pr_1 \times \Pr_2 \times \cdots$ Then the triplet $(\Omega, \mathcal{A}, \Pr)$ is called the product probability space.

A.2 Random Variable

Definition A.4 A random variable is a measurable function $\xi$ from a probability space $(\Omega, \mathcal{A}, \Pr)$ to the set of real numbers, i.e., $\{\xi \in B\}$ is an event for any Borel set $B$.

Example A.3: Take $(\Omega, \mathcal{A}, \Pr)$ to be $\{\omega_1, \omega_2\}$ with $\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5$. Then the function
$$\xi(\omega) = \begin{cases} 0, & \text{if } \omega = \omega_1 \\ 1, & \text{if } \omega = \omega_2 \end{cases}$$
is a random variable.

Definition A.5 Let $f$ be a real-valued measurable function, and $\xi_1, \xi_2, \ldots, \xi_n$ random variables on the probability space $(\Omega, \mathcal{A}, \Pr)$. Then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ is a random variable defined by
$$\xi(\omega) = f(\xi_1(\omega), \xi_2(\omega), \ldots, \xi_n(\omega)), \quad \forall \omega \in \Omega. \tag{A.8}$$

A.3 Probability Distribution

Definition A.6 The probability distribution $\Phi: \Re \to [0, 1]$ of a random variable $\xi$ is defined by
$$\Phi(x) = \Pr\{\xi \le x\}. \tag{A.9}$$
That is, $\Phi(x)$ is the probability that the random variable $\xi$ takes a value less than or equal to $x$. A function $\Phi: \Re \to [0, 1]$ is a probability distribution if and only if it is an increasing and right-continuous function with
$$\lim_{x \to -\infty} \Phi(x) = 0; \quad \lim_{x \to +\infty} \Phi(x) = 1. \tag{A.10}$$

Example A.4: Take $(\Omega, \mathcal{A}, \Pr)$ to be $\{\omega_1, \omega_2\}$ with $\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5$. We now define a random variable as follows,
$$\xi(\omega) = \begin{cases} -1, & \text{if } \omega = \omega_1 \\ 1, & \text{if } \omega = \omega_2. \end{cases}$$
Then $\xi$ has a probability distribution
$$\Phi(x) = \begin{cases} 0, & \text{if } x < -1 \\ 0.5, & \text{if } -1 \le x < 1 \\ 1, & \text{if } x \ge 1. \end{cases}$$

Definition A.7 The probability density function $\phi: \Re \to [0, +\infty)$ of a random variable $\xi$ is a function such that
$$\Phi(x) = \int_{-\infty}^x \phi(y)dy \tag{A.11}$$
holds for all $x \in \Re$, where $\Phi$ is the probability distribution of the random variable $\xi$.

Theorem A.1 (Probability Inversion Theorem) Let $\xi$ be a random variable whose probability density function $\phi$ exists. Then for any Borel set $B$, we have
$$\Pr\{\xi \in B\} = \int_B \phi(y)dy. \tag{A.12}$$
Proof: Assume that $\mathcal{C}$ is the class of all subsets $C$ of $\Re$ for which the relation
$$\Pr\{\xi \in C\} = \int_C \phi(y)dy \tag{A.13}$$
holds. We will show that $\mathcal{C}$ contains all Borel sets. On the one hand, we may prove that $\mathcal{C}$ is a monotone class (if $A_i \in \mathcal{C}$ and $A_i \uparrow A$ or $A_i \downarrow A$, then $A \in \mathcal{C}$). On the other hand, we may verify that $\mathcal{C}$ contains all intervals of the form $(-\infty, a]$, $(a, b]$, $(b, \infty)$ and $\emptyset$ since
$$\Pr\{\xi \in (-\infty, a]\} = \Phi(a) = \int_{-\infty}^a \phi(y)dy,$$
$$\Pr\{\xi \in (b, +\infty)\} = \Phi(+\infty) - \Phi(b) = \int_b^{+\infty} \phi(y)dy,$$
$$\Pr\{\xi \in (a, b]\} = \Phi(b) - \Phi(a) = \int_a^b \phi(y)dy,$$
$$\Pr\{\xi \in \emptyset\} = 0 = \int_\emptyset \phi(y)dy$$
where $\Phi$ is the probability distribution of $\xi$. Let $\mathcal{F}$ be the algebra consisting of all finite unions of disjoint sets of the form $(-\infty, a]$, $(a, b]$, $(b, \infty)$ and $\emptyset$. Note that for any disjoint sets $C_1, C_2, \ldots, C_m$ of $\mathcal{F}$ and $C = C_1 \cup C_2 \cup \cdots \cup C_m$, we have
$$\Pr\{\xi \in C\} = \sum_{j=1}^m \Pr\{\xi \in C_j\} = \sum_{j=1}^m \int_{C_j} \phi(y)dy = \int_C \phi(y)dy.$$
That is, $C \in \mathcal{C}$. Hence we have $\mathcal{F} \subset \mathcal{C}$. Since the smallest $\sigma$-algebra containing $\mathcal{F}$ is just the Borel algebra, the monotone class theorem (if $\mathcal{F} \subset \mathcal{C}$ and $\sigma(\mathcal{F})$ is the smallest $\sigma$-algebra containing $\mathcal{F}$, then $\sigma(\mathcal{F}) \subset \mathcal{C}$) implies that $\mathcal{C}$ contains all Borel sets.
Example A.5: A random variable $\xi$ has a uniform distribution if its probability density function is defined by
$$\phi(x) = \frac{1}{b - a}, \quad a \le x \le b \tag{A.14}$$
where $a$ and $b$ are given real numbers with $a < b$.

Example A.6: A random variable $\xi$ has an exponential distribution if its probability density function is defined by
$$\phi(x) = \frac{1}{\beta}\exp\left(-\frac{x}{\beta}\right), \quad x \ge 0 \tag{A.15}$$
where $\beta$ is a positive number.

Example A.7: A random variable $\xi$ has a normal distribution if its probability density function is defined by
$$\phi(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right), \quad -\infty < x < +\infty \tag{A.16}$$
where $\mu$ and $\sigma > 0$ are real numbers.

Example A.8: A random variable $\xi$ has a lognormal distribution if its logarithm is normally distributed, i.e., its probability density function is defined by
$$\phi(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad x > 0 \tag{A.17}$$
where $\mu$ and $\sigma > 0$ are real numbers.

A.4 Independence

Definition A.8 The random variables $\xi_1, \xi_2, \ldots, \xi_n$ are said to be independent if
$$\Pr\left\{\bigcap_{i=1}^n (\xi_i \in B_i)\right\} = \prod_{i=1}^n \Pr\{\xi_i \in B_i\} \tag{A.18}$$
for any Borel sets $B_1, B_2, \ldots, B_n$.

Theorem A.2 Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables, and $f_1, f_2, \ldots, f_n$ measurable functions. Then $f_1(\xi_1), f_2(\xi_2), \ldots, f_n(\xi_n)$ are independent random variables.

Proof: For any Borel sets $B_1, B_2, \ldots, B_n$, it follows from the definition of independence that
$$\Pr\left\{\bigcap_{i=1}^n (f_i(\xi_i) \in B_i)\right\} = \Pr\left\{\bigcap_{i=1}^n (\xi_i \in f_i^{-1}(B_i))\right\} = \prod_{i=1}^n \Pr\{\xi_i \in f_i^{-1}(B_i)\} = \prod_{i=1}^n \Pr\{f_i(\xi_i) \in B_i\}.$$
Thus $f_1(\xi_1), f_2(\xi_2), \ldots, f_n(\xi_n)$ are independent random variables.

A.5 Operational Law

Theorem A.3 Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively, and $f: \Re^n \to \Re$ a measurable function. Then
$$\xi = f(\xi_1, \xi_2, \ldots, \xi_n) \tag{A.19}$$
is a random variable with probability distribution
$$\Phi(x) = \int_{f(x_1, x_2, \ldots, x_n) \le x} d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n). \tag{A.20}$$

Remark A.1: If $\xi_1, \xi_2, \ldots, \xi_n$ have probability density functions $\phi_1, \phi_2, \ldots, \phi_n$, respectively, then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ has a probability distribution
$$\Phi(x) = \int_{f(x_1, x_2, \ldots, x_n) \le x} \phi_1(x_1)\phi_2(x_2)\cdots\phi_n(x_n)dx_1 dx_2\cdots dx_n. \tag{A.21}$$

Exercise A.1: Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively. Show that the sum
$$\xi = \xi_1 + \xi_2 + \cdots + \xi_n \tag{A.22}$$
has a probability distribution
$$\Phi(x) = \int_{x_1 + x_2 + \cdots + x_n \le x} d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n). \tag{A.23}$$
Especially, let $\xi_1$ and $\xi_2$ be independent random variables with probability distributions $\Phi_1$ and $\Phi_2$, respectively. Then $\xi = \xi_1 + \xi_2$ has a probability distribution
$$\Phi(x) = \int_{-\infty}^{+\infty} \Phi_1(x - y)d\Phi_2(y) \tag{A.24}$$
that is called the convolution of $\Phi_1$ and $\Phi_2$.

Exercise A.2: Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively. Show that the minimum
$$\xi = \xi_1 \wedge \xi_2 \wedge \cdots \wedge \xi_n \tag{A.25}$$
is a random variable with probability distribution
$$\Phi(x) = 1 - (1 - \Phi_1(x))(1 - \Phi_2(x))\cdots(1 - \Phi_n(x)). \tag{A.26}$$

Exercise A.3: Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively. Show that the maximum
$$\xi = \xi_1 \vee \xi_2 \vee \cdots \vee \xi_n \tag{A.27}$$
is a random variable with probability distribution
$$\Phi(x) = \Phi_1(x)\Phi_2(x)\cdots\Phi_n(x). \tag{A.28}$$
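Formulas (A.26) and (A.28) are easy to verify by simulation. A minimal Python check with iid exponential variables (the scale and threshold are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, x = 3, 3.0
samples = rng.exponential(scale=2.0, size=(100000, n))  # iid, Phi(x) = 1 - exp(-x/2)

phi = 1 - np.exp(-x / 2.0)
print(np.mean(samples.max(axis=1) <= x), phi ** n)            # formula (A.28)
print(np.mean(samples.min(axis=1) <= x), 1 - (1 - phi) ** n)  # formula (A.26)
```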

Operational Law for Boolean System

Theorem A.4 Assume that $\xi_1, \xi_2, \ldots, \xi_n$ are independent Boolean random variables, i.e.,
$$\xi_i = \begin{cases} 1 & \text{with probability } a_i \\ 0 & \text{with probability } 1 - a_i \end{cases} \tag{A.29}$$
for $i = 1, 2, \ldots, n$. If $f$ is a Boolean function, then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ is a Boolean random variable such that
$$\Pr\{\xi = 1\} = \sum_{(x_1, x_2, \ldots, x_n) \in \{0,1\}^n} \left(\prod_{i=1}^n \nu_i(x_i)\right) f(x_1, x_2, \ldots, x_n) \tag{A.30}$$
where
$$\nu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1 \\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \tag{A.31}$$
for $i = 1, 2, \ldots, n$.

Exercise A.4: Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent Boolean random variables defined by (A.29). Show that
$$\xi = \xi_1 \wedge \xi_2 \wedge \cdots \wedge \xi_n \tag{A.32}$$
is a Boolean random variable such that
$$\Pr\{\xi = 1\} = a_1 a_2 \cdots a_n. \tag{A.33}$$

Exercise A.5: Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent Boolean random variables defined by (A.29). Show that
$$\xi = \xi_1 \vee \xi_2 \vee \cdots \vee \xi_n \tag{A.34}$$
is a Boolean random variable such that
$$\Pr\{\xi = 1\} = 1 - (1 - a_1)(1 - a_2)\cdots(1 - a_n). \tag{A.35}$$

Exercise A.6: Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent Boolean random variables defined by (A.29). Show that
$$\xi = k\text{-max}\,[\xi_1, \xi_2, \ldots, \xi_n] \tag{A.36}$$

is a Boolean random variable such that
$$\Pr\{\xi = 1\} = \sum_{(x_1, x_2, \ldots, x_n) \in \{0,1\}^n} \left(\prod_{i=1}^n \nu_i(x_i)\right) k\text{-max}\,[x_1, x_2, \ldots, x_n] \tag{A.37}$$
where
$$\nu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1 \\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \quad (i = 1, 2, \ldots, n). \tag{A.38}$$
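Formula (A.30) is a direct enumeration over $\{0,1\}^n$ and can be coded in a few lines. The Python sketch below evaluates (A.37) for the $k$-max function; the probabilities $a_i$ are made-up values.

```python
from itertools import product

def boolean_pr(f, a):
    """Pr{f(xi_1,...,xi_n) = 1} for independent Boolean random variables
    with Pr{xi_i = 1} = a[i], by direct enumeration of {0,1}^n  (A.30)."""
    total = 0.0
    for x in product((0, 1), repeat=len(a)):
        weight = 1.0
        for ai, xi in zip(a, x):            # nu_i(x_i) from (A.31)
            weight *= ai if xi == 1 else 1 - ai
        total += weight * f(*x)
    return total

a = [0.9, 0.8, 0.7]
k = 2
kmax = lambda *x: sorted(x, reverse=True)[k - 1]   # k-max as in (A.36)
print(boolean_pr(kmax, a))    # probability that at least k components are 1
```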

A.6 Expected Value

Definition A.9 Let $\xi$ be a random variable. Then the expected value of $\xi$ is defined by
$$E[\xi] = \int_0^{+\infty} \Pr\{\xi \ge r\}dr - \int_{-\infty}^0 \Pr\{\xi \le r\}dr \tag{A.39}$$
provided that at least one of the two integrals is finite.

Exercise A.7: Assume that $\xi$ is a discrete random variable taking values $x_i$ with probabilities $p_i$, $i = 1, 2, \ldots, m$, respectively. Show that
$$E[\xi] = \sum_{i=1}^m p_i x_i.$$
i=1

Theorem A.5 Let $\xi$ be a random variable with probability distribution $\Phi$. If the expected value exists, then
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx. \tag{A.40}$$
Proof: It follows from the probability inversion theorem that for almost all numbers $x$, we have $\Pr\{\xi \ge x\} = 1 - \Phi(x)$ and $\Pr\{\xi \le x\} = \Phi(x)$. By using the definition of expected value operator, we obtain
$$E[\xi] = \int_0^{+\infty} \Pr\{\xi \ge x\}dx - \int_{-\infty}^0 \Pr\{\xi \le x\}dx = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx.$$
The theorem is proved.

Theorem A.6 Let $\xi$ be a random variable with probability distribution $\Phi$. If the expected value exists, then
$$E[\xi] = \int_{-\infty}^{+\infty} x\, d\Phi(x). \tag{A.41}$$
Proof: It follows from the change of variables of integral and Theorem A.5 that the expected value is
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx = \int_0^{+\infty} x\, d\Phi(x) + \int_{-\infty}^0 x\, d\Phi(x) = \int_{-\infty}^{+\infty} x\, d\Phi(x).$$
The theorem is proved.


Remark A.2: Let $\phi(x)$ be the probability density function of $\xi$. Then we immediately have
$$E[\xi] = \int_{-\infty}^{+\infty} x\phi(x)dx \tag{A.42}$$
because $d\Phi(x) = \phi(x)dx$.

Theorem A.7 Let $\xi$ be a random variable with probability distribution $\Phi$. If the expected value exists, then
$$E[\xi] = \int_0^1 \Phi^{-1}(\alpha)d\alpha. \tag{A.43}$$
Proof: It follows from the change of variables of integral and Theorem A.5 that the expected value is
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx = \int_{\Phi(0)}^1 \Phi^{-1}(\alpha)d\alpha + \int_0^{\Phi(0)} \Phi^{-1}(\alpha)d\alpha = \int_0^1 \Phi^{-1}(\alpha)d\alpha.$$
The theorem is proved.

Theorem A.8 Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively, and $f: \Re^n \to \Re$ a measurable function. Then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ has an expected value
$$E[\xi] = \int_{\Re^n} f(x_1, x_2, \ldots, x_n)d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n). \tag{A.44}$$

Theorem A.9 Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with probability density functions $\phi_1, \phi_2, \ldots, \phi_n$, respectively, and $f: \Re^n \to \Re$ a measurable function. Then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ has an expected value
$$E[\xi] = \int_{\Re^n} f(x_1, x_2, \ldots, x_n)\phi_1(x_1)\phi_2(x_2)\cdots\phi_n(x_n)dx_1 dx_2\cdots dx_n. \tag{A.45}$$

Theorem A.10 Let $\xi$ and $\eta$ be random variables with finite expected values. Then
$$E[a\xi + b\eta] = aE[\xi] + bE[\eta] \tag{A.46}$$
for any numbers $a$ and $b$. Furthermore, if the two random variables are also independent, then
$$E[\xi\eta] = E[\xi]E[\eta]. \tag{A.47}$$

A.7 Variance

Definition A.10 Let $\xi$ be a random variable with finite expected value $e$. Then the variance of $\xi$ is defined by $V[\xi] = E[(\xi - e)^2]$.

Since $(\xi - e)^2$ is a nonnegative random variable, we know $\Pr\{(\xi - e)^2 \le r\} = 0$ for any $r < 0$. Thus
$$V[\xi] = \int_0^{+\infty} \Pr\{(\xi - e)^2 \ge r\}dr. \tag{A.48}$$

Theorem A.11 If $\xi$ is a random variable whose variance exists, and $a$ and $b$ are real numbers, then $V[a\xi + b] = a^2 V[\xi]$.

Proof: Let $e$ be the expected value of $\xi$. Then $E[a\xi + b] = ae + b$. It follows from the definition of variance that
$$V[a\xi + b] = E\left[(a\xi + b - ae - b)^2\right] = a^2 E[(\xi - e)^2] = a^2 V[\xi].$$

Theorem A.12 Let $\xi$ be a random variable with probability density function $\phi$. If its expected value $e$ exists and is finite, then its variance is
$$V[\xi] = \int_{-\infty}^{+\infty}(x - e)^2 \phi(x)dx. \tag{A.49}$$

Theorem A.13 If $\xi_1, \xi_2, \ldots, \xi_n$ are independent random variables with finite variances, then
$$V[\xi_1 + \xi_2 + \cdots + \xi_n] = V[\xi_1] + V[\xi_2] + \cdots + V[\xi_n]. \tag{A.50}$$
Proof: Let $\xi_1, \xi_2, \ldots, \xi_n$ have expected values $e_1, e_2, \ldots, e_n$, respectively. Then we have
$$E[\xi_1 + \xi_2 + \cdots + \xi_n] = e_1 + e_2 + \cdots + e_n.$$
It follows from the definition of variance that
$$V\left[\sum_{i=1}^n \xi_i\right] = \sum_{i=1}^n E\left[(\xi_i - e_i)^2\right] + 2\sum_{i=1}^{n-1}\sum_{j=i+1}^n E[(\xi_i - e_i)(\xi_j - e_j)].$$
Since $\xi_1, \xi_2, \ldots, \xi_n$ are independent, $E[(\xi_i - e_i)(\xi_j - e_j)] = 0$ for all $i, j$ with $i \neq j$. Thus (A.50) holds.

A.8 Law of Large Numbers

Assume $\xi_1, \xi_2, \ldots$ are a sequence of random variables. In order to introduce the laws of large numbers, we will denote
$$S_n = \xi_1 + \xi_2 + \cdots + \xi_n \tag{A.51}$$
for each $n$ throughout this section.

Theorem A.14 (Weak Law of Large Numbers) Let $\{\xi_i\}$ be a sequence of iid random variables with finite expected value $e$. Then
$$\frac{S_n}{n} \to e \tag{A.52}$$
in the sense of convergence in probability as $n \to \infty$. That is, for every $\varepsilon > 0$, we have
$$\lim_{n \to \infty} \Pr\left\{\left|\frac{S_n}{n} - e\right| \ge \varepsilon\right\} = 0. \tag{A.53}$$

Theorem A.15 (Strong Law of Large Numbers) Let $\xi_1, \xi_2, \ldots$ be a sequence of iid random variables with finite expected value $e$. Then
$$\frac{S_n}{n} \to e, \quad \text{a.s.} \tag{A.54}$$
as $n \to \infty$. That is, there exists an event $A \in \mathcal{A}$ with $\Pr\{A\} = 1$ such that
$$\lim_{n \to \infty} \frac{S_n(\omega)}{n} = e, \quad \forall \omega \in A. \tag{A.55}$$
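A quick simulation illustrating Theorems A.14 and A.15 (exponential variables with expected value $e = 2$; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
e = 2.0                                   # common expected value
for n in (10, 1000, 100000):
    print(n, rng.exponential(scale=e, size=n).mean())  # S_n / n approaches e
```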

A.9 Conditional Probability

We consider the probability of an event $A$ after it has been learned that some other event $B$ has occurred. This new probability is called the conditional probability of $A$ given $B$.

Definition A.11 Let $(\Omega, \mathcal{A}, \Pr)$ be a probability space, and $A, B \in \mathcal{A}$. Then the conditional probability of $A$ given $B$ is defined by
$$\Pr\{A|B\} = \frac{\Pr\{A \cap B\}}{\Pr\{B\}} \tag{A.56}$$
provided that $\Pr\{B\} > 0$.

Example A.9: Let $\xi$ be an exponentially distributed random variable with expected value $\beta$. Then for any real numbers $a > 0$ and $x > 0$, the conditional probability of $\xi \ge a + x$ given $\xi \ge a$ is
$$\Pr\{\xi \ge a + x \,|\, \xi \ge a\} = \exp(-x/\beta) = \Pr\{\xi \ge x\}$$

which means that the conditional probability is identical to the original probability. This is the so-called memoryless property of the exponential distribution. In other words, it is as good as new if it is functioning on inspection.

Definition A.12 The conditional probability distribution $\Phi: \Re \to [0, 1]$ of a random variable $\xi$ given $B$ is defined by
$$\Phi(x|B) = \Pr\{\xi \le x \,|\, B\} \tag{A.57}$$
provided that $\Pr\{B\} > 0$.

Definition A.13 The conditional probability density function $\phi$ of a random variable $\xi$ given $B$ is a nonnegative function such that
$$\Phi(x|B) = \int_{-\infty}^x \phi(y|B)dy, \quad \forall x \in \Re \tag{A.58}$$
where $\Phi(x|B)$ is the conditional probability distribution of $\xi$ given $B$.

A.10 Stochastic Process

A stochastic process is essentially a sequence of random variables indexed by time.

Definition A.14 Let $(\Omega, \mathcal{A}, \Pr)$ be a probability space and let $T$ be a totally ordered set (e.g. time). A stochastic process is a function $X_t(\omega)$ from $T \times (\Omega, \mathcal{A}, \Pr)$ to the set of real numbers such that $\{X_t \in B\}$ is an event for any Borel set $B$ at each time $t$.

For each fixed $\omega$, the function $X_t(\omega)$ is called a sample path of the stochastic process $X_t$. A stochastic process $X_t$ is said to be sample-continuous if almost all sample paths are continuous with respect to $t$.

Definition A.15 A stochastic process $X_t$ is said to have independent increments if
$$X_{t_0}, X_{t_1} - X_{t_0}, X_{t_2} - X_{t_1}, \ldots, X_{t_k} - X_{t_{k-1}} \tag{A.59}$$
are independent random variables where $t_0$ is the initial time and $t_1, t_2, \ldots, t_k$ are any times with $t_0 < t_1 < \cdots < t_k$.

Definition A.16 A stochastic process $X_t$ is said to have stationary increments if, for any given $t > 0$, the increments $X_{s+t} - X_s$ are identically distributed random variables for all $s > 0$.

A stationary independent increment process is a stochastic process that has not only independent increments but also stationary increments. If $X_t$ is a stationary independent increment process, then
$$Y_t = aX_t + b$$
is also a stationary independent increment process for any numbers $a$ and $b$.

Renewal Process

Let $\xi_i$ denote the times between the $(i-1)$th and the $i$th events, known as the interarrival times, $i = 1, 2, \ldots$, respectively. Define $S_0 = 0$ and
$$S_n = \xi_1 + \xi_2 + \cdots + \xi_n, \quad n \ge 1. \tag{A.60}$$
Then $S_n$ can be regarded as the waiting time until the occurrence of the $n$th event after time $t = 0$.

Definition A.17 Let $\xi_1, \xi_2, \ldots$ be iid positive interarrival times. Define $S_0 = 0$ and $S_n = \xi_1 + \xi_2 + \cdots + \xi_n$ for $n \ge 1$. Then the stochastic process
$$N_t = \max_{n \ge 0}\{n \,|\, S_n \le t\} \tag{A.61}$$
is called a renewal process.

At each time $t$, it is clear that $N_t$ is a random variable taking integer values, and
$$\Pr\{N_t \le n\} = \Pr\{S_{n+1} > t\}, \tag{A.62}$$
$$\Pr\{N_t \ge n\} = \Pr\{S_n \le t\}, \tag{A.63}$$
$$\Pr\{N_t = n\} = \Pr\{S_n \le t < S_{n+1}\} \tag{A.64}$$
for each integer $n$.

Poisson Process

Definition A.18 A renewal process is called a Poisson process with rate $\lambda$ if the interarrival times are exponential random variables with a common probability density function,
$$\phi(x) = \frac{1}{\lambda}\exp\left(-\frac{x}{\lambda}\right), \quad x \ge 0. \tag{A.65}$$
Let $N_t$ be a Poisson process with rate $\lambda$. Since the sum of $n$ iid exponential random variables with rate $\lambda$ follows an Erlang distribution with parameters $n$ and $\lambda$, we immediately have
$$\Pr\{N_t \ge n\} = \sum_{k=n}^\infty \exp(-t/\lambda)\frac{(t/\lambda)^k}{k!}. \tag{A.66}$$

Wiener Process

Brownian motion is the irregular movement of pollen grains suspended in liquid. In 1923 Norbert Wiener modeled Brownian motion by the following Wiener process.

Definition A.19 A stochastic process $W_t$ is said to be a standard Wiener process if
(i) $W_0 = 0$ and almost all sample paths are continuous,
(ii) $W_t$ has stationary and independent increments,
(iii) every increment $W_{s+t} - W_s$ is a normal random variable with expected value 0 and variance $t$.

Note that the lengths of almost all sample paths of a Wiener process are infinitely long during any fixed time interval, and are differentiable nowhere. Furthermore, the squared variation of a Wiener process on $[0, t]$ is equal to $t$ both in mean square and almost surely.

A.11 Ito's Stochastic Calculus

Ito calculus, named after Kiyoshi Ito, is the most popular topic of stochastic calculus. The central concept is the Ito integral that allows one to integrate a stochastic process with respect to a Wiener process. This section provides a brief introduction to Ito calculus.

Definition A.20 Let $X_t$ be a stochastic process and let $W_t$ be a standard Wiener process. For any partition of the closed interval $[a, b]$ with $a = t_1 < t_2 < \cdots < t_{k+1} = b$, the mesh is written as
$$\Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|.$$
Then the Ito integral of $X_t$ with respect to $W_t$ is
$$\int_a^b X_t\, dW_t = \lim_{\Delta \to 0} \sum_{i=1}^k X_{t_i}(W_{t_{i+1}} - W_{t_i}) \tag{A.67}$$
provided that the limit exists in mean square and is a random variable.

Example A.10: Let $W_t$ be a standard Wiener process. It follows from the definition of the Ito integral that
$$\int_0^s dW_t = W_s, \quad \int_0^s W_t\, dW_t = \frac{1}{2}W_s^2 - \frac{1}{2}s.$$

Theorem A.16 (Ito Formula) Let $W_t$ be a standard Wiener process, and let $h(t, w)$ be a twice continuously differentiable function. Then $X_t = h(t, W_t)$ has an Ito differential,
$$dX_t = \frac{\partial h}{\partial t}(t, W_t)dt + \frac{\partial h}{\partial w}(t, W_t)dW_t + \frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t, W_t)dt. \tag{A.68}$$

Example A.11: Ito formula is the fundamental theorem of stochastic calculus. Applying Ito formula, we obtain
$$d(tW_t) = W_t dt + t\, dW_t, \quad d(W_t^2) = 2W_t\, dW_t + dt.$$
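The second identity in Example A.10 can be checked by simulating the defining left-endpoint Riemann sum (A.67) on a fine grid:

```python
import numpy as np

rng = np.random.default_rng(2)
s, k = 1.0, 100000
dt = s / k
dW = rng.normal(0.0, np.sqrt(dt), size=k)     # Wiener increments
W = np.concatenate(([0.0], np.cumsum(dW)))    # sample path on the grid

ito = np.sum(W[:-1] * dW)                     # left-endpoint sum from (A.67)
print(ito, 0.5 * W[-1]**2 - 0.5 * s)          # the two values should nearly coincide
```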
Definition A.21 Let $W_t$ be a standard Wiener process and let $Z_t$ be a stochastic process. If there exist two stochastic processes $\mu_t$ and $\sigma_t$ such that
$$Z_t = Z_0 + \int_0^t \mu_s\, ds + \int_0^t \sigma_s\, dW_s \tag{A.69}$$
for any $t \ge 0$, then $Z_t$ is called an Ito process with drift $\mu_t$ and diffusion $\sigma_t$. Furthermore, $Z_t$ has a stochastic differential
$$dZ_t = \mu_t dt + \sigma_t dW_t. \tag{A.70}$$

A.12 Stochastic Differential Equation

In the 1940s, Kiyoshi Ito invented a type of stochastic differential equation, that is, a differential equation driven by a Wiener process. This section provides a brief introduction to stochastic differential equations.

Definition A.22 Suppose $W_t$ is a standard Wiener process, and $f$ and $g$ are two functions. Then
$$dX_t = f(t, X_t)dt + g(t, X_t)dW_t \tag{A.71}$$
is called a stochastic differential equation. A solution is an Ito process $X_t$ that satisfies (A.71) identically in $t$.

Example A.12: Let $W_t$ be a standard Wiener process. Then the stochastic differential equation
$$dX_t = a\, dt + b\, dW_t$$
has a solution
$$X_t = at + bW_t.$$

Example A.13: Let $W_t$ be a standard Wiener process. Then the stochastic differential equation
$$dX_t = aX_t dt + bX_t dW_t$$
has a solution
$$X_t = \exp\left(\left(a - \frac{b^2}{2}\right)t + bW_t\right).$$

Theorem A.17 (Existence and Uniqueness Theorem) The stochastic differential equation
$$dX_t = f(t, X_t)dt + g(t, X_t)dW_t \tag{A.72}$$
has a unique solution if the coefficients $f(t, x)$ and $g(t, x)$ satisfy the linear growth condition
$$|f(t, x)| + |g(t, x)| \le L(1 + |x|), \quad \forall x \in \Re,\ t \ge 0 \tag{A.73}$$
and Lipschitz condition
$$|f(t, x) - f(t, y)| + |g(t, x) - g(t, y)| \le L|x - y|, \quad \forall x, y \in \Re,\ t \ge 0 \tag{A.74}$$
for some constant $L$. Moreover, the solution is sample-continuous.
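The solution in Example A.13 can be compared against an Euler–Maruyama discretization of the equation driven by the same sample path; a minimal sketch with arbitrary coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, s, k = 0.5, 0.3, 1.0, 100000
dt = s / k
dW = rng.normal(0.0, np.sqrt(dt), size=k)

x = 1.0                                   # X_0 = 1
for dw in dW:                             # Euler-Maruyama step for dX = aX dt + bX dW
    x += a * x * dt + b * x * dw

exact = np.exp((a - b**2 / 2) * s + b * dW.sum())   # Example A.13 with X_0 = 1
print(x, exact)                           # agreement improves as dt -> 0
```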


Theorem A.18 (Feynman-Kac Formula) Consider the stochastic differential equation
$$dX_t = f(t, X_t)dt + g(t, X_t)dW_t. \tag{A.75}$$
For any measurable function $h(x)$ and fixed $T > 0$, the function
$$U(t, x) = E\left[\int_t^T h(X_s)ds \,\Big|\, X_t = x\right] \tag{A.76}$$
is the solution of the partial differential equation
$$\frac{\partial U}{\partial t}(t, x) + f(t, x)\frac{\partial U}{\partial x}(t, x) + \frac{1}{2}g^2(t, x)\frac{\partial^2 U}{\partial x^2}(t, x) + h(x) = 0 \tag{A.77}$$
with the terminal condition
$$U(T, x) = 0. \tag{A.78}$$

Appendix B

Chance Theory
Uncertainty and randomness are two basic types of indeterminacy. Chance
theory is a mathematical methodology for modeling complex systems with
not only uncertainty but also randomness. This appendix will introduce the
concepts of chance measure, uncertain random variable, chance distribution,
operational law, expected value, variance, and law of large numbers. As applications of chance theory, this appendix will also provide uncertain random
programming, uncertain random risk analysis, uncertain random reliability
analysis, uncertain random graph, and uncertain random network.

B.1 Chance Measure

Let $(\Gamma, \mathcal{L}, M)$ be an uncertainty space and let $(\Omega, \mathcal{A}, \Pr)$ be a probability space. Then the product $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{A}, \Pr)$ is called a chance space. Essentially, it is another triplet,
$$(\Gamma \times \Omega, \mathcal{L} \times \mathcal{A}, M \times \Pr) \tag{B.1}$$
where $\Gamma \times \Omega$ is the universal set, $\mathcal{L} \times \mathcal{A}$ is the product $\sigma$-algebra, and $M \times \Pr$ is the product measure.

The universal set $\Gamma \times \Omega$ is clearly the set of all ordered pairs of the form $(\gamma, \omega)$, where $\gamma \in \Gamma$ and $\omega \in \Omega$. That is,
$$\Gamma \times \Omega = \{(\gamma, \omega) \,|\, \gamma \in \Gamma, \omega \in \Omega\}. \tag{B.2}$$
The product $\sigma$-algebra $\mathcal{L} \times \mathcal{A}$ is the smallest $\sigma$-algebra containing measurable rectangles of the form $\Lambda \times A$, where $\Lambda \in \mathcal{L}$ and $A \in \mathcal{A}$. Any element $\Theta$ in $\mathcal{L} \times \mathcal{A}$ is called an event in the chance space.

What is the product measure $M \times \Pr$? In order to answer this question, let us consider an event $\Theta$ in $\mathcal{L} \times \mathcal{A}$. For each $\omega \in \Omega$, the set
$$\Theta_\omega = \{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta\} \tag{B.3}$$

is clearly an event in $\mathcal{L}$. Thus the uncertain measure $M\{\Theta_\omega\}$ exists for each $\omega \in \Omega$. However, unfortunately, $M\{\Theta_\omega\}$ is not necessarily a measurable function with respect to $\omega$. In other words, for a real number $r$, the set
$$\Theta_r = \{\omega \in \Omega \,|\, M\{\Theta_\omega\} \ge r\} \tag{B.4}$$
is a subset of $\Omega$ but not necessarily an event in $\mathcal{A}$. Thus the probability measure $\Pr\{\Theta_r\}$ does not necessarily exist. In this case, we assign
$$\Pr\{\Theta_r\} = \begin{cases} \displaystyle\inf_{A \in \mathcal{A},\, A \supset \Theta_r} \Pr\{A\}, & \text{if } \displaystyle\inf_{A \in \mathcal{A},\, A \supset \Theta_r} \Pr\{A\} < 0.5 \\[2mm] \displaystyle\sup_{A \in \mathcal{A},\, A \subset \Theta_r} \Pr\{A\}, & \text{if } \displaystyle\sup_{A \in \mathcal{A},\, A \subset \Theta_r} \Pr\{A\} > 0.5 \\[2mm] 0.5, & \text{otherwise} \end{cases} \tag{B.5}$$
in the light of the maximum uncertainty principle. This ensures that the probability measure $\Pr\{\Theta_r\}$ exists for any real number $r$. Now it is ready to define $M \times \Pr$ of $\Theta$ as the expected value of $M\{\Theta_\omega\}$ with respect to $\omega$, i.e.,
$$\int_0^1 \Pr\{\Theta_r\}dr. \tag{B.6}$$
Note that the above-mentioned integral is neither an uncertain measure nor a probability measure. We will call it chance measure and represent it by $\mathrm{Ch}\{\Theta\}$.
[Figure B.1: An event $\Theta$ in $\mathcal{L} \times \mathcal{A}$ (diagram omitted).]

Definition B.1 (Liu [140]) Let $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space, and let $\Theta \in \mathcal{L} \times \mathcal{A}$ be an event. Then the chance measure of $\Theta$ is defined as
$$\mathrm{Ch}\{\Theta\} = \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta\} \ge r\}dr. \tag{B.7}$$
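On a finite chance space, where every section $\Theta_\omega$ is measurable, the integral (B.7) reduces to the expected value of $M\{\Theta_\omega\}$ under $\Pr$, which makes Definition B.1 easy to verify by brute force. A toy Python example with made-up numbers:

```python
import numpy as np

# A toy event Theta: for each omega we are given the uncertain measure of the
# section Theta_omega (hypothetical values, not from the text).
p = np.array([0.5, 0.3, 0.2])        # Pr of omega_1, omega_2, omega_3
m = np.array([0.9, 0.4, 0.1])        # M{Theta_omega} for each omega

# Ch{Theta} = int_0^1 Pr{omega : M{Theta_omega} >= r} dr  (B.7)
rs = (np.arange(10000) + 0.5) / 10000
ch = np.mean([p[m >= r].sum() for r in rs])
print(ch, np.dot(p, m))   # both equal 0.59: the integral is E_omega[M{Theta_omega}]
```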

Theorem B.1 (Liu [140], Monotonicity Theorem) Let $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space. Then the chance measure $\mathrm{Ch}\{\Theta\}$ is a monotone increasing function of $\Theta$ and
$$\mathrm{Ch}\{\Lambda \times A\} = M\{\Lambda\} \times \Pr\{A\} \tag{B.8}$$
for any $\Lambda \in \mathcal{L}$ and any $A \in \mathcal{A}$. Especially, we have
$$\mathrm{Ch}\{\emptyset\} = 0, \quad \mathrm{Ch}\{\Gamma \times \Omega\} = 1. \tag{B.9}$$
Proof: Let $\Theta_1$ and $\Theta_2$ be two events with $\Theta_1 \subset \Theta_2$. Then for each $\omega \in \Omega$, we have
$$\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta_1\} \subset \{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta_2\}$$
and
$$M\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta_1\} \le M\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta_2\}.$$
Thus for any real number $r$, we have
$$\Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta_1\} \ge r\} \le \Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta_2\} \ge r\}.$$
By the definition of chance measure, we get
$$\mathrm{Ch}\{\Theta_1\} = \int_0^1 \Pr\{\omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta_1\} \ge r\}dr \le \int_0^1 \Pr\{\omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta_2\} \ge r\}dr = \mathrm{Ch}\{\Theta_2\}.$$
That is, $\mathrm{Ch}\{\Theta\}$ is a monotone increasing function of $\Theta$. Next we prove the identity (B.8). For each $\omega \in \Omega$, we have
$$\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Lambda \times A\} = \begin{cases} \Lambda, & \text{if } \omega \in A \\ \emptyset, & \text{if } \omega \notin A \end{cases}$$
and
$$M\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Lambda \times A\} = \begin{cases} M\{\Lambda\}, & \text{if } \omega \in A \\ 0, & \text{if } \omega \notin A. \end{cases}$$
For any real number $r$, if $M\{\Lambda\} \ge r$, then
$$\Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Lambda \times A\} \ge r\} = \Pr\{A\}.$$
If $M\{\Lambda\} < r$, then
$$\Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Lambda \times A\} \ge r\} = \Pr\{\emptyset\} = 0.$$
Thus
$$\mathrm{Ch}\{\Lambda \times A\} = \int_0^1 \Pr\{\omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Lambda \times A\} \ge r\}dr = \int_0^{M\{\Lambda\}} \Pr\{A\}dr + \int_{M\{\Lambda\}}^1 0\, dr = M\{\Lambda\} \times \Pr\{A\}.$$
Furthermore, it follows from (B.8) that
$$\mathrm{Ch}\{\emptyset\} = M\{\emptyset\} \times \Pr\{\emptyset\} = 0, \quad \mathrm{Ch}\{\Gamma \times \Omega\} = M\{\Gamma\} \times \Pr\{\Omega\} = 1.$$
The theorem is thus verified.
Theorem B.2 (Liu [140], Duality Theorem) The chance measure is self-dual. That is, for any event $\Theta$, we have
$$\mathrm{Ch}\{\Theta\} + \mathrm{Ch}\{\Theta^c\} = 1. \tag{B.10}$$
Proof: Since both uncertain measure and probability measure are self-dual, we have
$$\mathrm{Ch}\{\Theta\} = \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta\} \ge r\}dr$$
$$= \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta^c\} \le 1 - r\}dr$$
$$= \int_0^1 \left(1 - \Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta^c\} > 1 - r\}\right)dr$$
$$= 1 - \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta^c\} > r\}dr$$
$$= 1 - \mathrm{Ch}\{\Theta^c\}.$$
That is, $\mathrm{Ch}\{\Theta\} + \mathrm{Ch}\{\Theta^c\} = 1$, i.e., the chance measure is self-dual.
Theorem B.3 (Hou [58], Subadditivity Theorem) The chance measure is subadditive. That is, for any countable sequence of events $\Theta_1, \Theta_2, \ldots$, we have
$$\mathrm{Ch}\left\{\bigcup_{i=1}^\infty \Theta_i\right\} \le \sum_{i=1}^\infty \mathrm{Ch}\{\Theta_i\}. \tag{B.11}$$
Proof: For each $\omega \in \Omega$, it follows from the subadditivity of uncertain measure that
$$M\left\{\gamma \in \Gamma \,\Big|\, (\gamma, \omega) \in \bigcup_{i=1}^\infty \Theta_i\right\} \le \sum_{i=1}^\infty M\{\gamma \in \Gamma \,|\, (\gamma, \omega) \in \Theta_i\}.$$
Thus for any real number $r$, we have
$$\Pr\left\{\omega \,\Big|\, M\left\{\gamma \,\Big|\, (\gamma, \omega) \in \bigcup_{i=1}^\infty \Theta_i\right\} \ge r\right\} \le \Pr\left\{\omega \,\Big|\, \sum_{i=1}^\infty M\{\gamma \,|\, (\gamma, \omega) \in \Theta_i\} \ge r\right\}.$$
By the definition of chance measure, we get
$$\mathrm{Ch}\left\{\bigcup_{i=1}^\infty \Theta_i\right\} = \int_0^1 \Pr\left\{\omega \,\Big|\, M\left\{\gamma \,\Big|\, (\gamma, \omega) \in \bigcup_{i=1}^\infty \Theta_i\right\} \ge r\right\}dr$$
$$\le \int_0^1 \Pr\left\{\omega \,\Big|\, \sum_{i=1}^\infty M\{\gamma \,|\, (\gamma, \omega) \in \Theta_i\} \ge r\right\}dr$$
$$\le \int_0^{+\infty} \Pr\left\{\omega \,\Big|\, \sum_{i=1}^\infty M\{\gamma \,|\, (\gamma, \omega) \in \Theta_i\} \ge r\right\}dr$$
$$\le \sum_{i=1}^\infty \int_0^1 \Pr\{\omega \,|\, M\{\gamma \,|\, (\gamma, \omega) \in \Theta_i\} \ge r\}dr = \sum_{i=1}^\infty \mathrm{Ch}\{\Theta_i\}.$$
That is, the chance measure is subadditive.

B.2 Uncertain Random Variable

Theoretically, an uncertain random variable is a measurable function on the chance space. It is usually used to deal with measurable functions of uncertain variables and random variables.

Definition B.2 (Liu [140]) An uncertain random variable is a measurable function $\xi$ from a chance space $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{A}, \Pr)$ to the set of real numbers, i.e., $\{\xi \in B\}$ is an event for any Borel set $B$.

Remark B.1: An uncertain random variable $\xi(\gamma, \omega)$ degenerates to a random variable if it does not vary with $\gamma$. Thus a random variable is a special uncertain random variable.

Remark B.2: An uncertain random variable $\xi(\gamma, \omega)$ degenerates to an uncertain variable if it does not vary with $\omega$. Thus an uncertain variable is a special uncertain random variable.

Theorem B.4 Let $f: \Re^n \to \Re$ be a measurable function, and $\xi_1, \xi_2, \ldots, \xi_n$ uncertain random variables on the chance space $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{A}, \Pr)$. Then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ is an uncertain random variable determined by
$$\xi(\gamma, \omega) = f(\xi_1(\gamma, \omega), \xi_2(\gamma, \omega), \ldots, \xi_n(\gamma, \omega)) \tag{B.12}$$
for all $(\gamma, \omega) \in \Gamma \times \Omega$.
Proof: Since $\xi_1, \xi_2, \ldots, \xi_n$ are uncertain random variables, we know that they are measurable functions on the chance space, and $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ is also a measurable function. Hence $\xi$ is an uncertain random variable.

Example B.1: A random variable $\eta$ plus an uncertain variable $\tau$ makes an uncertain random variable $\xi$, i.e.,
$$\xi(\gamma, \omega) = \eta(\omega) + \tau(\gamma) \tag{B.13}$$
for all $(\gamma, \omega) \in \Gamma \times \Omega$.

Example B.2: Let $\eta_1, \eta_2, \ldots, \eta_m$ be random variables, and let $\tau_1, \tau_2, \ldots, \tau_n$ be uncertain variables. If $f$ is a measurable function, then
$$\xi = f(\eta_1, \eta_2, \ldots, \eta_m, \tau_1, \tau_2, \ldots, \tau_n) \tag{B.14}$$
is an uncertain random variable determined by
$$\xi(\gamma, \omega) = f(\eta_1(\omega), \eta_2(\omega), \ldots, \eta_m(\omega), \tau_1(\gamma), \tau_2(\gamma), \ldots, \tau_n(\gamma)) \tag{B.15}$$
for all $(\gamma, \omega) \in \Gamma \times \Omega$.
Theorem B.5 (Liu [140]) Let $\xi$ be an uncertain random variable on the chance space $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{A}, \Pr)$, and let $B$ be a Borel set. Then $\{\xi \in B\}$ is an uncertain random event with chance measure
$$\mathrm{Ch}\{\xi \in B\} = \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \in \Gamma \,|\, \xi(\gamma, \omega) \in B\} \ge r\}dr. \tag{B.16}$$
Proof: Since $\{\xi \in B\}$ is an event in the chance space, the equation (B.16) follows from Definition B.1 immediately.

Remark B.3: If the uncertain random variable $\xi$ degenerates to a random variable $\eta$, then $\mathrm{Ch}\{\eta \in B\} = \mathrm{Ch}\{\Gamma \times (\eta \in B)\} = M\{\Gamma\} \times \Pr\{\eta \in B\} = \Pr\{\eta \in B\}$. That is,
$$\mathrm{Ch}\{\eta \in B\} = \Pr\{\eta \in B\}. \tag{B.17}$$
If the uncertain random variable $\xi$ degenerates to an uncertain variable $\tau$, then $\mathrm{Ch}\{\tau \in B\} = \mathrm{Ch}\{(\tau \in B) \times \Omega\} = M\{\tau \in B\} \times \Pr\{\Omega\} = M\{\tau \in B\}$. That is,
$$\mathrm{Ch}\{\tau \in B\} = M\{\tau \in B\}. \tag{B.18}$$

Theorem B.6 (Liu [140]) Let $\xi$ be an uncertain random variable. Then the chance measure $\mathrm{Ch}\{\xi \in B\}$ is a monotone increasing function of $B$ and
$$\mathrm{Ch}\{\xi \in \emptyset\} = 0, \quad \mathrm{Ch}\{\xi \in \Re\} = 1. \tag{B.19}$$

Proof: Let $B_1$ and $B_2$ be Borel sets with $B_1 \subset B_2$. Then we immediately have $\{\xi \in B_1\} \subset \{\xi \in B_2\}$. It follows from the monotonicity of chance measure that
$$\mathrm{Ch}\{\xi \in B_1\} \le \mathrm{Ch}\{\xi \in B_2\}.$$
Hence $\mathrm{Ch}\{\xi \in B\}$ is a monotone increasing function of $B$. Furthermore, we have
$$\mathrm{Ch}\{\xi \in \emptyset\} = \mathrm{Ch}\{\emptyset\} = 0, \quad \mathrm{Ch}\{\xi \in \Re\} = \mathrm{Ch}\{\Gamma \times \Omega\} = 1.$$
The theorem is verified.

Theorem B.7 (Liu [140]) Let $\xi$ be an uncertain random variable. Then for any Borel set $B$, we have
$$\mathrm{Ch}\{\xi \in B\} + \mathrm{Ch}\{\xi \in B^c\} = 1. \tag{B.20}$$
Proof: It follows from $\{\xi \in B\}^c = \{\xi \in B^c\}$ and the duality of chance measure immediately.

B.3 Chance Distribution

Definition B.3 (Liu [140]) Let $\xi$ be an uncertain random variable. Then its chance distribution is defined by
$$\Phi(x) = \mathrm{Ch}\{\xi \le x\} \tag{B.21}$$
for any $x \in \Re$.

Example B.3: As a special uncertain random variable, the chance distribution of a random variable $\eta$ is just its probability distribution, that is,
$$\Phi(x) = \mathrm{Ch}\{\eta \le x\} = \Pr\{\eta \le x\}. \tag{B.22}$$

Example B.4: As a special uncertain random variable, the chance distribution of an uncertain variable $\tau$ is just its uncertainty distribution, that is,
$$\Phi(x) = \mathrm{Ch}\{\tau \le x\} = M\{\tau \le x\}. \tag{B.23}$$

Theorem B.8 (Liu [140], Sufficient and Necessary Condition for Chance Distribution) A function $\Phi: \Re \to [0, 1]$ is a chance distribution if and only if it is a monotone increasing function except $\Phi(x) \equiv 0$ and $\Phi(x) \equiv 1$.

Proof: Assume $\Phi$ is a chance distribution of an uncertain random variable $\xi$. Let $x_1$ and $x_2$ be two real numbers with $x_1 < x_2$. It follows from Theorem B.6 that
$$\Phi(x_1) = \mathrm{Ch}\{\xi \le x_1\} \le \mathrm{Ch}\{\xi \le x_2\} = \Phi(x_2).$$
Hence the chance distribution $\Phi$ is a monotone increasing function. Furthermore, if $\Phi(x) \equiv 0$, then
$$\int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \in \Gamma \,|\, \xi(\gamma, \omega) \le x\} \ge r\}dr \equiv 0.$$
Thus for almost all $\omega \in \Omega$, we have
$$M\{\gamma \in \Gamma \,|\, \xi(\gamma, \omega) \le x\} \equiv 0, \quad \forall x \in \Re$$
which is in contradiction to the asymptotic theorem, and then $\Phi(x) \not\equiv 0$ is verified. Similarly, if $\Phi(x) \equiv 1$, then
$$\int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \in \Gamma \,|\, \xi(\gamma, \omega) \le x\} \ge r\}dr \equiv 1.$$
Thus for almost all $\omega \in \Omega$, we have
$$M\{\gamma \in \Gamma \,|\, \xi(\gamma, \omega) \le x\} \equiv 1, \quad \forall x \in \Re$$
which is also in contradiction to the asymptotic theorem, and then $\Phi(x) \not\equiv 1$ is proved.

Conversely, suppose $\Phi: \Re \to [0, 1]$ is a monotone increasing function but $\Phi(x) \not\equiv 0$ and $\Phi(x) \not\equiv 1$. It follows from the Peng-Iwamura theorem that there is an uncertain variable whose uncertainty distribution is just $\Phi(x)$. Since an uncertain variable is a special uncertain random variable, we know that $\Phi$ is a chance distribution.

Theorem B.9 (Liu [140], Chance Inversion Theorem) Let $\xi$ be an uncertain random variable with continuous chance distribution $\Phi$. Then for any real number $x$, we have
$$\mathrm{Ch}\{\xi \le x\} = \Phi(x), \quad \mathrm{Ch}\{\xi \ge x\} = 1 - \Phi(x). \tag{B.24}$$
Proof: The equation $\mathrm{Ch}\{\xi \le x\} = \Phi(x)$ follows from the definition of chance distribution immediately. By using the duality of chance measure and continuity of chance distribution, we get
$$\mathrm{Ch}\{\xi \ge x\} = 1 - \mathrm{Ch}\{\xi < x\} = 1 - \Phi(x).$$

B.4 Operational Law

Assume $\eta_1, \eta_2, \ldots, \eta_m$ are independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, and $\tau_1, \tau_2, \ldots, \tau_n$ are independent uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. What is the chance distribution of the uncertain random variable
$$\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)? \tag{B.25}$$
This section will provide an operational law to answer this question.

Theorem B.10 (Liu [141]) Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, respectively, and let $\tau_1, \tau_2, \ldots, \tau_n$ be uncertain variables (not necessarily independent). Then the uncertain random variable
$$\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \tag{B.26}$$
has a chance distribution
$$\Phi(x) = \int_{\Re^m} M\{f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n) \le x\}d\Psi_1(y_1)\cdots d\Psi_m(y_m) \tag{B.27}$$
for any number $x$.

Proof: It follows from Theorem B.5 that the uncertain random variable $\xi$ has a chance distribution
$$\Phi(x) = \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{\gamma \in \Gamma \,|\, \xi(\gamma, \omega) \le x\} \ge r\}dr$$
$$= \int_0^1 \Pr\{\omega \in \Omega \,|\, M\{f(\eta_1(\omega), \ldots, \eta_m(\omega), \tau_1, \ldots, \tau_n) \le x\} \ge r\}dr$$
$$= \int_{\Re^m} M\{f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n) \le x\}d\Psi_1(y_1)\cdots d\Psi_m(y_m).$$
The theorem is verified.


Theorem B.11 (Liu [141]) Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, respectively, and let $\tau_1, \tau_2, \ldots, \tau_n$ be uncertain variables (not necessarily independent). Then the uncertain random variable
$$\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \tag{B.28}$$
has a chance distribution
$$\Phi(x) = \int_{\Re^m} F(x; y_1, \ldots, y_m)d\Psi_1(y_1)\cdots d\Psi_m(y_m) \tag{B.29}$$
where $F(x; y_1, \ldots, y_m)$ is the uncertainty distribution of the uncertain variable $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$ for any real numbers $y_1, \ldots, y_m$.

Proof: For any given numbers $y_1, \ldots, y_m$, it follows from the operational law of uncertain variables that $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$ is an uncertain variable with uncertainty distribution $F(x; y_1, \ldots, y_m)$. By using (B.27), the chance distribution of $\xi$ is
$$\Phi(x) = \int_{\Re^m} M\{f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n) \le x\}d\Psi_1(y_1)\cdots d\Psi_m(y_m) = \int_{\Re^m} F(x; y_1, \ldots, y_m)d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$
that is just (B.29). The theorem is verified.


Remark B.4: Let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. If the function $f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is strictly increasing with respect to $\tau_1, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \ldots, \tau_n$, then $F^{-1}(\alpha; y_1, \ldots, y_m)$ is equal to
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1 - \alpha), \ldots, \Upsilon_n^{-1}(1 - \alpha))$$
from which we may derive the uncertainty distribution $F(x; y_1, \ldots, y_m)$.


Exercise B.1: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. Show that the sum
$$\xi = \eta_1 + \eta_2 + \cdots + \eta_m + \tau_1 + \tau_2 + \cdots + \tau_n \tag{B.30}$$
is an uncertain random variable whose chance distribution is
$$\Phi(x) = \int_{-\infty}^{+\infty} \Upsilon(x - y)d\Psi(y) \tag{B.31}$$
where
$$\Psi(y) = \int_{y_1 + y_2 + \cdots + y_m \le y} d\Psi_1(y_1)d\Psi_2(y_2)\cdots d\Psi_m(y_m) \tag{B.32}$$
is the probability distribution of $\eta_1 + \eta_2 + \cdots + \eta_m$, and
$$\Upsilon(z) = \sup_{z_1 + z_2 + \cdots + z_n = z} \Upsilon_1(z_1) \wedge \Upsilon_2(z_2) \wedge \cdots \wedge \Upsilon_n(z_n) \tag{B.33}$$
is the uncertainty distribution of $\tau_1 + \tau_2 + \cdots + \tau_n$.


Exercise B.2: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent positive random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent positive uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. Show that the product
$$\xi = \eta_1 \eta_2 \cdots \eta_m \tau_1 \tau_2 \cdots \tau_n \tag{B.34}$$
is an uncertain random variable whose chance distribution is
$$\Phi(x) = \int_0^{+\infty} \Upsilon(x/y)d\Psi(y) \tag{B.35}$$
where
$$\Psi(y) = \int_{y_1 y_2 \cdots y_m \le y} d\Psi_1(y_1)d\Psi_2(y_2)\cdots d\Psi_m(y_m) \tag{B.36}$$
is the probability distribution of $\eta_1 \eta_2 \cdots \eta_m$, and
$$\Upsilon(z) = \sup_{z_1 z_2 \cdots z_n = z} \Upsilon_1(z_1) \wedge \Upsilon_2(z_2) \wedge \cdots \wedge \Upsilon_n(z_n) \tag{B.37}$$
is the uncertainty distribution of $\tau_1 \tau_2 \cdots \tau_n$.


Exercise B.3: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. Show that the minimum
$$\xi = \eta_1 \wedge \eta_2 \wedge \cdots \wedge \eta_m \wedge \tau_1 \wedge \tau_2 \wedge \cdots \wedge \tau_n \tag{B.38}$$
is an uncertain random variable whose chance distribution is
$$\Phi(x) = \Psi(x) + \Upsilon(x) - \Psi(x)\Upsilon(x) \tag{B.39}$$
where
$$\Psi(x) = 1 - (1 - \Psi_1(x))(1 - \Psi_2(x))\cdots(1 - \Psi_m(x)) \tag{B.40}$$
is the probability distribution of $\eta_1 \wedge \eta_2 \wedge \cdots \wedge \eta_m$, and
$$\Upsilon(x) = \Upsilon_1(x) \vee \Upsilon_2(x) \vee \cdots \vee \Upsilon_n(x) \tag{B.41}$$
is the uncertainty distribution of $\tau_1 \wedge \tau_2 \wedge \cdots \wedge \tau_n$.


Exercise B.4: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. Show that the maximum
$$\xi = \eta_1 \vee \eta_2 \vee \cdots \vee \eta_m \vee \tau_1 \vee \tau_2 \vee \cdots \vee \tau_n \tag{B.42}$$
is an uncertain random variable whose chance distribution is
$$\Phi(x) = \Psi(x)\Upsilon(x) \tag{B.43}$$
where
$$\Psi(x) = \Psi_1(x)\Psi_2(x)\cdots\Psi_m(x) \tag{B.44}$$
is the probability distribution of $\eta_1 \vee \eta_2 \vee \cdots \vee \eta_m$, and
$$\Upsilon(x) = \Upsilon_1(x) \wedge \Upsilon_2(x) \wedge \cdots \wedge \Upsilon_n(x) \tag{B.45}$$
is the uncertainty distribution of $\tau_1 \vee \tau_2 \vee \cdots \vee \tau_n$.


Some Useful Theorems

In many cases, it is required to calculate $\mathrm{Ch}\{f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \le 0\}$. We may produce the chance distribution $\Phi(x)$ of $f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ by the operational law, and then the chance measure is just $\Phi(0)$. However, for convenience, we may use the following theorems.

Theorem B.12 (Liu [143]) Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$ and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with regular uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. If $f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is strictly increasing with respect to $\tau_1, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \ldots, \tau_n$, then
$$\mathrm{Ch}\{f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \le 0\} = \int_{\Re^m} G(y_1, \ldots, y_m)d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$
where $G(y_1, \ldots, y_m)$ is the root $\alpha$ of the equation
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1 - \alpha), \ldots, \Upsilon_n^{-1}(1 - \alpha)) = 0.$$
Proof: It follows from the definition of chance measure that for any numbers $y_1, \ldots, y_m$, the theorem is true if the function $G$ is
$$G(y_1, \ldots, y_m) = M\{f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n) \le 0\}.$$
Furthermore, by using Theorem 2.21, we know that $G$ is just the root $\alpha$. The theorem is proved.

Remark B.5: Sometimes, the equation may not have a root. In this case, if
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1 - \alpha), \ldots, \Upsilon_n^{-1}(1 - \alpha)) < 0$$
for all $\alpha$, then we set the root $\alpha = 1$; and if
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1 - \alpha), \ldots, \Upsilon_n^{-1}(1 - \alpha)) > 0$$
for all $\alpha$, then we set the root $\alpha = 0$.

Remark B.6: The root $\alpha$ may be estimated by the bisection method because
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1 - \alpha), \ldots, \Upsilon_n^{-1}(1 - \alpha))$$
is a strictly increasing function with respect to $\alpha$. See Figure B.2.
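Remark B.6 suggests computing $G(y_1, \ldots, y_m)$ by bisection. A generic Python sketch with the boundary conventions of Remark B.5; the check at the end assumes a linear uncertain variable $\mathcal{L}(0,1)$, whose inverse uncertainty distribution is $\Upsilon^{-1}(\alpha) = \alpha$:

```python
def root_alpha(g, tol=1e-8):
    """Bisection for the root of g(alpha) = 0, where g is strictly increasing
    on (0, 1), with the boundary conventions of Remark B.5."""
    lo, hi = tol, 1 - tol
    if g(hi) < 0:      # g < 0 for all alpha: the root is taken as 1
        return 1.0
    if g(lo) > 0:      # g > 0 for all alpha: the root is taken as 0
        return 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# toy check: f(y, tau) = y + tau with fixed y = -0.3 and tau ~ L(0,1);
# the function of alpha is -0.3 + alpha, so the root should be 0.3
print(root_alpha(lambda alpha: -0.3 + alpha))
```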

[Figure B.2: $f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1-\alpha), \ldots, \Upsilon_n^{-1}(1-\alpha))$ as a strictly increasing function of $\alpha$ (diagram omitted).]

Theorem B.13 (Liu [143]) Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$ and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with regular uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. If $f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is strictly increasing with respect to $\tau_1, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \ldots, \tau_n$, then
$$\mathrm{Ch}\{f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) > 0\} = \int_{\Re^m} G(y_1, \ldots, y_m)d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$
where $G(y_1, \ldots, y_m)$ is the root $\alpha$ of the equation
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(1 - \alpha), \ldots, \Upsilon_k^{-1}(1 - \alpha), \Upsilon_{k+1}^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha)) = 0.$$
Proof: It follows from the definition of chance measure that for any numbers $y_1, \ldots, y_m$, the theorem is true if the function $G$ is
$$G(y_1, \ldots, y_m) = M\{f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n) > 0\}.$$
Furthermore, by using Theorem 2.22, we know that $G$ is just the root $\alpha$. The theorem is proved.

Remark B.7: Sometimes, the equation may not have a root. In this case, if
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(1 - \alpha), \ldots, \Upsilon_k^{-1}(1 - \alpha), \Upsilon_{k+1}^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha)) < 0$$
for all $\alpha$, then we set the root $\alpha = 0$; and if
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(1 - \alpha), \ldots, \Upsilon_k^{-1}(1 - \alpha), \Upsilon_{k+1}^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha)) > 0$$
for all $\alpha$, then we set the root $\alpha = 1$.

Remark B.8: The root $\alpha$ may be estimated by the bisection method because
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(1 - \alpha), \ldots, \Upsilon_k^{-1}(1 - \alpha), \Upsilon_{k+1}^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha))$$
is a strictly decreasing function with respect to $\alpha$. See Figure B.3.

[Figure B.3: $f(y_1, \ldots, y_m, \Upsilon_1^{-1}(1-\alpha), \ldots, \Upsilon_k^{-1}(1-\alpha), \Upsilon_{k+1}^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha))$ as a strictly decreasing function of $\alpha$ (diagram omitted).]

Operational Law for Boolean System

Theorem B.14 (Liu [141]) Assume $\eta_1, \eta_2, \ldots, \eta_m$ are independent Boolean random variables, i.e.,
$$\eta_i = \begin{cases} 1 & \text{with probability measure } a_i \\ 0 & \text{with probability measure } 1 - a_i \end{cases} \tag{B.46}$$
for $i = 1, 2, \ldots, m$, and $\tau_1, \tau_2, \ldots, \tau_n$ are independent Boolean uncertain variables, i.e.,
$$\tau_j = \begin{cases} 1 & \text{with uncertain measure } b_j \\ 0 & \text{with uncertain measure } 1 - b_j \end{cases} \tag{B.47}$$
for $j = 1, 2, \ldots, n$. If $f$ is a Boolean function (not necessarily monotone), then
$$\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \tag{B.48}$$
is a Boolean uncertain random variable such that
$$\mathrm{Ch}\{\xi = 1\} = \sum_{(x_1, \ldots, x_m) \in \{0,1\}^m} \left(\prod_{i=1}^m \nu_i(x_i)\right) f^*(x_1, \ldots, x_m) \tag{B.49}$$
where
$$f^*(x_1, \ldots, x_m) = \begin{cases} \displaystyle\sup_{f(x_1, \ldots, x_m, y_1, \ldots, y_n) = 1} \min_{1 \le j \le n} \nu_j(y_j), & \text{if } \displaystyle\sup_{f(x_1, \ldots, x_m, y_1, \ldots, y_n) = 1} \min_{1 \le j \le n} \nu_j(y_j) < 0.5 \\[2mm] 1 - \displaystyle\sup_{f(x_1, \ldots, x_m, y_1, \ldots, y_n) = 0} \min_{1 \le j \le n} \nu_j(y_j), & \text{if } \displaystyle\sup_{f(x_1, \ldots, x_m, y_1, \ldots, y_n) = 1} \min_{1 \le j \le n} \nu_j(y_j) \ge 0.5 \end{cases} \tag{B.50}$$
$$\nu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1 \\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \quad (i = 1, 2, \ldots, m), \tag{B.51}$$
$$\nu_j(y_j) = \begin{cases} b_j, & \text{if } y_j = 1 \\ 1 - b_j, & \text{if } y_j = 0 \end{cases} \quad (j = 1, 2, \ldots, n). \tag{B.52}$$
Proof: At first, when $(x_1, \ldots, x_m)$ is given, $f(x_1, \ldots, x_m, \tau_1, \ldots, \tau_n)$ is essentially a Boolean function of uncertain variables. It follows from the operational law of uncertain variables that
$$M\{f(x_1, \ldots, x_m, \tau_1, \ldots, \tau_n) = 1\} = f^*(x_1, \ldots, x_m)$$
that is determined by (B.50). On the other hand, it follows from the operational law of uncertain random variables that
$$\mathrm{Ch}\{\xi = 1\} = \sum_{(x_1, \ldots, x_m) \in \{0,1\}^m} \left(\prod_{i=1}^m \nu_i(x_i)\right) M\{f(x_1, \ldots, x_m, \tau_1, \ldots, \tau_n) = 1\}.$$
Thus (B.49) is verified.


Remark B.9: When the uncertain variables disappear, the operational law becomes
$$\Pr\{\xi = 1\} = \sum_{(x_1, x_2, \ldots, x_m) \in \{0,1\}^m} \left(\prod_{i=1}^m \nu_i(x_i)\right) f(x_1, x_2, \ldots, x_m). \tag{B.53}$$

Remark B.10: When the random variables disappear, the operational law becomes
$$M\{\xi = 1\} = \begin{cases} \displaystyle\sup_{f(y_1, y_2, \ldots, y_n) = 1} \min_{1 \le j \le n} \nu_j(y_j), & \text{if } \displaystyle\sup_{f(y_1, y_2, \ldots, y_n) = 1} \min_{1 \le j \le n} \nu_j(y_j) < 0.5 \\[2mm] 1 - \displaystyle\sup_{f(y_1, y_2, \ldots, y_n) = 0} \min_{1 \le j \le n} \nu_j(y_j), & \text{if } \displaystyle\sup_{f(y_1, y_2, \ldots, y_n) = 1} \min_{1 \le j \le n} \nu_j(y_j) \ge 0.5. \end{cases} \tag{B.54}$$

Exercise B.5: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent Boolean random variables defined by (B.46) and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent Boolean uncertain variables defined by (B.47). Then the minimum
$$\xi = \eta_1 \wedge \eta_2 \wedge \cdots \wedge \eta_m \wedge \tau_1 \wedge \tau_2 \wedge \cdots \wedge \tau_n \tag{B.55}$$
is a Boolean uncertain random variable. Show that
$$\mathrm{Ch}\{\xi = 1\} = a_1 a_2 \cdots a_m (b_1 \wedge b_2 \wedge \cdots \wedge b_n). \tag{B.56}$$

Exercise B.6: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent Boolean random variables defined by (B.46) and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent Boolean uncertain variables defined by (B.47). Then the maximum
$$\xi = \eta_1 \vee \eta_2 \vee \cdots \vee \eta_m \vee \tau_1 \vee \tau_2 \vee \cdots \vee \tau_n \tag{B.57}$$
is a Boolean uncertain random variable. Show that
$$\mathrm{Ch}\{\xi = 1\} = 1 - (1 - a_1)(1 - a_2)\cdots(1 - a_m)(1 - b_1 \vee b_2 \vee \cdots \vee b_n). \tag{B.58}$$
Exercise B.7: Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent Boolean random variables defined by (B.46) and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent Boolean uncertain variables defined by (B.47). Then the $k$th largest value
$$\xi = k\text{-max}\,[\eta_1, \eta_2, \ldots, \eta_m, \tau_1, \tau_2, \ldots, \tau_n] \tag{B.59}$$
is a Boolean uncertain random variable. Show that
$$\mathrm{Ch}\{\xi = 1\} = \sum_{(x_1, x_2, \ldots, x_m) \in \{0,1\}^m} \left(\prod_{i=1}^m \nu_i(x_i)\right) f^*(x_1, x_2, \ldots, x_m) \tag{B.60}$$
where
$$f^*(x_1, x_2, \ldots, x_m) = k\text{-max}\,[x_1, x_2, \ldots, x_m, b_1, b_2, \ldots, b_n], \tag{B.61}$$
$$\nu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1 \\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \quad (i = 1, 2, \ldots, m). \tag{B.62}$$

(B.62)

Expected Value

Definition B.4 (Liu [140]) Let $\xi$ be an uncertain random variable. Then its expected value is defined by
$$E[\xi] = \int_0^{+\infty} \mathrm{Ch}\{\xi \ge r\}dr - \int_{-\infty}^0 \mathrm{Ch}\{\xi \le r\}dr \tag{B.63}$$
provided that at least one of the two integrals is finite.

Theorem B.15 (Liu [140]) Let $\xi$ be an uncertain random variable with chance distribution $\Phi$. If the expected value of $\xi$ exists, then
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx. \tag{B.64}$$

Proof: It follows from the chance inversion theorem that for almost all numbers $x$, we have $\mathrm{Ch}\{\xi \ge x\} = 1 - \Phi(x)$ and $\mathrm{Ch}\{\xi \le x\} = \Phi(x)$. By using the definition of expected value operator, we obtain
$$E[\xi] = \int_0^{+\infty} \mathrm{Ch}\{\xi \ge x\}dx - \int_{-\infty}^0 \mathrm{Ch}\{\xi \le x\}dx = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx.$$
Thus we obtain the equation (B.64).

Theorem B.16 Let $\xi$ be an uncertain random variable with chance distribution $\Phi$. If the expected value exists, then
$$E[\xi] = \int_{-\infty}^{+\infty} x\, d\Phi(x). \tag{B.65}$$
Proof: It follows from the change of variables of integral and Theorem B.15 that the expected value is
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx = \int_0^{+\infty} x\, d\Phi(x) + \int_{-\infty}^0 x\, d\Phi(x) = \int_{-\infty}^{+\infty} x\, d\Phi(x).$$
The theorem is proved.

Theorem B.17 Let $\xi$ be an uncertain random variable with regular chance distribution $\Phi$. If the expected value exists, then
$$E[\xi] = \int_0^1 \Phi^{-1}(\alpha)d\alpha. \tag{B.66}$$
Proof: It follows from the change of variables of integral and Theorem B.15 that the expected value is
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx = \int_{\Phi(0)}^1 \Phi^{-1}(\alpha)d\alpha + \int_0^{\Phi(0)} \Phi^{-1}(\alpha)d\alpha = \int_0^1 \Phi^{-1}(\alpha)d\alpha.$$
The theorem is proved.


Theorem B.18 (Liu [141]) Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, respectively, and let $\tau_1, \tau_2, \ldots, \tau_n$ be uncertain variables (not necessarily independent). Then the uncertain random variable
$$\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \tag{B.67}$$
has an expected value
$$E[\xi] = \int_{\Re^m} E[f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)]d\Psi_1(y_1)\cdots d\Psi_m(y_m) \tag{B.68}$$
where $E[f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)]$ is the expected value of the uncertain variable $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$ for any real numbers $y_1, \ldots, y_m$.

Proof: For simplicity, we only prove the case $m = n = 2$. Write the uncertainty distribution of $f(y_1, y_2, \tau_1, \tau_2)$ by $F(x; y_1, y_2)$ for any real numbers $y_1$ and $y_2$. Then
$$E[f(y_1, y_2, \tau_1, \tau_2)] = \int_0^{+\infty}(1 - F(x; y_1, y_2))dx - \int_{-\infty}^0 F(x; y_1, y_2)dx.$$
On the other hand, the uncertain random variable $\xi = f(\eta_1, \eta_2, \tau_1, \tau_2)$ has a chance distribution
$$\Phi(x) = \int_{\Re^2} F(x; y_1, y_2)d\Psi_1(y_1)d\Psi_2(y_2).$$
It follows from Theorem B.15 that
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0 \Phi(x)dx$$
$$= \int_0^{+\infty}\left(1 - \int_{\Re^2} F(x; y_1, y_2)d\Psi_1(y_1)d\Psi_2(y_2)\right)dx - \int_{-\infty}^0\int_{\Re^2} F(x; y_1, y_2)d\Psi_1(y_1)d\Psi_2(y_2)dx$$
$$= \int_{\Re^2}\left(\int_0^{+\infty}(1 - F(x; y_1, y_2))dx - \int_{-\infty}^0 F(x; y_1, y_2)dx\right)d\Psi_1(y_1)d\Psi_2(y_2)$$
$$= \int_{\Re^2} E[f(y_1, y_2, \tau_1, \tau_2)]d\Psi_1(y_1)d\Psi_2(y_2).$$
Thus the theorem is proved.


Example B.5: Let $\eta$ be a random variable and let $\tau$ be an uncertain variable. Assume $\eta$ has a probability distribution $\Psi$. It follows from Theorem B.18 that the uncertain random variable $\eta + \tau$ has an expected value
$$E[\eta + \tau] = \int_\Re E[y + \tau]d\Psi(y) = \int_\Re (y + E[\tau])d\Psi(y) = E[\eta] + E[\tau].$$
That is,
$$E[\eta + \tau] = E[\eta] + E[\tau]. \tag{B.69}$$

Exercise B.8: Let $\eta$ be a random variable and let $\tau$ be an uncertain variable. Assume $\eta$ has a probability distribution $\Psi$. Show that
$$E[\eta\tau] = E[\eta]E[\tau]. \tag{B.70}$$

Theorem B.19 (Liu [141]) Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables with probability distributions $\Psi_1, \Psi_2, \ldots, \Psi_m$, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables with uncertainty distributions $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$, respectively. If $f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is a strictly increasing function or a strictly decreasing function with respect to $\tau_1, \ldots, \tau_n$, then the uncertain random variable
$$\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n) \tag{B.71}$$
has an expected value
$$E[\xi] = \int_{\Re^m}\int_0^1 f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha))d\alpha\, d\Psi_1(y_1)\cdots d\Psi_m(y_m).$$
Proof: Since $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$ is a strictly increasing function or a strictly decreasing function with respect to $\tau_1, \ldots, \tau_n$, we have
$$E[f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)] = \int_0^1 f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_n^{-1}(\alpha))d\alpha.$$
It follows from Theorem B.18 that the result holds.

Remark B.11: If $f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is strictly increasing with respect to $\tau_1, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \ldots, \tau_n$, then the integrand in the formula of expected value $E[\xi]$ should be replaced with
$$f(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1 - \alpha), \ldots, \Upsilon_n^{-1}(1 - \alpha)).$$

Exercise B.9: Let $\eta$ be a random variable with probability distribution $\Psi$, and let $\tau$ be an uncertain variable with uncertainty distribution $\Upsilon$. Show that
$$E[\eta \vee \tau] = \int_\Re\int_0^1 \left(y \vee \Upsilon^{-1}(\alpha)\right)d\alpha\, d\Psi(y) \tag{B.72}$$
and
$$E[\eta \wedge \tau] = \int_\Re\int_0^1 \left(y \wedge \Upsilon^{-1}(\alpha)\right)d\alpha\, d\Psi(y). \tag{B.73}$$
Theorem B.20 (Liu [141], Linearity of Expected Value Operator) Assume $\eta_1$ and $\eta_2$ are random variables (not necessarily independent), $\tau_1$ and $\tau_2$ are independent uncertain variables, and $f_1$ and $f_2$ are measurable functions. Then
$$E[f_1(\eta_1, \tau_1) + f_2(\eta_2, \tau_2)] = E[f_1(\eta_1, \tau_1)] + E[f_2(\eta_2, \tau_2)]. \tag{B.74}$$

Proof: Since 1 and 2 are independent uncertain variables, for any real
numbers y1 and y2 , the functions f1 (y1 , 1 ) and f2 (y2 , 2 ) are also independent
uncertain variables. Thus
E[f1 (y1 , 1 ) + f2 (y2 , 2 )] = E[f1 (y1 , 1 )] + E[f2 (y2 , 2 )].
Let 1 and 2 be the probability distributions of random variables 1 and
2 , respectively. Then we have
E[f1 (1 , 1 ) + f2 (2 , 2 )]
Z
E[f1 (y1 , 1 ) + f2 (y2 , 2 )]d1 (y1 )d2 (y2 )
=
<2
Z
=
(E[f1 (y1 , 1 )] + E[f2 (y2 , 2 )])d1 (y1 )d2 (y2 )
<2
Z
Z
=
E[f1 (y1 , 1 )]d1 (y1 ) +
E[f2 (y2 , 2 )]d2 (y2 )
<

<

= E[f1 (1 , 1 )] + E[f2 (2 , 2 )].


The theorem is proved.
Exercise B.10: Assume η1 and η2 are random variables, and τ1 and τ2 are independent uncertain variables. Show that

    E[η1τ1 + η2τ2] = E[η1τ1] + E[η2τ2].    (B.75)

B.6  Variance

Definition B.5 (Liu [140]) Let ξ be an uncertain random variable with finite expected value e. Then the variance of ξ is

    V[ξ] = E[(ξ − e)²].    (B.76)

Since (ξ − e)² is a nonnegative uncertain random variable, we also have

    V[ξ] = ∫₀^{+∞} Ch{(ξ − e)² ≥ r} dr.    (B.77)

Theorem B.21 (Liu [140]) If ξ is an uncertain random variable with finite expected value, and a and b are real numbers, then

    V[aξ + b] = a²V[ξ].    (B.78)

Proof: Let e be the expected value of ξ. Then aξ + b has the expected value ae + b. Thus the variance is

    V[aξ + b] = E[(aξ + b − (ae + b))²] = E[a²(ξ − e)²] = a²V[ξ].

The theorem is verified.

Theorem B.22 (Liu [140]) Let ξ be an uncertain random variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e} = 1.

Proof: We first assume V[ξ] = 0. It follows from the equation (B.77) that

    ∫₀^{+∞} Ch{(ξ − e)² ≥ r} dr = 0

which implies Ch{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have

    Ch{(ξ − e)² = 0} = 1.

That is, Ch{ξ = e} = 1. Conversely, assume Ch{ξ = e} = 1. Then we immediately have Ch{(ξ − e)² = 0} = 1 and Ch{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

    V[ξ] = ∫₀^{+∞} Ch{(ξ − e)² ≥ r} dr = 0.

The theorem is proved.


How to Obtain Variance from Distributions?

Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn, respectively. Then

    ξ = f(η1, η2, …, ηm, τ1, τ2, …, τn)    (B.79)

is an uncertain random variable whose expected value e may be calculated from the probability and uncertainty distributions. Then the variance is

    V[ξ] = ∫₀^{+∞} Ch{(ξ − e)² ≥ x} dx
         = ∫₀^{+∞} ∫_{ℝ^m} M{(f(y1, …, ym, τ1, …, τn) − e)² ≥ x} dΨ1(y1)⋯dΨm(ym) dx
         = ∫_{ℝ^m} ∫₀^{+∞} M{(f(y1, …, ym, τ1, …, τn) − e)² ≥ x} dx dΨ1(y1)⋯dΨm(ym).

On the other hand, the subadditivity of uncertain measure says

    M{(f(y1, …, ym, τ1, …, τn) − e)² ≥ x} ≤ 1 − F(e + √x; y1, …, ym) + F(e − √x; y1, …, ym)

where F(x; y1, …, ym) is the uncertainty distribution of the uncertain variable f(y1, …, ym, τ1, …, τn) and is determined by Υ1, Υ2, …, Υn. Thus we have the following stipulation.

Stipulation B.1 (Guo and Wang [51]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn, respectively. Then

    ξ = f(η1, η2, …, ηm, τ1, τ2, …, τn)    (B.80)

has a variance

    V[ξ] = ∫_{ℝ^m} ∫₀^{+∞} (1 − F(e + √x; y1, …, ym) + F(e − √x; y1, …, ym)) dx dΨ1(y1)⋯dΨm(ym)    (B.81)

where F(x; y1, …, ym) is the uncertainty distribution of the uncertain variable f(y1, …, ym, τ1, …, τn) and is determined by Υ1, Υ2, …, Υn.
Exercise B.11: Let η be a random variable with probability distribution Ψ, and let τ be an uncertain variable with uncertainty distribution Υ. Show that the sum

    ξ = η + τ    (B.82)

has a variance

    V[ξ] = ∫_ℝ ∫₀^{+∞} (1 − Υ(e + √x − y) + Υ(e − √x − y)) dx dΨ(y).    (B.83)
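For concreteness, here is a small Python sketch (an illustration, not from the book) that evaluates the stipulated variance (B.83) for a standard normal η and a linear uncertain variable τ = L(1, 3), so that e = E[η + τ] = 2; the truncation bounds of the infinite integrals are illustrative choices.

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import dblquad

    # Uncertainty distribution of the linear uncertain variable L(1, 3)
    upsilon = lambda x: np.clip((x - 1.0) / 2.0, 0.0, 1.0)

    e = 2.0  # E[eta + tau] = E[eta] + E[tau] = 0 + 2

    # Integrand of (B.83): x runs over (0, +inf), y over the real line,
    # and dPsi(y) = psi(y) dy with psi the standard normal density
    integrand = lambda x, y: (1 - upsilon(e + np.sqrt(x) - y)
                              + upsilon(e - np.sqrt(x) - y)) * norm.pdf(y)

    # dblquad integrates the inner (first) argument over [0, 60] and the
    # outer (second) argument over [-8, 8]; both bounds are illustrative
    V, _ = dblquad(integrand, -8, 8, 0, 60)
    print(V)  # stipulated variance of eta + tau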

B.7  Law of Large Numbers

Theorem B.23 (Yao and Gao [227], Law of Large Numbers) Let η1, η2, … be iid random variables with a common probability distribution Ψ, and let τ1, τ2, … be iid uncertain variables. If f is a monotone function, then

    Sn = f(η1, τ1) + f(η2, τ2) + ⋯ + f(ηn, τn)    (B.84)

is a sequence of uncertain random variables and

    Sn/n → ∫_{−∞}^{+∞} f(y, τ1) dΨ(y)    (B.85)

in the sense of convergence in distribution as n → ∞.

Proof: The argument breaks into two cases. Case 1: Assume f(x, y) is a monotone increasing function with respect to y, and let Υ denote the common uncertainty distribution of τ1, τ2, … It is clear that the uncertain variable

    ∫_{−∞}^{+∞} f(y, τ1) dΨ(y)    (B.86)

has an inverse uncertainty distribution

    ∫_{−∞}^{+∞} f(y, Υ⁻¹(α)) dΨ(y).    (B.87)

In order to prove the theorem, it suffices to prove

    lim_{n→∞} Ch{ Sn/n ≤ ∫_{−∞}^{+∞} f(y, Υ⁻¹(α)) dΨ(y) } = α    (B.88)

for any α ∈ (0, 1).


Exercise B.12: Let η1, η2, … be iid random variables, and let τ1, τ2, … be iid uncertain variables. Then

    Sn = (η1 + τ1) + (η2 + τ2) + ⋯ + (ηn + τn)    (B.89)

is a sequence of uncertain random variables. Show that

    Sn/n → E[η1] + τ1    (B.90)

in the sense of convergence in distribution as n → ∞.

Exercise B.13: Let η1, η2, … be iid random variables, and let τ1, τ2, … be iid uncertain variables. Then

    Sn = η1τ1 + η2τ2 + ⋯ + ηnτn    (B.91)

is a sequence of uncertain random variables. Show that

    Sn/n → E[η1]τ1    (B.92)

in the sense of convergence in distribution as n → ∞.

B.8  Uncertain Random Programming

Assume that x is a decision vector, and ξ is an uncertain random vector. Since an uncertain random objective function f(x, ξ) cannot be directly minimized, we may minimize its expected value, i.e.,

    min_x E[f(x, ξ)].    (B.93)

Since the uncertain random constraints gj(x, ξ) ≤ 0, j = 1, 2, …, p do not produce a crisp feasible set, it is naturally desired that the uncertain random constraints hold with confidence levels α1, α2, …, αp. Then we have a set of chance constraints,

    Ch{gj(x, ξ) ≤ 0} ≥ αj,  j = 1, 2, …, p.    (B.94)


In order to obtain a decision with minimum expected objective value subject to a set of chance constraints, Liu [141] proposed the following uncertain random programming model,

    min_x  E[f(x, ξ)]
    subject to:
        Ch{gj(x, ξ) ≤ 0} ≥ αj,  j = 1, 2, …, p.    (B.95)

Definition B.6 (Liu [141]) A vector x is called a feasible solution to the uncertain random programming model (B.95) if

    Ch{gj(x, ξ) ≤ 0} ≥ αj    (B.96)

for j = 1, 2, …, p.

Definition B.7 (Liu [141]) A feasible solution x* is called an optimal solution to the uncertain random programming model (B.95) if

    E[f(x*, ξ)] ≤ E[f(x, ξ)]    (B.97)

for any feasible solution x.


Theorem B.24 (Liu [141]) Let 1 , 2 , , m be independent random variables with probability distributions 1 , 2 , , m , and let 1 , 2 , , n be
independent uncertain variables with uncertainty distributions 1 , 2 , , n ,
respectively. If f (x, 1 , , m , 1 , , n ) is a strictly increasing function
or a strictly decreasing function with respect to 1 , , n , then the expected
function
E[f (x, 1 , , m , 1 , , n )]
(B.98)
is equal to
Z

<m 0

1
1
f (x, y1 , , ym , 1
1 (), , n ())dd1 (y1 ) dm (ym ).

Proof: It follows from Theorem B.19 immediately.


Remark B.12: If f(x, η1, …, ηm, τ1, …, τn) is strictly increasing with respect to τ1, …, τk and strictly decreasing with respect to τ_{k+1}, …, τn, then the integrand in the formula of the expected value E[f(x, η1, …, ηm, τ1, …, τn)] should be replaced with

    f(x, y1, …, ym, Υ1⁻¹(α), …, Υk⁻¹(α), Υ_{k+1}⁻¹(1 − α), …, Υn⁻¹(1 − α)).

Theorem B.25 (Liu [141]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn, respectively. If gj(x, η1, …, ηm, τ1, …, τn) is a strictly increasing function with respect to τ1, …, τn, then the chance constraint

    Ch{gj(x, η1, …, ηm, τ1, …, τn) ≤ 0} ≥ αj    (B.99)

holds if and only if

    ∫_{ℝ^m} Gj(x, y1, …, ym) dΨ1(y1)⋯dΨm(ym) ≥ αj    (B.100)

where Gj(x, y1, …, ym) is the root α of the equation

    gj(x, y1, …, ym, Υ1⁻¹(α), …, Υn⁻¹(α)) = 0.    (B.101)

Proof: Since Gj(x, y1, …, ym) is the root of the equation (B.101), it follows from Theorem B.12 that the chance measure

    Ch{gj(x, η1, …, ηm, τ1, …, τn) ≤ 0}

is equal to the integral

    ∫_{ℝ^m} Gj(x, y1, …, ym) dΨ1(y1)⋯dΨm(ym).

Hence the chance constraint (B.99) holds if and only if (B.100) is true. The theorem is verified.
Remark B.13: Sometimes, the equation (B.101) may not have a root. In this case, if

    gj(x, y1, …, ym, Υ1⁻¹(α), …, Υn⁻¹(α)) < 0    (B.102)

for all α, then we set the root α = 1; and if

    gj(x, y1, …, ym, Υ1⁻¹(α), …, Υn⁻¹(α)) > 0    (B.103)

for all α, then we set the root α = 0.


Remark B.14: The root α may be estimated by the bisection method because gj(x, y1, …, ym, Υ1⁻¹(α), …, Υn⁻¹(α)) is a strictly increasing function with respect to α.
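As a concrete illustration of Remark B.14 (a sketch, not the book's code), the following Python function brackets the root α of (B.101) on (0, 1) by bisection; the constraint g, which bundles x, y1, …, ym and the inverse uncertainty distributions, is supplied by the caller, and the boundary conventions of Remark B.13 are applied first. The closing example data are hypothetical.

    def constraint_root(g, eps=1e-8, tol=1e-6):
        # Root alpha of g(alpha) = 0 on (0, 1), where g is strictly
        # increasing in alpha (Remark B.14)
        lo, hi = eps, 1.0 - eps
        if g(hi) < 0:        # g < 0 for all alpha: set the root to 1 (Remark B.13)
            return 1.0
        if g(lo) > 0:        # g > 0 for all alpha: set the root to 0 (Remark B.13)
            return 0.0
        while hi - lo > tol:   # invariant: g(lo) <= 0 <= g(hi)
            mid = (lo + hi) / 2.0
            if g(mid) <= 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    # Hypothetical example: g(x, y, tau) = y + tau - x with x = 3, y = 0.5 and
    # tau = L(1, 3), i.e. Upsilon^{-1}(alpha) = 1 + 2*alpha; the root of
    # 0.5 + (1 + 2*alpha) - 3 = 0 is alpha = 0.75
    print(constraint_root(lambda a: 0.5 + (1 + 2 * a) - 3))   # ~ 0.75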
Remark B.15: If gj(x, η1, …, ηm, τ1, …, τn) is strictly increasing with respect to τ1, …, τk and strictly decreasing with respect to τ_{k+1}, …, τn, then the equation (B.101) becomes

    gj(x, y1, …, ym, Υ1⁻¹(α), …, Υk⁻¹(α), Υ_{k+1}⁻¹(1 − α), …, Υn⁻¹(1 − α)) = 0.

Theorem B.26 (Liu [141]) Let η1, η2, …, ηm be independent random variables with probability distributions Ψ1, Ψ2, …, Ψm, and let τ1, τ2, …, τn be independent uncertain variables with uncertainty distributions Υ1, Υ2, …, Υn, respectively. If f(x, η1, …, ηm, τ1, …, τn) and gj(x, η1, …, ηm, τ1, …, τn) are strictly increasing functions with respect to τ1, …, τn for j = 1, 2, …, p, then the uncertain random programming

    min_x  E[f(x, η1, …, ηm, τ1, …, τn)]
    subject to:
        Ch{gj(x, η1, …, ηm, τ1, …, τn) ≤ 0} ≥ αj,  j = 1, 2, …, p

is equivalent to the crisp mathematical programming

    min_x  ∫_{ℝ^m} ∫₀¹ f(x, y1, …, ym, Υ1⁻¹(α), …, Υn⁻¹(α)) dα dΨ1(y1)⋯dΨm(ym)
    subject to:
        ∫_{ℝ^m} Gj(x, y1, …, ym) dΨ1(y1)⋯dΨm(ym) ≥ αj,  j = 1, 2, …, p

where Gj(x, y1, …, ym) are the roots of the equations

    gj(x, y1, …, ym, Υ1⁻¹(α), …, Υn⁻¹(α)) = 0    (B.104)

for j = 1, 2, …, p, respectively.

Proof: It follows from Theorems B.24 and B.25 immediately.
After an uncertain random programming model is converted into a crisp mathematical programming, we may solve it by classical numerical methods (e.g., iterative methods) or intelligent algorithms (e.g., genetic algorithms).
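The conversion can be automated end to end. Below is a minimal Python sketch (illustrative, with hypothetical problem data, not the book's method) that solves a one-dimensional instance of Theorem B.26 by crude random search: minimize E[x² + η + τ] subject to Ch{η + τ − x ≤ 0} ≥ 0.8, with η standard normal and τ = L(1, 3); the root Gj of (B.104) is available in closed form here.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    alphas = np.linspace(0.005, 0.995, 100)           # grid on (0, 1)
    ys = norm.ppf(np.linspace(0.005, 0.995, 200))     # quantiles of eta

    inv_upsilon = lambda a: 1 + 2 * a                 # tau = L(1, 3)

    def objective(x):   # converted objective of Theorem B.26
        return np.mean([np.mean(x**2 + y + inv_upsilon(alphas)) for y in ys])

    def chance(x):      # left-hand side of (B.100); root of y + 1 + 2a - x = 0
        G = np.clip((x - ys - 1) / 2, 0, 1)           # Remark B.13 via clipping
        return G.mean()

    best_x, best_val = None, np.inf
    for x in rng.uniform(0, 10, 2000):                # crude random search
        if chance(x) >= 0.8 and objective(x) < best_val:
            best_x, best_val = x, objective(x)
    print(best_x, best_val)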

B.9  Uncertain Random Risk Analysis

The study of uncertain random risk analysis was started by Liu and Ralescu [143] with the concept of risk index.

Definition B.8 (Liu and Ralescu [143]) Assume that a system contains uncertain random factors ξ1, ξ2, …, ξn, and has a loss function f. Then the risk index is the chance measure that the system is loss-positive, i.e.,

    Risk = Ch{f(ξ1, ξ2, …, ξn) > 0}.    (B.105)

If all uncertain random factors degenerate to random ones, then the risk index is the probability measure that the system is loss-positive (Roy [185]). If all uncertain random factors degenerate to uncertain ones, then the risk index is the uncertain measure that the system is loss-positive (Liu [119]).
Theorem B.27 (Liu and Ralescu [143], Risk Index Theorem) Assume a system contains independent random variables η1, η2, …, ηm with probability distributions Ψ1, Ψ2, …, Ψm and independent uncertain variables τ1, τ2, …, τn with regular uncertainty distributions Υ1, Υ2, …, Υn, respectively. If the loss function f(η1, …, ηm, τ1, …, τn) is strictly increasing with respect to τ1, …, τk and strictly decreasing with respect to τ_{k+1}, …, τn, then the risk index is

    Risk = ∫_{ℝ^m} G(y1, …, ym) dΨ1(y1)⋯dΨm(ym)    (B.106)

where G(y1, …, ym) is the root α of the equation

    f(y1, …, ym, Υ1⁻¹(1 − α), …, Υk⁻¹(1 − α), Υ_{k+1}⁻¹(α), …, Υn⁻¹(α)) = 0.

Proof: It follows from Definition B.8 and Theorem B.13 immediately.


Remark B.16: Sometimes, the equation may not have a root. In this case, if

    f(y1, …, ym, Υ1⁻¹(1 − α), …, Υk⁻¹(1 − α), Υ_{k+1}⁻¹(α), …, Υn⁻¹(α)) < 0

for all α, then we set the root α = 0; and if

    f(y1, …, ym, Υ1⁻¹(1 − α), …, Υk⁻¹(1 − α), Υ_{k+1}⁻¹(α), …, Υn⁻¹(α)) > 0

for all α, then we set the root α = 1.

Remark B.17: The root α may be estimated by the bisection method because f(y1, …, ym, Υ1⁻¹(1 − α), …, Υk⁻¹(1 − α), Υ_{k+1}⁻¹(α), …, Υn⁻¹(α)) is a strictly decreasing function with respect to α.
Exercise B.14: (Series System) Consider a series system in which there are m elements whose lifetimes are independent random variables η1, η2, …, ηm with probability distributions Ψ1, Ψ2, …, Ψm and n elements whose lifetimes are independent uncertain variables τ1, τ2, …, τn with uncertainty distributions Υ1, Υ2, …, Υn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f = T − η1 ∧ η2 ∧ ⋯ ∧ ηm ∧ τ1 ∧ τ2 ∧ ⋯ ∧ τn.    (B.107)

Show that the risk index is

    Risk = a + b − ab    (B.108)

where

    a = 1 − (1 − Ψ1(T))(1 − Ψ2(T))⋯(1 − Ψm(T)),    (B.109)
    b = Υ1(T) ∨ Υ2(T) ∨ ⋯ ∨ Υn(T).    (B.110)
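As a quick numerical check of (B.108)-(B.110) (an illustration with hypothetical data, not from the book), take T = 1, two random lifetimes that are exponential with rate 0.2, and two linear uncertain lifetimes L(0.5, 4) and L(1, 5):

    import numpy as np

    T = 1.0
    # Probability distributions of the two exponential random lifetimes at T
    psi = [1 - np.exp(-0.2 * T)] * 2
    # Linear uncertainty distributions L(0.5, 4) and L(1, 5) evaluated at T
    lin = lambda a_, b_: np.clip((T - a_) / (b_ - a_), 0, 1)
    ups = [lin(0.5, 4), lin(1, 5)]

    a = 1 - np.prod([1 - p for p in psi])   # (B.109): some random element fails
    b = max(ups)                            # (B.110): some uncertain element fails
    print(a + b - a * b)                    # (B.108): risk index of the series system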

Exercise B.15: (Parallel System) Consider a parallel system in which there are m elements whose lifetimes are independent random variables η1, η2, …, ηm with probability distributions Ψ1, Ψ2, …, Ψm and n elements whose lifetimes are independent uncertain variables τ1, τ2, …, τn with uncertainty distributions Υ1, Υ2, …, Υn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f = T − η1 ∨ η2 ∨ ⋯ ∨ ηm ∨ τ1 ∨ τ2 ∨ ⋯ ∨ τn.    (B.111)

Show that the risk index is

    Risk = ab    (B.112)

where

    a = Ψ1(T)Ψ2(T)⋯Ψm(T),    (B.113)
    b = Υ1(T) ∧ Υ2(T) ∧ ⋯ ∧ Υn(T).    (B.114)

Exercise B.16: (Standby System) Consider a standby system in which there are m elements whose lifetimes are independent random variables η1, η2, …, ηm with probability distributions Ψ1, Ψ2, …, Ψm and n elements whose lifetimes are independent uncertain variables τ1, τ2, …, τn with uncertainty distributions Υ1, Υ2, …, Υn, respectively. If the loss is understood as the case that the system fails before the time T, then the loss function is

    f = T − (η1 + η2 + ⋯ + ηm + τ1 + τ2 + ⋯ + τn).    (B.115)

Show that the risk index is

    Risk = ∫_{ℝ^m} G(y1, y2, …, ym) dΨ1(y1)dΨ2(y2)⋯dΨm(ym)    (B.116)

where G(y1, y2, …, ym) is the root α of the equation

    Υ1⁻¹(α) + Υ2⁻¹(α) + ⋯ + Υn⁻¹(α) = T − (y1 + y2 + ⋯ + ym).    (B.117)

Remark B.18: As a substitute for the risk index, Liu and Ralescu [144] suggested a concept of value-at-risk,

    VaR(α) = sup{x | Ch{f(ξ1, ξ2, …, ξn) ≥ x} ≥ α}.    (B.118)

Note that VaR(α) represents the maximum possible loss when α percent of the right tail distribution is ignored. In other words, the loss will exceed VaR(α) with chance measure α. Let Φ be the chance distribution of f(ξ1, ξ2, …, ξn). It is easy to verify that

    VaR(α) = Φ⁻¹(1 − α).    (B.119)

When the uncertain random variables degenerate to random variables, the value-at-risk becomes the one in Morgan [161]. When the uncertain random variables degenerate to uncertain variables, the value-at-risk becomes the one in Peng [171].
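A numerical sketch of (B.119) (illustrative, with hypothetical data): for the loss f = η + τ with η standard normal and τ = L(1, 3), the chance distribution is Φ(x) = ∫_ℝ Υ(x − y) dΨ(y), which can be tabulated on a quantile grid and inverted at level 1 − α by bisection.

    import numpy as np
    from scipy.stats import norm

    upsilon = lambda x: np.clip((x - 1) / 2, 0, 1)      # tau = L(1, 3)
    ys = norm.ppf(np.linspace(0.0005, 0.9995, 2000))    # quantile grid of eta

    def Phi(x):   # chance distribution of the loss f = eta + tau
        return upsilon(x - ys).mean()

    def value_at_risk(alpha, lo=-10, hi=15, tol=1e-6):
        # VaR(alpha) = Phi^{-1}(1 - alpha) by bisection (Phi is nondecreasing)
        target = 1 - alpha
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if Phi(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(value_at_risk(0.05))   # loss level exceeded with chance measure 0.05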

B.10  Uncertain Random Reliability Analysis

The study of uncertain random reliability analysis was started by Wen and Kang [209] with the concept of reliability index.

Definition B.9 (Wen and Kang [209]) Assume a Boolean system has uncertain random elements ξ1, ξ2, …, ξn and a structure function f. Then the reliability index is the chance measure that the system is working, i.e.,

    Reliability = Ch{f(ξ1, ξ2, …, ξn) = 1}.    (B.120)

If all uncertain random elements degenerate to random ones, then the reliability index is the probability measure that the system is working. If all uncertain random elements degenerate to uncertain ones, then the reliability index (Liu [119]) is the uncertain measure that the system is working.
Theorem B.28 (Wen and Kang [209], Reliability Index Theorem) Assume that a system has a structure function f and contains independent random elements η1, η2, …, ηm with reliabilities a1, a2, …, am, and independent uncertain elements τ1, τ2, …, τn with reliabilities b1, b2, …, bn, respectively. Then the reliability index is

    Reliability = Σ_{(x1, …, xm) ∈ {0,1}^m} ( ∏_{i=1}^{m} μi(xi) ) f*(x1, …, xm)    (B.121)

where

    f*(x1, …, xm) = sup_{f(x1, …, xm, y1, …, yn) = 1} min_{1 ≤ j ≤ n} νj(yj),
        if sup_{f(x1, …, xm, y1, …, yn) = 1} min_{1 ≤ j ≤ n} νj(yj) < 0.5;

    f*(x1, …, xm) = 1 − sup_{f(x1, …, xm, y1, …, yn) = 0} min_{1 ≤ j ≤ n} νj(yj),
        if sup_{f(x1, …, xm, y1, …, yn) = 1} min_{1 ≤ j ≤ n} νj(yj) ≥ 0.5,    (B.122)

    μi(xi) = ai if xi = 1, and 1 − ai if xi = 0  (i = 1, 2, …, m),    (B.123)

    νj(yj) = bj if yj = 1, and 1 − bj if yj = 0  (j = 1, 2, …, n).    (B.124)

Proof: It follows from Definition B.9 and Theorem B.14 immediately.
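Formula (B.121) can be evaluated by direct enumeration over the 2^m random-element states and, inside f*, the 2^n uncertain-element states. The brute-force Python sketch below (an illustration, feasible only for small m and n; the closing structure function is a hypothetical example) follows (B.121)-(B.124) line by line.

    from itertools import product

    def reliability_index(f, a, b):
        # Reliability index (B.121)-(B.124) of a Boolean system; a and b are
        # the random and uncertain element reliabilities, and f maps a 0/1
        # tuple of length m + n to 0 or 1
        m, n = len(a), len(b)
        total = 0.0
        for xs in product((0, 1), repeat=m):
            prob = 1.0
            for ai, xi in zip(a, xs):                 # (B.123)
                prob *= ai if xi else 1 - ai
            # sup-min over uncertain states with f = 1 and with f = 0
            sup1 = sup0 = 0.0
            for ys in product((0, 1), repeat=n):
                nu = min(bj if yj else 1 - bj
                         for bj, yj in zip(b, ys))    # (B.124)
                if f(xs + ys):
                    sup1 = max(sup1, nu)
                else:
                    sup0 = max(sup0, nu)
            fstar = sup1 if sup1 < 0.5 else 1 - sup0  # (B.122)
            total += prob * fstar                     # (B.121)
        return total

    # Hypothetical series system of 2 random and 2 uncertain elements
    series = lambda z: int(all(z))
    print(reliability_index(series, [0.9, 0.8], [0.7, 0.6]))
    # matches (B.126): 0.9 * 0.8 * min(0.7, 0.6) = 0.432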


Exercise B.17: (Series System) Consider a series system in which there are m independent random elements η1, η2, …, ηm with reliabilities a1, a2, …, am, and n independent uncertain elements τ1, τ2, …, τn with reliabilities b1, b2, …, bn, respectively. Note that the structure function is

    f = η1 ∧ η2 ∧ ⋯ ∧ ηm ∧ τ1 ∧ τ2 ∧ ⋯ ∧ τn.    (B.125)

Show that the reliability index is

    Reliability = a1 a2 ⋯ am (b1 ∧ b2 ∧ ⋯ ∧ bn).    (B.126)

Exercise B.18: (Parallel System) Consider a parallel system in which there are m independent random elements η1, η2, …, ηm with reliabilities a1, a2, …, am, and n independent uncertain elements τ1, τ2, …, τn with reliabilities b1, b2, …, bn, respectively. Note that the structure function is

    f = η1 ∨ η2 ∨ ⋯ ∨ ηm ∨ τ1 ∨ τ2 ∨ ⋯ ∨ τn.    (B.127)

Show that the reliability index is

    Reliability = 1 − (1 − a1)(1 − a2)⋯(1 − am)(1 − b1 ∨ b2 ∨ ⋯ ∨ bn).    (B.128)
Exercise B.19: (k-out-of-(m + n) System) Consider a k-out-of-(m + n) system in which there are m independent random elements η1, η2, …, ηm with reliabilities a1, a2, …, am, and n independent uncertain elements τ1, τ2, …, τn with reliabilities b1, b2, …, bn, respectively. Note that the structure function is

    f = k-max [η1, η2, …, ηm, τ1, τ2, …, τn].    (B.129)

Show that the reliability index is

    Reliability = Σ_{(x1, x2, …, xm) ∈ {0,1}^m} ( ∏_{i=1}^{m} μi(xi) ) f*(x1, x2, …, xm)    (B.130)

where

    f*(x1, x2, …, xm) = k-max [x1, x2, …, xm, b1, b2, …, bn],    (B.131)

    μi(xi) = ai if xi = 1, and 1 − ai if xi = 0  (i = 1, 2, …, m).    (B.132)

B.11  Uncertain Random Graph

In classic graph theory, the edges and vertices are all deterministic: they either exist or not. However, in practical applications, some indeterminacy factors will no doubt appear in graphs. Thus it is reasonable to assume that in a graph some edges exist with some degrees in probability measure and others exist with some degrees in uncertain measure. In order to model this problem, let us introduce the concept of uncertain random graph by using chance theory.

We say a graph is of order n if it has n vertices labeled by 1, 2, …, n. In this section, we assume the graph is always of order n, and has a collection of vertices,

    V = {1, 2, …, n}.    (B.133)

Let us define two collections of edges,

    U = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are uncertain edges},    (B.134)
    R = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are random edges}.    (B.135)

Note that all deterministic edges are regarded as special uncertain ones. Then U ∪ R = {(i, j) | 1 ≤ i < j ≤ n} contains n(n − 1)/2 edges. We will call

    T = ( α11  α12  …  α1n )
        ( α21  α22  …  α2n )
        (  ⋮    ⋮        ⋮  )
        ( αn1  αn2  …  αnn )    (B.136)

an uncertain random adjacency matrix if αij represent the truth values in uncertain measure or probability measure that the edges between vertices i and j exist, i, j = 1, 2, …, n, respectively. Note that αii = 0 for i = 1, 2, …, n, and T is a symmetric matrix, i.e., αij = αji for i, j = 1, 2, …, n.
    ( 0    0.8  0    0.5 )
    ( 0.8  0    1    0   )
    ( 0    1    0    0.3 )
    ( 0.5  0    0.3  0   )

    Figure B.4: An Uncertain Random Graph (drawing omitted; the matrix above is its uncertain random adjacency matrix)


Definition B.10 (Liu [130]) Assume V is the collection of vertices, U is the
collection of uncertain edges, R is the collection of random edges, and T is
the uncertain random adjacency matrix. Then the quartette (V, U, R, T) is
said to be an uncertain random graph.
Please note that the uncertain random graph becomes a random graph
(Erd
os and Renyi [34], Gilbert [50]) if the collection U of uncertain edges
vanishes; and becomes an uncertain graph (Gao and Gao [44]) if the collection
R of random edges vanishes.


In order to deal with uncertain random graphs, let us introduce some symbols. Write

    X = ( x11  x12  …  x1n )
        ( x21  x22  …  x2n )
        (  ⋮    ⋮        ⋮  )
        ( xn1  xn2  …  xnn )    (B.137)

and

    𝕏 = { X | xij = 0 or 1 if (i, j) ∈ R; xij = 0 if (i, j) ∈ U;
              xij = xji, i, j = 1, 2, …, n; xii = 0, i = 1, 2, …, n }.    (B.138)

For each given matrix

    Y = ( y11  y12  …  y1n )
        ( y21  y22  …  y2n )
        (  ⋮    ⋮        ⋮  )
        ( yn1  yn2  …  ynn )    (B.139)

the extension class of Y is defined by

    Y* = { X | xij = yij if (i, j) ∈ R; xij = 0 or 1 if (i, j) ∈ U;
               xij = xji, i, j = 1, 2, …, n; xii = 0, i = 1, 2, …, n }.    (B.140)

Example B.6: (Connectivity Index) An uncertain random graph is connected for some realizations of uncertain and random edges, and disconnected for some other realizations. In order to show how likely an uncertain random graph is connected, the connectivity index of an uncertain random graph is defined as the chance measure that the uncertain random graph is connected. Let (V, U, R, T) be an uncertain random graph. Liu [130] proved that the connectivity index is

    ρ = Σ_{Y ∈ 𝕏} ( ∏_{(i,j) ∈ R} νij(Y) ) f*(Y)    (B.141)

where

    f*(Y) = sup_{X ∈ Y*, f(X) = 1} min_{(i,j) ∈ U} νij(X),
        if sup_{X ∈ Y*, f(X) = 1} min_{(i,j) ∈ U} νij(X) < 0.5;

    f*(Y) = 1 − sup_{X ∈ Y*, f(X) = 0} min_{(i,j) ∈ U} νij(X),
        if sup_{X ∈ Y*, f(X) = 1} min_{(i,j) ∈ U} νij(X) ≥ 0.5,

    νij(X) = αij if xij = 1, and 1 − αij if xij = 0,  (i, j) ∈ U ∪ R,    (B.142)

    f(X) = 1 if I + X + X² + ⋯ + X^{n−1} > 0, and 0 otherwise,    (B.143)

𝕏 is the class of matrices satisfying (B.138), and Y* is the extension class of Y satisfying (B.140).
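A brute-force evaluation of (B.141)-(B.143) is possible for small graphs. The Python sketch below (an illustration, not the book's code) computes the connectivity index of an order-4 graph with the adjacency values of Figure B.4 under an assumed split of edge types: edges (1,2) and (3,4) uncertain with measures 0.8 and 0.3, edge (1,4) random with probability 0.5, and edge (2,3) deterministic (a special uncertain edge of measure 1). Connectivity is tested through the matrix-power criterion (B.143).

    import numpy as np
    from itertools import product

    n = 4
    U = [(0, 1), (2, 3)]          # assumed uncertain edges
    R = [(0, 3)]                  # assumed random edge
    alpha = {(0, 1): 0.8, (2, 3): 0.3, (0, 3): 0.5, (1, 2): 1.0}

    def connected(edges_on):      # f(X) of (B.143): I + X + ... + X^{n-1} > 0
        X = np.zeros((n, n))
        for (i, j) in edges_on:
            X[i, j] = X[j, i] = 1
        S, P = np.eye(n), np.eye(n)
        for _ in range(n - 1):
            P = P @ X
            S += P
        return (S > 0).all()

    nu = lambda e, on: alpha[e] if on else 1 - alpha[e]   # (B.142)
    UE = U + [(1, 2)]             # deterministic edge as a fixed uncertain one

    rho = 0.0
    for ybits in product((0, 1), repeat=len(R)):          # realizations Y in X
        prob = np.prod([nu(e, on) for e, on in zip(R, ybits)])
        sup1 = sup0 = 0.0
        for xbits in product((0, 1), repeat=len(UE)):     # extension class Y*
            m = min(nu(e, on) for e, on in zip(UE, xbits))
            on_edges = [e for e, on in zip(R, ybits) if on] + \
                       [e for e, on in zip(UE, xbits) if on]
            if connected(on_edges):
                sup1 = max(sup1, m)
            else:
                sup0 = max(sup0, m)
        fstar = sup1 if sup1 < 0.5 else 1 - sup0          # branch rule of f*
        rho += prob * fstar
    print(rho)   # connectivity index of the assumed graph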
Remark B.19: If the uncertain random graph becomes a random graph, then the connectivity index is

    ρ = Σ_{X ∈ 𝕏} ( ∏_{1 ≤ i < j ≤ n} νij(X) ) f(X)    (B.144)

where

    𝕏 = { X | xij = 0 or 1, i, j = 1, 2, …, n;
              xij = xji, i, j = 1, 2, …, n; xii = 0, i = 1, 2, …, n }.    (B.145)

Remark B.20: (Gao and Gao [44]) If the uncertain random graph becomes an uncertain graph, then the connectivity index is

    ρ = sup_{X ∈ 𝕏, f(X) = 1} min_{1 ≤ i < j ≤ n} νij(X),
        if sup_{X ∈ 𝕏, f(X) = 1} min_{1 ≤ i < j ≤ n} νij(X) < 0.5;

    ρ = 1 − sup_{X ∈ 𝕏, f(X) = 0} min_{1 ≤ i < j ≤ n} νij(X),
        if sup_{X ∈ 𝕏, f(X) = 1} min_{1 ≤ i < j ≤ n} νij(X) ≥ 0.5    (B.146)

where 𝕏 becomes

    𝕏 = { X | xij = 0 or 1, i, j = 1, 2, …, n;
              xij = xji, i, j = 1, 2, …, n; xii = 0, i = 1, 2, …, n }.

Exercise B.20: An Euler circuit in a graph is a circuit that passes through each edge exactly once. In other words, a graph has an Euler circuit if it can be drawn on paper without ever lifting the pencil and without retracing any edge. It has been proved that a graph has an Euler circuit if and only if it is connected and each vertex has an even degree (i.e., the number of edges adjacent to that vertex is even). In order to measure how likely an uncertain random graph is to have an Euler circuit, the Euler index is defined as the chance measure that the uncertain random graph has an Euler circuit. Please give a formula for calculating the Euler index.

B.12  Uncertain Random Network

The term network is a synonym for a weighted graph, where the weights may be understood as cost, distance, or time consumed. In this section, we assume the uncertain random network is always of order n, and has a collection of nodes,

    N = {1, 2, …, n}    (B.147)

where 1 is always the source node, and n is always the destination node.
Let us define two collections of arcs,

    U = {(i, j) | (i, j) are uncertain arcs},    (B.148)
    R = {(i, j) | (i, j) are random arcs}.    (B.149)

Note that all deterministic arcs are regarded as special uncertain ones. Let wij denote the weights of arcs (i, j), (i, j) ∈ U ∪ R, respectively. Then wij are uncertain variables if (i, j) ∈ U, and random variables if (i, j) ∈ R. Write

    W = {wij | (i, j) ∈ U ∪ R}.    (B.150)

Definition B.11 (Liu [130]) Assume N is the collection of nodes, U is the collection of uncertain arcs, R is the collection of random arcs, and W is the collection of uncertain and random weights. Then the quartette (N, U, R, W) is said to be an uncertain random network.

Please note that the uncertain random network becomes a random network (Frank and Hakimi [38]) if all weights are random variables, and becomes an uncertain network (Liu [120]) if all weights are uncertain variables.
    Figure B.5: An Uncertain Random Network (drawing omitted)


Figure B.5 shows an uncertain random network (N, U, R, W) of order 6 in
which
N = {1, 2, 3, 4, 5, 6},
(B.151)
U = {(1, 2), (1, 3), (2, 4), (2, 5), (3, 4), (3, 5)},

(B.152)

R = {(4, 6), (5, 6)},

(B.153)

W = {w12 , w13 , w24 , w25 , w34 , w35 , w46 , w56 }.

(B.154)


Example B.7: (Shortest Path Distribution) Consider an uncertain random network (N, U, R, W). Assume the uncertain weights wij have uncertainty distributions Υij for (i, j) ∈ U, and the random weights wij have probability distributions Ψij for (i, j) ∈ R, respectively. Then the shortest path length from the source node to the destination node has a chance distribution

    Φ(x) = ∫₀^{+∞} ⋯ ∫₀^{+∞} F(x; yij, (i, j) ∈ R) ∏_{(i,j) ∈ R} dΨij(yij)    (B.155)

where F(x; yij, (i, j) ∈ R) is determined by its inverse uncertainty distribution

    F⁻¹(α; yij, (i, j) ∈ R) = f(cij, (i, j) ∈ U ∪ R),    (B.156)

    cij = Υij⁻¹(α) if (i, j) ∈ U, and cij = yij if (i, j) ∈ R,    (B.157)

and f may be calculated by the Dijkstra algorithm (Dijkstra [30]) for each given α.
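The recipe (B.155)-(B.157) is easy to prototype: fix α, replace each uncertain weight by its α-quantile, sample the random weights, and run Dijkstra. The Python sketch below (an illustration with hypothetical linear uncertain weights and exponential random weights on the arc structure of Figure B.5) estimates Φ at one point by Monte Carlo over the random arcs only.

    import heapq
    import numpy as np

    rng = np.random.default_rng(1)
    # Assumed data (not from the book): uncertain arcs carry linear uncertain
    # weights L(a, b); the two random arcs carry exponential(1) weights
    UARC = {(1, 2): (3, 7), (1, 3): (2, 9), (2, 4): (1, 4),
            (2, 5): (2, 6), (3, 4): (3, 5), (3, 5): (1, 8)}
    RARC = [(4, 6), (5, 6)]

    def dijkstra(w, src=1, dst=6):
        # Shortest path length under arc weights w[(i, j)] (Dijkstra [30])
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")):
                continue
            for (i, j), wij in w.items():
                if i == v and d + wij < dist.get(j, float("inf")):
                    dist[j] = d + wij
                    heapq.heappush(heap, (dist[j], j))
        return dist[dst]

    def F_inv(alpha, y):
        # (B.156)-(B.157): uncertain arcs at their alpha-quantiles, random at y
        c = {e: a + (b - a) * alpha for e, (a, b) in UARC.items()}
        c.update(y)
        return dijkstra(c)

    def Phi(x, samples=500, grid=100):
        # (B.155) by Monte Carlo over the random arcs; for each sample y,
        # F(x; y) is the fraction of the alpha-grid with F_inv(alpha, y) <= x
        alphas = np.linspace(0.005, 0.995, grid)
        total = 0.0
        for _ in range(samples):
            y = {e: rng.exponential(1.0) for e in RARC}
            total += np.mean([F_inv(a, y) <= x for a in alphas])
        return total / samples

    print(Phi(8.0))   # chance that the shortest path length is at most 8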
Remark B.21: If the uncertain random network becomes a random network, then the probability distribution of the shortest path length is

    Ψ(x) = ∫_{f(yij, (i,j) ∈ R) ≤ x} ∏_{(i,j) ∈ R} dΨij(yij).    (B.158)

Remark B.22: (Gao [45]) If the uncertain random network becomes an uncertain network, then the inverse uncertainty distribution of the shortest path length is

    Φ⁻¹(α) = f(Υij⁻¹(α), (i, j) ∈ U).    (B.159)
Exercise B.21: The maximum flow problem is to find a flow with maximum value from the source node to the destination node in an uncertain random network. What is the distribution of the maximum flow?

B.13  Bibliographic Notes

Probability theory was developed by Kolmogorov [79] in 1933 for studying random phenomena, while uncertainty theory was founded by Liu [113] in 2007 for modeling human uncertainty. However, in many cases, uncertainty and randomness simultaneously appear in a complex system. In order to describe this phenomenon, chance theory was pioneered by Liu [140] in 2013 with the concepts of uncertain random variable and chance measure. Liu [140] also proposed the concepts of chance distribution, expected value and variance of uncertain random variables. As an important contribution to chance theory, Liu [141] presented an operational law of uncertain random variables. In addition, Guo and Wang [51] proved a formula for calculating the variance of uncertain random variables, Yao and Gao [227] verified a law of large numbers for uncertain random variables, and Hou [57] investigated the distance between uncertain random variables.
Stochastic programming was first studied by Dantzig [25] in 1955, while uncertain programming was first proposed by Liu [115] in 2009. In order to model optimization problems with not only uncertainty but also randomness, uncertain random programming was pioneered by Liu [141] in 2013. As extensions, Zhou, Yang and Wang [251] proposed uncertain random multiobjective programming for optimizing multiple, noncommensurable and conflicting objectives, Qin [179] proposed uncertain random goal programming in order to satisfy as many goals as possible in the order specified, and Ke [74] proposed uncertain random multilevel programming for studying decentralized decision systems in which the leader and followers may have their own decision variables and objective functions. After that, uncertain random programming was developed steadily and applied widely.
Probabilistic risk analysis dates back to 1952 when Roy [185] proposed his safety-first criterion for portfolio selection. Another important contribution is the probabilistic value-at-risk methodology developed by Morgan [161] in 1996. On the other hand, uncertain risk analysis was proposed by Liu [119] in 2010 for evaluating the risk index, that is, the uncertain measure of an uncertain system being loss-positive. In addition, Peng [171] developed an uncertain value-at-risk methodology in 2013. More generally, in order to quantify the risk of uncertain random systems, Liu and Ralescu [143] invented the tool of risk index in uncertain random risk analysis. Furthermore, an uncertain random value-at-risk methodology was presented by Liu and Ralescu [144].
Probabilistic reliability analysis traces back to 1944 when Pugsley [175] first proposed structural accident rates for the aeronautics industry. Nowadays, probabilistic reliability analysis has become a widely used discipline. As a new methodology, uncertain reliability analysis was developed by Liu [119] in 2010 for evaluating the reliability index. More generally, for dealing with uncertain random systems, Wen and Kang [209] presented the tool of uncertain random reliability analysis.
Random graph was defined by Erdős and Rényi [34] in 1959 and independently by Gilbert [50] at nearly the same time. As an alternative, uncertain graph was proposed by Gao and Gao [44] in 2013 via uncertainty theory. Furthermore, Liu [130] assumed that in a graph some edges exist with some degrees in probability measure and others exist with some degrees in uncertain measure, and defined the concept of uncertain random graph.
Random network was first investigated by Frank and Hakimi [38] in 1965 for modeling communication networks with random capacities. From then on, the random network was well developed and widely applied. As a breakthrough approach, uncertain network was first explored by Liu [120] in 2010 for modeling the project scheduling problem with uncertain duration times. More generally, Liu [130] assumed some weights are random variables and others are uncertain variables, and initiated the concept of uncertain random network.

Finally, it is worth mentioning that Liu [145] designed an uncertain random logic, and Yao and Gao [228] initiated the study of uncertain random processes in the light of chance theory.

Appendix C

Frequently Asked Questions

This appendix will answer some frequently asked questions related to uncertainty theory and its applications.

C.1  How did uncertainty evolve over the past 100 years?

The word "uncertainty" has been widely used or abused. In a wide sense, Knight (1921) and Keynes (1936) used uncertainty to represent any non-probabilistic phenomena. This type of uncertainty is also known as Knightian uncertainty, Keynesian uncertainty, or true uncertainty. Unfortunately, it seems impossible for us to develop a decent mathematical theory to deal with such a broad class of uncertainty because non-probability represents too many things. In a narrow sense, Liu (2007) declared that uncertainty is anything that satisfies the axioms of uncertainty theory. It is emphasized that uncertainty in the narrow sense is a scientific terminology, but uncertainty in the wide sense is not. Some people think that uncertainty and probability are synonymous. This is a wrong viewpoint in either the wide sense or the narrow sense.

C.2  What is the difference between probability theory and uncertainty theory?

Probability theory (Kolmogorov, 1933) is a branch of mathematics for studying the behavior of random phenomena, while uncertainty theory (Liu, 2007) is a branch of mathematics for modeling human uncertainty. What is the difference between probability theory and uncertainty theory? The main difference is that the product probability measure of a compound event is the product of the probability measures of the individual events, i.e.,

    Pr{A × B} = Pr{A} × Pr{B},    (C.1)

while the product uncertain measure is the minimum of the uncertain measures of the individual events, i.e.,

    M{A × B} = M{A} ∧ M{B}.    (C.2)

This difference implies that random variables and uncertain variables obey different operational laws.

Probability theory and uncertainty theory are complementary mathematical systems that provide two acceptable mathematical models of the indeterminate world. Probability is interpreted as frequency, while uncertainty is interpreted as personal belief degree.
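As a worked illustration of the two operational laws: if Λ1 and Λ2 are independent events with Pr{Λ1} = 0.8 and Pr{Λ2} = 0.7, then (C.1) gives Pr{Λ1 × Λ2} = 0.8 × 0.7 = 0.56, whereas for independent events with uncertain measures M{Λ1} = 0.8 and M{Λ2} = 0.7, (C.2) gives M{Λ1 × Λ2} = 0.8 ∧ 0.7 = 0.7.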

C.3  Why is probability theory not the only legitimate approach?

We frequently lack observed data, and then the estimated probability distribution may be far from the cumulative frequency. Liu [122] asserted that probability theory may lead to counterintuitive results in this case. However, some people still affirm that probability theory is the only legitimate approach. Perhaps this misconception is rooted in Cox's theorem [23] that any measure of belief is isomorphic to a probability measure. However, uncertain measure is considered coherent but not isomorphic to any probability measure. What goes wrong with Cox's theorem? Personally, I think that Cox's theorem presumes the truth value of the conjunction P ∧ Q is a twice differentiable function f of the truth values of the two propositions P and Q, i.e.,

    T(P ∧ Q) = f(T(P), T(Q)),

and then excludes uncertain measure from its start because the function f(x, y) = x ∧ y used in uncertainty theory is not differentiable with respect to x and y. In fact, there does not exist any evidence that the truth value of a conjunction is completely determined by the truth values of the individual propositions, let alone via a twice differentiable function.

On the one hand, I strongly recognize that probability theory is a legitimate approach to deal with frequency. On the other hand, at any rate, it is impossible that probability theory is the unique tool for modeling indeterminacy. In fact, it has been demonstrated in this book that uncertainty theory is a consistent mathematical system that succeeds in dealing with belief degrees.

C.4  What is the difference between possibility theory and uncertainty theory?

Possibility theory (Zadeh [237]) is a branch of mathematics for studying the behavior of fuzzy phenomena. What is the difference between possibility theory and uncertainty theory? The essential difference is that possibility theory assumes

    Pos{A ∪ B} = Pos{A} ∨ Pos{B}    (C.3)

for any events A and B, no matter whether they are independent or not, while uncertainty theory holds

    M{A ∪ B} = M{A} ∨ M{B}    (C.4)

only for independent events A and B. However, a lot of surveys have shown that the measure of a union of events is usually greater than the maximum when the events are not independent. This fact indicates that human brains do not behave with fuzziness.

Both uncertainty theory and possibility theory attempt to model human belief degrees, where the former uses the tool of uncertain measure and the latter uses the tool of possibility measure. Thus they are complete competitors.

C.5  Why is fuzzy variable unable to model indeterminacy quantities?

A fuzzy variable is a function from a possibility space to the set of real numbers (Nahmias [162]). Some people think that fuzzy variable is a suitable tool for modeling indeterminacy quantities. Is it really true? Unfortunately, the answer is negative.

Let us reconsider the counterexample presented by Liu [122]. If the bridge strength is regarded as a fuzzy variable ξ, then we may assign it a membership function, say

    μ(x) = 0 if x ≤ 80; (x − 80)/10 if 80 ≤ x ≤ 90; 1 if 90 ≤ x ≤ 110;
           (120 − x)/10 if 110 ≤ x ≤ 120; 0 if x ≥ 120,    (C.5)

that is just the trapezoidal fuzzy variable (80, 90, 110, 120). Please do not argue about why I choose such a membership function because it is not important for the focus of the debate. Based on the membership function μ and the definition of possibility measure

    Pos{ξ ∈ B} = sup_{x ∈ B} μ(x),    (C.6)

the possibility theory will immediately conclude the following three propositions:
(a) the bridge strength is "exactly 100 tons" with possibility measure 1,
(b) the bridge strength is "not 100 tons" with possibility measure 1,
(c) "exactly 100 tons" is as possible as "not 100 tons".

The first proposition says we are 100% sure that the bridge strength is exactly 100 tons, neither less nor more. What a coincidence it should be! It is doubtless that the belief degree of "exactly 100 tons" is almost zero, and nobody is so naive as to expect that "exactly 100 tons" is the true bridge strength. The second proposition sounds good. The third proposition says "exactly 100 tons" and "not 100 tons" have the same possibility measure. Thus we have to regard them as equally likely. It seems that no human being can accept this conclusion because "exactly 100 tons" is almost impossible compared with "not 100 tons". This paradox shows that indeterminacy quantities like the bridge strength cannot be quantified by possibility measure, and hence they are not fuzzy concepts.

C.6  Why is fuzzy set unable to model unsharp concepts?

A fuzzy set is defined by its membership function μ which assigns to each element x a real number μ(x) in the interval [0, 1], where the value of μ(x) represents the grade of membership of x in the fuzzy set. This definition was given by Zadeh [234] in 1965. Although I strongly respect Professor Lotfi Zadeh's achievements, I disagree with him on the topic of fuzzy set.

Up to now, fuzzy set theory has not evolved as a mathematical system because of its inconsistency. Theoretically, it is undeniable that there exist too many contradictions in fuzzy set theory. In practice, perhaps some people believe that fuzzy set is a suitable tool to model unsharp concepts. Unfortunately, it is not true. In order to convince the reader, let us examine the concept of "young". Without loss of generality, assume "young" has a trapezoidal membership function (15, 20, 30, 40), i.e.,

    μ(x) = 0 if x ≤ 15; (x − 15)/5 if 15 ≤ x ≤ 20; 1 if 20 ≤ x ≤ 30;
           (40 − x)/10 if 30 ≤ x ≤ 40; 0 if x ≥ 40.

It follows from fuzzy set theory that "young" may take any α-cut of μ as its value. Thus we immediately conclude two propositions:
(a) "young" includes [20yr, 30yr] with possibility measure 1,
(b) "young" is included in [20yr, 30yr] with possibility measure 1.

The first proposition sounds good. However, the second proposition seems unacceptable because the belief degree that "young" lies between 20yr and 30yr cannot possibly reach 1 (in fact, the belief degree should be almost 0 due to the fact that 19yr and 31yr are also nearly sure to be young). This result says that "young" cannot be regarded as a fuzzy set.

C.7  Does the stock price follow a stochastic differential equation or an uncertain differential equation?

The origin of stochastic finance theory can be traced to Louis Bachelier's doctoral dissertation Théorie de la Spéculation in 1900. However, Bachelier's work had little impact for more than half a century. After Kiyosi Itô invented stochastic calculus [60] in 1944 and stochastic differential equations [61] in 1951, stochastic finance theory was well developed among others by Samuelson [187], Black and Scholes [5] and Merton [159] during the 1960s and 1970s.

Traditionally, stochastic finance theory presumes that the stock price (including currency exchange rates and interest rates) follows Itô's stochastic differential equation. Is it really reasonable? In fact, this widely accepted presumption has been continuously challenged by many scholars.

As a paradox given by Liu [125], let us assume that the stock price Xt follows the stochastic differential equation,

    dXt = eXt dt + σXt dWt    (C.7)

where e is the log-drift, σ is the log-diffusion, and Wt is a Wiener process. Let us see what will happen with such an assumption. It follows from the stochastic differential equation (C.7) that Xt is a geometric Wiener process, i.e.,

    Xt = X0 exp((e − σ²/2)t + σWt)    (C.8)

from which we derive

    Wt = (ln Xt − ln X0 − (e − σ²/2)t)/σ    (C.9)

whose increment is

    ΔWt = (ln X_{t+Δt} − ln Xt − (e − σ²/2)Δt)/σ.    (C.10)

Write

    A = −(e − σ²/2)Δt/σ.    (C.11)

Note that the stock price Xt is actually a step function of time with a finite number of jumps, although it looks like a curve. During a fixed period (e.g. one week), without loss of generality, we assume that Xt is observed to have 100 jumps. Now we divide the period into 10000 equal intervals. Then we may observe 10000 samples of Xt. It follows from (C.10) that ΔWt has 10000 samples that consist of 9900 A's and 100 other numbers:

    A, A, …, A (9900 copies), B, C, …, Z (100 numbers).    (C.12)

Nobody can believe that those 10000 samples follow a normal probability distribution with expected value 0 and variance Δt. This fact is in contradiction with the property of the Wiener process that the increment ΔWt is a normal random variable. Therefore, the real stock price Xt does not follow the stochastic differential equation.
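The paradox is easy to reproduce numerically. The following Python sketch (an illustration with hypothetical parameter values, not from the book) simulates one week of a pure jump price path with 100 jumps, samples it on 10000 equal intervals, and recovers the increments ΔWt of (C.10): roughly 9900 of them collapse to the single constant A of (C.11), which no normal distribution can mimic.

    import numpy as np

    rng = np.random.default_rng(7)
    e, sigma, T = 0.1, 0.2, 1.0     # hypothetical log-drift, log-diffusion, horizon
    n_jumps, n_obs = 100, 10000

    # A step-function price: 100 jump times with small multiplicative jumps
    jump_times = np.sort(rng.uniform(0, T, n_jumps))
    jump_logs = rng.normal(0.0, 0.01, n_jumps)   # illustrative log jump sizes

    def log_price(t):                             # ln X_t with X_0 = 1
        return jump_logs[jump_times <= t].sum()

    dt = T / n_obs
    lnX = np.array([log_price(t) for t in np.linspace(0, T, n_obs + 1)])

    # Increments (C.10); between jumps they equal the constant A of (C.11)
    dW = (np.diff(lnX) - (e - sigma**2 / 2) * dt) / sigma
    A = -(e - sigma**2 / 2) * dt / sigma
    print(np.isclose(dW, A).sum())   # about 9900 of the 10000 samples equal A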
    Figure C.1: There does not exist any continuous probability distribution (curve) that can approximate the frequency (histogram) of ΔWt, in which 99% of the samples cluster at the single point A. Hence it is impossible that the real stock price Xt follows any of Itô's stochastic differential equations. (Histogram omitted.)
Perhaps some people think that the stock price does behave like a geometric Wiener process (or an Ornstein-Uhlenbeck process) in macroscopy, although they recognize the paradox in microscopy. However, as the very core of stochastic finance theory, Itô's calculus is built on the microscopic structure (i.e., the differential dWt) of the Wiener process rather than its macroscopic structure. More precisely, Itô's calculus depends on the presumption that dWt is a normal random variable with expected value 0 and variance dt. This unreasonable presumption is what causes the second-order term in Itô's formula for Xt = h(t, Wt),

    dXt = (∂h/∂t)(t, Wt) dt + (∂h/∂w)(t, Wt) dWt + (1/2)(∂²h/∂w²)(t, Wt) dt.    (C.13)

In fact, the increment of the stock price is impossible to follow any continuous probability distribution.

On the basis of the above paradox, personally I do not think Itô's calculus can serve as the essential tool of finance theory because Itô's stochastic differential equation is impossible to model stock price. As a substitute, uncertain calculus may be a potential mathematical foundation of finance theory. If the stock price is assumed to follow an uncertain differential equation, then we have a theory of uncertain finance.

Bibliography
[1] Barbacioru IC, Uncertainty functional differential equations for finance, Surveys in Mathematics and its Applications, Vol.5, 275-284, 2010.
[2] Bedford T, and Cooke MR, Probabilistic Risk Analysis, Cambridge University
Press, 2001.
[3] Bellman RE, Dynamic Programming, Princeton University Press, New Jersey,
1957.
[4] Bellman RE, and Zadeh LA, Decision making in a fuzzy environment, Management Science, Vol.17, 141-164, 1970.
[5] Black F, and Scholes M, The pricing of option and corporate liabilities, Journal of Political Economy, Vol.81, 637-654, 1973.
[6] Bouchon-Meunier B, Mesiar R, and Ralescu DA, Linear non-additive setfunctions, International Journal of General Systems, Vol.33, No.1, 89-98,
2004.
[7] Buckley JJ, Possibility and necessity in optimization, Fuzzy Sets and Systems,
Vol.25, 1-13, 1988.
[8] Charnes A, and Cooper WW, Management Models and Industrial Applications of Linear Programming, Wiley, New York, 1961.
[9] Chen XW, and Liu B, Existence and uniqueness theorem for uncertain differential equations, Fuzzy Optimization and Decision Making, Vol.9, No.1,
69-81, 2010.
[10] Chen XW, American option pricing formula for uncertain financial market,
International Journal of Operations Research, Vol.8, No.2, 32-37, 2011.
[11] Chen XW, and Ralescu DA, A note on truth value in uncertain logic, Expert
Systems with Applications, Vol.38, No.12, 15582-15586, 2011.
[12] Chen XW, and Dai W, Maximum entropy principle for uncertain variables,
International Journal of Fuzzy Systems, Vol.13, No.3, 232-236, 2011.
[13] Chen XW, Kar S, and Ralescu DA, Cross-entropy measure of uncertain variables, Information Sciences, Vol.201, 53-60, 2012.
[14] Chen XW, Variation analysis of uncertain stationary independent increment
process, European Journal of Operational Research, Vol.222, No.2, 312-316,
2012.


[15] Chen XW, and Ralescu DA, B-spline method of uncertain statistics with
applications to estimate travel distance, Journal of Uncertain Systems, Vol.6,
No.4, 256-262, 2012.
[16] Chen XW, Liu YH, and Ralescu DA, Uncertain stock model with periodic
dividends, Fuzzy Optimization and Decision Making, Vol.12, No.1, 111-123,
2013.
[17] Chen XW, and Ralescu DA, Liu process and uncertain calculus, Journal of
Uncertainty Analysis and Applications, Vol.1, Article 3, 2013.
[18] Chen XW, and Gao J, Uncertain term structure model of interest rate, Soft
Computing, Vol.17, No.4, 597-604, 2013.
[19] Chen XW, Uncertain Calculus and Uncertain Finance, http://orsc.edu.cn/
xwchen/ucf.pdf.
[20] Chen Y, Fung RYK, and Yang J, Fuzzy expected value modelling approach for
determining target values of engineering characteristics in QFD, International
Journal of Production Research, Vol.43, No.17, 3583-3604, 2005.
[21] Chen Y, Fung RYK, and Tang JF, Rating technical attributes in fuzzy QFD
by integrating fuzzy weighted average method and fuzzy expected value operator, European Journal of Operational Research, Vol.174, No.3, 1553-1566,
2006.
[22] Choquet G, Theory of capacities, Annales de l'Institut Fourier, Vol.5, 131-295, 1954.
[23] Cox RT, Probability, frequency and reasonable expectation, American Journal of Physics, Vol.14, 1-13, 1946.
[24] Dai W, and Chen XW, Entropy of function of uncertain variables, Mathematical and Computer Modelling, Vol.55, Nos.3-4, 754-760, 2012.
[25] Dantzig GB, Linear programming under uncertainty, Management Science,
Vol.1, 197-206, 1955.
[26] Das B, Maity K, Maiti A, A two warehouse supply-chain model under possibility/necessity/credibility measures, Mathematical and Computer Modelling,
Vol.46, No.3-4, 398-409, 2007.
[27] De Cooman G, Possibility theory I-III, International Journal of General Systems, Vol.25, 291-371, 1997.
[28] De Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[29] Dempster AP, Upper and lower probabilities induced by a multivalued mapping, Annals of Mathematical Statistics, Vol.38, No.2, 325-339, 1967.
[30] Dijkstra EW, A note on two problems in connection with graphs, Numerische Mathematik, Vol.1, No.1, 269-271, 1959.
[31] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[32] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.


[33] Elkan C, The paradoxical controversy over fuzzy logic, IEEE Expert, Vol.9,
No.4, 47-49, 1994.
[34] Erdős P, and Rényi A, On random graphs, Publicationes Mathematicae, Vol.6, 290-297, 1959.
[35] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[36] Feng Y, and Yang LX, A two-objective fuzzy k-cardinality assignment problem, Journal of Computational and Applied Mathematics, Vol.197, No.1, 233244, 2006.
[37] Feng YQ, Wu WC, Zhang BM, and Li WY, Power system operation risk
assessment using credibility theory, IEEE Transactions on Power Systems,
Vol.23, No.3, 1309-1318, 2008.
[38] Frank H, and Hakimi SL, Probabilistic flows through a communication network, IEEE Transactions on Circuit Theory, Vol.12, 413-414, 1965.
[39] Fung RYK, Chen YZ, and Chen L, A fuzzy expected value-based goal programing model for product planning using quality function deployment, Engineering Optimization, Vol.37, No.6, 633-647, 2005.
[40] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[41] Gao J, Uncertain bimatrix game with applications, Fuzzy Optimization and
Decision Making, Vol.12, No.1, 65-78, 2013.
[42] Gao X, Some properties of continuous uncertain measure, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.17, No.3, 419426, 2009.
[43] Gao X, Gao Y, and Ralescu DA, On Liu's inference rule for uncertain systems, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Vol.18, No.1, 1-11, 2010.
[44] Gao XL, and Gao Y, Connectedness index of uncertain graphs, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.21, No.1,
127-137, 2013.
[45] Gao Y, Shortest path problem with uncertain arc lengths, Computers and
Mathematics with Applications, Vol.62, No.6, 2591-2600, 2011.
[46] Gao Y, Uncertain inference control for balancing inverted pendulum, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 481-492, 2012.
[47] Gao Y, Existence and uniqueness theorem on uncertain differential equations
with local Lipschitz condition, Journal of Uncertain Systems, Vol.6, No.3,
223-232, 2012.
[48] Ge XT, and Zhu Y, Existence and uniqueness theorem for uncertain delay
differential equations, Journal of Computational Information Systems, Vol.8,
No.20, 8341-8347, 2012.


[49] Ge XT, and Zhu Y, A necessary condition of optimality for uncertain optimal
control problem, Fuzzy Optimization and Decision Making, Vol.12, No.1, 4151, 2013.
[50] Gilbert EN, Random graphs, Annals of Mathematical Statistics, Vol.30, No.4,
1141-1144, 1959.
[51] Guo HY, and Wang XS, Variance of uncertain random variables, http://orsc.
edu.cn/online/130411.pdf.
[52] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[53] Ha MH, Li Y, and Wang XF, Fuzzy knowledge representation and reasoning
using a generalized fuzzy petri net and a similarity measure, Soft Computing,
Vol.11, No.4, 323-327, 2007.
[54] Han SW, and Peng ZX, The maximum flow problem of uncertain network,
http://orsc.edu.cn/online/101228.pdf.
[55] He Y, and Xu JP, A class of random fuzzy programming model and its application to vehicle routing problem, World Journal of Modelling and Simulation, Vol.1, No.1, 3-11, 2005.
[56] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[57] Hou YC, Distance between uncertain random variables, http://orsc.edu.cn/
online/130510.pdf.
[58] Hou YC, Subadditivity of chance measure, http://orsc.edu.cn/online/
130602.pdf.
[59] Inuiguchi M, and Ramík J, Possibilistic linear programming: A brief review of fuzzy mathematical programming and a comparison with stochastic programming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111, No.1, 3-28, 2000.
[60] Itô K, Stochastic integral, Proceedings of the Japan Academy Series A, Vol.20, No.8, 519-524, 1944.
[61] Itô K, On stochastic differential equations, Memoirs of the American Mathematical Society, No.4, 1-51, 1951.
[62] Iwamura K, and Kageyama M, Exact construction of Liu process, Applied
Mathematical Sciences, Vol.6, No.58, 2871-2880, 2012.
[63] Iwamura K, and Xu YL, Estimating the variance of the square of canonical
process, Applied Mathematical Sciences, Vol.7, No.75, 3731-3738, 2013.
[64] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[65] Ji XY, and Shao Z, Model and algorithm for bilevel newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[66] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.


[67] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main developments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[68] Kacprzyk J, and Yager RR, Linguistic summaries of data using fuzzy logic,
International Journal of General Systems, Vol.30, 133-154, 2001.
[69] Kahneman D, and Tversky A, Prospect theory: An analysis of decision under
risk, Econometrica, Vol.47, No.2, 263-292, 1979.
[70] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[71] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of randomness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[72] Ke H, and Liu B, Fuzzy project scheduling problem and its hybrid intelligent
algorithm, Applied Mathematical Modelling, Vol.34, No.2, 301-308, 2010.
[73] Ke H, Ma WM, Gao X, and Xu WH, New fuzzy models for time-cost tradeoff problem, Fuzzy Optimization and Decision Making, Vol.9, No.2, 219-231,
2010.
[74] Ke H, Uncertain random multilevel programming with application to product
control problem, http://orsc.edu.cn/online/121027.pdf.
[75] Keynes JM, The General Theory of Employment, Interest, and Money, Harcourt, New York, 1936.
[76] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random
variables, Proceedings of the Royal Society of London Series A, Vol.407, 171182, 1986.
[77] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information, PrenticeHall, Englewood Cliffs, NJ, 1980.
[78] Knight FH, Risk, Uncertainty, and Profit, Houghton Mifflin, Boston, 1921.
[79] Kolmogorov AN, Grundbegriffe der Wahrscheinlichkeitsrechnung, Julius
Springer, Berlin, 1933.
[80] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[81] Kwakernaak H, Fuzzy random variables, I: Definitions and theorems, Information Sciences, Vol.15, 1-29, 1978.
[82] Kwakernaak H, Fuzzy random variables, II: Algorithms and examples for the discrete case, Information Sciences, Vol.17, 253-278, 1979.
[83] Li J, Xu JP, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[84] Li PK, and Liu B, Entropy of credibility distributions for fuzzy variables,
IEEE Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[85] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.


[86] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[87] Li X, and Liu B, Maximum entropy principle for fuzzy variables, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[88] Li X, and Liu B, On distance between fuzzy variables, Journal of Intelligent
& Fuzzy Systems, Vol.19, No.3, 197-204, 2008.
[89] Li X, and Liu B, Chance measure for hybrid events with fuzziness and randomness, Soft Computing, Vol.13, No.2, 105-115, 2009.
[90] Li X, and Liu B, Foundation of credibilistic logic, Fuzzy Optimization and
Decision Making, Vol.8, No.1, 91-102, 2009.
[91] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[92] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[93] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[94] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[95] Liu B, and Iwamura K, Chance constrained programming with fuzzy parameters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[96] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[97] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[98] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transactions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[99] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[100] Liu B, Uncertain Programming, Wiley, New York, 1999.
[101] Liu B, Dependent-chance programming in fuzzy environments, Fuzzy Sets
and Systems, Vol.109, No.1, 97-106, 2000.
[102] Liu B, and Iwamura K, Fuzzy programming with fuzzy decisions and fuzzy
simulation-based genetic algorithm, Fuzzy Sets and Systems, Vol.122, No.2,
253-262, 2001.
[103] Liu B, Fuzzy random chance-constrained programming, IEEE Transactions
on Fuzzy Systems, Vol.9, No.5, 713-720, 2001.
[104] Liu B, Fuzzy random dependent-chance programming, IEEE Transactions on
Fuzzy Systems, Vol.9, No.5, 721-726, 2001.
[105] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Heidelberg, 2002.
[106] Liu B, Toward fuzzy optimization without mathematical ambiguity, Fuzzy Optimization and Decision Making, Vol.1, No.1, 43-63, 2002.
[107] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[108] Liu B, Random fuzzy dependent-chance programming and its hybrid intelligent algorithm, Information Sciences, Vol.141, Nos.3-4, 259-271, 2002.
[109] Liu B, Inequalities and convergence concepts of fuzzy and rough variables,
Fuzzy Optimization and Decision Making, Vol.2, No.2, 87-100, 2003.
[110] Liu B, Uncertainty Theory: An Introduction to its Axiomatic Foundations,
Springer-Verlag, Berlin, 2004.
[111] Liu B, A survey of credibility theory, Fuzzy Optimization and Decision Making, Vol.5, No.4, 387-408, 2006.
[112] Liu B, A survey of entropy of fuzzy variables, Journal of Uncertain Systems,
Vol.1, No.1, 4-13, 2007.
[113] Liu B, Uncertainty Theory, 2nd edn, Springer-Verlag, Berlin, 2007.
[114] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Uncertain Systems, Vol.2, No.1, 3-16, 2008.
[115] Liu B, Theory and Practice of Uncertain Programming, 2nd edn, Springer-Verlag, Berlin, 2009.
[116] Liu B, Some research problems in uncertainty theory, Journal of Uncertain
Systems, Vol.3, No.1, 3-10, 2009.
[117] Liu B, Uncertain entailment and modus ponens in the framework of uncertain
logic, Journal of Uncertain Systems, Vol.3, No.4, 243-251, 2009.
[118] Liu B, Uncertain set theory and uncertain inference rule with application to
uncertain control, Journal of Uncertain Systems, Vol.4, No.2, 83-98, 2010.
[119] Liu B, Uncertain risk analysis and uncertain reliability analysis, Journal of
Uncertain Systems, Vol.4, No.3, 163-170, 2010.
[120] Liu B, Uncertainty Theory: A Branch of Mathematics for Modeling Human
Uncertainty, Springer-Verlag, Berlin, 2010.
[121] Liu B, Uncertain logic for modeling human language, Journal of Uncertain
Systems, Vol.5, No.1, 3-20, 2011.
[122] Liu B, Why is there a need for uncertainty theory? Journal of Uncertain
Systems, Vol.6, No.1, 3-10, 2012.
[123] Liu B, and Yao K, Uncertain integral with respect to multiple canonical
processes, Journal of Uncertain Systems, Vol.6, No.4, 250-255, 2012.
[124] Liu B, Membership functions and operational law of uncertain sets, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 387-410, 2012.
[125] Liu B, Toward uncertain finance theory, Journal of Uncertainty Analysis and
Applications, Vol.1, Article 1, 2013.
[126] Liu B, Extreme value theorems of uncertain process with application to insurance risk model, Soft Computing, Vol.17, No.4, 549-556, 2013.
[127] Liu B, A new definition of independence of uncertain sets, Fuzzy Optimization and Decision Making, to be published.
[128] Liu B, and Yao K, Uncertain multilevel programming: Algorithm and applications, http://orsc.edu.cn/online/120114.pdf.
[129] Liu B, and Chen XW, Uncertain multiobjective programming and uncertain
goal programming, Technical Report, 2013.
[130] Liu B, Uncertain random graphs and uncertain random networks, Technical
Report, 2013.
[131] Liu B, Polyrectangular theorem in product uncertainty space, Technical Report, 2013.
[132] Liu B, Fundamentals of uncertain vector, Technical Report, 2013.
[133] Liu B, Uncertainty distribution and independence of uncertain processes,
Technical Report, 2013.
[134] Liu HJ, and Fei WY, Neutral uncertain delay differential equations, Information: An International Interdisciplinary Journal, Vol.16, No.2, 1225-1232,
2013.
[135] Liu JJ, Uncertain comprehensive evaluation method, Journal of Information
& Computational Science, Vol.8, No.2, 336-344, 2011.
[136] Liu LZ, and Li YZ, The fuzzy quadratic assignment problem with penalty:
New models and genetic algorithm, Applied Mathematics and Computation,
Vol.174, No.2, 1229-1244, 2006.
[137] Liu W, and Xu JP, Some properties on expected value operator for uncertain
variables, Information: An International Interdisciplinary Journal, Vol.13,
No.5, 1693-1699, 2010.
[138] Liu YH, and Ha MH, Expected value of function of uncertain variables, Journal of Uncertain Systems, Vol.4, No.3, 181-186, 2010.
[139] Liu YH, An analytic method for solving uncertain differential equations, Journal of Uncertain Systems, Vol.6, No.4, 244-249, 2012.
[140] Liu YH, Uncertain random variables: A mixture of uncertainty and randomness, Soft Computing, Vol.17, No.4, 625-634, 2013.
[141] Liu YH, Uncertain random programming with applications, Fuzzy Optimization and Decision Making, Vol.12, No.2, 153-169, 2013.
[142] Liu YH, Chen XW, and Ralescu DA, Uncertain currency model and currency
option pricing, International Journal of Intelligent Systems, to be published.
[143] Liu YH, and Ralescu DA, Risk index in uncertain random risk analysis,
http://orsc.edu.cn/online/130403.pdf.
[144] Liu YH, and Ralescu DA, Value-at-risk in uncertain random risk analysis,
Technical Report, 2013.
[145] Liu YH, Uncertain random logic and uncertain random entailment, Technical
Report, 2013.
[146] Liu YK, and Liu B, Random fuzzy programming with chance measures
defined by fuzzy integrals, Mathematical and Computer Modelling, Vol.36,
Nos.4-5, 509-524, 2002.
[147] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value operator, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[148] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[149] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[150] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[151] Liu YK, Fuzzy programming with recourse, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[152] Liu YK, and Gao J, The independence of fuzzy variables with applications to
fuzzy random optimization, International Journal of Uncertainty, Fuzziness
& Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[153] Lu M, On crisp equivalents and solutions of fuzzy programming with different
chance measures, Information: An International Journal, Vol.6, No.2, 125-133, 2003.
[154] Luhandjula MK, Fuzzy stochastic linear programming: Survey and future
research directions, European Journal of Operational Research, Vol.174, No.3,
1353-1367, 2006.
[155] Maiti MK, and Maiti MA, Fuzzy inventory model with two warehouses under
possibility constraints, Fuzzy Sets and Systems, Vol.157, No.1, 52-73, 2006.
[156] Mamdani EH, Applications of fuzzy algorithms for control of a simple dynamic plant, Proceedings of the IEE, Vol.121, No.12, 1585-1588, 1974.
[157] Marano GC, and Quaranta G, A new possibilistic reliability index definition,
Acta Mechanica, Vol.210, 291-303, 2010.
[158] Matheron G, Random Sets and Integral Geometry, Wiley, New York, 1975.
[159] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[160] Möller B, and Beer M, Engineering computation under uncertainty, Computers and Structures, Vol.86, 1024-1041, 2008.
[161] Morgan JP, RiskMetrics Technical Document, 4th edn, Morgan Guaranty Trust Companies, New York, 1996.
[162] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[163] Negoita CV, and Ralescu DA, Representation theorems for fuzzy concepts,
Kybernetes, Vol.4, 169-174, 1975.
[164] Negoita CV, and Ralescu DA, Simulation, Knowledge-based Computing, and
Fuzzy Statistics, Van Nostrand Reinhold, New York, 1987.
[165] Nguyen HT, Nguyen NT, and Wang TH, On capacity functionals in interval
probabilities, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.5, 359-377, 1997.
[166] Nguyen VH, Fuzzy stochastic goal programming problems, European Journal
of Operational Research, Vol.176, No.1, 77-86, 2007.
[167] Nilsson NJ, Probabilistic logic, Artificial Intelligence, Vol.28, 71-87, 1986.
[168] Øksendal B, Stochastic Differential Equations, 6th edn, Springer-Verlag,
Berlin, 2005.
[169] Peng J, and Liu B, Parallel machine scheduling models with fuzzy processing
times, Information Sciences, Vol.166, Nos.1-4, 49-66, 2004.
[170] Peng J, and Yao K, A new option pricing model for stocks in uncertainty
markets, International Journal of Operations Research, Vol.8, No.2, 18-26,
2011.
[171] Peng J, Risk metrics of loss function for uncertain system, Fuzzy Optimization
and Decision Making, Vol.12, No.1, 53-64, 2013.
[172] Peng ZX, and Iwamura K, A sufficient and necessary condition of uncertainty
distribution, Journal of Interdisciplinary Mathematics, Vol.13, No.3, 277-285,
2010.
[173] Peng ZX, and Iwamura K, Some properties of product uncertain measure,
Journal of Uncertain Systems, Vol.6, No.4, 263-269, 2012.
[174] Peng ZX, and Chen XW, Uncertain systems are universal approximators,
http://orsc.edu.cn/online/100110.pdf.
[175] Pugsley AG, A philosophy of strength factors, Aircraft Engineering and
Aerospace Technology, Vol.16, No.1, 18-19, 1944.
[176] Puri ML, and Ralescu DA, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[177] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[178] Qin ZF, and Gao X, Fractional Liu process with application to finance, Mathematical and Computer Modelling, Vol.50, Nos.9-10, 1538-1543, 2009.
[179] Qin ZF, Uncertain random goal programming, http://orsc.edu.cn/online/
130323.pdf.
[180] Ralescu AL, and Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets
and Systems, Vol.86, No.3, 321-330, 1997.
[181] Ralescu DA, A generalization of representation theorem, Fuzzy Sets and Systems, Vol.51, 309-311, 1992.
[182] Ralescu DA, Cardinality, quantifiers, and the aggregation of fuzzy criteria,
Fuzzy Sets and Systems, Vol.69, No.3, 355-365, 1995.
[183] Ralescu DA, and Sugeno M, Fuzzy integral representation, Fuzzy Sets and
Systems, Vol.84, No.2, 127-133, 1996.
[184] Robbins HE, On the measure of a random set, Annals of Mathematical Statistics, Vol.15, No.1, 70-74, 1944.
[185] Roy AD, Safety-first and the holding of assets, Econometrica, Vol.20, 431-449,
1952.
[186] Sakawa M, Nishizaki I, and Uemura Y, Interactive fuzzy programming for two-level linear fractional programming problems with fuzzy parameters, Fuzzy
Sets and Systems, Vol.115, 93-103, 2000.
[187] Samuelson PA, Rational theory of warrant pricing, Industrial Management Review, Vol.6, 13-31, 1965.
[188] Shafer G, A Mathematical Theory of Evidence, Princeton University Press,
Princeton, NJ, 1976.
[189] Shannon CE, The Mathematical Theory of Communication, The University
of Illinois Press, Urbana, 1949.
[190] Shao Z, and Ji XY, Fuzzy multi-product constraint newsboy problem, Applied
Mathematics and Computation, Vol.180, No.1, 7-15, 2006.
[191] Shen Q, and Zhao R, A credibilistic approach to assumption-based truth
maintenance, IEEE Transactions on Systems, Man, and Cybernetics Part
A, Vol.41, No.1, 85-96, 2011.
[192] Sheng YH, Stability in the p-th moment for uncertain differential equation,
Journal of Intelligent & Fuzzy Systems, to be published.
[193] Sheng YH, Exponential stability of uncertain differential equation, http://
orsc.edu.cn/online/130122.pdf.
[194] Sheng YH, Some formulas of moments of uncertain variable via inverse uncertainty distribution, http://orsc.edu.cn/online/130910.pdf.
[195] Shih HS, Lai YJ, and Lee ES, Fuzzy approach for multilevel programming
problems, Computers and Operations Research, Vol.23, 73-91, 1996.
[196] Slowinski R, and Teghem J, Fuzzy versus stochastic approaches to multicriteria linear programming under uncertainty, Naval Research Logistics, Vol.35,
673-695, 1988.
[197] Sugeno M, Theory of Fuzzy Integrals and its Applications, Ph.D. Dissertation,
Tokyo Institute of Technology, 1974.
[198] Sun JJ, and Chen XW, Asian option pricing formula for uncertain financial
market, http://orsc.edu.cn/online/130511.pdf.
[199] Takagi T, and Sugeno M, Fuzzy identification of systems and its applications to
modeling and control, IEEE Transactions on Systems, Man, and Cybernetics,
Vol.15, No.1, 116-132, 1985.
[200] Taleizadeh AA, Niaki STA, and Aryanezhad MB, A hybrid method of Pareto,
TOPSIS and genetic algorithm to optimize multi-product multi-constraint
inventory control systems with random fuzzy replenishments, Mathematical
and Computer Modelling, Vol.49, Nos.5-6, 1044-1057, 2009.
[201] Tian DZ, Wang L, Wu J, and Ha MH, Rough set model based on uncertain
measure, Journal of Uncertain Systems, Vol.3, No.4, 252-256, 2009.
[202] Tian JF, Inequalities and mathematical properties of uncertain variables,
Fuzzy Optimization and Decision Making, Vol.10, No.4, 357-368, 2011.
[203] Torabi H, Davvaz B, and Behboodian J, Fuzzy random events in incomplete probability models, Journal of Intelligent & Fuzzy Systems, Vol.17, No.2, 183-188,
2006.
[204] Wang XS, Gao ZC, and Guo HY, Uncertain hypothesis testing for two experts' empirical data, Mathematical and Computer Modelling, Vol.55, 1478-1482, 2012.
[205] Wang XS, Gao ZC, and Guo HY, Delphi method for estimating uncertainty distributions, Information: An International Interdisciplinary Journal,
Vol.15, No.2, 449-460, 2012.
[206] Wang XS, and Ha MH, Quadratic entropy of uncertain sets, Fuzzy Optimization and Decision Making, Vol.12, No.1, 99-109, 2013.
[207] Wang XS, and Peng ZX, Method of moments for estimating uncertainty distributions, http://orsc.edu.cn/online/100408.pdf.
[208] Wang XS, and Wang LL, Delphi method for estimating membership function
of the uncertain set, http://orsc.edu.cn/online/130330.pdf.
[209] Wen ML, and Kang R, Reliability analysis in uncertain random system,
http://orsc.edu.cn/online/120419.pdf.
[210] Wiener N, Differential space, Journal of Mathematical Physics, Vol.2, 131-174, 1923.
[211] Yager RR, A new approach to the summarization of data, Information Sciences, Vol.28, 69-86, 1982.
[212] Yager RR, Quantified propositions in a linguistic logic, International Journal
of Man-Machine Studies, Vol.19, 195-227, 1983.
[213] Yang LX, and Liu B, On inequalities and critical values of fuzzy random
variable, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[214] Yang N, and Wen FS, A chance constrained programming approach to transmission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[215] Yang XH, Moments and tails inequality within the framework of uncertainty
theory, Information: An International Interdisciplinary Journal, Vol.14,
No.8, 2599-2604, 2011.
[216] Yang XH, On comonotonic functions of uncertain variables, Fuzzy Optimization and Decision Making, Vol.12, No.1, 89-98, 2013.
[217] Yao K, Uncertain calculus with renewal process, Fuzzy Optimization and
Decision Making, Vol.11, No.3, 285-297, 2012.
[218] Yao K, and Li X, Uncertain alternating renewal process and its application,
IEEE Transactions on Fuzzy Systems, Vol.20, No.6, 1154-1160, 2012.
[219] Yao K, Gao J, and Gao Y, Some stability theorems of uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.12, No.1, 3-13, 2013.
[220] Yao K, Extreme values and integral of solution of uncertain differential equation, Journal of Uncertainty Analysis and Applications, Vol.1, Article 2, 2013.
[221] Yao K, and Ralescu DA, Age replacement policy in uncertain environment,
Iranian Journal of Fuzzy Systems, Vol.10, No.2, 29-39, 2013.
[222] Yao K, and Chen XW, A numerical method for solving uncertain differential
equations, Journal of Intelligent & Fuzzy Systems, Vol.25, No.3, 825-832,
2013.
[223] Yao K, A type of nonlinear uncertain differential equations with analytic
solution, Journal of Uncertainty Analysis and Applications, Vol.1, 2013, to
be published.
[224] Yao K, A no-arbitrage determinant theorem for uncertain stock model, http://orsc.edu.cn/online/100903.pdf.
[225] Yao K, Block replacement policy in uncertain environment, http://orsc.edu.
cn/online/110612.pdf.
[226] Yao K, Relative entropy of uncertain set, http://orsc.edu.cn/online/
120313.pdf.
[227] Yao K, and Gao J, Law of large numbers for uncertain random variables,
http://orsc.edu.cn/online/120401.pdf.
[228] Yao K, and Gao J, Some concepts and theorems of uncertain random process,
http://orsc.edu.cn/online/120402.pdf.
[229] Yao K, and Sheng YH, Stability in mean for uncertain differential equation,
http://orsc.edu.cn/online/120611.pdf.
[230] Yao K, Time integral of independent increment uncertain process, http://
orsc.edu.cn/online/130302.pdf.
[231] Yao K, A formula to calculate the variance of uncertain variable, http://orsc.
edu.cn/online/130831.pdf.
[232] You C, Some convergence theorems of uncertain sequences, Mathematical and
Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
[233] Yu XC, A stock model with jumps for uncertain markets, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.20, No.3, 421-432, 2012.
[234] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[235] Zadeh LA, Outline of a new approach to the analysis of complex systems and
decision processes, IEEE Transactions on Systems, Man and Cybernetics,
Vol.3, 28-44, 1973.
[236] Zadeh LA, The concept of a linguistic variable and its application to approximate reasoning, Information Sciences, Vol.8, 199-251, 1975.
[237] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[238] Zadeh LA, A computational approach to fuzzy quantifiers in natural languages, Computers and Mathematics with Applications, Vol.9, No.1, 149-184,
1983.
[239] Zhang B, and Peng J, Euler index in uncertain graph, Applied Mathematics
and Computation, Vol.218, No.20, 10279-10288, 2012.
[240] Zhang XF, Ning YF, and Meng GW, Delayed renewal process with uncertain
interarrival times, Fuzzy Optimization and Decision Making, Vol.12, No.1,
79-87, 2013.
[241] Zhang ZM, Some discussions on uncertain measure, Fuzzy Optimization and
Decision Making, Vol.10, No.1, 31-43, 2011.
[242] Zhao R, and Liu B, Stochastic programming models for general redundancy optimization problems, IEEE Transactions on Reliability, Vol.52, No.2, 181-191, 2003.
[243] Zhao R, and Liu B, Renewal process with fuzzy interarrival times and rewards,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.11, No.5, 573-586, 2003.
[244] Zhao R, and Liu B, Redundancy optimization problems with uncertainty
of combining randomness and fuzziness, European Journal of Operational
Research, Vol.157, No.3, 716-735, 2004.
[245] Zhao R, and Liu B, Standby redundancy optimization problems with fuzzy
lifetimes, Computers & Industrial Engineering, Vol.49, No.2, 318-338, 2005.
[246] Zhao R, Tang WS, and Yun HL, Random fuzzy renewal process, European
Journal of Operational Research, Vol.169, No.1, 189-201, 2006.
[247] Zhao R, and Tang WS, Some properties of fuzzy random renewal process,
IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 173-179, 2006.
[248] Zheng Y, and Liu B, Fuzzy vehicle routing model with credibility measure
and its hybrid intelligent algorithm, Applied Mathematics and Computation,
Vol.176, No.2, 673-683, 2006.
[249] Zhou J, and Liu B, New stochastic models for capacitated location-allocation
problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.
[250] Zhou J, and Liu B, Modeling capacitated location-allocation problem with
fuzzy demands, Computers & Industrial Engineering, Vol.53, No.3, 454-468,
2007.
[251] Zhou J, Yang F, and Wang K, Multi-objective optimization in uncertain
random environments, http://orsc.edu.cn/online/130322.pdf.
[252] Zhu Y, and Liu B, Continuity theorems and chance distribution of random
fuzzy variable, Proceedings of the Royal Society of London Series A, Vol.460,
2505-2519, 2004.
[253] Zhu Y, and Ji XY, Expected values of functions of fuzzy variables, Journal
of Intelligent & Fuzzy Systems, Vol.17, No.5, 471-478, 2006.
[254] Zhu Y, and Liu B, Fourier spectrum of credibility distribution for fuzzy variables, International Journal of General Systems, Vol.36, No.1, 111-123, 2007.
[255] Zhu Y, and Liu B, A sufficient and necessary condition for chance distribution
of random fuzzy variables, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 21-28, 2007.
[256] Zhu Y, Uncertain optimal control with application to a portfolio selection
model, Cybernetics and Systems, Vol.41, No.7, 535-547, 2010.

List of Frequently Used Symbols

M                        uncertain measure
(Γ, L, M)                uncertainty space
ξ, η, τ                  uncertain variables
Φ, Ψ, Υ                  uncertainty distributions
Φ^-1, Ψ^-1, Υ^-1         inverse uncertainty distributions
μ, ν, λ                  membership functions
μ^-1, ν^-1, λ^-1         inverse membership functions
L(a, b)                  linear uncertain variable
Z(a, b, c)               zigzag uncertain variable
N(e, σ)                  normal uncertain variable
LOGN(e, σ)               lognormal uncertain variable
(a, b)                   rectangular uncertain set
(a, b, c)                triangular uncertain set
(a, b, c, d)             trapezoidal uncertain set
E                        expected value
V                        variance
H                        entropy
Xt, Yt, Zt               uncertain processes
Ct                       Liu process
Nt                       renewal process
Q                        uncertain quantifier
(Q, S, P)                uncertain proposition
∨                        maximum operator
∧                        minimum operator
¬                        negation symbol
∀                        universal quantifier
∃                        existential quantifier
Pr                       probability measure
(Ω, A, Pr)               probability space
Ch                       chance measure
k-max                    the kth largest value
k-min                    the kth smallest value
∅                        the empty set
ℜ                        the set of real numbers
iid                      independent and identically distributed
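
As a quick orientation to two of the symbols above, the following small Python sketch evaluates the uncertainty distributions of the linear uncertain variable L(a, b) and the normal uncertain variable N(e, σ); the function names are illustrative choices, while the formulas follow the standard definitions used in this book, namely Φ(x) = (x - a)/(b - a) on [a, b] for L(a, b), and Φ(x) = (1 + exp(π(e - x)/(√3 σ)))^-1 for N(e, σ).

import math

# A minimal sketch with illustrative names: uncertainty distributions
# of the linear uncertain variable L(a, b) and the normal uncertain
# variable N(e, sigma), following the definitions used in this book.

def linear_distribution(x, a, b):
    """Phi(x) for L(a, b): ramps linearly from 0 at a to 1 at b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def normal_distribution(x, e, sigma):
    """Phi(x) for N(e, sigma): the normal uncertainty distribution."""
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3.0) * sigma)))

# Both distributions pass through 0.5 at their central value.
print(linear_distribution(1.5, 1.0, 2.0))    # 0.5
print(normal_distribution(0.0, 0.0, 1.0))    # 0.5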

Index
absorption law, 176
age replacement policy, 289
algebra, 5
α-path, 319
alternating renewal process, 286
American option, 338
Asian option, 340
associative law, 175
belief degree, 2
bisection method, 59
block replacement policy, 281
Boolean function, 61
Boolean system calculator, 67
Boolean uncertain variable, 61
Borel algebra, 7
Borel set, 7
bridge system, 152
Brownian motion, 293
chain rule, 303
chance distribution, 375
chance inversion theorem, 376
chance measure, 370
change of variables, 304
Chebyshev inequality, 91
Chen-Ralescu theorem, 159
commutative law, 175
comonotonic function, 75
complement of uncertain set, 172, 193
complete uncertainty space, 16
compromise model, 123
compromise solution, 123
conditional uncertain measure, 25
consistency condition, 15
convergence almost surely, 93
convergence in distribution, 94
convergence in mean, 94
convergence in measure, 93
delayed renewal process, 281
Delphi method, 134
De Morgan's law, 176
diffusion, 296, 301
discrete uncertain variable, 37
distance, 89, 209
distributive law, 175
double-negation law, 174
drift, 296, 301
dual quantifier, 221
duality axiom, 10
empirical membership function, 211
empirical uncertainty distribution, 36
entropy, 83, 205
European option, 335
event, 9
expected value, 68, 198, 384
expert's experimental data, 127, 210
exponential random variable, 357
extreme value theorem, 52, 267
feasible solution, 107
Feynman-Kac formula, 368
first hitting time, 271, 327
frequency, 1
fundamental theorem of calculus, 302
fuzzy set, 410
hazard distribution, 142
Hölder's inequality, 92
hypothetical syllogism, 168
idempotent law, 174
imaginary inclusion, 198
independence, 20, 44, 186
independent increment, 259
indeterminacy, 1
individual feature data, 215
inference rule, 241
integration by parts, 305
intersection of uncertain sets, 172, 191
inverse membership function, 183
inverse uncertainty distribution, 41
inverted pendulum, 249
investment risk analysis, 146
Ito formula, 367
Ito integral, 366
Ito process, 367
Jensen's inequality, 93
joint uncertainty distribution, 103
k-out-of-n system, 138
law of contradiction, xiv, 174
law of excluded middle, xiv, 174
law of large numbers, 363, 390
law of truth conservation, xiv
linear uncertain variable, 35
linguistic summarizer, 237
Liu integral, 297
Liu process, 293, 301
logical equivalence theorem, 230
lognormal random variable, 357
lognormal uncertain variable, 36
loss function, 137
machine scheduling problem, 112
Markov inequality, 91
maximum entropy principle, 87
maximum uncertainty principle, xiv
measurable function, 7
measurable set, 6
measure inversion formula, 177
measure inversion theorem, 39
membership function, 177
method of moments, 132
Minkowski inequality, 92
modus ponens, 165
modus tollens, 166
moment, 80
monotone quantifier, 219
monotonicity theorem, 12
multivariate normal distribution, 105
Nash equilibrium, 125
negated quantifier, 220
no-arbitrage, 343
nonempty uncertain set, 183
normal random variable, 357
normal uncertain variable, 36
normality axiom, 10
operational law, 46, 190
optimal solution, 108
option pricing, 335
parallel system, 138
Pareto solution, 123
Peng-Iwamura theorem, 32
Poisson process, 365
polyrectangular theorem, 23
portfolio selection, 343
principle of least squares, 130, 211
probability density function, 355
probability distribution, 355
probability inversion theorem, 356
probability measure, 353
product axiom, 16
product probability, 354
product uncertain measure, 16
project scheduling problem, 119
random variable, 354
rectangular uncertain set, 179
regular membership function, 185
regular uncertainty distribution, 41
reliability index, 151, 397
renewal reward process, 283
risk index, 139, 394
ruin index, 288
ruin time, 289
rule-base, 246
sample path, 254
series system, 137
σ-algebra, 5
stability, 317
Stackelberg-Nash equilibrium, 126
standby system, 138
stationary increment, 262
stochastic calculus, 366
stochastic differential equation, 367
stochastic process, 364
strictly decreasing function, 53
strictly increasing function, 46
strictly monotone function, 56
structural risk analysis, 143
structure function, 149
subadditivity axiom, 10
time integral, 274, 330
trapezoidal uncertain set, 179
triangular uncertain set, 179
truth value, 157, 230
uncertain calculus, 293
uncertain control, 249
uncertain currency model, 347
uncertain differential equation, 307
uncertain entailment, 164
uncertain finance, 335
uncertain graph, 398
uncertain inference, 241
uncertain insurance model, 287
uncertain integral, 297
uncertain interest rate model, 346
uncertain logic, 215
uncertain measure, 11
uncertain network, 402
uncertain process, 253
uncertain programming, 107
uncertain proposition, 155, 229
uncertain quantifier, 216
uncertain random programming, 391
uncertain random variable, 373
uncertain reliability analysis, 150
uncertain renewal process, 277
uncertain risk analysis, 137
uncertain set, 171
uncertain statistics, 127, 210
uncertain stock model, 335
uncertain system, 245
uncertain variable, 29
uncertain vector, 101
uncertainty, definition of, 407
uncertainty distribution, 31, 254
uncertainty space, 16
unimodal quantifier, 219
union of uncertain sets, 172, 190
value-at-risk, 396
variance, 77, 204, 388
vehicle routing problem, 115
Wiener process, 366
Yao-Chen formula, 320
Yao integral, 306
zigzag uncertain variable, 35

Baoding Liu
Uncertainty Theory
When no samples are available to estimate a probability distribution, we have
to invite some domain experts to evaluate the belief degree that each event
will occur. Some people may think that the belief degree is a subjective probability or a fuzzy concept. However, treating it as either is usually inappropriate, because both probability theory and fuzzy set theory may lead to counterintuitive results in this case.
In order to rationally deal with belief degrees, an uncertainty theory was
founded in 2007 and subsequently studied by many researchers. Nowadays,
uncertainty theory has become a branch of axiomatic mathematics for modeling human uncertainty.
This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process,
uncertain calculus, and uncertain differential equation. This textbook also
shows applications of uncertainty theory to scheduling, logistics, networks,
data mining, control, and finance.
Axiom 1. (Normality Axiom) M{Γ} = 1 for the universal set Γ.

Axiom 2. (Duality Axiom) M{Λ} + M{Λ^c} = 1 for any event Λ.

Axiom 3. (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2, ..., we have
$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty}\Lambda_i\right\} \le \sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\}.$$

Axiom 4. (Product Axiom) Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, ... The product uncertain measure M is an uncertain measure satisfying
$$\mathcal{M}\left\{\prod_{k=1}^{\infty}\Lambda_k\right\} = \bigwedge_{k=1}^{\infty}\mathcal{M}_k\{\Lambda_k\}$$
where Λk are arbitrarily chosen events from Lk for k = 1, 2, ..., respectively.
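
To make the first three axioms concrete, the following minimal Python sketch builds a hand-made uncertain measure on the three-point universe {1, 2, 3} and checks normality, duality, and (pairwise) subadditivity; the universe, the measure values, and all names are illustrative assumptions rather than anything prescribed by the book.

from itertools import combinations

# A minimal sketch (assumed example): an uncertain measure on the
# finite universe {1, 2, 3}, defined on all subsets (events).
GAMMA = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s, each as a frozenset."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

# One admissible assignment of measure values.
M = {
    frozenset(): 0.0,
    frozenset({1}): 0.4, frozenset({2}): 0.4, frozenset({3}): 0.4,
    frozenset({1, 2}): 0.6, frozenset({1, 3}): 0.6, frozenset({2, 3}): 0.6,
    GAMMA: 1.0,
}

# Axiom 1 (normality): M{Gamma} = 1.
assert M[GAMMA] == 1.0

# Axiom 2 (duality): M{A} + M{A^c} = 1 for every event A.
for a in powerset(GAMMA):
    assert abs(M[a] + M[GAMMA - a] - 1.0) < 1e-9

# Axiom 3 (subadditivity), checked here on all pairs of events:
# M{A union B} <= M{A} + M{B}.
for a in powerset(GAMMA):
    for b in powerset(GAMMA):
        assert M[a | b] <= M[a] + M[b] + 1e-9

print("normality, duality, and subadditivity all hold")

Under Axiom 4, the measure of a rectangle Λ1 × Λ2 in a product space would likewise be the minimum of the two marginal measures.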


....
........
........
.....
................................
....
................... .... ...
...
................... .... .... ....
...
.. .. ... ... ...
.
.
...
.
.
.............. ... ... ... ..
...
..... ... ... ... ... ...
...
... .. ... ... .. ..
...
................ .... .... .... ..... ....
...
.
. . . . . . .
... .. ... .. .. .. ..
...
.... .. .. .. .. .. ..
...
................. .... .... ..... .... .... ....
.. .. . . .. .. .
...
.
.
... .. .. ... .. .. .. ..
...
................ .... .... .... ..... .... .... ....
...
..... .... .... .... ... ... .... .... ...
.
.
...
.
............... ... .. .. ... ... ... .. ..
...
. .
.
. . .
............ .. .. .. .. .. ... .. .. ..
...
........... .. .. .. .. .. .. .. .. .. ..
....................................................................................................................................................................................................
..
....
...

Probability

....
........
........
.........................
....
. . ...........
...
........ ............
. ......
...
................... ....
...
..... .. .
...
..... ... ... ..
...
........ .. .. ...
.... .. ... ... ..
...
.. .... .... .... .... ....
.
...
.
. . .. . . .
...
... ... ... ... ... ..
...
... ....... .. .. .. ...
.. ... .. ... ... ... ..
...
... ....... .... .... .... .... ....
.
...
.
... .... .... .... .... .... .... ...
...
....
.
...
.... ....... .... .... .... ..... ..... ....
.....
... ... ... .. ... .. .. ...
...
.....
.
.....
.
.
.
...
.
.
....... .... .... .... .... .... .... ....
......
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
......................... ................................................................................................................................
..
....
...

Uncertainty

For modeling indeterminacy, there exist two mathematical systems: one is probability theory, and the other is uncertainty theory. Probability is interpreted as frequency, while uncertainty is interpreted as personal belief degree.
