
Eur. Phys. J. Plus (2016) 131: 379
DOI 10.1140/epjp/i2016-16379-8

THE EUROPEAN PHYSICAL JOURNAL PLUS

Regular Article
Tsallis statistics in reliability analysis: Theory and methods


Fode Zhang1,2,a, Yimin Shi1,b, Hon Keung Tony Ng2,c, and Ruibing Wang1

1 Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
2 Department of Statistical Science, Southern Methodist University, Dallas, TX 75275-0332, USA

Received: 15 August 2016 / Revised: 5 October 2016
Published online: 31 October 2016 - (c) Società Italiana di Fisica / Springer-Verlag 2016
Abstract. Tsallis statistics, which is based on a non-additive entropy characterized by an index q, is a very useful tool in physics and statistical mechanics. This paper presents an application of Tsallis statistics in reliability analysis. We first show how the gamma and incomplete gamma functions can be q-generalized. Then, three commonly used statistical distributions in reliability analysis are introduced in Tsallis statistics, and the corresponding reliability characteristics, including the reliability function, hazard function, cumulative hazard function and mean time to failure, are investigated. In addition, we study statistical inference based on censored reliability data. Specifically, we investigate the point and interval estimation of the model parameters of the q-exponential distribution based on the maximum likelihood method. Simulated and real-life datasets are used to illustrate the methodologies discussed in this paper. Finally, some concluding remarks are provided.

1 Introduction

Consider the nonlinear differential equation

    \frac{dy(x)}{dx} = [y(x)]^q, \qquad q \in \mathbb{R},

with the initial value y(0) = 1. The solution of this equation is given by

    y(x) = [1 + (1-q)x]_{+}^{1/(1-q)},

where [a]_+ = max{0, a}. The function y(x) is the q-exponential (or deformed exponential) function given by

    \exp_q(x) = [1 + (1-q)x]_{+}^{1/(1-q)},

and its inverse is the q-logarithm function given by

    \ln_q(x) = \frac{x^{1-q} - 1}{1-q}.

The conventional exponential and logarithm functions are the limiting cases of the q-exponential and q-logarithm functions, respectively, when q → 1 [1]. The q-exponential and q-logarithm functions play an important role in the Tsallis entropy and nonextensive statistics. For more detailed discussions concerning the properties of the q-exponential and q-logarithm functions, one may refer to Yamano [2], Bercher and Vignat [3].
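As a quick numerical illustration (not part of the original analysis), the deformed pair above can be sketched in a few lines; the function names exp_q and ln_q are ours:

```python
import math

def exp_q(x, q):
    """q-exponential: [1 + (1 - q) x]_+^{1/(1-q)}; reduces to exp as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def ln_q(x, q):
    """q-logarithm: (x^{1-q} - 1)/(1 - q); inverse of exp_q for x > 0."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# ln_q inverts exp_q, and both reduce to the usual functions as q -> 1
print(ln_q(exp_q(0.7, 1.3), 1.3))            # ~0.7
print(exp_q(0.5, 1.0001) - math.exp(0.5))    # ~0
```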
Based on the q-logarithm function, Tsallis [1] generalized the Boltzmann-Gibbs extensive statistical mechanics by defining the entropy

    S_q(p_i) = -k \sum_i p_i^q \ln_q p_i,    (1)

where k is some conventional positive constant and p_i is the probability associated with the i-th microstate. The entropy in eq. (1) is called the Tsallis entropy, which becomes the Boltzmann-Gibbs entropy by letting q → 1,

    S_1(p_i) = -k \sum_i p_i \ln p_i.    (2)
a e-mail: lnsz-zfd@163.com
b e-mail: ymshi@nwpu.edu.cn
c e-mail: ngh@smu.edu


It has been shown that for arbitrarily correlated subsystems subject to generic values of q, the entropy in eq. (1) remains nonextensive. Since the pioneering work by Tsallis [1], nonextensive statistics has received increasing interest in statistical mechanics, information geometry, mathematics and statistics.
In statistical mechanics, Martinez et al. [4] discussed the q-generalized equipartition theorem and the virial theorem. The Tsallis statistics in the grand canonical ensemble was considered in a general form by Parvan [5]. Parvan [5] also proved that the Tsallis statistics in the grand canonical ensemble satisfies the requirements of equilibrium thermodynamics in the thermodynamic limit under certain conditions. Liu et al. [6] proposed the definitions of different multiwavelet packet entropies, including the Tsallis singular entropy, for transmission line fault recognition and classification. Furuichi [7] studied the relations among Tsallis-type entropies and generalized the chain rule for entropies derived by Daróczy [8]. Furuichi [7] also defined the Tsallis mutual entropy and Tsallis conditional mutual entropy and discussed their fundamental properties.
In information geometry, Amari [9, chapt. 4] showed that the geometry originating from the Tsallis q-entropy is equivalent to the α-geometry, and that a type of flat structure, called conformal flattening, is induced from the Tsallis q-entropy. Amari and Ohara [10] treated the q-exponential family by generalizing the exponential function to the q-family of power functions, and suggested that the maximizer of the q-escort distribution is a Bayesian maximum a posteriori probability estimator. Ohara [11] revealed a close relation between nonextensivity and curvature and investigated several properties of the minimization of the Tsallis relative entropy. Matsuzoe [12] summarized the dualistic Hessian structures of a deformed exponential family of statistical models, which is a generalization of the exponential family of distributions.
In mathematics and statistics, Loaiza and Quiceno [13] constructed a non-parametric statistical manifold for the q-exponential family. Nelson and Umarov [14] proposed a physical interpretation of Tsallis statistics as representing the nonlinear coupling or decoupling of statistical states. Jauregui and Tsallis [15] studied the q-generalization of the inverse Fourier transform and derived a method enabling the inversion of the q-Fourier transform. Huang et al. [16] generalized the Kullback-Leibler (KL) divergence, one of the important measures of the distance between two probability distributions, in the form of Tsallis statistics. Huang et al. [16] showed several important properties of the proposed q-generalized KL divergence, including pseudo-additivity, positivity and monotonicity.
In this paper, we aim to apply the q-generalized distributions in reliability data analysis, which is an important subject in both statistics and engineering. Nowadays, due to advances in technology and increasing consumer expectations, manufacturers need to ensure the reliability of their products in order to stay competitive in the market. Reliability analysis is used in different stages of the product development cycle, including the design and testing stages, to improve the quality and reliability of products. Statistical methods for life-testing experiments were well developed for reliability analysis in the past decades (see, for example, [17-21]). In our opinion, using Tsallis statistics in reliability analysis is an interesting problem for both statisticians and physicists, and it is a problem that has not received much attention in the literature. Kumar [22] discussed some results on a non-additive Tsallis entropy for k-record values from some continuous probability models, and studied the entropy measure of the n-th upper k-record value. Mathai and Provost [23] proposed and investigated several q-generalizations of the logistic distribution, the type-1 and type-2 beta distributions and other extensions. Following these interesting papers, we discuss the use of Tsallis statistics in reliability analysis in this paper.
This paper is organized as follows. In sect. 2, we first discuss the q-generalization of the gamma and incomplete gamma functions. Based on these q-generalized functions, several commonly used lifetime distributions in reliability analysis, including the exponential, Weibull and gamma distributions, are introduced in the Tsallis statistics. The reliability characteristics such as the reliability function, hazard function, cumulative hazard function and mean time to failure (MTTF) of those q-generalized distributions are also presented in sect. 2. Then, in sect. 3, we take the q-exponential distribution as an example to demonstrate the statistical inference based on progressively Type-II censored data. Point estimation and interval estimation procedures for the model parameters are discussed. In sect. 4, we illustrate the methodologies developed in sect. 3 by using simulated datasets and a real dataset. Some concluding remarks are provided in sect. 5.

2 Main results
Reliability data analysis, also known as survival analysis, time-to-event data analysis or event history analysis, is widely used in different areas including the social sciences, biomedical sciences, engineering sciences, etc. Reliability data measure the time at which a certain event, such as the death of a patient or the failure of an electronic component, occurs. The time-to-event is always viewed as a random variable (denoted by X) that follows a particular lifetime probability distribution. Some commonly used lifetime distributions include the exponential, gamma, lognormal and Weibull distributions. The probability distribution of reliability data is generally characterized by the probability density function (PDF) f(x), cumulative distribution function (CDF) F(x), survival function (or reliability

function) R(x), hazard function h(x) or cumulative hazard function H(x), defined as

    f(x) = \lim_{\Delta x \to 0} \frac{\Pr(x < X \le x + \Delta x)}{\Delta x},

    F(x) = \Pr(X \le x) = \int_0^x f(y)\,dy,

    R(x) = \Pr(X > x) = \int_x^\infty f(y)\,dy,

    h(x) = \lim_{\Delta x \to 0} \frac{\Pr(x < X \le x + \Delta x \mid X > x)}{\Delta x} = \frac{f(x)}{R(x)},

    H(x) = \int_0^x h(y)\,dy,
respectively. These functions are mathematically equivalent in the sense that if one of them is given, the others can be determined (see, for example, refs. [19, 20, 24, 25]). Note that the term f(x)Δx refers to the unconditional probability that the unit will fail in the interval (x, x + Δx], while the quantity h(x)Δx refers to the conditional probability that the unit will fail in the time interval (x, x + Δx], given that the unit has survived until time x. The reliability function is a probability, so that 0 ≤ R(x) ≤ 1, but h(x) is not a probability; indeed, h(x) is always greater than or equal to f(x). For example, consider a human being: the probability for the individual to die at an age between 99 and 100 years (i.e., f(x)Δx) is quite small, because only a small fraction of human beings survive that long. Provided that an individual has survived until the age of 99 years (corresponding to h(x)Δx), the probability for that individual to die at an age between 99 and 100 will be much larger than f(x)Δx [18].
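These relations are easy to verify numerically. The sketch below (our own illustration, with hypothetical helper names) recovers R, h and H from a PDF alone, for the simple exponential special case; the identity H(x) = -ln R(x) follows from h = f/R:

```python
import math

def f(x):                       # assumed PDF: standard exponential, rate 1
    return math.exp(-x)

def F(x, n=20000):              # F(x) = integral of f over [0, x], trapezoidal rule
    dt = x / n
    return dt * (0.5 * (f(0.0) + f(x)) + sum(f(i * dt) for i in range(1, n)))

x = 1.3
R = 1.0 - F(x)                  # reliability:       R(x) = 1 - F(x)
h = f(x) / R                    # hazard:            h(x) = f(x)/R(x)
H = -math.log(R)                # cumulative hazard: H(x) = -ln R(x)
print(round(h, 4), round(H, 4))  # exponential case: h(x) = 1, H(x) = x
```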
In this section, we discuss a generalization of probability distributions, called q-generalized distributions, in the Tsallis statistics. For α > 0, we first define the function C_q(α) as

    C_q(\alpha) =
    \begin{cases}
      \dfrac{1}{(1-q)\lambda}, & q \in (0, 1), \ \lambda > 0, \\[4pt]
      \infty, & q \in \left[1, 1 + \dfrac{1}{\alpha}\right).
    \end{cases}
The following results can be derived directly.

Lemma 1. Let x ∈ (0, C_q(α + 1)); we have

    \int_0^\infty x^\alpha \exp_q(-\lambda x)\,dx = \frac{\Gamma(\alpha+1)}{\lambda^{\alpha+1}\,\Delta_q(\alpha+1)},    (3)

    \int_0^x t^\alpha \exp_q(-\lambda t)\,dt = \frac{\Gamma(\alpha+1)}{\lambda^{\alpha+1}\,\Delta_q(\alpha+1)}\left[1 - \sum_{k=0}^{\alpha}\frac{(\lambda x)^k}{k!}\,\frac{\Delta_q(\alpha+1)}{\Delta_q(\alpha+1-k)}\left[\exp_q(-\lambda x)\right]^{\Delta_q(\alpha+1-k)/\Delta_q(\alpha-k)}\right],    (4)

where the function \Delta_q(\alpha) is defined as

    \Delta_q(\alpha) = (2-q)(3-2q)\cdots(\alpha+1-\alpha q), \quad \text{for } q \in \left(0, 1+\frac{1}{\alpha}\right), \qquad \Delta_q(0) = 1.

Proof. For α = 0, it is clear that

    \int_0^\infty \exp_q(-\lambda x)\,dx = \frac{1}{\lambda(2-q)} = \frac{\Gamma(1)}{\lambda\,\Delta_q(1)}.


When α > 0, using integration by parts, we have

    \int_0^\infty x^\alpha \exp_q(-\lambda x)\,dx
    = \int_0^\infty x^\alpha \left[1+(1-q)(-\lambda x)\right]^{\frac{1}{1-q}} dx
    = \frac{\alpha}{\lambda(2-q)} \int_0^\infty x^{\alpha-1}\left[1+(1-q)(-\lambda x)\right]^{\frac{2-q}{1-q}} dx
    = \frac{\alpha(\alpha-1)}{\lambda^2(2-q)(3-2q)} \int_0^\infty x^{\alpha-2}\left[1+(1-q)(-\lambda x)\right]^{\frac{3-2q}{1-q}} dx
    = \cdots
    = \frac{\Gamma(\alpha+1)}{\Gamma(\alpha-k+1)\,\lambda^k\,\Delta_q(k)} \int_0^\infty x^{\alpha-k}\left[1+(1-q)(-\lambda x)\right]^{\frac{k+1-kq}{1-q}} dx, \quad k \in [1, \alpha],
    = \cdots
    = \frac{\Gamma(\alpha+1)}{\Gamma(1)\,\lambda^\alpha\,\Delta_q(\alpha)} \int_0^\infty \left[1+(1-q)(-\lambda x)\right]^{\frac{\alpha+1-\alpha q}{1-q}} dx
    = \frac{\Gamma(\alpha+1)}{\lambda^{\alpha+1}\,\Delta_q(\alpha+1)}.

Then, eq. (4) can be obtained by repeating the process in a similar manner.
Since Δ_1(α) = 1 and Δ_q(α) > 0 for all α > 0, eqs. (3) and (4) are the q-generalized gamma and incomplete gamma functions, respectively. When q → 1 and λ = 1, we recover the gamma function Γ(α+1) = \int_0^\infty x^\alpha \exp(-x)\,dx and the incomplete gamma function

    \frac{1}{\Gamma(\alpha+1)}\int_0^x t^\alpha \exp(-t)\,dt = 1 - \exp(-x)\sum_{k=0}^{\alpha}\frac{x^k}{k!}.
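Eq. (3) can be spot-checked by quadrature. The sketch below (our own check, with hypothetical helper names exp_q and delta_q) uses an integer α and a value q < 1, for which the support is finite:

```python
import math

def exp_q(x, q):
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def delta_q(a, q):
    """Delta_q(a) = (2-q)(3-2q)...(a+1-aq), with Delta_q(0) = 1."""
    out = 1.0
    for j in range(1, a + 1):
        out *= (j + 1) - j * q
    return out

a, lam, q = 2, 1.5, 0.9
closed = math.factorial(a) / (lam ** (a + 1) * delta_q(a + 1, q))

# crude Riemann sum over the finite support [0, 1/((1-q)lam)) for q < 1
upper = 1.0 / ((1.0 - q) * lam)
n = 200000
dt = upper / n
numeric = sum((i * dt) ** a * exp_q(-lam * i * dt, q) * dt for i in range(1, n))
print(abs(numeric - closed) < 1e-4)  # True
```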
0
A considerable amount of research work has been done on q-generalized distributions. For instance, Nadarajah and Kotz [26] discussed two families of q-type distributions, the Burr-type XII and Burr-type III distributions. Bercher and Vignat [3] provided an independent rationale for q-exponential distributions and indicated that the q-exponential distribution is stable under a statistical normalization operation. Picoli Jr. et al. [27] introduced a q-Weibull distribution and applied it to model the frequency distributions of basketball baskets, brand-name drugs by retail sales, cyclone victims, and highway length. In the following sections, we study the applications of several q-generalized distributions in reliability analysis.

2.1 q-generalization of the exponential distribution

Suppose that a random variable X follows a q-exponential distribution (denoted by X ~ exp_q(λ)) with PDF and CDF

    f_q(x|\lambda) = (2-q)\lambda \exp_q(-\lambda x), \qquad 0 < q < 2, \ \lambda > 0, \ 0 < x < C_q(1),    (5)

and

    F_q(x|\lambda) = 1 - \left[\exp_q(-\lambda x)\right]^{2-q},    (6)

respectively [3, 26]. The PDF in eq. (5) is the solution of the maximum Tsallis entropy problem [1], i.e.,

    f_q(x|\lambda) = \arg\max_f \frac{1}{1-q}\left(\int f^q(x)\,dx - 1\right),

subject to suitable normalization and moment constraints.

Then, the reliability function of the random variable X that follows a q-exponential distribution is given by

    R_q(x|\lambda) = 1 - F_q(x|\lambda) = \left[\exp_q(-\lambda x)\right]^{2-q}.

Figures 1 and 2 give the log-log scale plots of the reliability function R_q(x|λ) with different parameter settings. From figs. 1 and 2, we can see that the larger the parameter λ, the quicker the failures will occur, while the smaller the parameter q, the quicker the failures will occur. Thus, the hazard function and cumulative hazard function of the

Fig. 1. Plots of q-exponential reliability with q = 1.05 (log-log scale).

Fig. 2. Plots of q-exponential reliability with λ = 0.5 (log-log scale).
random variable X following a q-exponential distribution are given by

    h_q(x|\lambda) = \frac{f_q(x|\lambda)}{R_q(x|\lambda)} = (2-q)\lambda\left[\exp_q(-\lambda x)\right]^{q-1} = \frac{(2-q)\lambda}{1+(q-1)\lambda x},

    H_q(x|\lambda) = \int_0^x h_q(t|\lambda)\,dt = \frac{2-q}{q-1}\ln\left(1+(q-1)\lambda x\right),

respectively. For q ∈ (0, 1), the hazard function h_q(x|λ) is an increasing function with respect to time, while for q ∈ (1, 2), the hazard h_q(x|λ) is a decreasing function in x under the condition 1 + (q-1)λx > 0 (this follows directly from the closed form of h_q above, and is consistent with the heavy-tailed behavior for q > 1 seen in fig. 2). When q → 1, the q-exponential distribution reduces to the conventional exponential distribution with a constant hazard rate λ.
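A sketch (our own illustration; function names are hypothetical) of these q-exponential quantities, checking the hazard monotonicity just described:

```python
def reliability(x, lam, q):
    """R_q(x|lam) = [exp_q(-lam x)]^(2-q) = [1-(1-q)lam x]_+^((2-q)/(1-q))."""
    base = 1.0 - (1.0 - q) * lam * x
    return max(base, 0.0) ** ((2.0 - q) / (1.0 - q))

def hazard(x, lam, q):
    """h_q(x|lam) = (2-q) lam / (1 + (q-1) lam x)."""
    return (2.0 - q) * lam / (1.0 + (q - 1.0) * lam * x)

lam = 0.5
print(hazard(0.5, lam, 0.8) < hazard(2.0, lam, 0.8))    # True: increasing for q < 1
print(hazard(0.5, lam, 1.2) > hazard(2.0, lam, 1.2))    # True: decreasing for q > 1
print(reliability(1.0, lam, 1.2) > reliability(3.0, lam, 1.2))  # True: R decreases in x
```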


The n-th moment of the random variable X that follows a q-exponential distribution can be expressed as

    E(X^n) = \int_0^\infty x^n f_q(x|\lambda)\,dx = (2-q)\lambda\int_0^\infty x^n \exp_q(-\lambda x)\,dx = \frac{\Gamma(n+1)\,\Delta_q(1)}{\lambda^n\,\Delta_q(n+1)},

for x ∈ (0, C_q(n + 1)). Hence, the MTTF of the unit is given by

    E(X) = \frac{1}{\lambda(3-2q)},

for x ∈ (0, C_q(2)).

In order to discuss the memoryless property of the q-exponential distribution, we obtain

    \exp_q(-\lambda(x+t)) = \exp_q(-\lambda x)\exp_q(-\lambda t)\left[1-(1-q)^2\lambda^2 xt\left[\exp_q(-\lambda x)\exp_q(-\lambda t)\right]^{q-1}\right]^{\frac{1}{1-q}},

    \int_x^{x+t} f_q(y|\lambda)\,dy = \left[\exp_q(-\lambda x)\right]^{2-q} - \left[\exp_q(-\lambda(x+t))\right]^{2-q},

and

    \int_x^\infty f_q(y|\lambda)\,dy = \left[\exp_q(-\lambda x)\right]^{2-q}.

Then, we have the conditional probability

    \Pr(X \le x+t \mid X > x) = \frac{\Pr(x < X \le x+t)}{\Pr(X > x)}
    = \frac{\left[\exp_q(-\lambda x)\right]^{2-q} - \left[\exp_q(-\lambda(x+t))\right]^{2-q}}{\left[\exp_q(-\lambda x)\right]^{2-q}}
    = 1 - \left[\exp_q(-\lambda t)\right]^{2-q}\left[1-(1-q)^2\lambda^2 xt\left[\exp_q(-\lambda x)\exp_q(-\lambda t)\right]^{q-1}\right]^{\frac{2-q}{1-q}}
    = 1 - \left[1 - \frac{(1-q)\lambda t}{1-(1-q)\lambda x}\right]^{\frac{2-q}{1-q}}.    (7)

Since the conditional probability Pr(X ≤ x + t | X > x) depends on x except when q = 1, the q-exponential distribution does not, in general, have the memoryless property. It is worth mentioning that, with the relationship

    \exp_{\frac{1}{2-q}}\left[-(2-q)\lambda(x+t)\right] = \left[\exp_q(-\lambda(x+t))\right]^{2-q},

the conditional probability presented in eq. (7) is the same as the risk functions presented in Ludescher et al. [28] (eqs. (6a) and (6b)).
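The dependence on x in eq. (7) can be seen directly by evaluating the conditional probability at two ages (our own numerical check; the function name is hypothetical):

```python
def cond_prob(x, t, lam, q):
    """Eq. (7): Pr(X <= x+t | X > x) for the q-exponential distribution."""
    r = 1.0 - (1.0 - q) * lam * t / (1.0 - (1.0 - q) * lam * x)
    return 1.0 - r ** ((2.0 - q) / (1.0 - q))

lam, t = 0.5, 1.0
p0 = cond_prob(0.0, t, lam, 1.5)   # fresh unit
p2 = cond_prob(2.0, t, lam, 1.5)   # unit that has already survived to x = 2
print(round(p0, 4), round(p2, 4))  # 0.2 0.1429: the probability depends on x
```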
Here, we can further consider a q-generalization of the two-parameter exponential distribution with location parameter μ and scale parameter β. The PDF and CDF of the q-generalized exponential distribution (denoted by X ~ exp_q(μ, β)) with a location parameter are given by

    f_q(x|\mu,\beta) = \frac{2-q}{\beta}\exp_q\left(-\frac{x-\mu}{\beta}\right), \qquad 0 < q < 2, \ \beta > 0, \ \mu > 0,    (8)

and

    F_q(x|\mu,\beta) = 1 - \left[\exp_q\left(-\frac{x-\mu}{\beta}\right)\right]^{2-q}, \qquad 0 < q < 2, \ \beta > 0, \ \mu > 0,    (9)

respectively, for μ < x < μ + β/(1-q) for q ∈ (0, 1) and μ < x < ∞ for q ∈ (1, 2). The hazard function and cumulative hazard rate function of the exp_q(μ, β) distribution are given by

    h_q(x|\mu,\beta) = \frac{2-q}{\beta}\left[1+(q-1)\frac{x-\mu}{\beta}\right]^{-1},

and

    H_q(x|\mu,\beta) = \frac{2-q}{q-1}\ln\left[1+(q-1)\frac{x-\mu}{\beta}\right],

respectively.

Fig. 3. Plots of q-Weibull PDF with λ = 0.5, q = 1.05.

Fig. 4. Plots of q-Weibull PDF with λ = 0.5, β = 1.5.

2.2 q-generalization of the Weibull distribution

The Weibull distribution proposed by Weibull [29] has long been used for reliability analysis problems. The Weibull distribution has been used successfully in many applications in different fields including geology, material sciences, physics, chemistry, economics and business, etc. (see, for example, Almalki and Nadarajah [30], Lai [31], Zhang et al. [32]).
If a random variable X follows a q-Weibull distribution with scale parameter λ and shape parameter β (denoted by X ~ Weibull_q(λ, β)), the PDF of X can be expressed as

    f_q(x|\lambda,\beta) = (2-q)\beta\lambda(\lambda x)^{\beta-1}\exp_q\left(-(\lambda x)^\beta\right), \qquad q \in (0,2), \ \lambda > 0, \ \beta > 0,    (10)

where 0 < x < (1-q)^{-1/\beta}/\lambda for q ∈ (0, 1) and x ∈ ℝ^+ for q ∈ (1, 2) [3, 27].
The peaks of the frequency of failure and the probability that a unit fails in any given time interval can be obtained from the PDF in eq. (10). Figures 3-5 present the PDF f_q(x|λ, β) with different parameter settings. Figure 3 shows a trend of high failure rate at the beginning of the experiment, with a decreasing failure rate recorded as time increases for β = 0.75 and 1 (see also fig. 6). As β increases from 1.5 to 4, the peak of the failure frequency concentrates around the time interval [1.5, 2.5]. From fig. 4, as the parameter q increases from 0.4 to 1.7, the peak of the failure frequency remains

Fig. 5. Plots of q-Weibull PDF with q = 1.05, β = 1.5.

Fig. 6. Plots of q-Weibull reliability with λ = 0.5, q = 1.05.

stable around time 1.5. From fig. 5, however, as λ increases from 0.15 to 1.1, the peak of the failure frequency shifts to the right from 0.8 to 4.
The CDF of the q-Weibull distribution can be expressed as

    F_q(x|\lambda,\beta) = \int_0^x (2-q)\beta\lambda(\lambda t)^{\beta-1}\exp_q\left(-(\lambda t)^\beta\right)dt
    = (2-q)\int_0^{(\lambda x)^\beta} \exp_q(-u)\,du
    = 1 - \left[\exp_q\left(-(\lambda x)^\beta\right)\right]^{2-q},

and the reliability function is

    R_q(x|\lambda,\beta) = \left[\exp_q\left(-(\lambda x)^\beta\right)\right]^{2-q}.

Figures 6-9 present the reliability function R_q(x|λ, β) with different parameter settings. When λ and q are fixed, fig. 6 indicates that the larger the value of β, the higher the reliability for x < 2 and the lower the reliability for

Fig. 7. Plots of q-Weibull reliability with λ = 0.5, β = 1.5.

Fig. 8. Plots of q-Weibull reliability with q = 1.05, β = 1.5.

x > 2. Figure 7 shows that as the parameter q increases, the reliability increases. In fig. 8, we observe that when the parameter λ increases, the reliability decreases.
The hazard rate function and cumulative hazard rate function of the Weibull_q(λ, β) distribution are given by

    h_q(x|\lambda,\beta) = (2-q)\beta\lambda(\lambda x)^{\beta-1}\left[1+(q-1)(\lambda x)^\beta\right]^{-1},

and

    H_q(x|\lambda,\beta) = \frac{2-q}{q-1}\ln\left[1+(q-1)(\lambda x)^\beta\right],

respectively. Figures 9-11 present the q-Weibull hazard function h_q(x|λ, β) and figs. 12-14 present the q-Weibull cumulative hazard function H_q(x|λ, β) for different parameter settings. In fig. 9, for β = 0.3, 0.8 and 1, a decreasing hazard function is observed, while for β ≥ 1.2, the hazard increases during the initial period, stays approximately constant for a certain time, after which it decreases. Figure 10 shows that when the parameter q increases from 0.98 to 1.5, the hazard becomes smaller for all x. Figure 11 shows that as the parameter λ increases from 0.2 to 1.5, the hazard becomes larger for all x. Similar behaviors can be observed from figs. 12-14.
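The shape changes described for figs. 9-11 can be reproduced directly from the hazard formula above (our own sketch; the function name is hypothetical):

```python
def qweibull_hazard(x, lam, beta, q):
    """h_q(x|lam, beta) = (2-q) beta lam (lam x)^(beta-1) / (1 + (q-1)(lam x)^beta)."""
    u = (lam * x) ** beta
    return (2.0 - q) * beta * lam * (lam * x) ** (beta - 1.0) / (1.0 + (q - 1.0) * u)

lam, q = 0.5, 1.05
# beta < 1: decreasing hazard (cf. fig. 9)
print(qweibull_hazard(0.5, lam, 0.8, q) > qweibull_hazard(5.0, lam, 0.8, q))   # True
# beta = 1.6: hazard rises, then eventually decays (cf. fig. 9)
h_early, h_mid, h_late = (qweibull_hazard(x, lam, 1.6, q) for x in (0.5, 2.0, 60.0))
print(h_early < h_mid and h_late < h_mid)                                      # True
```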

Fig. 9. Plots of q-Weibull hazard with λ = 0.5, q = 1.05.

Fig. 10. Plots of q-Weibull hazard with λ = 0.5, β = 1.5.

The n-th moment of the random variable X that follows a q-Weibull distribution is given by

    E(X^n) = \int_0^\infty x^n f_q(x|\lambda,\beta)\,dx = \frac{2-q}{\lambda^n}\int_0^\infty u^{n/\beta}\exp_q(-u)\,du = \frac{\Gamma(n/\beta+1)\,\Delta_q(1)}{\lambda^n\,\Delta_q(n/\beta+1)}, \quad \text{for } x \in (0, C_q(n/\beta+1)),

where u = (λx)^β.

Remark 1. The above results indicate that the functions C_q(α) and Δ_q(α) play an important role in the process of q-generalizing a probability distribution. The function C_q(α) provides the domain of the random variable X on which the improper integrals exist; note that C_q(α) becomes infinite as q → 1. The function Δ_q(α) is a bounded function with Δ_1(α) ≡ 1 for all α, which means that Δ_q(α) reduces to unity in the usual (q → 1) case.

Fig. 11. Plots of q-Weibull hazard with q = 1.05, β = 1.5.

Fig. 12. Plots of q-Weibull cumulative hazard with λ = 0.5, q = 1.05.

2.3 q-generalization of the gamma distribution

The gamma distribution has been used extensively in modeling data from different fields including reliability analysis, hydrology and engineering. In this subsection, we consider the q-generalization of the gamma distribution with scale parameter λ and shape parameter α. From eqs. (3) and (4), the gamma distribution can be q-generalized to a statistical distribution with PDF and CDF

    f_q(x|\alpha,\lambda) = \frac{\lambda^\alpha\,\Delta_q(\alpha)}{\Gamma(\alpha)}\,x^{\alpha-1}\exp_q(-\lambda x),

and

    F_q(x|\alpha,\lambda) = 1 - \sum_{k=0}^{\alpha-1}\frac{(\lambda x)^k}{k!}\,\frac{\Delta_q(\alpha)}{\Delta_q(\alpha-k)}\left[\exp_q(-\lambda x)\right]^{\Delta_q(\alpha-k)/\Delta_q(\alpha-k-1)},

Fig. 13. Plots of q-Weibull cumulative hazard with λ = 0.5, β = 1.5.

Fig. 14. Plots of q-Weibull cumulative hazard with q = 1.05, β = 1.5.

respectively, with α > 0, λ > 0, q ∈ (0, 2) and x ∈ (0, C_q(α)). The reliability function of the q-generalized gamma distribution is given by

    R_q(x|\alpha,\lambda) = \sum_{k=0}^{\alpha-1}\frac{(\lambda x)^k}{k!}\,\frac{\Delta_q(\alpha)}{\Delta_q(\alpha-k)}\left[\exp_q(-\lambda x)\right]^{\Delta_q(\alpha-k)/\Delta_q(\alpha-k-1)},

and the hazard rate function can be expressed as

    h_q(x|\alpha,\lambda) = \left\{\frac{\Gamma(\alpha)}{\lambda^\alpha}\sum_{k=0}^{\alpha-1}\frac{\lambda^k x^{k-\alpha+1}}{k!\,\Delta_q(\alpha-k)}\left[\exp_q(-\lambda x)\right]^{\Delta_q(\alpha-k)/\Delta_q(\alpha-k-1)-1}\right\}^{-1}.

The n-th moment of the random variable X that follows the q-generalized gamma distribution is

    E(X^n) = \int_0^\infty x^n f_q(x|\alpha,\lambda)\,dx = \frac{\lambda^\alpha\,\Delta_q(\alpha)}{\Gamma(\alpha)}\int_0^\infty x^{n+\alpha-1}\exp_q(-\lambda x)\,dx = \frac{\Gamma(n+\alpha)\,\Delta_q(\alpha)}{\Gamma(\alpha)\,\lambda^n\,\Delta_q(n+\alpha)}, \quad \text{for } x \in (0, C_q(n+\alpha)).


Remark 2. Note that \lim_{q\to 1}\Delta_q(\alpha) = 1 and \lim_{q\to 1}\exp_q(-\lambda x) = \exp(-\lambda x). Hence, we have

    f(x|\alpha,\lambda) = \lim_{q\to 1} f_q(x|\alpha,\lambda) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}\exp(-\lambda x), \qquad \alpha > 0, \ \lambda > 0, \ x > 0,

    F(x|\alpha,\lambda) = \lim_{q\to 1} F_q(x|\alpha,\lambda) = 1 - \exp(-\lambda x)\sum_{k=0}^{\alpha-1}\frac{(\lambda x)^k}{k!},

which are the PDF and CDF of the gamma distribution. In other words, the gamma distribution is the limiting special case of the q-generalized gamma distribution as q → 1.

3 Statistical inference with censored data

In reliability and survival analysis, there are many situations in which the experimental units fail for reasons unrelated to the normal failure mechanism. For example, in a life-testing experiment on lamps, one of the lamps might be accidentally broken after the start of the experiment but before all the lamps have burned out [33]. The removal of experimental units may occur unintentionally, or it can be intentional and pre-planned due to time or budget constraints. Some commonly used censoring schemes are the Type-I right censoring scheme and the Type-II right censoring scheme. In the Type-I censoring scheme, also known as time censoring, the experiment is terminated at a prefixed time. Therefore, the experimental time is prefixed, but the number of observed failures is a random variable when the Type-I censoring scheme is employed. In contrast, the Type-II right censoring scheme, also known as item censoring, terminates the experiment when a prefixed number of failures have been observed.
The Type-I and Type-II censored life-testing experiments described here can be extended to situations wherein censoring occurs in multiple stages. Data arising from such life-tests are referred to as progressively censored data. A progressive Type-II censored life-testing experiment is carried out in the following manner. Consider n units placed on a life-testing experiment; it is planned that only m complete failures will be observed and the remaining n - m lifetimes are censored progressively. At the time of the i-th failure (denoted by X^R_{i:m:n}), R_i of the surviving units are removed randomly from the life-test. The experiment is terminated when the m-th observed failure occurs, and the observed failure times are (X^R_{i:m:n}, i = 1, 2, ..., m), where R = (R_1, R_2, ..., R_m). The set of progressive censoring schemes with effective sample size m and total sample size n is denoted by

    PC(m, n) = \left\{ R = (R_1, \ldots, R_m) \in \mathbb{N}_0^m \ \Big|\ \sum_{i=1}^m R_i + m = n \right\},

where \mathbb{N}_0 is the set of nonnegative integers. For comprehensive reviews and recent developments on progressive censoring, one may refer to Balakrishnan and Aggarwala [17], Balakrishnan and Cramer [34], Chan et al. [35], Park et al. [36], Zhang et al. [32], Zhang and Shi [37, 38].
Given a censoring scheme R ∈ PC(m, n), let

    x^R_{m:n} = \left(x^R_{1:m:n}, \ldots, x^R_{m:m:n}\right)

be the set of observed progressively Type-II censored order statistics under the censoring scheme R. Consider that the lifetimes of the experimental units are independent and identically distributed with a q-exponential distribution exp_q(λ); then the likelihood function based on the data x^R_{m:n} is given by

    L(\lambda, q \,|\, x^R_{m:n}) = c(R)\prod_{i=1}^m f_q(x^R_{i:m:n}|\lambda)\left[R_q(x^R_{i:m:n}|\lambda)\right]^{R_i}
    = c(R)\prod_{i=1}^m (2-q)\lambda\exp_q(-\lambda x^R_{i:m:n})\left[\exp_q(-\lambda x^R_{i:m:n})\right]^{R_i(2-q)}
    = c(R)\,\lambda^m (2-q)^m \prod_{i=1}^m \left[\exp_q(-\lambda x^R_{i:m:n})\right]^{R_i(2-q)+1},

where c(R) is a normalizing constant independent of the parameters λ and q.


3.1 Maximum likelihood estimation

To estimate the unknown model parameters, we consider the method of maximum likelihood. Since \ln\exp_q(-\lambda x) = \ln[1-(1-q)\lambda x]/(1-q), the log-likelihood function can be expressed as

    l(\lambda, q \,|\, x^R_{m:n}) = \ln L(\lambda, q \,|\, x^R_{m:n})
    = \ln(c(R)) + m\ln(\lambda) + m\ln(2-q) + \frac{1}{1-q}\sum_{i=1}^m \left[R_i(2-q)+1\right]\ln\left[1-(1-q)\lambda x^R_{i:m:n}\right].    (11)

Taking the partial derivatives of l(λ, q | x^R_{m:n}) with respect to λ and q and setting them to zero, we have the normal equations

    \frac{\partial l}{\partial\lambda} = \frac{m}{\lambda} - \sum_{i=1}^m \frac{\left[R_i(2-q)+1\right]x^R_{i:m:n}}{1-(1-q)\lambda x^R_{i:m:n}} = 0,    (12)

    \frac{\partial l}{\partial q} = \frac{m}{q-2} + \frac{1}{(1-q)^2}\sum_{i=1}^m \left[R_i(2-q)+1\right]\ln\left[1-(1-q)\lambda x^R_{i:m:n}\right]
    \quad + \frac{1}{1-q}\sum_{i=1}^m \left[\frac{\left[R_i(2-q)+1\right]\lambda x^R_{i:m:n}}{1-(1-q)\lambda x^R_{i:m:n}} - R_i\ln\left[1-(1-q)\lambda x^R_{i:m:n}\right]\right] = 0.    (13)

The maximum likelihood estimators (MLEs) of the parameters λ and q, denoted by λ̂ and q̂, respectively, can be obtained by solving the normal equations (12) and (13) simultaneously. Since these equations cannot be solved analytically, numerical methods such as Newton-Raphson or some other iterative procedure must be employed. In this paper, we utilize the function multiroot in the R-package rootSolve [39] to solve the equations numerically.
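For readers without R, an equivalent sketch is to maximize the log-likelihood (11) directly rather than solve (12)-(13); below we do this on a coarse grid for a simulated complete sample (all R_i = 0). This is our own illustration, not the paper's procedure: the grid ranges, sample size and helper names are assumptions.

```python
import math, random

random.seed(7)
lam_t, q_t = 1.5, 0.5                       # true parameter values

def rqexp(lam, q):                          # draw from exp_q(lam) by inverting the CDF (6)
    u = random.random()
    return (1.0 - (1.0 - u) ** ((1.0 - q) / (2.0 - q))) / ((1.0 - q) * lam)

data = [rqexp(lam_t, q_t) for _ in range(1000)]

def loglik(lam, q):                         # eq. (11) with all R_i = 0 and c(R) = 1
    s = 0.0
    for xi in data:
        b = 1.0 - (1.0 - q) * lam * xi
        if b <= 0.0:                        # observation outside the support: impossible
            return -float("inf")
        s += math.log(b)
    n = len(data)
    return n * math.log(lam) + n * math.log(2.0 - q) + s / (1.0 - q)

best = max(((loglik(l, qq), l, qq)
            for l in (1.3 + 0.02 * i for i in range(21))
            for qq in (0.3 + 0.02 * j for j in range(21))),
           key=lambda t: t[0])
print(round(best[1], 2), round(best[2], 2))   # grid MLE, close to (1.5, 0.5)
```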
In addition to the MLEs, we also consider approximate maximum likelihood estimators (AMLEs) as an alternative method to estimate the parameters. Using the results from Tsallis [1], we have

    \exp_q(-\lambda x^R_{i:m:n}) = \exp(-\lambda x^R_{i:m:n})\left[1 - \tfrac{1}{2}(1-q)(\lambda x^R_{i:m:n})^2 + o(1-q)\right] \quad \text{for } q \to 1,

and

    \ln\exp_q(-\lambda x^R_{i:m:n}) = -\lambda x^R_{i:m:n} - \tfrac{1}{2}(1-q)(\lambda x^R_{i:m:n})^2 + o(1-q) \quad \text{for } q \to 1.

We can approximate the log-likelihood function in eq. (11) asymptotically by

    l_A(\lambda, q \,|\, x^R_{m:n}) = \ln(c(R)) + m\ln(\lambda) + m\ln(2-q)
    \quad - \sum_{i=1}^m \left[R_i(2-q)+1\right]\left[\lambda x^R_{i:m:n} + \tfrac{1}{2}(1-q)(\lambda x^R_{i:m:n})^2\right] + o(1-q).    (14)

Taking the partial derivatives of l_A(λ, q | x^R_{m:n}) with respect to λ and q and setting them to zero, we obtain the two nonlinear equations

    (1-q)\sum_{i=1}^m \left[R_i(2-q)+1\right]\left(x^R_{i:m:n}\right)^2\lambda^2 + \sum_{i=1}^m \left[R_i(2-q)+1\right]x^R_{i:m:n}\,\lambda - m = 0,    (15)

    \frac{1}{2}(2-q)\sum_{i=1}^m \left[R_i(3-2q)+1\right]\left(x^R_{i:m:n}\right)^2\lambda^2 + (2-q)\sum_{i=1}^m R_i x^R_{i:m:n}\,\lambda - m = 0.    (16)

Then, the AMLEs, denoted by λ̃ and q̃, can be obtained by solving eqs. (15) and (16) simultaneously. The advantage of the AMLEs over the MLEs is that the parameter λ in the solution of eqs. (15) and (16) can be written in terms of q; hence we only need to deal with one equation in one unknown to obtain the AMLEs, instead of two equations in two unknowns to obtain the MLEs. Moreover, the AMLEs can serve as initial values for the iterative procedure used to obtain the MLEs.
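For a complete sample (all R_i = 0), this reduction can even be pushed to closed form: eq. (16) gives λ² = 2m/((2-q)S₂) with S₂ = Σx_i², and eliminating λ between (15) and (16) leaves q² + cq - 2c = 0 with c = 2S₁²/(mS₂), S₁ = Σx_i. This specific algebra is ours (the paper only states that λ can be written in terms of q), and the resulting values are rough starting points rather than final estimates:

```python
import math, random

random.seed(1)
lam_t, q_t = 1.0, 0.95

def rqexp(lam, q):                      # draw from exp_q(lam) by inverting the CDF (6)
    u = random.random()
    return (1.0 - (1.0 - u) ** ((1.0 - q) / (2.0 - q))) / ((1.0 - q) * lam)

x = [rqexp(lam_t, q_t) for _ in range(1000)]
m, S1, S2 = len(x), sum(x), sum(xi * xi for xi in x)

c = 2.0 * S1 * S1 / (m * S2)
q_amle = (-c + math.sqrt(c * c + 8.0 * c)) / 2.0   # positive root of q^2 + c q - 2c = 0
lam_amle = m * q_amle / ((2.0 - q_amle) * S1)      # lambda in terms of q, from (15)-(16)
print(round(lam_amle, 3), round(q_amle, 3))        # rough starting values near (1, 1)
```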
3.2 Confidence intervals

In this subsection, we discuss two methods, the normal approximation method and the bootstrap method, for constructing confidence intervals for the parameters λ and q. Let (θ_1, θ_2) = (λ, q); then the (i, j)-th element of the Fisher information matrix g(λ, q) = (g_{ij}(λ, q)) is given by

    g_{ij}(\lambda, q) = -E\left[\frac{\partial^2 l}{\partial\theta_i\,\partial\theta_j}\right], \qquad i, j = 1, 2.

We further denote the inverse of the Fisher information matrix by g^{-1}(λ, q) = (g^{ij}(λ, q)). Based on the theory of maximum likelihood estimation, the MLEs are asymptotically normally distributed. The 100(1-α)% confidence intervals based on the normal approximation (Norm-CI) for the parameters λ and q can be expressed as

    \left(\hat\lambda - z_{\alpha/2}\sqrt{g^{11}(\hat\lambda,\hat q)},\ \hat\lambda + z_{\alpha/2}\sqrt{g^{11}(\hat\lambda,\hat q)}\right),

and

    \left(\hat q - z_{\alpha/2}\sqrt{g^{22}(\hat\lambda,\hat q)},\ \hat q + z_{\alpha/2}\sqrt{g^{22}(\hat\lambda,\hat q)}\right),

respectively, where z_{\alpha/2} is the upper (α/2)-th percentile of the standard normal distribution.
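Computing the expected Fisher information requires expectations of the derivatives of (11); a common shortcut, sketched below for the one-parameter exponential limit (q → 1) for brevity, replaces it by the observed information, i.e. the numerical second derivative of the log-likelihood at the MLE. The sample, names and step size are our assumptions:

```python
import math, random

random.seed(3)
data = [random.expovariate(1.5) for _ in range(500)]   # true rate lam = 1.5
n, S = len(data), sum(data)
lam_hat = n / S                                        # exponential MLE

def loglik(lam):
    return n * math.log(lam) - lam * S

eps = 1e-4                                             # central second difference
hess = (loglik(lam_hat + eps) - 2.0 * loglik(lam_hat) + loglik(lam_hat - eps)) / eps**2
se = 1.0 / math.sqrt(-hess)                            # analogue of sqrt(g^{11})
z = 1.959964                                           # upper 2.5% normal percentile
print(round(lam_hat - z * se, 3), round(lam_hat + z * se, 3))   # 95% Norm-CI for lam
```

For the exponential case the observed information is -n/λ̂², so the standard error reduces to λ̂/√n, which the numerical Hessian reproduces.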
Another method for constructing confidence intervals for the parameters is the percentile bootstrap method [40]. The bootstrap method has been used extensively in the analysis of reliability data (see, for example, Kundu and Joarder [41], Meeker and Escobar [21]). The algorithm to construct confidence intervals for λ and q based on the bootstrap method can be described as follows:

1) Based on the observed data x^R_{m:n} with censoring scheme R, compute the MLEs λ̂ and q̂ (or the AMLEs λ̃ and q̃).
2) Generate a bootstrap progressively Type-II censored sample, denoted as x^{R(b)}_{m:n}, from the exp_{q̂}(λ̂) (or exp_{q̃}(λ̃)) distribution with censoring scheme R.
3) Based on the bootstrap sample x^{R(b)}_{m:n}, compute the MLEs λ̂^{(b)} and q̂^{(b)} (or the AMLEs λ̃^{(b)} and q̃^{(b)}).
4) Repeat steps 2 and 3 B times to obtain the sequences of bootstrap MLEs

    \hat\lambda^{(1)}, \hat\lambda^{(2)}, \ldots, \hat\lambda^{(B)} \quad \text{and} \quad \hat q^{(1)}, \hat q^{(2)}, \ldots, \hat q^{(B)},

and the sequences of bootstrap AMLEs

    \tilde\lambda^{(1)}, \tilde\lambda^{(2)}, \ldots, \tilde\lambda^{(B)} \quad \text{and} \quad \tilde q^{(1)}, \tilde q^{(2)}, \ldots, \tilde q^{(B)}.

5) Order the bootstrap estimators in ascending order to obtain the ordered sequences of bootstrap MLEs

    \hat\lambda^{[1]} < \hat\lambda^{[2]} < \ldots < \hat\lambda^{[B]} \quad \text{and} \quad \hat q^{[1]} < \hat q^{[2]} < \ldots < \hat q^{[B]},

and the ordered sequences of bootstrap AMLEs

    \tilde\lambda^{[1]} < \tilde\lambda^{[2]} < \ldots < \tilde\lambda^{[B]} \quad \text{and} \quad \tilde q^{[1]} < \tilde q^{[2]} < \ldots < \tilde q^{[B]}.

Then, the 100(1-α)% bootstrap confidence intervals (Boot-CI) for λ and q based on the MLEs can be obtained as, respectively,

    \left(\hat\lambda^{[\lfloor B\alpha/2\rfloor]},\ \hat\lambda^{[\lfloor B(1-\alpha/2)\rfloor]}\right) \quad \text{and} \quad \left(\hat q^{[\lfloor B\alpha/2\rfloor]},\ \hat q^{[\lfloor B(1-\alpha/2)\rfloor]}\right),

where ⌊a⌋ is the largest integer less than or equal to a. The 100(1-α)% bootstrap confidence intervals for λ and q based on the AMLEs can be obtained as, respectively,

    \left(\tilde\lambda^{[\lfloor B\alpha/2\rfloor]},\ \tilde\lambda^{[\lfloor B(1-\alpha/2)\rfloor]}\right) \quad \text{and} \quad \left(\tilde q^{[\lfloor B\alpha/2\rfloor]},\ \tilde q^{[\lfloor B(1-\alpha/2)\rfloor]}\right).
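The percentile mechanics of steps 4-5 can be sketched for a generic estimator. Note the sketch below is a simplified nonparametric variant (it resamples the data rather than simulating from the fitted censored model, and uses the exponential rate 1/mean as a stand-in estimator); all names are ours:

```python
import random

random.seed(11)
data = [random.expovariate(2.0) for _ in range(200)]

def rate(sample):                       # stand-in estimator: exponential MLE 1/mean
    return len(sample) / sum(sample)

B, alpha = 1000, 0.05
boots = sorted(rate([random.choice(data) for _ in data]) for _ in range(B))
lo = boots[int(B * alpha / 2)]          # lower percentile order statistic
hi = boots[B - int(B * alpha / 2) - 1]  # matching upper order statistic
print(lo < rate(data) < hi)             # point estimate lies inside the Boot-CI
```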

4 Illustrative examples
In this section, several simulated datasets under progressive censoring and a real dataset are used to illustrate the methodologies developed in sect. 3.



Table 1. Censoring schemes considered in the simulation study.

CS    (m, n)      Scheme
R1    (20, 20)    (0, ..., 0)
R2    (16, 20)    (1, 0, 1, 0, ..., 0, 2)
R3    (12, 20)    (2, 0, 1, 2, 0, ..., 0, 1, 0, 2)
R4    (30, 30)    (0, ..., 0)
R5    (25, 30)    (1, 1, 0, ..., 0, 1, 0, 2)
R6    (20, 30)    (2, 1, 2, 0, 2, 0, ..., 0, 1, 0, 2)

Table 2. Parameter estimates of simulated datasets with qT = 0.5 and different values of ηT.

CS | ηT  | η̂ (MLE) | Norm-CI          | Boot-CI          | η̃ (AMLE) | Norm-CI          | Boot-CI
R1 | 1.2 | 1.2043  | (1.1851, 1.2092) | (1.1846, 1.2058) | 1.1979   | (1.1897, 1.2023) | (1.1965, 1.2048)
R1 | 1.5 | 1.4988  | (1.4953, 1.5074) | (1.4955, 1.5076) | 1.5025   | (1.4963, 1.5021) | (1.4949, 1.5039)
R2 | 1.5 | 1.5094  | (1.4892, 1.5144) | (1.4921, 1.5079) | 1.4939   | (1.4834, 1.5091) | (1.4936, 1.5024)
R2 | 1.8 | 1.8116  | (1.7908, 1.8063) | (1.7915, 1.8176) | 1.8041   | (1.7886, 1.8056) | (1.7938, 1.8099)
R3 | 1.8 | 1.7936  | (1.7809, 1.8136) | (1.7869, 1.8273) | 1.7913   | (1.7629, 1.8149) | (1.7859, 1.8127)
R3 | 2.0 | 1.9809  | (1.9524, 2.1637) | (1.9674, 2.0461) | 2.0434   | (1.9137, 2.1236) | (1.9529, 2.0536)
R4 | 1.2 | 1.2026  | (1.1767, 1.2091) | (1.1969, 1.2024) | 1.1987   | (1.1908, 1.2013) | (1.1982, 1.2027)
R4 | 1.5 | 1.5006  | (1.4910, 1.5061) | (1.4964, 1.5033) | 1.5007   | (1.4976, 1.5021) | (1.4954, 1.5022)
R5 | 1.5 | 1.5029  | (1.4714, 1.5069) | (1.4898, 1.5136) | 1.5022   | (1.4947, 1.5027) | (1.4839, 1.5076)
R5 | 1.8 | 1.7905  | (1.7934, 1.8024) | (1.7926, 1.8086) | 1.8052   | (1.7651, 1.8113) | (1.7934, 1.8057)
R6 | 1.8 | 1.8107  | (1.7911, 1.8195) | (1.7815, 1.8079) | 1.8035   | (1.7938, 1.8124) | (1.7896, 1.8193)
R6 | 2.0 | 2.0138  | (1.9105, 2.1101) | (1.9417, 2.1245) | 1.9835   | (1.9334, 2.0758) | (1.9283, 2.0527)

4.1 Simulated datasets

Datasets with different censoring schemes from a q-exponential distribution with different parameter settings are simulated, and the statistical inference procedures developed in sect. 3 are applied to these datasets. The censoring schemes used in this study are provided in table 1. Note that the datasets obtained under schemes R1 and R4 are considered as complete data. The simulation algorithm proposed by Balakrishnan and Sandhu [42] is used to generate the progressively Type-II censored samples from the q-exponential distribution:

1) Simulate m independent random variables W_1, ..., W_m from the uniform distribution Uniform(0, 1).
2) Compute z_i = (i + Σ_{j=m−i+1}^{m} R_j)^{−1} and V_i = W_i^{z_i} for i = 1, ..., m.
3) Compute U_{i:m:n} = 1 − V_m V_{m−1} ··· V_{m−i+1}; we have the data U = (U_{1:m:n}, ..., U_{m:m:n}).
4) Using the inverse transform method, compute X_{i:m:n} = −(1/η) ln_q[(1 − U_{i:m:n})^{1/(2−q)}] for i = 1, ..., m.

The simulated data X = (X_{1:m:n}, ..., X_{m:m:n}) are the required progressively Type-II censored sample from the q-exponential distribution exp_q(η) with censoring scheme R = (R_1, ..., R_m). For each simulated dataset, the MLEs and AMLEs of the parameters η and q are computed as described in sect. 3.1, and their corresponding 95% confidence intervals are computed using the normal approximation method and the bootstrap method presented in sect. 3.2. For the bootstrap confidence intervals, B = 5000 bootstrap samples are used.
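The four generation steps above can be sketched as follows. This is a minimal illustration under stated assumptions: the rate parameter is written here as `eta`, and censoring scheme R2 from table 1 is used as an example.

```python
import numpy as np

def lnq(x, q):
    """q-logarithm; reduces to ln(x) as q -> 1."""
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def prog_censored_qexp(eta, q, R, rng):
    """Balakrishnan-Sandhu [42] generation of a progressively Type-II
    censored sample of size m = len(R) from the q-exponential model."""
    R = np.asarray(R)
    m = R.size
    W = rng.uniform(size=m)                               # step 1
    z = 1.0 / (np.arange(1, m + 1) + np.cumsum(R[::-1]))  # step 2: z_i
    V = W**z                                              # step 2: V_i = W_i^{z_i}
    U = 1.0 - np.cumprod(V[::-1])                         # step 3: U_{i:m:n}
    return -lnq((1.0 - U)**(1.0 / (2.0 - q)), q) / eta    # step 4: invert survival fn

# censoring scheme R2 from table 1: (m, n) = (16, 20)
rng = np.random.default_rng(2016)
R2 = [1, 0, 1] + [0] * 12 + [2]
x = prog_censored_qexp(eta=1.5, q=0.5, R=R2, rng=rng)
```

By construction the returned failure times are ordered, positive, and (for q < 1) bounded above by 1/((1 − q)η).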
Tables 2–7 present the point and interval estimates of the parameters η or q based on the simulated data under different censoring schemes and different true values of the parameters (denoted as ηT and qT). Tables 2–4 report the estimation results for fixed values of qT, and tables 5–7 report the estimation results for fixed values of ηT.
For point estimation, from tables 2–7, we observe that the AMLEs are very close to the MLEs in most cases. As expected, the estimates become more accurate (closer to the true values of the parameters) when the effective sample size m or the total sample size n increases.
For interval estimation, from tables 2–7, the confidence intervals obtained based on the MLEs are quite close to those obtained based on the AMLEs. We also observe that the confidence intervals become narrower when the effective sample size m or the total sample size n increases. The confidence intervals based on the normal approximation (Norm-CI) are wider than those obtained by the bootstrap method (Boot-CI) in most cases. However, for a fixed censoring scheme and parameter setting, the computational effort required for Boot-CI is larger than that of Norm-CI.


Table 3. Parameter estimates of simulated datasets with qT = 0.9 and different values of ηT.

CS | ηT  | η̂ (MLE) | Norm-CI          | Boot-CI          | η̃ (AMLE) | Norm-CI          | Boot-CI
R1 | 1.2 | 1.2038  | (1.1859, 1.2125) | (1.1870, 1.2158) | 1.1986   | (1.1891, 1.2153) | (1.1901, 1.2084)
R1 | 1.5 | 1.4892  | (1.4778, 1.5198) | (1.4808, 1.5171) | 1.5086   | (1.4742, 1.5853) | (1.4815, 1.5083)
R2 | 1.5 | 1.5109  | (1.4644, 1.5176) | (1.4754, 1.5197) | 1.5100   | (1.4813, 1.5883) | (1.4822, 1.5260)
R2 | 1.8 | 1.8094  | (1.7727, 1.8266) | (1.7809, 1.8204) | 1.7895   | (1.7649, 1.8239) | (1.7768, 1.8167)
R3 | 1.8 | 1.8109  | (1.7675, 1.8344) | (1.7681, 1.8330) | 1.8135   | (1.7491, 1.8644) | (1.7681, 1.8230)
R3 | 2.5 | 2.5152  | (2.4431, 2.5492) | (2.4615, 2.5316) | 2.5104   | (2.4324, 2.5641) | (2.4545, 2.5415)
R4 | 1.2 | 1.2017  | (1.1912, 1.2119) | (1.1942, 1.2167) | 1.2016   | (1.1829, 1.2136) | (1.1802, 1.2183)
R4 | 1.5 | 1.4690  | (1.4610, 1.5832) | (1.4580, 1.5730) | 1.5071   | (1.4123, 1.5261) | (1.4311, 1.5279)
R5 | 1.5 | 1.5071  | (1.4613, 1.5708) | (1.4288, 1.5718) | 1.5068   | (1.4526, 1.6133) | (1.4209, 1.6149)
R5 | 1.8 | 1.8054  | (1.7618, 1.9368) | (1.7770, 1.8416) | 1.8116   | (1.7526, 1.9133) | (1.7209, 1.8149)
R6 | 1.8 | 1.8103  | (1.7223, 1.8558) | (1.7411, 1.8641) | 1.7984   | (1.7499, 1.9011) | (1.7056, 1.9021)
R6 | 2.5 | 2.4957  | (2.4361, 2.6147) | (2.4166, 2.5819) | 2.5052   | (2.4258, 2.5763) | (2.4438, 2.5655)

Table 4. Parameter estimates of simulated datasets with qT = 1.2 and different values of ηT.

CS | ηT  | η̂ (MLE) | Norm-CI          | Boot-CI          | η̃ (AMLE) | Norm-CI          | Boot-CI
R1 | 1.5 | 1.4921  | (1.4952, 1.5191) | (1.4941, 1.5243) | 1.5036   | (1.4504, 1.5232) | (1.4724, 1.5342)
R1 | 2.0 | 1.9818  | (1.9763, 2.0434) | (1.9778, 2.0353) | 2.0101   | (1.9806, 2.0128) | (1.9701, 2.0117)
R2 | 2.0 | 2.0034  | (1.9788, 2.1809) | (1.9679, 2.1145) | 2.2027   | (1.9752, 2.1351) | (1.9753, 2.1078)
R2 | 2.5 | 2.5089  | (2.4651, 2.5236) | (2.4527, 2.5157) | 2.5014   | (2.4803, 2.5213) | (2.4809, 2.5776)
R3 | 2.5 | 2.5030  | (2.4807, 2.5337) | (2.4727, 2.5965) | 2.5028   | (2.4694, 2.5578) | (2.4753, 2.5126)
R3 | 3.0 | 3.0045  | (2.9712, 3.1551) | (2.9803, 3.1680) | 3.0036   | (2.9582, 3.0255) | (2.9629, 3.0354)
R4 | 1.5 | 1.5016  | (1.4896, 1.5132) | (1.5074, 1.6466) | 1.5007   | (1.4702, 1.6232) | (1.4675, 1.6092)
R4 | 2.0 | 2.0028  | (1.8809, 2.0585) | (1.9641, 2.0413) | 2.0096   | (1.9728, 2.0836) | (1.9703, 2.1123)
R5 | 2.0 | 2.0103  | (1.9741, 2.0271) | (1.9769, 2.0536) | 1.9982   | (1.9672, 2.1157) | (1.9532, 2.1106)
R5 | 2.5 | 2.5098  | (2.4538, 2.5697) | (2.4576, 2.5335) | 2.5034   | (2.4740, 2.5237) | (2.4464, 2.5116)
R6 | 2.5 | 2.5018  | (2.4434, 2.5887) | (2.4766, 2.5592) | 2.5014   | (2.4835, 2.5251) | (2.4683, 2.5160)
R6 | 3.0 | 3.0064  | (2.9415, 3.1803) | (2.9643, 3.0161) | 3.0028   | (2.9844, 3.0559) | (2.9404, 3.0667)

Table 5. Parameter estimates of simulated datasets with ηT = 1.2 and different values of qT.

CS | qT  | q̂ (MLE) | Norm-CI          | Boot-CI          | q̃ (AMLE) | Norm-CI          | Boot-CI
R1 | 0.5 | 0.5024  | (0.4964, 0.5084) | (0.4970, 0.5014) | 0.4978   | (0.4828, 0.5092) | (0.4907, 0.5055)
R1 | 0.7 | 0.7005  | (0.6912, 0.7039) | (0.6899, 0.7028) | 0.6988   | (0.6886, 0.7048) | (0.6894, 0.7016)
R2 | 0.7 | 0.7059  | (0.6918, 0.7105) | (0.6961, 0.7032) | 0.7115   | (0.6813, 0.7057) | (0.6822, 0.7150)
R2 | 0.9 | 0.9105  | (0.8902, 0.9123) | (0.8946, 0.9075) | 0.9106   | (0.8879, 0.9112) | (0.8840, 0.9113)
R3 | 0.9 | 0.8901  | (0.8895, 0.9136) | (0.8751, 0.9203) | 0.9131   | (0.8741, 0.9215) | (0.8722, 0.9200)
R3 | 1.1 | 1.0722  | (0.9325, 1.1597) | (1.0554, 1.1533) | 1.0292   | (1.0209, 1.1386) | (1.0250, 1.1256)
R4 | 0.5 | 0.4987  | (0.4973, 0.5031) | (0.4956, 0.5086) | 0.5007   | (0.4910, 0.5013) | (0.4982, 0.5017)
R4 | 0.7 | 0.7003  | (0.6937, 0.7029) | (0.6935, 0.7014) | 0.6996   | (0.6897, 0.7074) | (0.6899, 0.7097)
R5 | 0.7 | 0.7025  | (0.6946, 0.7066) | (0.6951, 0.7027) | 0.7051   | (0.6868, 0.7088) | (0.6838, 0.7102)
R5 | 0.9 | 0.9103  | (0.8954, 0.9036) | (0.8971, 0.9061) | 0.9059   | (0.8792, 0.9068) | (0.8791, 0.9046)
R6 | 0.9 | 0.8924  | (0.8972, 0.9088) | (0.8965, 0.9101) | 0.9054   | (0.8804, 0.9152) | (0.8303, 0.9108)
R6 | 1.1 | 1.0646  | (1.0828, 1.1516) | (1.0609, 1.1411) | 1.1042   | (1.0128, 1.1230) | (1.0299, 1.1174)



Table 6. Parameter estimates of simulated datasets with ηT = 1.5 and different values of qT.

CS | qT  | q̂ (MLE) | Norm-CI          | Boot-CI          | q̃ (AMLE) | Norm-CI          | Boot-CI
R1 | 0.7 | 0.6947  | (0.6531, 0.7394) | (0.6889, 0.7385) | 0.7038   | (0.6827, 0.7255) | (0.6834, 0.7250)
R1 | 0.9 | 0.9061  | (0.8447, 0.9342) | (0.8653, 0.9416) | 0.9045   | (0.8646, 0.9611) | (0.8769, 0.9163)
R2 | 0.9 | 0.8909  | (0.8644, 0.9437) | (0.8658, 0.8228) | 0.9062   | (0.8601, 1.0014) | (0.8709, 0.9204)
R2 | 1.1 | 1.0779  | (0.9662, 1.1541) | (0.9963, 1.2098) | 1.0814   | (1.0658, 1.1343) | (1.0657, 1.1548)
R3 | 1.1 | 1.1101  | (1.0613, 1.2215) | (1.0577, 1.1235) | 1.0565   | (1.0589, 1.1644) | (1.0551, 1.1720)
R3 | 1.5 | 1.5152  | (1.4632, 1.6103) | (1.3506, 1.6108) | 1.4804   | (1.3253, 1.6042) | (1.4405, 1.6600)
R4 | 0.7 | 0.7017  | (0.6636, 0.7289) | (0.6827, 0.7407) | 0.7026   | (0.6817, 0.7132) | (0.6909, 0.7140)
R4 | 0.9 | 0.9026  | (0.8795, 0.9321) | (0.8698, 0.9240) | 0.9018   | (0.8715, 0.9136) | (0.8703, 0.9119)
R5 | 0.9 | 0.9036  | (0.8773, 1.0313) | (0.8634, 0.9313) | 0.9021   | (0.8593, 0.9865) | (0.8493, 0.9273)
R5 | 1.1 | 1.0968  | (1.0466, 1.1776) | (1.0574, 1.1413) | 1.0924   | (1.0497, 1.1345) | (1.0699, 1.2041)
R6 | 1.1 | 1.0964  | (1.0821, 1.1312) | (1.0922, 1.1181) | 1.0298   | (1.0377, 1.1761) | (1.0365, 1.1869)
R6 | 1.5 | 1.5032  | (1.4818, 1.6042) | (1.4867, 1.6013) | 1.4929   | (1.4438, 1.5454) | (1.4626, 1.5379)

Table 7. Parameter estimates of simulated datasets with ηT = 1.8 and different values of qT.

CS | qT  | q̂ (MLE) | Norm-CI          | Boot-CI          | q̃ (AMLE) | Norm-CI          | Boot-CI
R1 | 0.8 | 0.7923  | (0.7895, 0.8783) | (0.7801, 0.8487) | 0.8901   | (0.7815, 0.8238) | (0.7703, 0.8249)
R1 | 1.2 | 1.2036  | (1.1816, 1.2276) | (1.1681, 1.2342) | 1.2027   | (1.1633, 1.2122) | (1.1705, 1.2162)
R2 | 1.2 | 1.0042  | (1.1733, 1.2381) | (1.1791, 1.2085) | 1.2059   | (1.1716, 1.2699) | (1.1616, 1.2705)
R2 | 1.4 | 1.4070  | (1.3805, 1.5226) | (1.3645, 1.4314) | 1.4039   | (1.3740, 1.4135) | (1.3630, 1.4123)
R3 | 1.4 | 1.4109  | (1.3717, 1.4196) | (1.3869, 1.5014) | 1.4105   | (1.3896, 1.4360) | (1.3739, 1.4329)
R3 | 1.6 | 1.6085  | (1.5844, 1.6232) | (1.5834, 1.6204) | 1.6096   | (1.5538, 1.6342) | (1.5874, 1.6445)
R4 | 0.8 | 0.8006  | (0.7701, 0.9035) | (0.7785, 0.9105) | 0.8003   | (0.7683, 0.8236) | (0.7707, 0.8202)
R4 | 1.2 | 1.2018  | (1.1669, 1.2580) | (1.1836, 1.2548) | 1.2016   | (1.1781, 1.2832) | (1.1764, 1.2104)
R5 | 1.2 | 1.2033  | (1.1754, 1.2833) | (1.1825, 1.2849) | 1.2080   | (1.1768, 1.2635) | (1.1853, 1.2619)
R5 | 1.4 | 1.4035  | (1.3372, 1.5026) | (1.3717, 1.5109) | 1.4016   | (1.3723, 1.4851) | (1.3649, 1.4228)
R6 | 1.4 | 1.4013  | (1.3657, 1.5273) | (1.3873, 1.4803) | 1.4012   | (1.3861, 1.4331) | (1.3785, 1.4265)
R6 | 1.6 | 1.6025  | (1.5889, 1.5125) | (1.5149, 1.6168) | 1.6011   | (1.5717, 1.6058) | (1.5707, 1.6402)

4.2 Real data analysis

In this subsection, we analyze a real dataset from Buckley and James [43] based on the Stanford Heart Transplantation Program begun in October 1967. Patients were admitted to the heart transplant program to wait for a donor's heart after a review by a committee. Unfortunately, some of the patients on the heart transplant waiting list died or were transferred out of the waiting list. In total, 184 patients received a heart transplant. The dataset contains the lifetime (survival time) of patients in days after the heart transplant, the age in years at the time of transplant, an indicator of whether the patient is dead or alive, and some other related variables. In this analysis, we focus on the survival times and assume that the survival times follow a q-exponential distribution exp_q(η).
From eqs. (12), (13), (15) and (16), we computed the MLEs η̂, q̂ and the AMLEs η̃, q̃ as

η̂ = 1.12866, q̂ = 0.00125;  η̃ = 1.1185, q̃ = 0.0113.

The estimates of the MTTF based on the MLEs and AMLEs are given by

1/[η̂(3 − 2q̂)] = 0.2955818  and  1/[η̃(3 − 2q̃)] = 0.3002803.
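The MTTF arithmetic reported here can be checked directly. The helper below assumes the MTTF formula 1/(η(3 − 2q)) used in this analysis, with the rate parameter written as `eta`; it reproduces the values 0.2955818 and 0.3002803.

```python
def mttf_qexp(eta, q):
    """Mean time to failure of the q-exponential model: 1 / (eta * (3 - 2q))."""
    return 1.0 / (eta * (3.0 - 2.0 * q))

mttf_mle = mttf_qexp(1.12866, 0.00125)   # MTTF estimate from the MLEs
mttf_amle = mttf_qexp(1.1185, 0.0113)    # MTTF estimate from the AMLEs
print(round(mttf_mle, 7), round(mttf_amle, 7))
```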

Figure 15 presents the estimated reliability functions of the q-exponential distributions based on MLEs (solid line)
and AMLEs (dashed line).
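The curves in fig. 15 can be reproduced from the survival form R(x) = [exp_q(−ηx)]^(2−q), the same expression inverted in the simulation algorithm of sect. 4.1. A minimal sketch, with the rate parameter written as `eta` and q ≠ 1 assumed:

```python
import numpy as np

def reliability_qexp(x, eta, q):
    """R(x) = [exp_q(-eta*x)]^(2-q), with exp_q(u) = [1 + (1-q)u]_+^{1/(1-q)}."""
    base = np.maximum(0.0, 1.0 - (1.0 - q) * eta * x)  # [1 + (1-q)(-eta*x)]_+
    return base**((2.0 - q) / (1.0 - q))

x = np.linspace(0.0, 1.0, 101)
R_mle = reliability_qexp(x, 1.12866, 0.00125)   # curve based on the MLEs (solid line)
R_amle = reliability_qexp(x, 1.1185, 0.0113)    # curve based on the AMLEs (dashed line)
```

Both curves start at R(0) = 1 and decrease monotonically; for q < 1 they reach zero at the finite support endpoint 1/((1 − q)η).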

Fig. 15. Plots of q-exponential reliability with estimators: the curve based on the MLEs (η̂, q̂) and the curve based on the AMLEs (η̃, q̃).

5 Concluding remarks

This paper studied an application of Tsallis statistics in reliability analysis. We discussed the q-gamma and incomplete q-gamma functions, which can be considered as q-generalizations of the gamma and incomplete gamma functions. Two functions, C_q(·) and Γ_q(·), are defined and utilized in the q-generalization of different lifetime statistical models. Three distributions commonly used in reliability analysis, namely the exponential, Weibull and gamma distributions, are q-generalized, and their related reliability characteristics are presented. We demonstrated the use of the q-generalized exponential distribution in reliability analysis and developed statistical inference procedures for censored reliability data. Point and interval estimation procedures are developed, and these methods are illustrated with simulated and real datasets. It would be of great interest to apply Tsallis statistics to other aspects of reliability analysis, such as competing risks modeling and accelerated life testing experiments. We are currently working on these problems and hope to report the findings in a future paper.
The authors thank the anonymous referee and the editor for their useful comments and suggestions on an earlier version of
this manuscript which resulted in this improved version. This work is supported by the National Natural Science Foundation
of China (71401134, 71571144, 71171164, 70471057), the China Scholarship Council (201606290192), the Natural Science Basic
Research Program of Shaanxi Province (2015JM1003), and the Program of International Cooperation and Exchanges in Science
and Technology Funded by Shaanxi Province (2016KW-033). The work of HKTN was supported by a grant from the Simons
Foundation (#280601).

References
1. C. Tsallis, Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World (Springer, New York, 2009).
2. T. Yamano, Physica A 305, 486 (2002).
3. J.-F. Bercher, C. Vignat, Physica A 387, 5422 (2008).
4. S. Martinez, F. Pennini, A. Plastino, Phys. Lett. A 278, 47 (2000).
5. A.S. Parvan, Eur. Phys. J. A 51, 108 (2015).
6. Z. Liu, Q. Hu, Y. Cui, Q. Zhang, Neurocomputing 142, 393 (2014).
7. S. Furuichi, J. Math. Phys. 47, 023302 (2006).
8. Z. Daróczy, Inform. Control 16, 36 (1970).
9. S. Amari, Information Geometry and Its Applications (Springer, Japan, 2016).
10. S. Amari, A. Ohara, Entropy 13, 1170 (2011).
11. A. Ohara, Phys. Lett. A 370, 184 (2007).
12. H. Matsuzoe, Differ. Geom. Appl. 35, 323 (2014).
13. G. Loaiza, H.R. Quiceno, J. Math. Anal. Appl. 398, 466 (2013).
14. K.P. Nelson, S. Umarov, Physica A 389, 2157 (2010).
15. M. Jauregui, C. Tsallis, Phys. Lett. A 375, 2085 (2011).


16. J. Huang, W.A. Yong, L. Hong, J. Math. Anal. Appl. 436, 501 (2016).
17. N. Balakrishnan, R. Aggarwala, Progressive Censoring: Theory, Methods, and Applications (Birkhäuser, Boston, 2000).
18. K.S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications, 2nd edition (John Wiley & Sons, New Jersey, 2001).
19. E.T. Lee, J.W. Wang, Statistical Methods for Survival Data Analysis, 3rd edition (John Wiley & Sons, New Jersey, 2003).
20. W. Nelson, Accelerated Testing: Statistical Models, Test Plans, and Data Analyses (John Wiley & Sons, New York, 1990).
21. W.Q. Meeker, L.A. Escobar, Statistical Methods for Reliability Data (John Wiley & Sons, New York, 1998).
22. V. Kumar, Physica A 462, 667 (2016).
23. A.M. Mathai, S.B. Provost, IEEE Trans. Reliab. 55, 237 (2006).
24. X.P. Zhang, J.Z. Shang, X. Chen, C.H. Zhang, Y.S. Wang, IEEE Trans. Reliab. 63, 764 (2014).
25. D. Han, H.K.T. Ng, Commun. Stat. Theory Methods 43, 2384 (2014).
26. S. Nadarajah, S. Kotz, Physica A 377, 465 (2007).
27. S. Picoli Jr., R.S. Mendes, L.C. Malacarne, Physica A 324, 678 (2003).
28. J. Ludescher, C. Tsallis, A. Bunde, EPL 95, 68002 (2011).
29. W. Weibull, J. Appl. Mech. 18, 293 (1951).
30. S.J. Almalki, S. Nadarajah, Reliab. Eng. Syst. Saf. 124, 32 (2014).
31. C.D. Lai, D.N.P. Murthy, M. Xie, WIREs Comput. Stat. 3, 282 (2011).
32. C.F. Zhang, Y.M. Shi, M. Wu, J. Comput. Appl. Math. 297, 65 (2016).
33. A.C. Cohen, Technometrics 5, 327 (1963).
34. N. Balakrishnan, E. Cramer, The Art of Progressive Censoring: Applications to Reliability and Quality (Birkhäuser, Basel, 2004).
35. P.S. Chan, H.K.T. Ng, F. Su, Metrika 78, 747 (2015).
36. S. Park, H.K.T. Ng, P.S. Chan, Stat. Probab. Lett. 97, 142 (2015).
37. F. Zhang, Y. Shi, Physica A 446, 234 (2016).
38. F. Zhang, Y. Shi, SpringerPlus 5, 1 (2016).
39. K. Soetaert, rootSolve: Nonlinear root finding, equilibrium and steady-state analysis of ordinary differential equations, R-package version 1.6 (2009).
40. B. Efron, The Jackknife, the Bootstrap and Other Re-sampling Plans, in CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 38 (SIAM, Philadelphia, PA, 1982).
41. D. Kundu, A. Joarder, Comput. Stat. Data Anal. 50, 2509 (2006).
42. N. Balakrishnan, R.A. Sandhu, Am. Stat. 49, 229 (1995).
43. J. Buckley, I. James, Biometrika 66, 429 (1979).
