
WEIGHTED AMARENDRA DISTRIBUTION

Abstract:
In this paper, we introduce a new generalization of the Amarendra distribution, called the weighted Amarendra distribution (WAD), for modeling lifetime data. It is compared with the one-parameter Amarendra distribution. The statistical properties of the proposed distribution are derived, and the model parameters are estimated by maximum likelihood estimation. Finally, the model is fitted to a real lifetime data set and the fit is found to be good.

Keywords: Weighted distribution, Amarendra distribution, Reliability analysis, Maximum likelihood estimator, Order statistics, Bonferroni and Lorenz curves, Entropies.

INTRODUCTION

The idea of weighted distributions was first given by Fisher (1934) to model ascertainment bias. Later, Rao (1965) developed this concept in a unified manner for modeling statistical data in situations where the standard distributions were not appropriate to record the observations with equal probabilities. As a result, weighted models were formulated to record the observations according to some weight function. The weighted distribution reduces to the length-biased distribution when the weight function considers only the length of the units. The concept of length-biased sampling was first introduced by Cox (1969) and Zelen (1974). Van Deusen (1986) arrived at size-biased distribution theory independently and applied it to the distribution of diameter at breast height (DBH). Subsequently, Lappi and Bailey (1987) used weighted distributions to analyse HPS diameter increment data. In fisheries, Taillie et al. (1995) modeled populations of fish stocks using weighted distributions. Generally, a size-biased distribution arises when the sampling mechanism selects units with probability proportional to some measure of unit size. There are various good sources which provide a detailed description of weighted distributions. Different authors have reviewed and studied various weighted probability models and illustrated their applications in different fields. Weighted distributions are applied in research areas related to biomedicine, reliability, ecology and branching processes. Para and Jan (2018) introduced the weighted Pareto type-II distribution as a new model for handling medical science data and studied its statistical properties and applications. Rather and Subramanian (2018) discussed the characterization and estimation of the length-biased weighted generalized uniform distribution. Rather and Subramanian (2019) discussed the weighted Sushila distribution with its various statistical properties and applications.

Amarendra Distribution

The probability density function (pdf) of the Amarendra distribution is given by

f(x;\theta) = \frac{\theta^{4}}{\theta^{3}+\theta^{2}+2\theta+6}\,(1+x+x^{2}+x^{3})\,e^{-\theta x}; \qquad x>0,\ \theta>0   (1)

and the cumulative distribution function (cdf) of the Amarendra distribution is given by

F(x;\theta) = 1-\left[1+\frac{\theta^{3}x^{3}+\theta^{2}(\theta+3)x^{2}+\theta(\theta^{2}+2\theta+6)x}{\theta^{3}+\theta^{2}+2\theta+6}\right]e^{-\theta x}; \qquad x>0,\ \theta>0   (2)

Suppose X is a non-negative random variable with probability density function f(x), and let w(x) be a non-negative weight function. Then the probability density function of the weighted random variable X_w is given by

f_w(x) = \frac{w(x)\,f(x)}{E[w(X)]}; \qquad x>0,

where E[w(X)] = \int w(x) f(x)\,dx < \infty.

Different choices of the weight function w(x) yield different weighted models. When w(x)=x^{c}, the resulting distribution is termed a weighted (size-biased) distribution. In this paper we derive the weighted version of the Amarendra distribution using the weight w(x)=x^{c}; its probability density function (pdf) is given by

f_w(x) = \frac{x^{c} f(x)}{E(X^{c})}   (3)

where

E(X^{c}) = \int_{0}^{\infty} x^{c} f(x;\theta)\,dx = \frac{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}{\theta^{c}\,(\theta^{3}+\theta^{2}+2\theta+6)}   (4)

Substituting equations (1) and (4) in equation (3), we obtain the probability density function (pdf) of the weighted Amarendra distribution as

f_w(x;c,\theta) = \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\,x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}; \qquad x>0,\ \theta>0,\ c>0   (5)

Now, the cumulative distribution function (cdf) of the weighted Amarendra distribution is obtained as

F_w(x;c,\theta) = \int_{0}^{x} f_w(t;c,\theta)\,dt = \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\int_{0}^{x} t^{c}(1+t+t^{2}+t^{3})\,e^{-\theta t}\,dt.

After simplification, we get the cdf of the weighted Amarendra distribution as

F_w(x;c,\theta) = \frac{\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}   (6)

where \gamma(s,t)=\int_{0}^{t}u^{s-1}e^{-u}\,du denotes the lower incomplete gamma function.
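As a quick numerical illustration (not part of the original derivation), the pdf (5) and cdf (6) can be evaluated with standard scientific libraries. The Python sketch below is a minimal example; the helper names wad_pdf and wad_cdf are our own, and SciPy's gammainc is the regularized lower incomplete gamma function, so each term gamma(s, theta*x) is recovered as gammainc(s, theta*x) times Gamma(s).

    import numpy as np
    from scipy.special import gammainc, gamma as gamma_fn

    def wad_pdf(x, c, theta):
        """pdf of the weighted Amarendra distribution, equation (5)."""
        denom = (theta**3 * gamma_fn(c + 1) + theta**2 * gamma_fn(c + 2)
                 + theta * gamma_fn(c + 3) + gamma_fn(c + 4))
        return theta**(c + 4) * x**c * (1 + x + x**2 + x**3) * np.exp(-theta * x) / denom

    def wad_cdf(x, c, theta):
        """cdf of the weighted Amarendra distribution, equation (6).
        gammainc(s, t) is the regularized lower incomplete gamma function,
        so the unregularized gamma(s, t) equals gammainc(s, t) * Gamma(s)."""
        denom = (theta**3 * gamma_fn(c + 1) + theta**2 * gamma_fn(c + 2)
                 + theta * gamma_fn(c + 3) + gamma_fn(c + 4))
        num = (theta**3 * gammainc(c + 1, theta * x) * gamma_fn(c + 1)
               + theta**2 * gammainc(c + 2, theta * x) * gamma_fn(c + 2)
               + theta * gammainc(c + 3, theta * x) * gamma_fn(c + 3)
               + gammainc(c + 4, theta * x) * gamma_fn(c + 4))
        return num / denom

    # quick sanity checks: the pdf is positive and the cdf tends to 1 for large x
    print(wad_pdf(1.0, c=2, theta=1.5), wad_cdf(50.0, c=2, theta=1.5))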

RELIABILITY ANALYSIS
Here, we obtain the reliability function, failure rate, reverse hazard rate and Mills ratio of the weighted Amarendra distribution.
The survival function, or reliability function, of the weighted Amarendra distribution is given by

R_w(x;c,\theta) = 1-F_w(x;c,\theta) = 1-\frac{\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}; \qquad x>0,\ \theta>0,\ c>0.

The corresponding hazard function, or failure rate, is given by

h(x;c,\theta) = \frac{f_w(x;c,\theta)}{1-F_w(x;c,\theta)} = \frac{\theta^{c+4}x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!-\left[\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)\right]}.

The reverse hazard rate is given by

h_r(x;c,\theta) = \frac{f_w(x;c,\theta)}{F_w(x;c,\theta)} = \frac{\theta^{c+4}x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}}{\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)},

and the Mills ratio of the weighted Amarendra distribution is given by

\text{Mills ratio} = \frac{1}{h_r(x;c,\theta)} = \frac{\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)}{\theta^{c+4}x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}}.

MOMENTS AND ASSOCIATED MEASURES


Let X denote the random variable of the weighted Amarendra distribution. Then the r-th raw moment E(X^r) of the weighted Amarendra distribution is obtained as

E(X^{r}) = \mu_r' = \int_{0}^{\infty} x^{r} f_w(x;c,\theta)\,dx = \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\int_{0}^{\infty} x^{r+c}(1+x+x^{2}+x^{3})\,e^{-\theta x}\,dx,

which gives

E(X^{r}) = \mu_r' = \frac{\theta^{3}(c+r)!+\theta^{2}(c+r+1)!+\theta(c+r+2)!+(c+r+3)!}{\theta^{r}\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]}   (7)

Putting r = 1 in equation (7), we get the mean of the weighted Amarendra distribution,

\mu_1' = \frac{\theta^{3}(c+1)!+\theta^{2}(c+2)!+\theta(c+3)!+(c+4)!}{\theta\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]},

and putting r = 2 in equation (7), we get the second raw moment of the weighted Amarendra distribution,

\mu_2' = \frac{\theta^{3}(c+2)!+\theta^{2}(c+3)!+\theta(c+4)!+(c+5)!}{\theta^{2}\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]}.

The variance is therefore

\text{Variance} = \mu_2'-(\mu_1')^{2} = \frac{\left[\theta^{3}(c+2)!+\theta^{2}(c+3)!+\theta(c+4)!+(c+5)!\right]\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]-\left[\theta^{3}(c+1)!+\theta^{2}(c+2)!+\theta(c+3)!+(c+4)!\right]^{2}}{\theta^{2}\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]^{2}},

and the standard deviation is

\sigma = \frac{\sqrt{\left[\theta^{3}(c+2)!+\theta^{2}(c+3)!+\theta(c+4)!+(c+5)!\right]\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]-\left[\theta^{3}(c+1)!+\theta^{2}(c+2)!+\theta(c+3)!+(c+4)!\right]^{2}}}{\theta\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]}.
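The closed-form moments can be cross-checked by numerical integration of the pdf. A sketch under the same assumptions as before (Python, reusing the wad_pdf helper from the earlier sketch; the parameter values c = 2 and theta = 1.5 are arbitrary examples):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma as gamma_fn

    def wad_raw_moment(r, c, theta):
        """r-th raw moment of the weighted Amarendra distribution, equation (7)."""
        num = (theta**3 * gamma_fn(c + r + 1) + theta**2 * gamma_fn(c + r + 2)
               + theta * gamma_fn(c + r + 3) + gamma_fn(c + r + 4))
        den = theta**r * (theta**3 * gamma_fn(c + 1) + theta**2 * gamma_fn(c + 2)
                          + theta * gamma_fn(c + 3) + gamma_fn(c + 4))
        return num / den

    c, theta = 2, 1.5
    mean = wad_raw_moment(1, c, theta)
    variance = wad_raw_moment(2, c, theta) - mean**2

    # numerical cross-check of the mean by quadrature
    mean_numeric, _ = quad(lambda x: x * wad_pdf(x, c, theta), 0, np.inf)
    print(mean, variance, mean_numeric)  # the two mean values should agree closely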
 
 
 

HARMONIC MEAN

The harmonic mean of the weighted Amarendra distribution model can be obtained as

H.M. = E\!\left(\frac{1}{X}\right) = \int_{0}^{\infty}\frac{1}{x}\,f_w(x;c,\theta)\,dx
= \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\int_{0}^{\infty} x^{c-1}(1+x+x^{2}+x^{3})\,e^{-\theta x}\,dx
= \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\left[\int_{0}^{\infty}x^{c-1}e^{-\theta x}dx+\int_{0}^{\infty}x^{c}e^{-\theta x}dx+\int_{0}^{\infty}x^{c+1}e^{-\theta x}dx+\int_{0}^{\infty}x^{c+2}e^{-\theta x}dx\right].

On simplification, we get

H.M. = \frac{\theta\left[\theta^{3}(c-1)!+\theta^{2}c!+\theta(c+1)!+(c+2)!\right]}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}.

MOMENT GENERATING FUNCTION AND CHARACTERISTIC FUNCTION

Let X have a weighted Amarendra distribution. Then the moment generating function (MGF) of X is obtained as

M_X(t) = E\!\left(e^{tX}\right) = \int_{0}^{\infty} e^{tx} f_w(x;c,\theta)\,dx
= \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\int_{0}^{\infty} x^{c}(1+x+x^{2}+x^{3})\,e^{-(\theta-t)x}\,dx
= \frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\left[\frac{c!}{(\theta-t)^{c+1}}+\frac{(c+1)!}{(\theta-t)^{c+2}}+\frac{(c+2)!}{(\theta-t)^{c+3}}+\frac{(c+3)!}{(\theta-t)^{c+4}}\right], \qquad t<\theta.

Using the binomial expansion

\frac{1}{(1-x)^{s}} = \sum_{k=0}^{\infty}\binom{s+k-1}{k}x^{k}, \qquad \binom{n}{k}=\frac{n!}{(n-k)!\,k!},

so that (\theta-t)^{-(c+j)} = \theta^{-(c+j)}\sum_{k=0}^{\infty}\binom{c+j+k-1}{k}\left(\frac{t}{\theta}\right)^{k}, and simplifying, we get

M_X(t) = \sum_{k=0}^{\infty}\frac{(c+k)!\left[\theta^{3}+\theta^{2}(c+k+1)+\theta(c+k+1)(c+k+2)+(c+k+1)(c+k+2)(c+k+3)\right]}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\,\frac{t^{k}}{\theta^{k}k!}.

Since M_X(t)=\sum_{r=0}^{\infty}\mu_r'\,\frac{t^{r}}{r!}, the r-th raw moment is

\mu_r' = \frac{(c+r)!\left[\theta^{3}+\theta^{2}(c+r+1)+\theta(c+r+1)(c+r+2)+(c+r+1)(c+r+2)(c+r+3)\right]}{\theta^{r}\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]},

so that

\mu_1' = \frac{(c+1)!\left[\theta^{3}+\theta^{2}(c+2)+\theta(c+2)(c+3)+(c+2)(c+3)(c+4)\right]}{\theta\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]},

\mu_2' = \frac{(c+2)!\left[\theta^{3}+\theta^{2}(c+3)+\theta(c+3)(c+4)+(c+3)(c+4)(c+5)\right]}{\theta^{2}\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]},

in agreement with the moments obtained above. Similarly, we get the characteristic function of the weighted Amarendra distribution as

\varphi_X(t) = M_X(it) = \sum_{k=0}^{\infty}\frac{(c+k)!\left[\theta^{3}+\theta^{2}(c+k+1)+\theta(c+k+1)(c+k+2)+(c+k+1)(c+k+2)(c+k+3)\right]}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\,\frac{(it)^{k}}{\theta^{k}k!}.

ORDER STATISTICS
Let X_{(1)}, X_{(2)}, \ldots, X_{(n)} be the order statistics of a random sample X_1, X_2, \ldots, X_n drawn from a continuous population with probability density function f_X(x) and cumulative distribution function F_X(x). Then the pdf of the r-th order statistic X_{(r)} is given by

f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\,f_X(x)\left[F_X(x)\right]^{r-1}\left[1-F_X(x)\right]^{n-r}   (8)

Using equations (5) and (6) in equation (8), the probability density function of the r-th order statistic X_{(r)} of the weighted Amarendra distribution is given by

f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\,\frac{\theta^{c+4}x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\left[F_w(x;c,\theta)\right]^{r-1}\left[1-F_w(x;c,\theta)\right]^{n-r},

where F_w(x;c,\theta) is the cdf given in equation (6). Therefore, the pdf of the largest order statistic X_{(n)} is obtained as

f_{X_{(n)}}(x) = n\,\frac{\theta^{c+4}x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\left[\frac{\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right]^{n-1},

and the pdf of the first (smallest) order statistic X_{(1)} is obtained as

f_{X_{(1)}}(x) = n\,\frac{\theta^{c+4}x^{c}(1+x+x^{2}+x^{3})\,e^{-\theta x}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\left[1-\frac{\theta^{3}\gamma(c+1,\theta x)+\theta^{2}\gamma(c+2,\theta x)+\theta\,\gamma(c+3,\theta x)+\gamma(c+4,\theta x)}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right]^{n-1}.
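Numerically, the order-statistic densities follow directly from the pdf and cdf. A sketch (Python, reusing the wad_pdf and wad_cdf helpers from the earlier sketches; gammaln keeps the factorial coefficient stable for larger n):

    import numpy as np
    from scipy.special import gammaln

    def wad_pdf_order(x, r, n, c, theta):
        """pdf of the r-th order statistic X_(r) in a sample of size n, equation (8)."""
        log_coef = gammaln(n + 1) - gammaln(r) - gammaln(n - r + 1)  # n! / ((r-1)!(n-r)!)
        F = wad_cdf(x, c, theta)
        return np.exp(log_coef) * wad_pdf(x, c, theta) * F**(r - 1) * (1.0 - F)**(n - r)

    def wad_pdf_max(x, n, c, theta):
        """pdf of the largest order statistic X_(n)."""
        return wad_pdf_order(x, n, n, c, theta)

    def wad_pdf_min(x, n, c, theta):
        """pdf of the smallest order statistic X_(1)."""
        return wad_pdf_order(x, 1, n, c, theta)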
   

ENTROPIES

The concept of entropy is important in different areas such as probability and statistics, physics,
communication theory and economics. Entropies quantify the diversity, uncertainty, or
randomness of a system. The entropy of a random variable X is a measure of the variation of its uncertainty.
Renyi Entropy
The Rényi entropy is important in ecology and statistics as an index of diversity. It is also important in quantum information, where it can be used as a measure of entanglement. For a given probability distribution, the Rényi entropy of order \beta (\beta>0, \beta\neq 1) is given by

e(\beta) = \frac{1}{1-\beta}\log\left[\int_{0}^{\infty} f_w^{\beta}(x)\,dx\right]
= \frac{1}{1-\beta}\log\left[\left(\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right)^{\beta}\int_{0}^{\infty} x^{\beta c}(1+x+x^{2}+x^{3})^{\beta}\,e^{-\beta\theta x}\,dx\right].

Using the binomial expansion (1+x+x^{2}+x^{3})^{\beta} = \sum_{i=0}^{\infty}\sum_{j=0}^{i}\sum_{k=0}^{j}\binom{\beta}{i}\binom{i}{j}\binom{j}{k}x^{\,i+j+k}, we obtain

e(\beta) = \frac{1}{1-\beta}\log\left[\left(\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right)^{\beta}\sum_{i=0}^{\infty}\sum_{j=0}^{i}\sum_{k=0}^{j}\binom{\beta}{i}\binom{i}{j}\binom{j}{k}\frac{\Gamma(\beta c+i+j+k+1)}{(\beta\theta)^{\beta c+i+j+k+1}}\right].

Tsallis Entropy:

A generalization of Boltzmann-Gibbs (B-G) statistical mechanics initiated by Tsallis has attracted a great deal of attention. This generalization of B-G statistics was first proposed by introducing the Tsallis entropy (Tsallis, 1988), which for a continuous random variable is defined as

S_{\lambda} = \frac{1}{\lambda-1}\left[1-\int_{0}^{\infty} f_w^{\lambda}(x)\,dx\right], \qquad \lambda\neq 1,
= \frac{1}{\lambda-1}\left[1-\left(\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right)^{\lambda}\int_{0}^{\infty} x^{\lambda c}(1+x+x^{2}+x^{3})^{\lambda}\,e^{-\lambda\theta x}\,dx\right].

Using the same binomial expansion as above, we obtain

S_{\lambda} = \frac{1}{\lambda-1}\left[1-\left(\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right)^{\lambda}\sum_{i=0}^{\infty}\sum_{j=0}^{i}\sum_{k=0}^{j}\binom{\lambda}{i}\binom{i}{j}\binom{j}{k}\frac{\Gamma(\lambda c+i+j+k+1)}{(\lambda\theta)^{\lambda c+i+j+k+1}}\right].

BONFERRONI AND LORENZ CURVES:

The Bonferroni and Lorenz curves (Bonferroni, 1930) and Bonferroni and Gini indices have
applications not only in economics to study income and poverty, but also in other fields like
reliability, demography, insurance and medicine. The Bonferroni and Lorenz curves are defined
as
B(p) = \frac{1}{p\,\mu_1'}\int_{0}^{q} x\,f_w(x)\,dx \qquad\text{and}\qquad L(p) = \frac{1}{\mu_1'}\int_{0}^{q} x\,f_w(x)\,dx = p\,B(p),

where \mu_1' = \dfrac{\theta^{3}(c+1)!+\theta^{2}(c+2)!+\theta(c+3)!+(c+4)!}{\theta\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right]} and q = F^{-1}(p).

Now,

B(p) = \frac{1}{p\,\mu_1'}\,\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\int_{0}^{q} x^{c+1}(1+x+x^{2}+x^{3})\,e^{-\theta x}\,dx
= \frac{1}{p\,\mu_1'}\,\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\left[\int_{0}^{q}x^{c+1}e^{-\theta x}dx+\int_{0}^{q}x^{c+2}e^{-\theta x}dx+\int_{0}^{q}x^{c+3}e^{-\theta x}dx+\int_{0}^{q}x^{c+4}e^{-\theta x}dx\right].

Putting \theta x = t, so that dx = dt/\theta and t\to\theta q as x\to q, and simplifying, we get

B(p) = \frac{\theta^{3}\gamma(c+2,\theta q)+\theta^{2}\gamma(c+3,\theta q)+\theta\,\gamma(c+4,\theta q)+\gamma(c+5,\theta q)}{p\left[\theta^{3}(c+1)!+\theta^{2}(c+2)!+\theta(c+3)!+(c+4)!\right]}

and

L(p) = p\,B(p) = \frac{\theta^{3}\gamma(c+2,\theta q)+\theta^{2}\gamma(c+3,\theta q)+\theta\,\gamma(c+4,\theta q)+\gamma(c+5,\theta q)}{\theta^{3}(c+1)!+\theta^{2}(c+2)!+\theta(c+3)!+(c+4)!}.
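In practice, q = F^{-1}(p) can be obtained by root finding on the cdf and the curves evaluated from the closed forms above. A sketch (Python, reusing the wad_cdf helper from the earlier sketch; the bracket [1e-10, 1e3] passed to brentq is an arbitrary assumption that works for moderate theta):

    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import gammainc, gamma as gamma_fn

    def wad_bonferroni(p, c, theta):
        """Bonferroni curve B(p) of the weighted Amarendra distribution."""
        # q = F^{-1}(p) by root finding on the cdf
        q = brentq(lambda x: wad_cdf(x, c, theta) - p, 1e-10, 1e3)
        num = (theta**3 * gammainc(c + 2, theta * q) * gamma_fn(c + 2)
               + theta**2 * gammainc(c + 3, theta * q) * gamma_fn(c + 3)
               + theta * gammainc(c + 4, theta * q) * gamma_fn(c + 4)
               + gammainc(c + 5, theta * q) * gamma_fn(c + 5))
        den = (theta**3 * gamma_fn(c + 2) + theta**2 * gamma_fn(c + 3)
               + theta * gamma_fn(c + 4) + gamma_fn(c + 5))
        return num / (p * den)

    def wad_lorenz(p, c, theta):
        """Lorenz curve L(p) = p * B(p)."""
        return p * wad_bonferroni(p, c, theta)

    print(wad_bonferroni(0.5, c=2, theta=1.5), wad_lorenz(0.5, c=2, theta=1.5))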

  

LIKELIHOOD RATIO TEST

Let X_1, X_2, \ldots, X_n be a random sample from the weighted Amarendra distribution. To test the hypothesis

H_0: f(x)=f(x;\theta) \quad\text{against}\quad H_1: f(x)=f_w(x;c,\theta),

that is, to test whether the random sample of size n comes from the Amarendra distribution or from the weighted Amarendra distribution, the following test statistic is used:

\Delta = \frac{L_1}{L_0} = \prod_{i=1}^{n}\frac{f_w(x_i;c,\theta)}{f(x_i;\theta)} = \left[\frac{\theta^{c}(\theta^{3}+\theta^{2}+2\theta+6)}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right]^{n}\prod_{i=1}^{n}x_i^{c}.

We reject the null hypothesis if

\left[\frac{\theta^{c}(\theta^{3}+\theta^{2}+2\theta+6)}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right]^{n}\prod_{i=1}^{n}x_i^{c} > k,

or equivalently if

\Delta^{*} = \prod_{i=1}^{n}x_i^{c} > k^{*}, \qquad\text{where}\quad k^{*} = k\left[\frac{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}{\theta^{c}(\theta^{3}+\theta^{2}+2\theta+6)}\right]^{n}.

For a large sample size n, 2\log\Delta is asymptotically distributed as chi-square with one degree of freedom, and the p-value is obtained from the chi-square distribution. Thus, we reject the null hypothesis when the probability value p(\Delta^{*}>\delta^{*}), where \delta^{*}=\prod_{i=1}^{n}x_i^{c} is the observed value of the statistic \Delta^{*}, is less than a specified level of significance.
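Operationally, the test compares the maximized log-likelihoods of the two fitted models. A minimal sketch (Python; the log-likelihood values in the example call are made-up placeholders, not results from the paper):

    from scipy.stats import chi2

    def lr_test(logL0, logL1, df=1):
        """Likelihood ratio test: 2*log(Lambda) referred to a chi-square(df) distribution."""
        stat = 2.0 * (logL1 - logL0)
        p_value = chi2.sf(stat, df)
        return stat, p_value

    # example with placeholder log-likelihoods for H0 (Amarendra) and H1 (weighted Amarendra)
    print(lr_test(logL0=-27.3, logL1=-23.1))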

MAXIMUM LIKELIHOOD ESTIMATOR

In this section, we discuss the maximum likelihood estimation of the parameters of the weighted Amarendra distribution. Let (x_1, x_2, \ldots, x_n) be a random sample of size n from the weighted Amarendra distribution. The likelihood function is given by

L(x;c,\theta) = \left[\frac{\theta^{c+4}}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!}\right]^{n}\prod_{i=1}^{n}x_i^{c}(1+x_i+x_i^{2}+x_i^{3})\,e^{-\theta x_i}.

The log-likelihood function is

\log L = n(c+4)\log\theta - n\log\left[\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!\right] + \sum_{i=1}^{n}\log\left[x_i^{c}(1+x_i+x_i^{2}+x_i^{3})\right] - \theta\sum_{i=1}^{n}x_i   (9)

The maximum likelihood estimates of \theta and c are obtained by differentiating equation (9) with respect to \theta and c and setting the derivatives equal to zero, which gives the normal equations

\frac{\partial\log L}{\partial\theta} = \frac{n(c+4)}{\theta} - \frac{n\left[3\theta^{2}c!+2\theta(c+1)!+(c+2)!\right]}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!} - \sum_{i=1}^{n}x_i = 0   (10)

\frac{\partial\log L}{\partial c} = n\log\theta - \frac{n\left[\theta^{3}c!\,\psi(c+1)+\theta^{2}(c+1)!\,\psi(c+2)+\theta(c+2)!\,\psi(c+3)+(c+3)!\,\psi(c+4)\right]}{\theta^{3}c!+\theta^{2}(c+1)!+\theta(c+2)!+(c+3)!} + \sum_{i=1}^{n}\log x_i = 0   (11)

where \psi(\cdot) denotes the digamma function. Because of the complicated form of the likelihood equations (10) and (11), it is very difficult to solve this system of nonlinear equations algebraically. Therefore, we use R and Wolfram Mathematica for estimating the required parameters.
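As noted above, the normal equations have no closed-form solution, so the estimates are obtained numerically. A minimal sketch of the fit in Python (the paper itself uses R and Wolfram Mathematica); the data vector and starting values below are arbitrary placeholders:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def neg_log_likelihood(params, x):
        """Negative log-likelihood of the weighted Amarendra distribution, equation (9)."""
        theta, c = params
        if theta <= 0 or c <= 0:
            return np.inf
        # log(theta^3*c! + theta^2*(c+1)! + theta*(c+2)! + (c+3)!) computed stably in log space
        terms = np.array([3 * np.log(theta) + gammaln(c + 1),
                          2 * np.log(theta) + gammaln(c + 2),
                          np.log(theta) + gammaln(c + 3),
                          gammaln(c + 4)])
        log_denom = np.logaddexp.reduce(terms)
        n = len(x)
        loglik = (n * (c + 4) * np.log(theta) - n * log_denom
                  + c * np.sum(np.log(x))
                  + np.sum(np.log(1 + x + x**2 + x**3))
                  - theta * np.sum(x))
        return -loglik

    # placeholder data and starting values (not from the paper)
    x = np.array([1.2, 0.8, 2.5, 1.7, 0.9, 3.1, 2.2, 1.4])
    result = minimize(neg_log_likelihood, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
    theta_hat, c_hat = result.x
    print(theta_hat, c_hat, -result.fun)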

Data analysis
In order to compare the weighted Amarendra distribution with the Amarendra distribution, we consider criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the corrected Akaike information criterion (AICC) and -2 logL. The better distribution is the one corresponding to smaller values of AIC, BIC, AICC and -2 logL. These quantities are computed using the following formulas:

AIC = 2k - 2\log L, \qquad BIC = k\log n - 2\log L, \qquad AICC = AIC + \frac{2k(k+1)}{n-k-1},

where k is the number of parameters, n is the sample size and \log L is the maximized value of the log-likelihood function. The results are shown in Table 1.
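These criteria are simple to compute once k, n and the maximized log-likelihood are available; a small helper (Python, with placeholder values in the example call):

    import numpy as np

    def information_criteria(logL, k, n):
        """AIC, BIC and AICC from the maximized log-likelihood logL, k parameters, n observations."""
        aic = 2 * k - 2 * logL
        bic = k * np.log(n) - 2 * logL
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)
        return aic, bic, aicc

    # example with placeholder inputs
    print(information_criteria(logL=-23.1, k=2, n=30))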

Table 1: Performance of distributions.

Data set | Distribution                     | MLE            | S.E.      | -2 logL  | AIC      | BIC      | AICC
1        | Devya distribution               | θ̂ = 1.841946   | 0.1692256 | 54.50256 | 56.50436 | 55.80539 | 56.72658
1        | Length-biased Devya distribution | θ̂ = 2.4679492  | 0.2123947 | 46.16638 | 48.16638 | 47.48838 | 48.38860

From Table 1, we can see that the weighted Amarendra distribution has smaller AIC, BIC, AICC and -2 logL values than the Amarendra distribution. Hence, we can conclude that the weighted Amarendra distribution leads to a better fit than the Amarendra distribution.

CONCLUSION
In the present study, we have introduced a new generalization of the Amarendra distribution, termed the weighted Amarendra distribution, which has two parameters. The proposed distribution is generated using the weighting technique, and its parameters have been obtained by the maximum likelihood method. Some mathematical properties along with reliability measures are discussed. The application of the new distribution to real lifetime data has been demonstrated. The results are compared with the Amarendra distribution, and it has been found that the weighted Amarendra distribution provides a better fit than the Amarendra distribution.
References
1) Cox, D. R. (1969). Some sampling problems in technology. In: Johnson, N. L. and Smith, H., Jr. (eds.), New Developments in Survey Sampling, Wiley-Interscience, New York, 506-527.
2) Fisher, R. A. (1934). The effects of methods of ascertainment upon the estimation of frequencies. Annals of Eugenics, 6, 13-25.
3) Gross, A. J. and Clark, V. A. (1975). Survival Distributions: Reliability Applications in the Biomedical Sciences. John Wiley, New York.
4) Lappi, J. and Bailey, R. L. (1987). Estimation of diameter increment function or other tree relations using angle-count samples. Forest Science, 33, 725-739.
5) Para, B. A. and Jan, T. R. (2018). On three parameter weighted Pareto type II distribution: properties and applications in medical sciences. Applied Mathematics & Information Sciences Letters, 6(1), 13-26.
6) Rao, C. R. (1965). On discrete distributions arising out of methods of ascertainment. In: Patil, G. P. (ed.), Classical and Contagious Discrete Distributions, Pergamon Press and Statistical Publishing Society, Calcutta, 320-332.
7) Rather, A. A. and Subramanian, C. (2019). On weighted Sushila distribution with properties and its applications. International Journal of Scientific Research in Mathematical and Statistical Sciences, 6(1), 105-117.
8) Rather, A. A. and Subramanian, C. (2018). Characterization and estimation of length biased weighted generalized uniform distribution. International Journal of Scientific Research in Mathematical and Statistical Sciences, 5(5), 72-76.
