
1 Introduction

The radio spectrum is one of the most important resources for communications, so spectrum sensing is essential for wireless communication; it is the key issue in Cognitive Radio.

1.1 What is cognitive radio?

Cognitive Radio is a generic term for a radio that is aware of its surrounding environment and can adapt its transmission accordingly. Moreover, a cognitive radio is a flexible system, as it can change its communication parameters according to the prevailing conditions.

1.2 Purpose

Spectrum sensing plays a central role in Cognitive Radio (CR) because it is important to avoid interference with the primary user (PU) and to guarantee the PU a good quality of service. The CR is expected to identify portions of unused spectrum and transmit in them, so as to avoid interference with other users. The cognitive radio detects the unused spectrum and reports to the fusion center whether the primary user is present or not. Our aim is thus to decide whether the primary user is present. For this we perform hypothesis testing:
1. Simple hypothesis testing
2. Composite hypothesis testing
For simple hypothesis testing there is:
1. Binary hypothesis testing
2. Bayesian hypothesis testing
Let us study Bayesian hypothesis testing in detail.

2 Bayesian Binary Hypothesis Testing

In Bayesian binary hypothesis testing we select one of two hypotheses, $H_0$ and $H_1$, based on the observation of a random vector $Y$ taking values in an observation space $\mathcal{Y}$. We model

$$H_0 : Y \sim f(y \mid H_0) \quad \text{and} \quad H_1 : Y \sim f(y \mid H_1) \qquad (1)$$

In the discrete case,

$$H_0 : Y \sim P(y \mid H_0), \qquad H_1 : Y \sim P(y \mid H_1).$$

Given $Y$, we must decide whether $H_0$ or $H_1$ is true. This is accomplished using a decision function $\delta(y)$ which takes values in $\{0, 1\}$:
$\delta(y) = 1$ if we decide that $H_1$ holds, and
$\delta(y) = 0$ if we decide that $H_0$ holds.
The decision function partitions the observation space into disjoint sets $\mathcal{Y}_0$ and $\mathcal{Y}_1$, where

$$\mathcal{Y}_0 = \{y : \delta(y) = 0\}, \qquad \mathcal{Y}_1 = \{y : \delta(y) = 1\}.$$

The hypotheses have prior probabilities
$$\pi_0 = P(H_0) \quad \text{and} \quad \pi_1 = P(H_1), \qquad \text{with } \pi_0 + \pi_1 = 1.$$
There is also a cost function $C_{ij}$, the cost incurred on selecting $H_i$ when $H_j$ holds. In terms of the cost function, the conditional Bayes risk is given by

$$R(\delta \mid H_j) = \sum_{i=0}^{1} C_{ij} \, P[Y \in \mathcal{Y}_i \mid H_j] \qquad (2)$$

Therefore the average Bayes risk is

$$R(\delta) = \sum_{j=0}^{1} R(\delta \mid H_j) \, \pi_j \qquad (3)$$

Hence the optimum Bayes decision rule $\delta_B$ is obtained by minimizing this risk, which can be expanded as

$$R(\delta) = \sum_{i=0}^{1} \sum_{j=0}^{1} C_{ij} \, P[Y \in \mathcal{Y}_i \mid H_j] \, \pi_j \qquad (4)$$

Now, as $\mathcal{Y}_0$ and $\mathcal{Y}_1$ are disjoint sets whose union is $\mathcal{Y}$, we can write

$$P(Y \in \mathcal{Y}_1 \mid H_j) + P(Y \in \mathcal{Y}_0 \mid H_j) = 1 \qquad (5)$$

so substituting for $j = 0, 1$ into (4) and using (5), we get

$$R(\delta) = \sum_{j=0}^{1} C_{0j}\,\pi_j + \sum_{j=0}^{1} (C_{1j} - C_{0j})\,\pi_j\, P[Y \in \mathcal{Y}_1 \mid H_j] \qquad (6)$$

$$R(\delta) = \sum_{j=0}^{1} C_{0j}\,\pi_j + \int_{\mathcal{Y}_1} \sum_{j=0}^{1} (C_{1j} - C_{0j})\,\pi_j\, f(y \mid H_j) \, dy \qquad (7)$$

Hence by choosing $\mathcal{Y}_1$ to contain exactly those points where the integrand in (7) is non-positive, we minimize the risk:

$$\mathcal{Y}_1 = \Big\{ y \in \mathcal{Y} : \sum_{j=0}^{1} (C_{1j} - C_{0j})\,\pi_j\, f(y \mid H_j) \le 0 \Big\} \qquad (8)$$

Now for $j = 1$ we note that deciding $H_1$ when $H_1$ holds is less costly than deciding $H_0$ when $H_1$ holds, i.e. $C_{11} < C_{01}$. In order to make the decision we introduce the likelihood ratio, which takes the value of the observation vector $Y$ and is given as

$$L(y) = \frac{f(y \mid H_1)}{f(y \mid H_0)} \qquad (9)$$

The rule in (8) can then be restated as: if $L(y) < \eta$ then $H_0$ is chosen, and if $L(y) \ge \eta$ then $H_1$ is chosen, where

$$\eta = \frac{\pi_0\,(C_{10} - C_{00})}{\pi_1\,(C_{01} - C_{11})} \qquad (10)$$

Then, under the condition $C_{11} < C_{01}$, the optimum Bayesian decision rule can be written as

$$\delta_B(y) = \begin{cases} 1 & \text{if } L(y) \ge \eta \\ 0 & \text{if } L(y) < \eta \end{cases} \qquad (11)$$
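As a concrete illustration of (9)-(11), here is a minimal Python sketch of the Bayes likelihood-ratio test. The unit-variance Gaussian densities, the costs, and the priors are assumptions made for the example; none of these numeric values come from the derivation above.

```python
import numpy as np
from scipy.stats import norm

# Assumed model (illustrative): H0: Y ~ N(0, 1), H1: Y ~ N(1, 1)
f0 = norm(loc=0.0, scale=1.0)
f1 = norm(loc=1.0, scale=1.0)

# Illustrative costs C_ij (cost of deciding H_i when H_j holds) and priors
C00, C01, C10, C11 = 0.0, 1.0, 1.0, 0.0
pi0, pi1 = 0.6, 0.4

# Threshold eta from equation (10)
eta = (pi0 * (C10 - C00)) / (pi1 * (C01 - C11))

def delta_B(y):
    """Bayes rule (11): decide H1 (return 1) iff L(y) >= eta."""
    L = f1.pdf(y) / f0.pdf(y)   # likelihood ratio, equation (9)
    return int(L >= eta)

print(delta_B(0.2), delta_B(1.5))   # -> 0 (decide H0), 1 (decide H1)
```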
From this rule we can derive the expression for the minimum probability of error.

2.1 Minimum Probability of Error

Take $C_{ij} = 0$ for $i = j$ and $C_{ij} = 1$ for $i \neq j$. A cost is then incurred only when an error occurs, i.e. when we decide $H_1$ while $H_0$ holds, or decide $H_0$ while $H_1$ holds:

$$R(\delta \mid H_0) = P[E \mid H_0], \qquad R(\delta \mid H_1) = P[E \mid H_1].$$

Hence

$$P[E] = \pi_0\, P[E \mid H_0] + \pi_1\, P[E \mid H_1],$$

and consequently the threshold becomes

$$\eta = \frac{\pi_0}{\pi_1}.$$
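Under the same illustrative Gaussian assumptions as the previous sketch, the minimum-error rule with threshold $\eta = \pi_0/\pi_1$ can be checked by Monte Carlo simulation; the priors and sample count are again assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
pi0, pi1 = 0.6, 0.4                       # illustrative priors
n = 200_000

h1_true = rng.random(n) < pi1             # True where H1 is in force
y = rng.normal(loc=h1_true.astype(float), scale=1.0)  # N(0,1) or N(1,1)

# For unit-variance Gaussians, L(y) = exp(y - 1/2), so the rule
# "L(y) >= pi0/pi1" reduces to the scalar threshold y >= 1/2 + ln(pi0/pi1).
decide_h1 = y >= 0.5 + np.log(pi0 / pi1)
print(f"estimated P[E] = {np.mean(decide_h1 != h1_true):.4f}")
```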

3 Neyman-Pearson Test

Bayesian hypothesis testing requires knowledge of the cost function and of the prior probabilities $\pi_0$ and $\pi_1$. In Neyman-Pearson testing the aim is instead to design the decision function $\delta$ so that it maximizes the probability of detection $P_D$ while bounding the probability of false alarm $P_F$ by $\alpha$:

$$\mathcal{D}_\alpha = \{\delta : P_F(\delta) \le \alpha\} \qquad (12)$$

$$\delta_{NP} = \arg\max_{\delta \in \mathcal{D}_\alpha} P_D(\delta) \qquad (13)$$

The optimization problem can be solved using Lagrangian optimization:

$$L(\delta, \lambda) = P_D(\delta) + \lambda\,(\alpha - P_F(\delta)) \qquad (14)$$

A test is optimal if it maximizes $L(\delta, \lambda)$ and satisfies the KKT condition

$$\lambda\,(\alpha - P_F(\delta)) = 0 \qquad (15)$$

Now

$$P_D(\delta) = \int_{\mathcal{Y}_1} f(y \mid H_1)\, dy, \qquad P_F(\delta) = \int_{\mathcal{Y}_1} f(y \mid H_0)\, dy.$$

The Lagrangian can therefore be written as

$$L(\delta, \lambda) = \int_{\mathcal{Y}_1} f(y \mid H_1)\, dy + \lambda\Big(\alpha - \int_{\mathcal{Y}_1} f(y \mid H_0)\, dy\Big)$$

$$L(\delta, \lambda) = \int_{\mathcal{Y}_1} \big(f(y \mid H_1) - \lambda f(y \mid H_0)\big)\, dy + \lambda\alpha \qquad (16)$$

To maximize $L(\delta, \lambda)$, $\delta(y)$ should place $y$ in $\mathcal{Y}_1$ exactly when the integrand is non-negative:

$$f(y \mid H_1) - \lambda f(y \mid H_0) \ge 0 \;\Longleftrightarrow\; \frac{f(y \mid H_1)}{f(y \mid H_0)} \ge \lambda \;\Longleftrightarrow\; L(y) \ge \lambda.$$

Thus $\delta(y)$ can be written as

$$\delta(y) = \begin{cases} 1 & \text{if } L(y) > \lambda \\ 0 \text{ or } 1 & \text{if } L(y) = \lambda \\ 0 & \text{if } L(y) < \lambda \end{cases} \qquad (17)$$

(the boundary case $L(y) = \lambda$ may be randomized so that $P_F$ meets the constraint $\alpha$ with equality).
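For the same hypothetical Gaussian pair used earlier, $L(y) = e^{y - 1/2}$ is strictly increasing in $y$, so the Neyman-Pearson test (17) reduces to a threshold on $y$ itself and no randomization is needed (the continuous case has no point mass at $L(y) = \lambda$). The sketch below picks that threshold so that $P_F = \alpha$, with $\alpha$ an assumed value:

```python
from scipy.stats import norm

alpha = 0.05                    # assumed false-alarm constraint
# "L(y) > lambda" is equivalent to "y > t" for monotone L, so choose t
# such that P_F = P[Y > t | H0] = alpha under H0: N(0, 1).
t = norm.ppf(1 - alpha)
# Detection probability under H1: N(1, 1)
P_D = 1 - norm.cdf(t - 1.0)
print(f"t = {t:.3f}, P_F = {alpha}, P_D = {P_D:.3f}")
```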

4 Bayesian Sequential Hypothesis Testing

The standard hypothesis-testing problem involves a fixed number of observations. In a sequential hypothesis-testing problem, on the other hand, the number of observations is not fixed: depending on the observed samples, a decision may be taken after just a few samples, or a large number of samples may be observed if no decision has yet been reached. We assume that an infinite sequence of i.i.d. (independent, identically distributed) observations $\{Y_k : k \ge 1\}$ is available. Using these, a sequential decision rule can be formed as a pair $(\phi, \delta)$, where $\phi = \{\phi_n, n \in \mathbb{N}\}$ is the sampling plan (or stopping rule) and $\delta = \{\delta_n, n \in \mathbb{N}\}$ denotes the terminal decision rule. The function $\phi_n(Y_1, \dots, Y_n)$ maps the first $n$ observations into $\{0, 1\}$: after observing $Y_k$ (for $1 \le k \le n$), $\phi_n(Y_1, \dots, Y_n) = 0$ indicates that we should take one more sample, while $\phi_n(Y_1, \dots, Y_n) = 1$ indicates that we should stop sampling and make a decision.
The terminal decision function $\delta_n(Y_1, \dots, Y_n)$ takes values in $\{0, 1\}$, where $\delta_n = 0$ or $1$ according to whether $H_0$ or $H_1$ is decided; moreover, $\delta_n$ is defined only once we have decided to stop sampling.

Letting $N$ denote the stopping time, i.e. the first $n$ at which sampling stops,

$$\phi_n(Y_1, \dots, Y_n) = \begin{cases} 1 & \text{for } n = N \\ 0 & \text{for } n \neq N \end{cases} \qquad (18)$$

$$\delta_n(Y_1, \dots, Y_n) = \begin{cases} \text{undefined} & \text{for } n \neq N \\ \delta_N(Y_1, \dots, Y_N) & \text{for } n = N \end{cases} \qquad (19)$$
Now we associate costs with the decisions in order to determine the sequential decision rule $(\phi, \delta)$ in the Bayesian setting. To compute the Bayes risk $R(\phi, \delta)$ of a sequential decision rule, one first computes the conditional Bayes risk under each hypothesis:
$R(\phi, \delta \mid H_0)$ = conditional Bayes risk of $(\phi, \delta)$ under $H_0$;
$R(\phi, \delta \mid H_1)$ = conditional Bayes risk of $(\phi, \delta)$ under $H_1$.
The prior probabilities are given by $\pi_0 = P[H_0]$ and $\pi_1 = P[H_1]$. Let $C_M$ be the cost corresponding to a miss, i.e. choosing $H_0$ when $H_1$ is true, and $C_F$ the cost corresponding to a false alarm, i.e. choosing $H_1$ when $H_0$ is true. Unlike the fixed-sample-size hypothesis-testing problem, here we also assume that each observation incurs a strictly positive cost $D > 0$. In the absence of this per-observation cost, one could collect an infinite number of observations, which would ensure that the decision is error-free.
With this, the conditional Bayes risk for $H_0$ can be expressed as

$$R(\phi, \delta \mid H_0) = C_F\, P[\delta_N(Y_1, \dots, Y_N) = 1 \mid H_0] + D\, E[N \mid H_0] \qquad (20)$$

where $N$ denotes the random stopping time. The conditional Bayes risk for $H_1$ is given by

$$R(\phi, \delta \mid H_1) = C_M\, P[\delta_N(Y_1, \dots, Y_N) = 0 \mid H_1] + D\, E[N \mid H_1] \qquad (21)$$

Therefore the average Bayes risk for the sequential decision rule $(\phi, \delta)$ is given by

$$R(\phi, \delta) = \sum_{j=0}^{1} R(\phi, \delta \mid H_j)\, \pi_j \qquad (22)$$

Our aim is to choose a decision rule $(\phi, \delta)$ that minimizes the Bayes risk, i.e. if $(\phi_B, \delta_B)$ denotes the optimum Bayesian sequential decision rule, then

$$R(\phi_B, \delta_B) = \min_{(\phi, \delta)} R(\phi, \delta) = V(\pi_0) \qquad (23)$$

Next we divide the set of all sequential decision rules into the following two categories:

$$S = \{(\phi, \delta) : \phi_0 = 0\} \quad \text{and} \quad \{(\phi_0 = 1, \delta_0 = 1),\; (\phi_0 = 1, \delta_0 = 0)\} \qquad (24)$$

Here $S$ corresponds to the set of sequential decision rules in which at least one sample is taken before deciding between $H_0$ and $H_1$, while $\{(\phi_0 = 1, \delta_0 = 1), (\phi_0 = 1, \delta_0 = 0)\}$ corresponds to the case where one decides immediately without taking any sample. Let

$$J(\pi_0) = \min_{(\phi, \delta) \in S} R(\phi, \delta) \qquad (25)$$

Note that since $N \ge 1$ for all rules in $S$, we have $E[N \mid H_0] \ge 1$ and $E[N \mid H_1] \ge 1$. Thus

$$R(\phi, \delta) \ge D, \qquad \min_{(\phi, \delta) \in S} R(\phi, \delta) \ge D, \qquad J(\pi_0) \ge D.$$

Also note that for $\pi_0 = 1$ or $\pi_0 = 0$, the error probabilities $P[\delta_N(Y_1, \dots, Y_N) = 1 \mid H_0]$ and $P[\delta_N(Y_1, \dots, Y_N) = 0 \mid H_1]$ can be driven to zero. Therefore, for $\pi_0 = 1$, $R(\phi, \delta) = D\, E[N \mid H_0] = D$, so $J(1) = D$; similarly, for $\pi_0 = 0$, $J(0) = D$.
Next we compute the Bayes risk for the two kinds of sequential decision rules in which no sample is taken, i.e. $\phi_0 = 1$.
For $(\phi_0 = 1, \delta_0 = 1)$: since no samples are taken, $E[N \mid H_0] = 0$. This implies
$R(\phi, \delta \mid H_0) = C_F\, P[\delta_0 = 1 \mid H_0] = C_F$,
and $R(\phi, \delta \mid H_1) = 0$ since $P[\delta_0 = 0 \mid H_1] = 0$. Hence

$$R(\phi, \delta) = C_F\, \pi_0 \qquad (26)$$

Similarly, for $(\phi_0 = 1, \delta_0 = 0)$,

$$R(\phi, \delta) = C_M\, (1 - \pi_0) \qquad (27)$$
Therefore the minimum Bayes risk over the sequential decision rules that take no sample, i.e. over the rules $(\phi_0 = 1, \delta_0 = 1)$ and $(\phi_0 = 1, \delta_0 = 0)$, is given by the piecewise-linear function

$$T(\pi_0) = \min\{C_F\, \pi_0,\; C_M (1 - \pi_0)\} \qquad (28)$$

$$T(\pi_0) = \begin{cases} C_F\, \pi_0 & \text{for } \pi_0 < \frac{C_M}{C_F + C_M} \\ C_M (1 - \pi_0) & \text{for } \pi_0 \ge \frac{C_M}{C_F + C_M} \end{cases} \qquad (29)$$

Since the Bayes risk of any strategy is bounded below by $J(\pi_0)$ if it takes at least one sample and by $T(\pi_0)$ if it takes none, we have

$$V(\pi_0) = \min\{T(\pi_0),\; J(\pi_0)\} \qquad (30)$$

[Figure: sketch of $V(\pi_0)$ versus $\pi_0$, showing the piecewise-linear $T(\pi_0)$ and the curve $J(\pi_0)$.]

Depending on whether $J(\pi_0)$ always remains above $T(\pi_0)$ or intersects $T(\pi_0)$ at two points, as in the figure above, two cases are possible. In the first case, when $J(\pi_0)$ remains above $T(\pi_0)$, the optimum decision rule is to make a decision immediately, i.e. $\phi_0 = 1$, with $\delta_0$ given as follows:

$$\delta_0 = \begin{cases} 1 & \text{if } \pi_0 \le \frac{C_M}{C_F + C_M} \\ 0 & \text{if } \pi_0 > \frac{C_M}{C_F + C_M} \end{cases} \qquad (31)$$
In the second case the optimum decision rule is
$\phi_0 = 1, \delta_0 = 1$ for $\pi_0 \le \pi_L$;
$\phi_0 = 1, \delta_0 = 0$ for $\pi_0 \ge \pi_U$;
and $\phi_0 = 0$ for $\pi_L < \pi_0 < \pi_U$.
Thus the sequential decision rule can be summarized as follows: decide $H_1$ or $H_0$ if $\pi_0$ falls below the lower threshold $\pi_L$ or above the upper threshold $\pi_U$, respectively, and keep taking samples while $\pi_L < \pi_0 < \pi_U$.
Next we ask what one should do in the case $\pi_L < \pi_0 < \pi_U$ after taking one sample $Y_1 = y_1$. The situation is the same as at the first stage: the costs $C_F$, $C_M$, and $D$ are unchanged. However, with knowledge of the observation $y_1$, one can update the prior probability $\pi_0$ using Bayes' rule:

$$\pi_0(y_1) = P[H_0 \mid Y_1 = y_1] = \frac{P[Y_1 = y_1 \mid H_0]\, P[H_0]}{\sum_{j=0}^{1} P[Y_1 = y_1 \mid H_j]\, P[H_j]} = \frac{\pi_0\, f_0(y_1)}{\pi_0\, f_0(y_1) + (1 - \pi_0)\, f_1(y_1)}$$
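This single-observation update is easy to code; the sketch below assumes the same hypothetical Gaussian densities $f_0$ and $f_1$ used in the earlier examples.

```python
from scipy.stats import norm

f0 = norm(0.0, 1.0)   # assumed density under H0
f1 = norm(1.0, 1.0)   # assumed density under H1

def update_prior(pi0, y1):
    """Bayes update: pi0(y1) = pi0 f0(y1) / (pi0 f0(y1) + (1 - pi0) f1(y1))."""
    num = pi0 * f0.pdf(y1)
    return num / (num + (1.0 - pi0) * f1.pdf(y1))

print(update_prior(0.5, 1.2))   # an observation near 1 lowers the belief in H0
```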

One can now state the optimum decision rule after taking a sample $Y_1 = y_1$ as follows:
decide $H_1$ if $\pi_0(y_1) \le \pi_L$, and decide $H_0$ if $\pi_0(y_1) \ge \pi_U$;
if $\pi_L < \pi_0(y_1) < \pi_U$, then take another sample.

Evaluation of $J(\pi_0)$:

$$J(\pi_0) = D + E_{Y_1}\big[\,V(\pi_0(Y_1))\,\big] \qquad (32)$$

The PDF/PMF of $Y_1$ required for the evaluation of this equation can be obtained as follows:

$$f(y_1) = \sum_{j=0}^{1} f(y_1 \mid H_j)\, P(H_j) \qquad (33)$$

$$f(y_1) = \pi_0\, f_0(y_1) + (1 - \pi_0)\, f_1(y_1)$$

Combining equations (30) and (32) we obtain

$$V(\pi_0) = \min\big\{\,T(\pi_0),\; D + E_{Y_1}[\,V(\pi_0(Y_1))\,]\,\big\} \qquad (34)$$
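Equation (34) is a fixed-point equation in $V$, and conceptually it can be solved by value iteration over a grid of $\pi_0$ values. The sketch below does this for the same hypothetical Gaussian pair; the costs $C_F$, $C_M$, $D$ and the grid sizes are illustrative assumptions. The points where the converged $V(\pi_0)$ falls strictly below $T(\pi_0)$ form the continuation interval $(\pi_L, \pi_U)$.

```python
import numpy as np
from scipy.stats import norm

CF, CM, D = 10.0, 10.0, 0.5                    # assumed costs
grid = np.linspace(1e-3, 1 - 1e-3, 201)        # grid of pi0 values
ys = np.linspace(-6.0, 7.0, 600)               # quadrature grid for E_{Y1}[.]
dy = ys[1] - ys[0]
f0 = norm.pdf(ys, 0.0, 1.0)                    # assumed density under H0
f1 = norm.pdf(ys, 1.0, 1.0)                    # assumed density under H1

T = np.minimum(CF * grid, CM * (1.0 - grid))   # no-sample risk, eq. (28)
V = T.copy()
p = grid[:, None]
fy = p * f0 + (1.0 - p) * f1                   # marginal f(y1), eq. (33)
post = p * f0 / fy                             # posterior pi0(y1)
for _ in range(500):                           # iterate V = min{T, D + E[V(pi0(Y1))]}
    EV = np.sum(np.interp(post, grid, V) * fy, axis=1) * dy
    V_new = np.minimum(T, D + EV)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

inside = V < T - 1e-12                         # where sampling beats stopping
print("pi_L ~", grid[inside].min(), " pi_U ~", grid[inside].max())
```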

Next suppose that $n$ observation samples have been obtained. Given these, let us compute

$$\pi_0(y_1, \dots, y_n) = P[H_0 \mid y_1, \dots, y_n] = \frac{f(y_1, \dots, y_n \mid H_0)\, \pi_0}{f(y_1, \dots, y_n)} = \frac{\prod_{k=1}^{n} f(y_k \mid H_0)\, \pi_0}{f(y_1, \dots, y_n)}$$

$$\pi_0(y_1, \dots, y_n) = \frac{\pi_0}{\pi_0 + (1 - \pi_0)\, L_n(y_1, \dots, y_n)} \qquad (35)$$

where $L_n(y_1, \dots, y_n) = \prod_{k=1}^{n} \frac{f_1(y_k)}{f_0(y_k)}$. Thus, by the same reasoning as before, the optimum Bayesian rule is given by

$$\phi_{B,n}(y_1, \dots, y_n) = \begin{cases} 0 & \text{if } \pi_L < \pi_0(y_1, \dots, y_n) < \pi_U \\ 1 & \text{otherwise} \end{cases} \qquad (36)$$

$$\delta_{B,n}(y_1, \dots, y_n) = \begin{cases} 1 & \text{if } \pi_0(y_1, \dots, y_n) \le \pi_L \\ 0 & \text{if } \pi_0(y_1, \dots, y_n) \ge \pi_U \end{cases} \qquad (37)$$

Using (35) and (37) we can say that

$$\phi_{B,n}(y_1, \dots, y_n) = \begin{cases} 0 & \text{if } A < L_n(y_1, \dots, y_n) < B \\ 1 & \text{otherwise} \end{cases} \qquad (38)$$

$$\delta_{B,n}(y_1, \dots, y_n) = \begin{cases} 1 & \text{if } L_n(y_1, \dots, y_n) \ge B \\ 0 & \text{if } L_n(y_1, \dots, y_n) \le A \end{cases} \qquad (39)$$

where

$$A = \frac{\pi_0\,(1 - \pi_U)}{(1 - \pi_0)\,\pi_U} \quad \text{and} \quad B = \frac{\pi_0\,(1 - \pi_L)}{(1 - \pi_0)\,\pi_L}$$
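A small helper that maps the posterior thresholds to these likelihood-ratio thresholds (the values in the example call are illustrative assumptions):

```python
def lr_thresholds(pi0, pi_L, pi_U):
    """Convert posterior thresholds (pi_L, pi_U) into likelihood-ratio
    thresholds (A, B); pi_L < pi0 < pi_U guarantees A < 1 < B."""
    A = pi0 * (1.0 - pi_U) / ((1.0 - pi0) * pi_U)
    B = pi0 * (1.0 - pi_L) / ((1.0 - pi0) * pi_L)
    return A, B

print(lr_thresholds(0.5, 0.2, 0.8))   # illustrative values -> (0.25, 4.0)
```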
Thus the optimal Bayesian sequential hypothesis rule can be expressed as a sequential probability ratio test (SPRT): we keep taking samples as long as the likelihood ratio $L_n$ stays between $A$ and $B$, and we select $H_0$ or $H_1$ as soon as $L_n$ falls below $A$ or exceeds $B$, respectively. The condition $\pi_L < \pi_0 < \pi_U$, required to ensure that at least one sample is taken, can be transformed into the following condition on $A$ and $B$:

$$A < 1 < B$$

Up to this point we have completely characterized the structure of the optimal Bayesian sequential decision test. However, to specify it completely we need the values of $\pi_L$ and $\pi_U$, or equivalently of $A$ and $B$.
In addition to $\pi_L$ and $\pi_U$, to determine the worst-case prior

$$\pi_{0M} = \arg\max_{\pi_0} V(\pi_0)$$

one also requires the average Bayes risk $V(\pi_0)$. Conceptually, $V(\pi_0)$ can be determined from the functional equation (34).

5 Sequential Probability Ratio Test

We now examine the properties of SPRTs. To this end, consider an SPRT with lower and upper thresholds $A$ and $B$ such that

$$0 < A < 1 < B < \infty \qquad (40)$$

We select $H_0$ whenever $\prod_{k=1}^{n} L(Y_k) \le A$;
we select $H_1$ whenever $\prod_{k=1}^{n} L(Y_k) \ge B$;
we take another sample if $A < \prod_{k=1}^{n} L(Y_k) < B$.
It is convenient to express the SPRT above in logarithmic form. To this end, define

$$\Lambda_n = \ln\Big(\prod_{k=1}^{n} L(Y_k)\Big) = \sum_{k=1}^{n} \ln L(Y_k),$$

with $a = \ln A$ and $b = \ln B$. Note that the random variables $Z_k = \ln L(Y_k)$,


under $H_0$, have mean

$$m_0 = E[Z_k \mid H_0] = E\Big[\ln \frac{f_1(Y_k)}{f_0(Y_k)} \,\Big|\, H_0\Big] = -D(f_0 \,\|\, f_1)$$

where $D(f \,\|\, g)$ is the relative entropy (Kullback-Leibler divergence) between $f$ and $g$.


Under $H_1$, the mean is

$$m_1 = E[Z_k \mid H_1] = E\Big[\ln \frac{f_1(Y_k)}{f_0(Y_k)} \,\Big|\, H_1\Big] = D(f_1 \,\|\, f_0)$$
Since $A < 1 < B$, we have $\ln A < \ln 1 < \ln B$, i.e. $a < 0 < b$. The original SPRT can then be transformed into SPRT$(a, b)$ such that:
we select $H_0$ whenever $\Lambda_n \le a$;
we select $H_1$ whenever $\Lambda_n \ge b$;
we keep sampling if $a < \Lambda_n < b$.
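The log-form test SPRT$(a, b)$ is straightforward to implement. The following sketch runs it on simulated data from the same hypothetical Gaussian model used earlier, where $\ln L(y) = y - 1/2$; the thresholds $a$ and $b$ are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sprt(a, b, draw_sample, log_lr, max_n=100_000):
    """SPRT(a, b): returns (decision, n); decision 0 -> H0, 1 -> H1."""
    lam = 0.0                          # Lambda_n, the running log-likelihood ratio
    for n in range(1, max_n + 1):
        lam += log_lr(draw_sample())   # Lambda_n = sum of Z_k = ln L(Y_k)
        if lam <= a:
            return 0, n                # select H0
        if lam >= b:
            return 1, n                # select H1
    return None, max_n                 # no decision reached (should be rare)

a, b = np.log(0.1), np.log(10.0)       # assumed thresholds, a < 0 < b
log_lr = lambda y: y - 0.5             # ln L(y) for H0: N(0,1) vs H1: N(1,1)

# Under H0 the drift of Lambda_n is m0 = -D(f0 || f1) = -0.5 per sample,
# so Lambda_n tends towards the lower threshold a.
decision, n = sprt(a, b, draw_sample=lambda: rng.normal(0.0, 1.0), log_lr=log_lr)
print(f"decision = H{decision} after {n} samples")
```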
$\Lambda_n$ can be viewed as a zero-mean fluctuation added to the mean trajectory $n m_0$ or $n m_1$, depending on whether $H_0$ or $H_1$ holds.
When $H_0$ holds, $\Lambda_n$ drifts towards the lower threshold $a$; if $\Lambda_n$ crosses $a$, the correct decision is made.
When $H_0$ holds, an incorrect decision is made if the fluctuations are so large that they cause $\Lambda_n$ to cross the upper threshold $b$.
An obvious way to reduce the probability of an incorrect decision is to choose the thresholds $a$ and $b$ as low and as high as possible, respectively, at the expense of taking more samples on average.

