Introduction
The radio spectrum is one of the most important resources for communications. Spectrum sensing is therefore essential for wireless communication, and it is a key issue in Cognitive Radio.
1.1 Cognitive Radio
Cognitive Radio is a generic term used to describe a radio that is aware of its surrounding environment and can adapt its transmission accordingly. Moreover, a cognitive radio is a flexible system, as it can change its communication parameters according to the channel conditions.
1.2 Purpose
2 Bayesian Hypothesis Testing

Let Γ_i denote the region of the observation space in which the decision rule δ decides H_i, and let C_{ij} be the cost of deciding H_i when H_j holds. The probability of deciding H_i under H_j is

    P_j(Γ_i) = ∫_{Γ_i} f(y|H_j) dy    (1)

The conditional risk of δ under hypothesis H_j is

    R(δ|H_j) = Σ_{i=0}^{1} C_{ij} P_j(Γ_i)    (2)

and the average (Bayes) risk is

    R(δ) = Σ_{j=0}^{1} R(δ|H_j) π_j    (3)

Hence the optimum Bayes decision rule δ_B is obtained by minimizing the risk given above, i.e.,

    R(δ) = Σ_{i=0}^{1} Σ_{j=0}^{1} C_{ij} π_j P_j(Γ_i)    (4)

Using P_j(Γ_0) = 1 − P_j(Γ_1), this becomes

    R(δ) = Σ_{j=0}^{1} C_{0j} π_j + Σ_{j=0}^{1} (C_{1j} − C_{0j}) π_j P_j(Γ_1)    (5)

    R(δ) = Σ_{j=0}^{1} C_{0j} π_j + ∫_{Γ_1} Σ_{j=0}^{1} (C_{1j} − C_{0j}) π_j f(y|H_j) dy    (6)

The first term does not depend on δ, so the risk is minimized by assigning to Γ_1 exactly those y for which the integrand is negative:

    Γ_1 = { y : π_1 (C_{01} − C_{11}) f(y|H_1) > π_0 (C_{10} − C_{00}) f(y|H_0) }    (7)

Assuming, as usual, that wrong decisions cost more than correct ones (C_{10} > C_{00} and C_{01} > C_{11}), this can be written as a threshold test on the ratio of the densities:

    Γ_1 = { y : f(y|H_1) / f(y|H_0) > π_0 (C_{10} − C_{00}) / ( π_1 (C_{01} − C_{11}) ) }    (8)
Now for j = 1 we can see that deciding H_1 when H_1 holds is less costly than selecting H_0 when H_1 holds, i.e. C_{11} < C_{01}. In order to make the decision we introduce the likelihood ratio, which takes the value of the observation vector Y, given as

    L(y) = f(y|H_1) / f(y|H_0)    (9)
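As a concrete illustration (not part of the original derivation), the likelihood ratio in (9) can be evaluated for a hypothetical Gaussian pair H_0: Y ~ N(0, σ²) versus H_1: Y ~ N(μ, σ²), compared with the standard Bayes threshold τ = π_0(C_{10} − C_{00}) / (π_1(C_{01} − C_{11})); all parameter values below are illustrative assumptions:

```python
import math

def likelihood_ratio(y, mu=1.0, sigma=1.0):
    """L(y) = f(y|H1)/f(y|H0) for N(mu, sigma^2) vs N(0, sigma^2)."""
    f1 = math.exp(-(y - mu) ** 2 / (2 * sigma ** 2))
    f0 = math.exp(-y ** 2 / (2 * sigma ** 2))
    return f1 / f0  # the normalizing constants cancel

# Standard Bayes threshold; with uniform error costs
# (C00 = C11 = 0, C01 = C10 = 1) it reduces to pi0/pi1.
pi0, pi1 = 0.5, 0.5
tau = pi0 / pi1

def bayes_decision(y):
    # decide H1 when the likelihood ratio exceeds the threshold
    return 1 if likelihood_ratio(y) > tau else 0

print(bayes_decision(1.2))   # y near mu=1 -> decide H1
print(bayes_decision(-0.4))  # y near 0 -> decide H0
```

With equal priors and uniform costs the rule simply picks the hypothesis whose density is larger at the observed y.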
2.1 Neyman-Pearson Test
Bayesian hypothesis testing requires knowledge of the cost functions and of the prior probabilities π_0 and π_1. In Neyman-Pearson testing the aim is instead to design the decision rule δ so that it maximizes the probability of detection P_D while bounding the probability of false alarm P_F by α.
    D_α = { δ : P_F(δ) ≤ α }    (12)

    δ_NP = arg max_{δ ∈ D_α} P_D(δ)    (13)
The two probabilities are

    P_D(δ) = ∫_{Γ_1} f(y|H_1) dy    (14)

    P_F(δ) = ∫_{Γ_1} f(y|H_0) dy    (15)

The constrained problem can be handled through the Lagrangian

    L(δ, λ) = ∫_{Γ_1} ( f(y|H_1) − λ f(y|H_0) ) dy + λα    (16)

Writing the integrand as f(y|H_0) ( L(y) − λ ), it is positive exactly when L(y) > λ. Thus δ(y) can be written as

    δ(y) = { 1        if L(y) > λ
           { 0 or 1   if L(y) = λ
           { 0        if L(y) < λ    (17)

where λ is chosen so that P_F(δ) = α.
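A sketch of the Neyman-Pearson construction for the same hypothetical Gaussian pair (all parameters illustrative). Because the Gaussian likelihood ratio is strictly increasing in y and P[L(Y) = λ] = 0, the randomized middle case of (17) is not needed, and the threshold can be set directly from the false-alarm constraint:

```python
from statistics import NormalDist

alpha = 0.1
mu, sigma = 1.0, 1.0

# For a monotone likelihood ratio, L(y) > lambda  <=>  y > eta, so we can
# pick eta from P_F = P[Y > eta | H0] = alpha.
eta = NormalDist(0.0, sigma).inv_cdf(1 - alpha)

def np_decision(y):
    return 1 if y > eta else 0

P_F = 1 - NormalDist(0.0, sigma).cdf(eta)  # equals alpha by construction
P_D = 1 - NormalDist(mu, sigma).cdf(eta)   # resulting detection probability
print(round(P_F, 3), round(P_D, 3))
```

By the Neyman-Pearson lemma this threshold test attains the largest P_D among all rules meeting the false-alarm constraint.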
The standard hypothesis testing problem involves a fixed number of observations. In a sequential hypothesis testing problem, on the other hand, the number of observations is not fixed. Depending on the observed samples, a decision may be taken after just a few samples, or a large number of samples may be observed if no decision has been reached. We consider an infinite sequence of i.i.d. (independent and identically distributed) observations { Y_k : k ≥ 1 }. From these, a sequential decision rule can be formed as a pair (φ, δ), where φ = {φ_n, n ∈ N} is a sampling plan (or stopping rule) and δ = {δ_n, n ∈ N} denotes the terminal decision rule. The function φ_n(Y_1, Y_2, ..., Y_n) maps Y^n into {0,1}. After observing Y_k (for 1 ≤ k ≤ n), φ_n(Y_1, ..., Y_n) = 0 indicates that we should take one more sample, while φ_n(Y_1, ..., Y_n) = 1 indicates that we should stop sampling and make a decision.
The terminal decision function δ_n(Y_1, ..., Y_n) takes values in {0,1}, where δ_n(Y_1, ..., Y_n) = 0 or 1 according to whether H_0 or H_1 is decided; moreover, δ_n(Y_1, ..., Y_n) is defined only once we have decided to stop sampling.
    φ_n(Y_1, Y_2, ..., Y_n) = { 1  for n = N
                              { 0  for n ≠ N    (18)

    δ_B = { undefined                for n ≠ N
          { δ_n(Y_1, Y_2, ..., Y_n)  for n = N    (19)
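The pair (φ, δ) of (18)-(19) can be read as a simple sampling loop. The sketch below is illustrative only: the stopping rule (stop after three samples) and terminal decision (compare the sample mean to a threshold) are hypothetical placeholders, not the optimal rules derived later:

```python
def run_sequential(sample, phi, delta, max_n=10_000):
    """Draw samples until the stopping rule phi returns 1, then apply delta.

    sample() yields the next observation Y_k; phi(ys) and delta(ys) map the
    observations seen so far to {0,1}, as in (18)-(19).
    """
    ys = []
    for _ in range(max_n):
        ys.append(sample())
        if phi(ys) == 1:               # phi_n = 1: stop sampling
            return delta(ys), len(ys)  # terminal decision and stopping time N
    return delta(ys), len(ys)          # safety cap (not part of the theory)

# Toy illustration: stop after 3 samples, decide H1 if the mean exceeds 0.5.
import random
random.seed(0)
decision, N = run_sequential(
    sample=lambda: random.gauss(1.0, 1.0),
    phi=lambda ys: 1 if len(ys) >= 3 else 0,
    delta=lambda ys: 1 if sum(ys) / len(ys) > 0.5 else 0,
)
print(decision, N)  # N = 3 here by construction
```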
Now we associate costs with the decisions in order to determine the sequential decision rule (φ, δ) in the Bayesian setting. To compute the Bayes risk for a sequential rule, let C_F and C_M denote the costs of a false alarm and of a miss, and let D be the cost of each sample taken. The conditional Bayes risk for H_0 is

    R(φ, δ | H_0) = C_F P[δ_N(Y_1, Y_2, ..., Y_N) = 1 | H_0] + D E[N | H_0]    (20)

where N denotes the random stopping time. The conditional Bayes risk for H_1 is given by

    R(φ, δ | H_1) = C_M P[δ_N(Y_1, Y_2, ..., Y_N) = 0 | H_1] + D E[N | H_1]    (21)
Therefore the average Bayes risk for the sequential decision rule (φ, δ) is given by

    R(φ, δ) = Σ_{j=0}^{1} R(φ, δ | H_j) π_j    (22)
Our aim is to choose a decision rule (φ, δ) so that the Bayes risk is minimized, i.e. if (φ_B, δ_B) denotes the optimum Bayesian sequential decision rule, then

    R(φ_B, δ_B) = min_{(φ, δ)} R(φ, δ) = V(π_0)    (23)
Next we divide the set of all sequential decision rules into the following two categories: the rules that take at least one sample,

    S = { (φ, δ) : φ_0 = 0 },    (24)

and the rules that take no sample, (φ_0 = 1, δ_0 = 1) and (φ_0 = 1, δ_0 = 0).
Note that since N ≥ 1 for all rules in S, both E[N | H_0] and E[N | H_1] are greater than or equal to 1. Thus, writing J(π_0) for the minimum risk over the rules that sample at least once, we have

    R(φ, δ) ≥ D    for all (φ, δ) ∈ S,

so that

    J(π_0) = min_{(φ, δ) ∈ S} R(φ, δ) ≥ D    (25)

Also note that for π_0 = 1 and for π_0 = 0, the error probabilities P[δ_N(Y_1, ..., Y_N) = 1 | H_0] and P[δ_N(Y_1, ..., Y_N) = 0 | H_1] can be made equal to zero. Therefore, for π_0 = 1, R(φ, δ) = D E[N | H_0] = D, so J(1) = D. Similarly, for π_0 = 0, J(0) = D.
Next we compute the Bayes risk for the two cases of sequential decision rules in which no sample is taken, i.e. φ_0 = 1.
When (φ_0 = 1, δ_0 = 1), since no sample is taken we have E[N | H_0] = E[N | H_1] = 0. This implies

    R(φ, δ | H_0) = C_F P[δ_0 = 1 | H_0] = C_F    (26)

and R(φ, δ | H_1) = C_M P[δ_0 = 0 | H_1] = 0, since δ_0 = 1 always. Therefore

    R(φ, δ) = C_F π_0    (27)

Similarly, when (φ_0 = 1, δ_0 = 0) we obtain R(φ, δ) = C_M (1 − π_0).
Therefore the minimum Bayes risk over the sequential decision rules that take no sample, i.e. the rules (φ_0 = 1, δ_0 = 1) and (φ_0 = 1, δ_0 = 0), is given by the piecewise linear function

    T(π_0) = min{ C_F π_0, C_M (1 − π_0) }    (28)

    T(π_0) = { C_F π_0        for π_0 < C_M / (C_F + C_M)
             { C_M (1 − π_0)  for π_0 ≥ C_M / (C_F + C_M)    (29)
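The piecewise form (29) can be checked numerically against the min in (28); the costs C_F and C_M below are arbitrary illustrative values:

```python
def T_min(pi0, CF=2.0, CM=3.0):
    """T(pi0) as the minimum in (28)."""
    return min(CF * pi0, CM * (1 - pi0))

def T_piecewise(pi0, CF=2.0, CM=3.0):
    """T(pi0) as the piecewise-linear form (29); breakpoint at CM/(CF+CM)."""
    return CF * pi0 if pi0 < CM / (CF + CM) else CM * (1 - pi0)

# The two forms agree everywhere on [0, 1].
for p in [0.0, 0.25, 0.5, 0.6, 0.9, 1.0]:
    assert abs(T_min(p) - T_piecewise(p)) < 1e-12
print(T_min(0.6))  # at the breakpoint 3/5 both branches agree
```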
Since the Bayes risk is obtained either by taking no sample (risk T(π_0)) or by taking at least one sample (risk J(π_0)), we have

    V(π_0) = min{ T(π_0), J(π_0) }    (30)
After taking one sample Y_1 = y_1, the prior π_0 is updated to the posterior probability

    π_0(y_1) = π_0 f_0(y_1) / ( π_0 f_0(y_1) + (1 − π_0) f_1(y_1) )    (31)

One can now give the optimum decision rule after taking a sample Y_1 = y_1 as follows: decide H_0 if π_0(y_1) ≥ π_U, and decide H_1 if π_0(y_1) ≤ π_L. If π_L < π_0(y_1) < π_U, then take another sample.
Evaluation of J(π_0)
The minimum risk when at least one sample is taken satisfies

    J(π_0) = D + E_{Y_1}[ V(π_0(Y_1)) ]    (32)

where the expectation is taken with respect to the marginal density of Y_1, i.e.

    J(π_0) = D + ∫ V(π_0(y_1)) Σ_{j=0}^{1} π_j f_j(y_1) dy_1    (33)
Next let us assume that n observation samples have been obtained. Given this, let us compute

    π_0(y_1, ..., y_n) = Pr[H_0 | y_1, ..., y_n]
                       = f(y_1, ..., y_n | H_0) Pr[H_0] / f(y_1, ..., y_n)
                       = ( Π_{k=1}^{n} f(y_k | H_0) ) π_0 / f(y_1, ..., y_n)

    π_0(y_1, ..., y_n) = π_0 / ( π_0 + (1 − π_0) L_n(y_1, ..., y_n) )    (35)

where L_n(y_1, ..., y_n) = Π_{k=1}^{n} f_1(y_k) / f_0(y_k). Thus, by the same reasoning as for the first sample, the optimum Bayesian rule is given by

    φ_n^B(y_1, ..., y_n) = { 0  if π_L < π_0(y_1, ..., y_n) < π_U
                           { 1  otherwise    (36)

    δ_n^B(y_1, ..., y_n) = { 1  if π_0(y_1, ..., y_n) ≤ π_L
                           { 0  if π_0(y_1, ..., y_n) ≥ π_U    (37)
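The posterior in (35) can be computed recursively, since each new sample contributes one likelihood-ratio factor to L_n. A minimal sketch, assuming the hypothetical Gaussian pair f_0 = N(0,1) and f_1 = N(μ,1):

```python
import math

def posterior_pi0(ys, pi0=0.5, mu=1.0):
    """pi_0(y_1,...,y_n) = pi0 / (pi0 + (1 - pi0) * L_n), as in (35),
    with L_n = prod_k f1(y_k)/f0(y_k) for N(mu,1) vs N(0,1)."""
    Ln = 1.0
    for y in ys:
        # Gaussian likelihood-ratio factor: exp(mu*y - mu^2/2)
        Ln *= math.exp(mu * y - mu ** 2 / 2)
    return pi0 / (pi0 + (1 - pi0) * Ln)

print(posterior_pi0([]))          # no data: just the prior, 0.5
print(posterior_pi0([2.0, 1.5]))  # samples near mu=1 push pi_0 down
```

Samples that look like H_1 drive L_n up and the posterior π_0 toward 0, which is exactly what the stopping conditions in (36)-(37) monitor.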
In terms of the likelihood ratio, these conditions become

    π_0(y_1, ..., y_n) ≥ π_U   ⟺   L_n(y_1, ..., y_n) ≤ A    (38)

    π_0(y_1, ..., y_n) ≤ π_L   ⟺   L_n(y_1, ..., y_n) ≥ B    (39)

where A = π_0 (1 − π_U) / ( (1 − π_0) π_U )  and  B = π_0 (1 − π_L) / ( (1 − π_0) π_L ).
Thus the optimal Bayesian sequential hypothesis test can be expressed as a sequential probability ratio test (SPRT): we keep taking samples as long as the likelihood ratio L_n stays between A and B, and we select H_0 or H_1 as soon as L_n falls below A or exceeds B, respectively. The condition π_L < π_0 < π_U, required to ensure that we take at least one sample, can be transformed into a condition on A and B as follows:

    A < 1 < B

Up to this point we have completely characterized the structure of the optimal Bayesian sequential decision test. But to specify it completely we need the values of π_L and π_U, or equivalently of A and B.
In addition to π_L and π_U, we also need to determine the worst-case prior

    π_{0M} = arg max_{π_0} V(π_0)    (40)
To summarize:
we select H_0 whenever Π_{k=1}^{n} L(Y_k) ≤ A,
we select H_1 whenever Π_{k=1}^{n} L(Y_k) ≥ B,
and we take another sample if A < Π_{k=1}^{n} L(Y_k) < B.
It is convenient to express the SPRT above in logarithmic form. To this end, define

    Λ_n = ln( Π_{k=1}^{n} L(Y_k) ) = Σ_{k=1}^{n} ln( L(Y_k) )
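Stated as a running-sum test, the SPRT stops as soon as Λ_n leaves the interval (ln A, ln B). A minimal simulation sketch, again assuming the hypothetical Gaussian pair N(0,1) versus N(1,1); the thresholds A = 0.05 and B = 20 are illustrative values satisfying A < 1 < B, not the optimal Bayesian ones:

```python
import math, random

def sprt(sample, A=0.05, B=20.0, mu=1.0, max_n=100_000):
    """Log-form SPRT: accumulate Lambda_n = sum of ln L(Y_k) and stop as
    soon as it exits (ln A, ln B). Returns (decision, stopping time N)."""
    lnA, lnB = math.log(A), math.log(B)
    Lam = 0.0
    for n in range(1, max_n + 1):
        y = sample()
        Lam += mu * y - mu ** 2 / 2   # ln L(y) for N(mu,1) vs N(0,1)
        if Lam >= lnB:
            return 1, n               # select H1
        if Lam <= lnA:
            return 0, n               # select H0
    return None, max_n                # did not terminate (cap, not theory)

random.seed(1)
# Data generated under H1: the test should usually decide H1 after a few samples.
decision, N = sprt(lambda: random.gauss(1.0, 1.0))
print(decision, N)
```

Under H1 the log-likelihood increments have positive mean (μ²/2 per sample here), so Λ_n drifts up toward ln B and the test terminates quickly with high probability.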