To simplify the analysis, we assume that the observations at different secondary users are mutually independent given the state of the primary user (note that this only means the independence of the noise, not the independence of the observations). Since the secondary users carry out local detections before sending the reports, we assume that the miss detection and false alarm probabilities of the local detections are the same for different secondary users. In practice, these assumptions may not hold. For example, a common characteristic in the primary user's signal may cause correlated miss detections or false alarms at all secondary users. We will relax these assumptions and study the corresponding algorithms in future work.
IV. ATTACKER-DETECTION ALGORITHM
In this section, we first propose an attacker-detection algorithm based on detecting abnormalities. Then, we discuss the problem of threshold selection. As stated in the introduction, the key feature of the proposed algorithm is that it does not assume any knowledge about the attacker, which makes it universal.
A. Double-Sided Neighbor Distance Algorithm
For detecting malicious users, we consider a $t$-dimensional space, where $t$ is the number of spectrum-sensing periods that have been completed. Then, the history of reports of each secondary user is represented by a point in the space, denoted by $\mathbf{x}_n = (r_n^1, r_n^2, \ldots, r_n^t)$ for secondary user $n$. The Hamming distance $d_{mn}$ between $\mathbf{x}_m$ and $\mathbf{x}_n$ is equal to the number of different elements in $\mathbf{x}_m$ and $\mathbf{x}_n$. We consider the $k_1$-th and the $k_2$-th neighbors in the report space for each secondary user. Then, we propose a double-sided neighbor distance (DSND) algorithm, where $1 \le k_1 < k_2 \le N-1$ ($N$ being the total number of secondary users). The steps are given in Procedure 1. Intuitively, the algorithm aims to find the outlier users that are far away from most secondary users in the history space. Note that the distance used in Procedure 1 could be either the Hamming distance or the Euclidean distance.
Procedure 1 Procedure of Double-Sided Neighbor Distance Algorithm
1: Set neighbor indices $k_1$ and $k_2$ ($k_1 < k_2$), as well as thresholds $\tau_1$ and $\tau_2$.
2: for Secondary user $n$ do
3: for Secondary user $m \neq n$ do
4: Compute the distance between $\mathbf{x}_n$ and $\mathbf{x}_m$, namely $d_{nm}$.
5: end for
6: Sort all distances $\{d_{nm}\}_{m = 1, \ldots, N,\ m \neq n}$ in ascending order.
7: Choose the secondary user $m_1$ such that $d_{n m_1}$ is the $k_1$-th smallest distance to secondary user $n$. Choose the secondary user $m_2$ such that $d_{n m_2}$ is the $k_2$-th smallest distance to secondary user $n$.
8: Set metrics $D_n^{(1)} = d_{n m_1}$ and $D_n^{(2)} = d_{n m_2}$ for the attacker detection.
9: If $D_n^{(1)} > \tau_1$ or $D_n^{(2)} < \tau_2$, claim that secondary user $n$ is malicious, where $\tau_1$ and $\tau_2$ are the thresholds.
10: end for
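For illustration, the following is a minimal Python sketch of the detection rule in Procedure 1. The function and variable names (dsnd_detect, reports, tau1, tau2) are ours, and the normalization of the Hamming distance by $t$ is an assumption made for consistency with the analysis below; this is a sketch, not the paper's implementation.

import numpy as np

def dsnd_detect(reports, k1, k2, tau1, tau2):
    # reports: N x t array of binary reports, one row per secondary user.
    N, t = reports.shape
    flagged = []
    for n in range(N):
        # Normalized Hamming distance from user n to every other user.
        d = np.sort([np.mean(reports[n] != reports[m])
                     for m in range(N) if m != n])
        D1, D2 = d[k1 - 1], d[k2 - 1]   # k1-th and k2-th smallest distances
        # Flag users that are too far from (D1 > tau1) or abnormally close
        # to (D2 < tau2) the rest of the network.
        if D1 > tau1 or D2 < tau2:
            flagged.append(n)
    return flagged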
The intuition of the proposed algorithm is that, if a secondary user's history is too far away from, or too close to, the other users' histories, its behavior is abnormal and it is probably a malicious user. If an attacker wants to disguise its identity, it must behave the same as the other honest secondary users, thus losing its capability of attacking.
Note that, if considering only the metric $D_n^{(1)}$, the algorithm can catch only the users that are too far away from the others. The thresholds are set as
$$\tau_1 = \frac{1}{N(N-1)} \sum_{m \neq n} d_{mn} + \sqrt{\frac{\gamma}{t}}, \qquad (1)$$
and
$$\tau_2 = \frac{1}{N(N-1)} \sum_{m \neq n} d_{mn} - \sqrt{\frac{\gamma}{t}}, \qquad (2)$$
where $\gamma$ is a predetermined value that represents an estimation of the variance. The intuition of the thresholds in (1) and (2) is to consider the average distance, namely $\frac{1}{N(N-1)} \sum_{m \neq n} d_{mn}$, as the normal distance between any two honest secondary users. The reason for the term $\sqrt{\gamma/t}$ is that the variance of the distance between two honest secondary users is proportional to $\frac{1}{t}$.
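As a small illustration of (1) and (2), the following sketch computes the two thresholds from the pairwise distances. The helper name dsnd_thresholds and the use of the all-pairs average with a plus/minus sqrt(gamma/t) margin follow the reconstruction above and should be treated as assumptions rather than the paper's exact procedure.

import numpy as np

def dsnd_thresholds(reports, gamma):
    # reports: N x t binary matrix; gamma: predetermined variance estimate.
    N, t = reports.shape
    # Average normalized Hamming distance over all ordered pairs (m != n).
    dists = [np.mean(reports[m] != reports[n])
             for m in range(N) for n in range(N) if m != n]
    avg = np.mean(dists)            # "normal" distance between honest users
    margin = np.sqrt(gamma / t)     # shrinks as the report history grows
    return avg + margin, avg - margin   # (tau1, tau2)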
V. PERFORMANCE ANALYSIS
In this section, we analyze the performance of the proposed DSND algorithm. For simplicity, we assume that there is only one malicious user, i.e., $M = 1$. For the general case of $M$, we use numerical simulations to evaluate the performance in Section VI.
For the performance analysis, we consider two cases of the information available to the malicious user. In the first case, the malicious user does not know the reports of the other secondary users (called an independent attack). In the second case, the malicious user knows all reports of the other secondary users, which can be achieved by letting all other secondary users report first and then decoding their reports (called a dependent attack).
A. Independent Attack
1) Performance Analysis: To simplify the analysis of the performance of Procedure 1, we assume that the observation distributions are the same for all secondary users. We denote by $P_I$ and $P_B$ the probabilities that the channel is idle and busy, respectively. The probability that the reports $r_m$ and $r_n$ of two honest secondary users differ is then
$$P^{hh} = 2 P_I P(r_m = 1, r_n = 0 \mid \text{channel is idle}) + 2 P_B P(r_m = 1, r_n = 0 \mid \text{channel is busy}) = 2 P_I P_f (1 - P_f) + 2 P_B P_m (1 - P_m), \qquad (3)$$
where the superscript $hh$ means that the probability is with respect to two honest secondary users. Recall that $\alpha_1$ and $\alpha_2$ denote the probabilities that the malicious user flips its local decision from 0 to 1 and from 1 to 0, respectively. The probability that the reports of an honest secondary user and the malicious user differ is then
$$P^{hm} = P_I \left\{ P_f \left[ P_f \alpha_2 + (1 - P_f)(1 - \alpha_1) \right] + (1 - P_f) \left[ P_f (1 - \alpha_2) + (1 - P_f) \alpha_1 \right] \right\} + P_B \left\{ (1 - P_m) \left[ (1 - P_m) \alpha_2 + P_m (1 - \alpha_1) \right] + P_m \left[ (1 - P_m)(1 - \alpha_2) + P_m \alpha_1 \right] \right\}, \qquad (4)$$
where the superscript $hm$ means the probability is with respect to an honest secondary user and a malicious user.
Therefore, the difference between the reports of two secondary users is a Bernoulli random variable with expectation $P^{hh}$ or $P^{hm}$ and variance $P^{hh}(1 - P^{hh})$ or $P^{hm}(1 - P^{hm})$, respectively.
Since all local decisions are mutually independent in different spectrum-sensing periods, the normalized distance between two honest secondary users converges, i.e.,
$$\frac{d_{mn}}{t} \rightarrow P^{hh}, \qquad (5)$$
almost surely as $t \to \infty$, according to the strong law of large numbers, if secondary users $m$ and $n$ are both honest. Similarly, the normalized distance between the malicious user and any honest secondary user converges to $P^{hm}$.
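A quick Monte Carlo check of the convergence in (5): two honest users are simulated with the same local error probabilities, and the normalized Hamming distance of their reports is compared with the value of $P^{hh}$ computed from (3). All names and parameter values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
P_I, P_f, P_m, t = 0.8, 0.1, 0.1, 200000
busy = rng.random(t) >= P_I                      # True when the channel is busy

def honest_reports(busy):
    # Report 1 w.p. P_f when idle (false alarm) and w.p. 1 - P_m when busy.
    p1 = np.where(busy, 1.0 - P_m, P_f)
    return rng.random(t) < p1

r_m, r_n = honest_reports(busy), honest_reports(busy)
empirical = np.mean(r_m != r_n)                  # d_mn / t
P_hh = 2 * P_I * P_f * (1 - P_f) + 2 * (1 - P_I) * P_m * (1 - P_m)
print(empirical, P_hh)                           # the two values should be close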
Based on the above discussion, the following proposition shows that the false-alarm and miss-detection probabilities for the attacker detection decrease exponentially with respect to $t$ as $t \to \infty$. The proof is given in Appendix A.
Proposition 1: As $t \to \infty$, the false-alarm and miss-detection probabilities for the attacker detection, denoted by $P_F^{a}$ and $P_M^{a}$, respectively, satisfy
$$\limsup_{t \to \infty} \frac{\log P_F^{a}}{t} < -c_F, \qquad (6)$$
and
$$\limsup_{t \to \infty} \frac{\log P_M^{a}}{t} < -c_M, \qquad (7)$$
where $c_F > 0$ and $c_M > 0$ are constants determined by $k_1$, $k_2$, the thresholds $\tau_1$ and $\tau_2$, and the difference probabilities $P^{hh}$ and $P^{hm}$.
2) Detectability: Obviously, when $P^{hm} \neq P^{hh}$, the malicious user can be detected with probability 1 as $t \to \infty$, if the thresholds are properly chosen. The miss-detection and false-alarm probabilities for the attacker detection decrease exponentially with respect to $t$ when $t$ is sufficiently large.
We need to answer the first question in the introduction section, i.e., is the malicious user always detectable? From the above analysis, the malicious user is non-detectable when $P^{hm} = P^{hh}$, which is equivalent to
$$\alpha_1 = g(P_f, P_m)\, \alpha_2, \qquad (8)$$
where
$$g(P_f, P_m) = \frac{P_I P_f (1 - 2P_f) - P_B (1 - P_m)(1 - 2P_m)}{P_I (1 - P_f)(1 - 2P_f) - P_B P_m (1 - 2P_m)}. \qquad (9)$$
When $g(P_f, P_m) > 0$, the malicious user can choose $\alpha_1$ and $\alpha_2$ according to (8) such that the attacker cannot be detected by the DSND algorithm. However, we can still try to find another clue to catch the malicious user. We notice that honest secondary users share the same probability of reporting 1, due to the assumption of identical observation distributions. If the malicious user reports 1 with a different probability, the fusion center can always detect the malicious user by computing the frequencies of reporting 1. The algorithm is summarized in Procedure 2.
Procedure 2 Procedure of Frequency Check Algorithm
1: Set threshold $\tau_c$.
2: for Each spectrum sensing period do
3: Accumulate the times of reporting 1 for each secondary user.
4: end for
5: Compute the frequency of reporting 1, denoted by $f_n$, for each secondary user $n$.
6: If $\left| f_n - \frac{1}{N-1} \sum_{m \neq n} f_m \right| > \tau_c$, claim that secondary user $n$ is malicious.
The malicious user can escape the frequency check only if its frequency of reporting 1 is the same as that of the honest secondary users, which is equivalent to
$$\alpha_1 = h(P_f, P_m)\, \alpha_2, \qquad (10)$$
where
$$h(P_f, P_m) = \frac{P_I P_f + P_B (1 - P_m)}{P_I (1 - P_f) + P_B P_m}. \qquad (11)$$
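A minimal sketch of the frequency check in Procedure 2 is given below. The comparison of each user's frequency against the average frequency of the remaining users follows the reconstruction of step 6 above and is an assumption; the names frequency_check and tau_c are ours.

import numpy as np

def frequency_check(reports, tau_c):
    # reports: N x t binary matrix; flag users whose frequency of reporting 1
    # deviates from the average frequency of the others by more than tau_c.
    freqs = reports.mean(axis=1)
    suspects = []
    for n, f_n in enumerate(freqs):
        others = np.delete(freqs, n).mean()
        if abs(f_n - others) > tau_c:
            suspects.append(n)
    return suspects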
Therefore, the malicious user can avoid both the DSND detection and the frequency check only when
$$g(P_f, P_m) = h(P_f, P_m). \qquad (12)$$
For the attacker, the hope of avoiding both the DSND detection and the frequency check is eliminated by the following proposition. The proof is given in Appendix B.
Proposition 2: The attacker can avoid the DSND detection and the frequency check only when
$$P_f = P_m = 0.5. \qquad (13)$$
Obviously, the condition (13) cannot hold for any reasonable local detector, whose false-alarm and miss-detection probabilities are both smaller than 0.5; hence the attacker cannot remain undetected forever. The metric $D_n^{(1)}$ of the attacker satisfies
$$D_n^{(1)} = P^{hm} + o(1), \qquad (14)$$
with large probability, according to the strong law of large numbers. When the threshold is reached, i.e.,
$$P^{hm} + o(1) = P^{hh} + \sqrt{\frac{\gamma}{t}} + o(1), \qquad (15)$$
we obtain that the time needed to detect the attacker is given by
$$t_d = \frac{\gamma}{\left( P^{hm} - P^{hh} \right)^2} + o(\gamma). \qquad (16)$$
Then, the time needed to detect the attacker is proportional to $\gamma$. A small $\gamma$ can reduce the detection time but increases the error probabilities. The detection time is also inversely proportional to the squared difference between $P^{hm}$ and $P^{hh}$, which is determined by the behavior of the malicious secondary user.
B. Dependent Attack
1) Performance Analysis: Now, we assume that the malicious user knows the reports of all other secondary users, based on which it decides its report. We first observe that a malicious user can launch an attack only when all other secondary users are reporting 0, i.e., when there is no primary user. Otherwise, there will be a secondary user reporting 1, i.e., an alarm, and the fusion center will make a decision of 1, regardless of how the malicious user changes its report. Therefore, an intelligent malicious user launches attacks only when all honest secondary users report 0. The fusion center should check only the rounds in which at least $N - 1$ secondary users report 0, because the malicious user does not attack in other cases. Hence, in the following discussion, we ignore all other cases of reports.
Note that this dependent attack is valid only when the OR rule is used at the fusion center. If the fusion center uses a majority voting rule, i.e., the fusion center takes the decision of the majority (suppose that there is an odd number of secondary users), then the decision does not change even though the attacker swaps its decision. However, when the majority rule is used, the attacker can change its strategy in the dependent attack, e.g., swapping its decision when there is a tie at all other secondary users. More detailed study is out of the scope of this paper.
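The distinction between the OR rule and majority voting discussed above can be made concrete with a short sketch; the function names or_rule and majority_rule are illustrative.

import numpy as np

def or_rule(reports_this_slot):
    # The fusion center decides "busy" if any secondary user reports 1,
    # so a single report of 1 fixes the decision regardless of the attacker.
    return int(np.any(reports_this_slot))

def majority_rule(reports_this_slot):
    # With an odd number of users, one swapped report cannot change the
    # decision unless the other votes are split around the threshold.
    return int(np.sum(reports_this_slot) > len(reports_this_slot) / 2)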
Denote by
$$P_0 \triangleq P(\text{all other secondary users report } 0) = P_I (1 - P_f)^{N-1} + P_B P_m^{N-1}. \qquad (17)$$
When $t \to \infty$, the normalized distance, computed over the checked rounds, converges for an honest secondary user to
$$\frac{P_I P_f (1 - P_f)^{N-1} + P_B (1 - P_m) P_m^{N-1}}{P_I (1 - P_f)^{N-1} + P_B P_m^{N-1}}, \qquad (19)$$
where the numerator equals $P(\text{the user reports } 1, \text{all others report } 0)$ while the denominator equals $P(\text{all others report } 0)$. For the malicious user, which flips its decision from 0 to 1 with probability $\alpha_1$ and from 1 to 0 with probability $\alpha_2$ whenever all other users report 0, the normalized distance converges to
$$\frac{P_I \left[ P_f (1 - \alpha_2) + (1 - P_f)\alpha_1 \right] (1 - P_f)^{N-1} + P_B \left[ (1 - P_m)(1 - \alpha_2) + P_m \alpha_1 \right] P_m^{N-1}}{P_I (1 - P_f)^{N-1} + P_B P_m^{N-1}}. \qquad (20)$$
Similarly to the independent attack case, if (19) and (20)
are different, the fusion center can always detect the malicious
user. We can obtain approximations of false-alarm and miss-
detection probabilities similarly to (6) and (7).
2) Detectability: To avoid the detection, the malicious user can equalize (19) and (20), which requires
$$\alpha_1 = \frac{P_I P_f (1 - P_f)^{N-1} + P_B (1 - P_m) P_m^{N-1}}{P_I (1 - P_f)^{N} + P_B P_m^{N}}\, \alpha_2. \qquad (21)$$
Obviously, it is easy to choose $\alpha_1$ and $\alpha_2$ satisfying (21).
Particularly, when $P_m \approx 0$, (21) reduces to
$$\alpha_1 \approx \frac{P_f}{1 - P_f}\, \alpha_2. \qquad (22)$$
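Using the balance condition as reconstructed in (21), the following sketch computes the flipping probability alpha_1 that matches a chosen alpha_2; the function name balanced_alpha1 and the exact expression reflect the reconstruction above and should be treated as assumptions rather than the paper's code.

def balanced_alpha1(alpha2, P_f, P_m, P_I, N):
    P_B = 1.0 - P_I
    # Ratio of P(one user reports 1, all others 0) to P(all users report 0),
    # which keeps the expected numbers of 0->1 and 1->0 swaps equal.
    num = P_I * P_f * (1 - P_f) ** (N - 1) + P_B * (1 - P_m) * P_m ** (N - 1)
    den = P_I * (1 - P_f) ** N + P_B * P_m ** N
    return alpha2 * num / den

print(balanced_alpha1(alpha2=0.5, P_f=0.05, P_m=0.05, P_I=0.8, N=10))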
Then, the malicious user can avoid the DSND detection by setting $\alpha_1$ and $\alpha_2$ according to (21). It can maximize $\alpha_1$ and $\alpha_2$ to maximize the performance damage of collaborative spectrum sensing, under the constraint that both probabilities must be smaller than or equal to 1. Now, we wonder whether we can apply Procedure 2, namely the frequency check, again to detect the malicious user, similarly to the independent attack case. Unfortunately, the answer is no. We notice that (21) is equivalent to
$$\left( P_I (1 - P_f)^{N} + P_B P_m^{N} \right) \alpha_1 = \left( P_I P_f (1 - P_f)^{N-1} + P_B (1 - P_m) P_m^{N-1} \right) \alpha_2. \qquad (23)$$
It is easy to verify that the left-hand side of (23) is equal to the expected number of changes from 0 to 1, while the right-hand side of (23) equals the expected number of changes from 1 to 0. Therefore, the average numbers of 0s and 1s are unchanged, and the fusion center is unable to detect the malicious user. Thus, we call the dependent attack using (21) a balanced dependent attack. Moreover, the fusion center cannot distinguish the deliberate swap of reports of the malicious user from its false alarms and miss detections. Even if the fusion center knows the strategy of the attacker (both $\alpha_1$ and $\alpha_2$ are known), the following proposition states that the attacker still cannot be detected (the proof is given in Appendix C). Hence, the malicious user can completely disguise its attacks in the randomness of detection and can never be detected by the fusion center.
Proposition 3: When the attacker applies the balanced dependent attack, the a posteriori probabilities of being an attacker, i.e., $P(\text{secondary user } n \text{ is an attacker} \mid \text{all report history } \mathbf{X})$, are the same for all secondary users. Therefore, the fusion center cannot distinguish the attacker from the other secondary users.
One hope to detect the malicious user comes from the fact that the malicious user does not know $P_f$ and $P_m$ and is therefore unable to set $\alpha_1$ and $\alpha_2$ according to (21). It is even impossible for the malicious user to estimate $P_f$ and $P_m$ separately. However, this is not necessary: it is sufficient to estimate the two probabilities in (21) directly, namely the probability that all secondary users report 0 and the probability that all secondary users report 0 except one reporting 1. Both probabilities can be estimated from experience. Therefore, the malicious user can estimate both probabilities by observing, without attacking, for a sufficiently long period of time. Then, it sets $\alpha_1$ and $\alpha_2$ according to (21) and begins its attacks. Thus, the fusion center is still unable to detect the malicious user. When $t$ is small, an alternative simple approach for the attacker, which requires no estimation, is called a swap conservation attack, whose algorithm is given in Procedure 3 below. The principle is to switch all decision 1s to 0s and then swap the same number of decision 0s to 1s. Both switches are carried out only when all other reports are 0.
Based on the above discussions, we can draw the following conclusion: if the malicious user is able to monitor the reports of all other secondary users and make decisions based on these reports, it can attack the collaborative spectrum sensing without being detected by the fusion center using the detection algorithms proposed in this paper. This motivates cognitive radio networks to protect the reports of all secondary users, e.g., using encryption, so that the report of a secondary user cannot be decrypted by other secondary users.
Procedure 3 Procedure of Swap Conservation Attack
1: Set counter c = 0.
2: for Each time slot do
3: if All other secondary users report 0 then
4: if Its own decision is 1 then
5: Report 0.
6: c = c + 1.
7: end if
8: if Its own decision is 0 then
9: if c > 0 then
10: Report 1.
11: c = c - 1.
12: end if
13: end if
14: end if
15: end for
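A direct Python transcription of Procedure 3 is given below; the input representation (a list of the attacker's local decisions and a list of flags indicating whether all other users reported 0) is an illustrative assumption.

def swap_conservation_attack(own_decisions, others_report_zero):
    # own_decisions: attacker's local decisions (0/1) per slot.
    # others_report_zero: True when all other secondary users report 0.
    c, reports = 0, []
    for decision, quiet in zip(own_decisions, others_report_zero):
        report = decision
        if quiet:
            if decision == 1:                 # switch a 1 to 0, remember the debt
                report, c = 0, c + 1
            elif decision == 0 and c > 0:     # pay the debt back with a fake 1
                report, c = 1, c - 1
        reports.append(report)
    return reports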
C. Optimal Attacking
Now, we answer the third question: what is the optimal attacking strategy of the malicious user? We first analyze the performance degradation of the collaborative spectrum sensing due to the attacks. Then, we obtain the attacking strategy by maximizing the performance damage. For simplicity of analysis, we consider only the independent attack.
1) Performance Degradation of Spectrum Sensing: To maximize the damage, the attacker needs to know how much damage it brings when it is not detected. Attacks increase both the false-alarm and miss-detection probabilities. It is easy to verify that the increases of the two probabilities are given by
$$\begin{cases} \Delta P_F = (1 - P_f)^{N} \alpha_1 - P_f (1 - P_f)^{N-1} \alpha_2, \\ \Delta P_M = P_m^{N-1} (1 - P_m) \alpha_2 - P_m^{N} \alpha_1. \end{cases} \qquad (24)$$
We know that the original false-alarm and miss-detection probabilities are given by
$$\begin{cases} P_F = 1 - (1 - P_f)^{N}, \\ P_M = P_m^{N}. \end{cases} \qquad (25)$$
Therefore, the relative increases of both error probabilities are given by
$$\begin{cases} \rho_F = \dfrac{(1 - P_f)^{N} \alpha_1 - P_f (1 - P_f)^{N-1} \alpha_2}{1 - (1 - P_f)^{N}} \approx \dfrac{\alpha_1}{N P_f}, \\[2mm] \rho_M = \dfrac{(1 - P_m)\alpha_2}{P_m} - \alpha_1 \approx \dfrac{\alpha_2}{P_m}, \end{cases} \qquad (26)$$
where the approximations are for the case of very small $P_f$ and $P_m$. The optimal $\alpha_1$ and $\alpha_2$ are given by
$$(\alpha_1^{*}, \alpha_2^{*}) = \arg\max_{\alpha_1, \alpha_2} \left( \rho_F(\alpha_1, \alpha_2) + \rho_M(\alpha_1, \alpha_2) \right) T_d(\alpha_1, \alpha_2), \qquad (27)$$
where $T_d(\alpha_1, \alpha_2)$ is the expected time before the attacker is detected, given in (16). When $\alpha_1$ and $\alpha_2$ increase, the probability of attacks is increased, while the period during which the malicious secondary user can launch attacks is reduced, since the detection time is also reduced.
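A sketch of the exhaustive search behind (27) is given below. The damage and detection-time expressions are taken from the reconstructed (4), (16), and (26) above and are therefore assumptions, as are the function name optimal_attack, the grid resolution, and the cap on the detection time.

import numpy as np

def optimal_attack(P_f, P_m, P_I, N, gamma, horizon=2000, grid=50):
    P_B, best = 1.0 - P_I, ((None, None), -np.inf)
    P_hh = 2 * P_I * P_f * (1 - P_f) + 2 * P_B * P_m * (1 - P_m)
    for a1 in np.linspace(0.02, 1.0, grid):
        for a2 in np.linspace(0.02, 1.0, grid):
            # Relative damage, following the reconstructed (26).
            rho_F = ((1 - P_f) ** N * a1 - P_f * (1 - P_f) ** (N - 1) * a2) / (1 - (1 - P_f) ** N)
            rho_M = (1 - P_m) * a2 / P_m - a1
            # Honest/malicious difference probability, following the reconstructed (4).
            q_idle = P_f * (1 - a2) + (1 - P_f) * a1
            q_busy = (1 - P_m) * (1 - a2) + P_m * a1
            P_hm = (P_I * (P_f * (1 - q_idle) + (1 - P_f) * q_idle)
                    + P_B * ((1 - P_m) * (1 - q_busy) + P_m * q_busy))
            # Detection time, following the reconstructed (16), capped by the horizon.
            T = min(gamma / max(P_hm - P_hh, 1e-9) ** 2, horizon)
            if (rho_F + rho_M) * T > best[1]:
                best = ((a1, a2), (rho_F + rho_M) * T)
    return best[0]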
Fig. 2. The evolution of metric $D_n^{(1)}$ over time for the attacker and two honest secondary users, with $P_f = 0.01$, $P_m = 0.05$, $k_1 = 5$, and $k_2 = 15$, and a fixed probability of the channel being idle.
Fig. 11 shows the evolution of the metric $D_n^{(1)}$ for the single attacker employing a dependent attack. In particular, we assume that the attacker uses the swap conservation policy. We observe that the metric of the attacker is indistinguishable from those of the honest secondary users. During the 2000 time slots, the attacker launched 692 attacks (half from 0 to 1 and half from 1 to 0). Simulation shows that the metric $D_n^{(2)}$ is also indistinguishable if the attacker adopts the swap conservation policy (the figure is omitted).
D. Optimal Attack
Figure 12 shows the optimal $\alpha_1$ and $\alpha_2$ obtained from (27) via exhaustive search. We set $P_I = 0.9$ and $P_B = 0.1$. We tested two sets of error probabilities, $P_f = 0.03$, $P_m = 0.02$, and $P_f = 0.01$, $P_m = 0.005$.
Fig. 10. The curves of the attacking time and false alarm probability with different numbers of attackers (average attack times versus false alarm rate, for M = 4, 5, 6).
Fig. 11. The evolution of metric $D_n^{(1)}$ for the attacker and two honest secondary users under the swap conservation attack, with $P_f = 0.01$.
Fig. 12. Optimal $\alpha_1$ and $\alpha_2$ for attacks, for two sets of error probabilities: $P_f = 0.03$, $P_m = 0.02$ and $P_f = 0.01$, $P_m = 0.005$.
APPENDIX A
PROOF OF PROP. 1
Proof: According to the central limit theorem, $\sqrt{t}\left(\frac{d_{mn}}{t} - P^{hh}\right)$ converges in distribution to a zero-mean Gaussian random variable with variance $P^{hh}(1 - P^{hh})$, if secondary users $m$ and $n$ are both honest secondary users. Therefore, we can approximate the distribution of $\frac{d_{mn}}{t}$ with a Gaussian distribution having expectation $P^{hh}$ and variance $\frac{P^{hh}(1 - P^{hh})}{t}$.
Similarly, the normalized distance between an honest secondary user and the malicious user is approximately Gaussian with expectation $P^{hm}$ and variance $\frac{P^{hm}(1 - P^{hm})}{t}$. Denote by $\Phi(\tau_1, \tau_2)$ the probability that an honest secondary user $n$ is not claimed to be malicious, i.e.,
$$\Phi(\tau_1, \tau_2) \triangleq P\left( D_n^{(1)} \le \tau_1,\ D_n^{(2)} \ge \tau_2 \right), \qquad (28)$$
which, under the Gaussian approximation, can be written as a sum of products of $Q$-functions of the normalized gaps $a_1$ and $a_2$ defined in (33) and (34) below,
where $Q(\cdot)$ is the $Q$-function defined by
$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-u^2/2}\, du, \qquad (29)$$
and $\phi(\cdot)$ is the probability density function of a standard Gaussian random variable, which is given by
$$\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}. \qquad (30)$$
We will use the following asymptotic expansion of $Q(x)$:
$$Q(x) = \frac{e^{-x^2/2}}{x\sqrt{2\pi}} \left( 1 - \frac{1}{x^2} + \frac{3}{x^4} + \ldots \right). \qquad (31)$$
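The expansion (31) can be checked numerically; this snippet compares the exact Q-function, written via the complementary error function, with the first terms of the expansion at a few illustrative values.

import math

def Q(x):
    # Exact tail probability of a standard Gaussian.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_expansion(x):
    # Leading terms of the asymptotic expansion (31).
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi)) * (1.0 - 1.0 / x**2 + 3.0 / x**4)

for x in (2.0, 3.0, 4.0):
    print(x, Q(x), Q_expansion(x))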
Applying (31), we can approximate
$$1 - \Phi(\tau_1, \tau_2) \approx c_1 \frac{e^{-a_1^2/2}}{a_1 \sqrt{2\pi}} + c_2 \frac{e^{-a_2^2/2}}{a_2 \sqrt{2\pi}}, \qquad (32)$$
where $c_1$ and $c_2$ are combinatorial constants determined by $N$, $k_1$, and $k_2$, and where
$$a_1 = \frac{\sqrt{t}\,(\tau_1 - P^{hh})}{\sqrt{P^{hh}(1 - P^{hh})}}, \qquad (33)$$
and
$$a_2 = \frac{\sqrt{t}\,(P^{hh} - \tau_2)}{\sqrt{P^{hh}(1 - P^{hh})}}. \qquad (34)$$
Then, the false-alarm probability of the attacker detection for an honest secondary user is given by
$$P_F^{a} \approx 1 - \Phi(\tau_1, \tau_2). \qquad (35)$$
Substituting (32) into (35), we obtain (6). The detailed manipulation is omitted due to limited space.
Using a similar analysis, denote by $\Psi(\tau_1, \tau_2)$ the probability that the malicious user is not claimed to be malicious, i.e.,
$$\Psi(\tau_1, \tau_2) \triangleq P\left( D_n^{(1)} \le \tau_1,\ D_n^{(2)} \ge \tau_2 \right) \quad \text{for the malicious user } n, \qquad (36)$$
which can be approximated in the same form as (32), with $P^{hh}$ replaced by $P^{hm}$ in (33) and (34). Therefore, the miss-detection probability of the attacker detection can be approximated by
$$P_M^{a} \approx \Psi(\tau_1, \tau_2). \qquad (37)$$
The remainder of the proof is the same as that of (6). This
concludes the proof.
APPENDIX B
PROOF OF PROP. 2
Proof: As argued in Section IV.A, a necessary condition for the attacker to avoid both the DSND detection and the frequency check is
$$g(P_f, P_m) = h(P_f, P_m). \qquad (38)$$
We substitute the definitions of $g(P_f, P_m)$ and $h(P_f, P_m)$ from (9) and (11) into (38), cross-multiply, and simplify; the resulting condition can hold only if
$$P_f = 0.5 \quad \text{and} \quad P_m = 0.5, \qquad (41)$$
which proves the proposition.
APPENDIX C
NON-DETECTABILITY OF BALANCED DEPENDENT
ATTACK: PROOF OF PROP. 3
Proof: We assume that the attacker knows the reports of all other secondary users and thus launches a balanced dependent attack. Suppose that the fusion center knows that there is one and only one attacker and knows the strategy of the attacker, as well as its swapping probabilities $\alpha_1$ and $\alpha_2$. Then, the Bayesian approach can be used to compute the a posteriori probability of being an attacker for each secondary user. Denoting by $A$ the index of the attacker and by $\mathbf{X}$ the collection of all reports, we have
$$P(A = n \mid \mathbf{X}) = \frac{P(\mathbf{X} \mid A = n)\, P(A = n)}{P(\mathbf{X})}, \qquad (42)$$
where $P(A = n)$ is the a priori probability that secondary user $n$ is the attacker. It is easy to verify that
$$P(\mathbf{X} \mid A = n) = q_1^{\#\{s:\ r_n^s = 0,\ \vee_{m \neq n} r_m^s = 0\}}\; q_2^{\#\{s:\ r_n^s = 0,\ \vee_{m \neq n} r_m^s = 1\}}\; q_3^{\#\{s:\ r_n^s = 1\}}, \qquad (43)$$
where $\# S$ means the cardinality of set $S$, $\vee_{m \neq n} r_m^s$ means the OR of all $\{r_m^s\}_{m \neq n}$, and
$$q_1 = P_I (1 - P_f)^{N-1} \left[ P_f \alpha_2 + (1 - P_f)(1 - \alpha_1) \right] + P_B P_m^{N-1} \left[ (1 - P_m)\alpha_2 + P_m (1 - \alpha_1) \right], \qquad (44)$$
and
$$q_2 = P_I \left[ 1 - (1 - P_f)^{N-1} \right](1 - P_f) + P_B \left[ 1 - P_m^{N-1} \right] P_m, \qquad (45)$$
and
$$q_3 = P_I (1 - P_f)^{N-1} \left[ P_f (1 - \alpha_2) + (1 - P_f)\alpha_1 \right] + P_I \left[ 1 - (1 - P_f)^{N-1} \right] P_f + P_B P_m^{N-1} \left[ (1 - P_m)(1 - \alpha_2) + P_m \alpha_1 \right] + P_B \left[ 1 - P_m^{N-1} \right](1 - P_m). \qquad (46)$$
Obviously, $q_1$ denotes the probability that all secondary users report 0, $q_2$ is the probability that at least one other secondary user reports 1 while secondary user $n$ reports 0, and $q_3$ is the probability that secondary user $n$ reports 1. Note that $q_2$ is independent of $\alpha_1$ and $\alpha_2$ because, if secondary user $n$ is the attacker, it will not launch the attack when another secondary user reports 1.
Since the attacker uses the balanced dependent attack, the probabilities $\alpha_1$ and $\alpha_2$ satisfy (21). Then, it is easy to verify that
$$q_1 = P_I (1 - P_f)^{N} + P_B P_m^{N}, \qquad (47)$$
and
$$q_3 = P_I P_f + P_B (1 - P_m), \qquad (48)$$
which are exactly the values that $q_1$ and $q_3$ take when secondary user $n$ is honest.
Then, we notice that $P(\mathbf{X} \mid A = n)$ is independent of $\alpha_1$ and $\alpha_2$. Therefore, the probability of generating the observations $\mathbf{X}$ is the same regardless of whether secondary user $n$ is the attacker or honest. This implies that the reports provide no information about the identity of the attacker. Therefore, the attacker cannot be distinguished from the other secondary users. This concludes the proof.
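The argument of Proposition 3 can be illustrated with a tiny Bayesian computation: when the likelihood P(X | A = n) in (42) is the same for every n, the posterior is uniform and reveals nothing. The function name and the numerical values below are placeholders, not values from the paper.

import numpy as np

def posterior(likelihoods, prior=None):
    # likelihoods[n] = P(X | A = n); prior defaults to uniform over users.
    likelihoods = np.asarray(likelihoods, dtype=float)
    prior = np.full(len(likelihoods), 1.0 / len(likelihoods)) if prior is None else prior
    joint = likelihoods * prior
    return joint / joint.sum()          # P(A = n | X) as in (42)

# Balanced dependent attack: every user explains the reports equally well,
# so the posterior stays at the uniform prior.
print(posterior([1e-6, 1e-6, 1e-6, 1e-6]))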
Husheng Li (S'00-M'05) received the BS and MS
degrees in electronic engineering from Tsinghua
University, Beijing, China, in 1998 and 2000, re-
spectively, and the Ph.D. degree in electrical engi-
neering from Princeton University, Princeton, NJ, in
2005.
From 2005 to 2007, he worked as a senior engi-
neer at Qualcomm Inc., San Diego, CA. In 2007,
he joined the EECS department of the University of
Tennessee, Knoxville, TN, as an assistant professor.
His research is mainly focused on statistical signal
processing, wireless communications, networking and smart grid. Particularly,
he is interested in applying machine learning and artificial intelligence in
cognitive radio networks. Dr. Li is the recipient of the Best Paper Award of
EURASIP Journal of Wireless Communications and Networks, 2005 (together
with his PhD advisor: Prof. H. V. Poor).
Zhu Han (S'01-M'04-SM'09) received the B.S. de-
gree in electronic engineering from Tsinghua Uni-
versity, in 1997, and the M.S. and Ph.D. degrees in
electrical engineering from the University of Mary-
land, College Park, in 1999 and 2003, respectively.
From 2000 to 2002, he was an R&D Engineer
of JDSU, Germantown, Maryland. From 2003 to
2006, he was a Research Associate at the Univer-
sity of Maryland. From 2006 to 2008, he was an
assistant professor in Boise State University, Idaho.
Currently, he is an Assistant Professor in Electrical
and Computer Engineering Department at University of Houston, Texas. In
June-August 2006, he was a visiting scholar in Princeton University. In May-
August 2007, he was a visiting professor in Stanford University. In May-
August 2008, he was a visiting professor in University of Oslo, Norway
and Supelec, Paris, France. In July 2009, he was a visiting professor in
the University of Illinois at Urbana-Champaign. In June 2010, he visited the
University of Avignon, France. His research interests include wireless resource
allocation and management, wireless communications and networking, game
theory, wireless multimedia, and security.
Dr. Han is an NSF CAREER award recipient 2010. Dr. Han is an Associate
Editor of IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS since
2010. Dr. Han was the MAC Symposium vice chair of IEEE Wireless Com-
munications and Networking Conference, 2008. Dr. Han was the Guest Editor
for the Special Issue on Cooperative Networking Challenges and Applications (IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS), the Special Issue on Fairness of Radio Resource Management Techniques in Wireless Networks (EURASIP Journal on Wireless Communications and Networking), and the Special Issue on Game Theory (EURASIP Journal on Advances in Signal Processing).
Dr. Han is the coauthor for the papers that won the best paper awards in
IEEE International Conference on Communications 2009 and 7th International
Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless
Networks (WiOpt'09).