3554 IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 9, NO. 11, NOVEMBER 2010


Catch Me if You Can: An Abnormality Detection
Approach for Collaborative Spectrum Sensing in
Cognitive Radio Networks
Husheng Li and Zhu Han
Abstract—Collaborative spectrum sensing is subject to the attack of malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's reports from spectrum sensing. Many existing attacker-detection schemes are based on the knowledge of the attacker's strategy and thus apply Bayesian attacker detection. However, in practical cognitive radio systems, the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on the abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called an independent attack), it is shown that the attacker can always be detected as the number of spectrum-sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called a dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.

Index Terms—Cognitive radio, abnormality detection, spectrum sensing.
I. INTRODUCTION
SPECTRUM sensing is a key issue in cognitive radio systems. Single-user spectrum sensing may not be reliable
due to many uncertainties such as fast fading or shadowing.
Therefore, collaborative spectrum sensing, illustrated in Fig. 1,
is proposed for leveraging the observations from multiple sec-
ondary users in order to improve the performance of spectrum
sensing [9] [10] [14] [15] [16] [17] [18] [23]. Studies have
demonstrated that the collaboration can significantly improve
the performance of spectrum sensing.
However, the collaboration incurs potential security vulner-
abilities. A malicious node, illustrated in Fig. 1, may send
Manuscript received March 10, 2010; revised June 2, 2010; accepted
August 26, 2010. The associate editor coordinating the review of this paper
and approving it for publication was F. A. Cruz-Perez.
H. Li is with the Department of Electrical Engineering and Com-
puter Science, the University of Tennessee, Knoxville, TN, 37996 (e-mail:
husheng@eecs.utk.edu).
Z. Han is with the Department of Electrical and Computer Engineering,
University of Houston, Houston, TX, 77004.
This work was supported by the National Science Foundation under grants
CCF-0830451, CNS-0953377, CNS-0905556, CNS-0910461, and ECCS-
0901425.
Digital Object Identifier 10.1109/TWC.2010.091510.100315
[Fig. 1 shows a primary (licensed) user, several secondary users, a common secondary fusion center, and an attacker. 1) The SUs perform local sensing of the PU signal; 2) the SUs send their local sensing bits to a common fusion center; 3) the fusion center makes the final decision: PU present or not. The attacker can send wrong information to prevent the other secondary users from using the spectrum or to reduce the detection probability of the fusion center.]
Fig. 1. Illustration of collaborative spectrum sensing.
out dishonest reports to degrade the performance of spectrum
sensing. Therefore, substantial studies have been focused on
attack-proof collaborative spectrum sensing schemes [1] [3]
[4] [6] [7] [24] [26] [27] [28]. The corresponding approaches
can be categorized into two types, namely passive and proac-
tive. Passive approaches apply the techniques in robust signal
processing, which limit the possible impact from attackers
[13]. Proactive approaches let honest secondary users detect
malicious users and then reject their reports. For example, in
[26], a data fusion center computes the a posteriori probability
of each secondary user being an attacker and determines the
potential attacker by the probabilities.
In many studies, a key assumption in the attack-proof
approaches is that the attacker's strategy is known such that
the a posteriori probabilities of users being attackers can be
computed. This can be accomplished by reverse engineering
on captured attackers. However, this assumption may be too
strong because the strategy of attackers is usually unknown
and could be arbitrary in practical systems. Moreover, if honest
secondary users adopt an anti-attack strategy (quite often, the
algorithm will be published), an attacker can modify its own
strategy to combat the published anti-attack scheme. Even if
there is no attacker, a secondary user having a malfunctioning
spectrum sensor may send out unreliable reports, which is
unpredictable.
In this paper, to alleviate the challenge of the unknown
strategy of attackers, abnormality detection [25], a powerful
technique in the field of data mining for detecting abnormalities
(also called outlier detection), is applied to detect attackers,
1536-1276/10$25.00 © 2010 IEEE
without any a priori information about the attacker's strategy.
The basic idea is to place the report history of each secondary
user in a high-dimensional space and detect possible abnor-
malities. In sharp contrast to the approaches in [26] [27], our
proposed approach is universal; i.e., it does not require any
knowledge of the attackers.
The key questions we want to solve in this paper include
the following:
Can the proposed algorithm detect attackers without a
priori information about their strategies, and can an
attacker disguise its type behind the randomness of
observations?
If the proposed algorithm can detect an attacker, how fast
is the detection procedure?
What is the optimal attacking strategy of the malicious
user?
To solve the first question, we apply an abnormality-detection algorithm based on proximity, which is widely used in the field of data mining [25]. Then, for the second question, we analyze the performance of the attacker detection asymptotically, i.e., the probability of detecting the malicious user(s) when the number of rounds tends to infinity. The third
question is answered by analyzing the time period during
which the attacker can launch an attack and the damage caused
by each attack. The main findings in this paper include the
following.
Suppose that there is only one attacker. When the attacker
does not know the reports of other secondary users, it
will be detected almost surely as the number of spectrum-sensing rounds tends to infinity.
When an attacker knows the reports of all other secondary
users, it can disguise its attacks without being detected
by the data fusion center.
The remainder of this paper is organized as follows. The
related work is introduced in Section II. The system model
is explained in detail in Section III. An abnormality detection
algorithm is applied in Section IV, and its performance is
analyzed in Section V. Numerical results and conclusions are
provided in Sections VI and VII, respectively.
Below is the list of the main notations used throughout this paper.
- $N$: the number of secondary users;
- $M$: the number of malicious users;
- $\pi_0$ and $\pi_1$: the probabilities of the spectrum being idle and busy, respectively;
- $d_{mn}$: the distance between the histories of secondary users $m$ and $n$;
- $D_n^1$ and $D_n^2$: the metrics of secondary user $n$ for the attacker detection;
- $K_1$ and $K_2$: the indices of neighbors for history comparison;
- $\mu$ and $\nu$: thresholds used for the detection of attackers;
- $P_f$ and $P_m$: the probabilities of false alarm and miss detection of the local decisions, respectively;
- $\rho_1$ and $\rho_2$: the probabilities of changing a report from 0 to 1 and from 1 to 0 for an attacker, respectively;
- $C_f$ and $C_m$: the costs of false alarm and miss detection, respectively.
II. RELATED WORK
The attacker detection in the collaborative spectrum sensing
of cognitive radio systems has been studied in [3] [19] [26]
[27]. In [3], a heuristic algorithm is proposed to compute the
weight of each secondary user, which is applied to balance
the likelihood ratios of different secondary users in the se-
quential probability ratio test. Note that the secondary users
are required to report real-valued observations in [3]. In [26],
it is assumed that the data fusion center knows that there
is at most one attacker, whose attacking strategy is known.
Then, the Bayesian rule is applied to compute the a posteriori
probability of each secondary user being an attacker. When
the a posteriori probability of a certain secondary user is
larger than a threshold, it is claimed to be an attacker and will
be excluded from the collaboration. A heuristic consistency
metric is proposed to alleviate the oscillation phenomenon
when there is no attacker. The assumption that there is at most
one attacker in [26] is relaxed in [27]. Again, the Bayesian
rule is applied to detect an arbitrary number of attackers.
To alleviate the large number of combinations of attackers
and honest users, an onion-peeling-based approximation is
used to reduce the computational complexity at a marginal
performance degradation. A survey of the possible attacks is
also provided in [5]. Different from these existing studies, the novelty of this paper lies in the following aspects:
In contrast to the Bayesian approaches in [26] and [27], this paper does not assume any a priori information about the attacker's strategy and is universal for various types of attack strategies.
Although [3] and [19] also do not assume a priori information about the attack strategy, they do not consider the theoretical detectability of the attacker. Moreover, they rely on soft weighting of the reports and do not make a hard decision on the attacker(s). In this paper, we will prove the detectability of the proposed attacker-detection algorithm for any arbitrary attack strategy, which is significant progress in the algorithm development.
Other types of attacks are also studied for cognitive radio
systems. An important type of attack, called a primary user
emulation attack, is proposed in [7]. A collaborative detection
algorithm, based on the assumption that the attacker has less
power than primary users, is proposed in [7]. In [20], a game-
theoretic, random-frequency-hopping algorithm is proposed to
avoid the attacker passively. The assumption of perfect channel
knowledge is then relaxed in [21] by employing the adversarial
multi-armed-bandit algorithm [2].
Beyond cognitive radio networks, the mitigation of attackers is also widely discussed for sensor networks [22] [29], which apply the framework of trustworthiness to alleviate the attackers. However, these studies did not answer the fundamental question of whether the attacker can always be detected. They also did not discuss possible attacking strategies to combat the defense scheme. There is also no analytic performance evaluation in these studies.
III. SYSTEM MODEL
The system is illustrated in Fig. 1. Consider $N$ secondary users collaborating for spectrum sensing. For simplicity, we consider single-channel systems. In each time slot, secondary
users send their reports based on their local observations to a
data fusion center. We assume that the noises added to local observations are mutually independent¹ for different secondary users and different spectrum-sensing periods, conditioned on the state of primary users. Note that it would be more precise to model the spectrum occupancy as a Markov or semi-Markov chain; however, doing so makes the analysis much more complicated and is beyond the scope of this paper. The fusion center makes
and is beyond the scope of this paper. The fusion center makes
a decision and then feeds it back to the secondary users. All of
these information reports and decision feedbacks are carried
out in a dedicated and reliable control channel.
We assume that there exist at most $M$ malicious users and that the remaining secondary users are honest. It is reasonable to assume that $M < N$ since it is pointless to study a network full of malicious users. The data fusion center has no information about the number and identities of the malicious users. Therefore, the data fusion center has to detect the existence of attackers from the reports of secondary users and incorporate the trustworthiness of different secondary users into the decision-making procedure. However, we assume that the data fusion center knows $M$, the upper bound on the number of malicious users. For the malicious users, we assume that they do not communicate with each other and that they make their decisions regarding attacks independently. It is possible that the attackers could increase the damage via collaboration, e.g., different attackers taking turns to attack, thus lengthening the time of a valid attack. Moreover, we assume that the attackers' decisions are independent in time, i.e., the decisions do not depend on the history. A history-based attack is more powerful and interesting; however, it is beyond the scope of this paper.
We assume that each honest secondary user makes a local
decision, and we denote by 0 and 1 the decisions that there
is no primary user or there are primary users, respectively.
The decision at the fusion center is made using the following
rule: if one of the secondary users reports 1, the fusion center
makes a decision 1; otherwise (i.e., all secondary users report
0), the fusion center makes a decision 0. We call this decision
rule the OR rule. Such a scheme is simple, because only a binary decision needs to be reported, and it has been adopted in many studies [3] [26] [27].
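The OR rule above can be stated in a few lines; a minimal sketch (the function name is ours, not from the paper):

```python
def or_fusion(reports):
    """OR-rule fusion: decide 1 (primary user present) iff any secondary user reports 1."""
    return 1 if any(reports) else 0

print(or_fusion([0, 0, 0]))  # → 0: all users report idle
print(or_fusion([0, 1, 0]))  # → 1: a single alarm suffices
```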
We assume that the spectrum occupancy is the same for all
secondary users. Otherwise, it is impossible to distinguish a
malicious user from an honest secondary user with different
spectrum occupancy. This assumption is reasonable if all the
secondary users are located in a small area and are impacted
by the same primary user(s). For the case that secondary users
are widely spread in space, we can also use the proposed abnormality detection algorithm to find outlier secondary users having different spectrum occupancies and consider them as
malicious users in the decision making, although they are honest. We denote by $\pi_1$ and $\pi_0$ the probabilities that the licensed channel is busy or idle, respectively, where $\pi_0 + \pi_1 = 1$.
To simplify the analysis, we assume that the observations
at different secondary users are mutually independent given
the state of the primary user. Since the secondary users carry
out local detections before sending the reports, we assume
that the miss detection and false alarm probabilities of the
¹Note that this only means the independence of the noise, not the independence of the observations.
local detections are the same for different secondary users. In
practice, these assumptions may not be true. For example, a common characteristic in the primary user's signal may cause correlated miss detections or false alarms at all secondary users.
We will relax these assumptions and study the corresponding
algorithms in the future work.
IV. ATTACKER-DETECTION ALGORITHM
In this section, we first propose an attacker-detection algorithm based on detecting abnormalities. Then, we discuss the problem of threshold selection. As stated in the introduction, the feature of the proposed algorithm is that it does not assume any knowledge about the attacker, which makes it universal.
A. Double-Sided Neighbor Distance Algorithm
For detecting malicious users, we consider a $T$-dimensional space, where $T$ is the number of spectrum-sensing periods that have been completed. Then, the history of reports of each secondary user is represented by a point in the space, denoted by $\mathbf{x}_n = (r_{n1}, r_{n2}, \ldots, r_{nT})$ for secondary user $n$, in which $r_{nt}$ is the report (0 or 1) of secondary user $n$ at spectrum-sensing period $t$. We denote by $d_{mn}$ the distance of reports between two secondary users $m$ and $n$. If Euclidean distance is used, $d_{mn} = \|\mathbf{x}_m - \mathbf{x}_n\|$. If Hamming distance is used, $d_{mn}$ is equal to the number of different elements in $\mathbf{x}_m$ and $\mathbf{x}_n$. We consider the $K_1$-th and the $K_2$-th nearest neighbors in the report space for each secondary user. Then, we propose a double-sided neighbor distance (DSND) algorithm, where $1 \le K_1 \le K_2 < N$. The steps are given in Procedure 1. Intuitively, the algorithm aims to find the outlier users that are far away from most secondary users in the history space. Note that the distance used in Procedure 1 could be either the Hamming distance or the Euclidean distance.
Procedure 1 Procedure of Double-Sided Neighbor Distance Algorithm
1: Set neighbor indices $K_1$ and $K_2$ ($K_1 \le K_2$), as well as thresholds $\mu$ and $\nu$.
2: for secondary user $n$ do
3:   for secondary user $m \ne n$ do
4:     Compute the distance between $\mathbf{x}_n$ and $\mathbf{x}_m$, namely $d_{mn}$.
5:   end for
6:   Sort all distances $\{d_{mn}\}_{m=1,\ldots,N,\,m \ne n}$ in ascending order.
7:   Choose the secondary user $m_1$ such that $d_{m_1 n}$ is the $K_1$-th smallest to secondary user $n$. Choose the secondary user $m_2$ such that $d_{m_2 n}$ is the $K_2$-th smallest to secondary user $n$.
8:   Set metrics $D_n^1 = d_{m_1 n}$ and $D_n^2 = d_{m_2 n}$ for the attacker detection.
9:   If $D_n^1 > \mu$ or $D_n^2 < \nu$, claim that user $n$ is a malicious one, where $\mu$ and $\nu$ are thresholds.
10: end for
The intuition of the proposed algorithm is that, if a secondary user's history is too far away from others' histories or too close to others' histories, its behavior is abnormal, and it is probably a malicious user. If an attacker wants to disguise its identity, it must behave the same as the other honest secondary users, thus losing the capability of attacking. Note that, if considering only the metric $D_n^1$, the algorithm is the same as the $K$-proximity algorithm for abnormality detection in data mining [25]. However, the traditional $K$-proximity algorithm will miss an attacker that copies the report supported by most honest secondary users for most of the time and launches the attack occasionally. Then, for most time slots, the attacker's report is very close to the center of all reports, thus making the distance to most reports small. This small distance will counteract the large distance of reports when launching the attacks. Hence, the system cannot identify the attacker by just checking the average distance. To prevent such attacks, we also use $D_n^2$ to detect such attackers. This is the reason that the algorithm is called double sided.
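The DSND procedure can be sketched compactly in Python (not the authors' code; Hamming distance, with thresholds supplied by the caller):

```python
import numpy as np

def dsnd(reports, k1, k2, mu, nu):
    """Double-sided neighbor distance check.
    reports: (N, T) 0/1 matrix, one row per secondary user's report history.
    Flags user n if its k1-th nearest-neighbor distance exceeds mu (too far
    from the crowd) or its k2-th nearest-neighbor distance is below nu
    (too close, i.e., a copycat). Returns the list of flagged user indices."""
    n_users = reports.shape[0]
    flagged = []
    for n in range(n_users):
        # Hamming distances from user n to every other user
        d = [np.sum(reports[n] != reports[m]) for m in range(n_users) if m != n]
        d.sort()
        d1, d2 = d[k1 - 1], d[k2 - 1]  # K1-th and K2-th smallest
        if d1 > mu or d2 < nu:
            flagged.append(n)
    return flagged

# toy example: user 3 reports the opposite of everyone else
rng = np.random.default_rng(0)
honest = rng.integers(0, 2, size=(1, 12))
reports = np.vstack([honest, honest, honest, 1 - honest])
print(dsnd(reports, k1=1, k2=2, mu=6, nu=-1))  # → [3]
```

Here `nu=-1` disables the "too close" side so the toy example only exercises the "too far" side.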
B. Threshold Selection
We propose to use the following dynamic thresholds for the attacker detection, which are given by

$$\mu = \frac{1}{N(N-1)} \sum_{m \ne n} d_{mn} + \frac{c}{\sqrt{T}}, \qquad (1)$$

and

$$\nu = \frac{1}{N(N-1)} \sum_{m \ne n} d_{mn} - \frac{c}{\sqrt{T}}, \qquad (2)$$

where $c$ is a predetermined value that represents an estimation of the variance. The intuition of the thresholds in (1) and (2) is to consider the average distance, namely $\frac{1}{N(N-1)} \sum_{m \ne n} d_{mn}$, as the normal distance between any two honest secondary users. The reason for the $c/\sqrt{T}$ term is that the variance of the normalized distance between two honest secondary users is proportional to $\frac{1}{T}$.
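The dynamic thresholds in (1) and (2) can be computed from the pairwise normalized distances; a short sketch under the paper's notation (with $c$ supplied by the operator):

```python
import numpy as np

def dsnd_thresholds(reports, c):
    """Dynamic DSND thresholds (1)-(2): the average pairwise normalized
    Hamming distance plus/minus c / sqrt(T).
    reports: (N, T) 0/1 matrix of report histories."""
    n_users, T = reports.shape
    total = 0.0
    for n in range(n_users):
        for m in range(n_users):
            if m != n:
                total += np.mean(reports[n] != reports[m])  # normalized distance
    avg = total / (n_users * (n_users - 1))
    return avg + c / np.sqrt(T), avg - c / np.sqrt(T)  # (mu, nu)
```

With honest users only, $\mu$ and $\nu$ bracket the typical pairwise distance, and the bracket tightens as $T$ grows.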
V. PERFORMANCE ANALYSIS
In this section, we analyze the performance of the proposed DSND algorithm. For simplicity, we assume that there is only one malicious user, i.e., $M = 1$. For the general case of $M > 1$, we use numerical simulations to evaluate the performance in Section VI.
For the performance analysis, we consider two cases of the information available to the malicious user. In the first case, the malicious user does not know the reports of other secondary users (called an independent attack). In the second case, the malicious user knows all reports of other secondary users, which can be achieved by letting all other secondary users report first and then decoding their reports (called a dependent attack). To simplify the analysis of the performance of Procedure 1, we assume that the observation distributions are the same for all secondary users. We denote by $P_f$ and $P_m$ the false-alarm and miss-detection probabilities of the local decision, respectively². Due to the assumption of the same distributions, $P_f$ and $P_m$ are common for all secondary users. We consider the Hamming distance in Procedure 1; the analysis is straightforward to extend to the Euclidean distance.
²When the false-alarm and miss-detection probabilities are different, the proposed algorithm can still be applied if the difference in the probabilities is small. When the difference is small, the attacker can hardly disguise its attacks behind the discrepancy of the probabilities. However, if the difference is large, then it is difficult to detect the attacker. Actually, in this case, there will be little gain in carrying out collaborative spectrum sensing since the spectrum occupancies of different secondary users are significantly different.
A. Independent Attack
1) Performance Analysis: Since the malicious user does not know the reports of the other secondary users, it launches attacks based only on its own decisions. A general approach for attacking is to swap the decision randomly. We assume that the malicious node changes the report from 0 to 1 with probability $\rho_1$ and changes the report from 1 to 0 with probability $\rho_2$.
The probability that two honest secondary users $m$ and $n$ make different decisions is given by

$$P^h = \Pr(r_n \ne r_m \mid \text{channel is idle})\,\pi_0 + \Pr(r_n \ne r_m \mid \text{channel is busy})\,\pi_1 = 2\pi_0 P_f(1-P_f) + 2\pi_1 P_m(1-P_m), \qquad (3)$$

where the superscript $h$ means that the probability is with respect to two honest secondary users. Recall that $\pi_1$ and $\pi_0$ denote the probabilities that the licensed channel is busy or idle, respectively.
Similarly to the derivation of (3), the probability that the malicious secondary user sends a report different from the report of a generic honest secondary user is given by

$$P^{hm} = \pi_0\big[P_f(1-q_0) + (1-P_f)q_0\big] + \pi_1\big[(1-P_m)(1-q_1) + P_m q_1\big], \qquad (4)$$

where $q_0 = P_f(1-\rho_2) + (1-P_f)\rho_1$ and $q_1 = (1-P_m)(1-\rho_2) + P_m\rho_1$ are the probabilities that the malicious user reports 1 when the channel is idle and busy, respectively, and the superscript $hm$ means that the probability is with respect to an honest secondary user and a malicious user.
Therefore, the difference between the reports of two secondary users is a Bernoulli random variable with expectation $P^h$ or $P^{hm}$. The corresponding variances are equal to $P^h(1-P^h)$ and $P^{hm}(1-P^{hm})$, respectively.
Since all local decisions are mutually independent in different spectrum-sensing periods, the normalized distance between two honest secondary users converges, i.e.,

$$\frac{d_{mn}}{T} \to P^h, \qquad (5)$$

almost surely as $T \to \infty$, according to the strong law of large numbers, if secondary users $m$ and $n$ are both honest. Similarly, the normalized distance between the malicious user and any honest secondary user converges to $P^{hm}$.
Based on the above discussion, the following proposition shows that the false-alarm and miss-detection probabilities for the attacker detection decrease exponentially with respect to $T$ as $T \to \infty$. The proof is given in Appendix A.
Proposition 1: As $T \to \infty$, the false-alarm and miss-detection probabilities for the attacker detection, denoted by $P_F$ and $P_M$, respectively, satisfy

$$\frac{\log P_F}{T} < -\Big((K_2+1)\big(P^h-\nu\big)^2 + K_1\big(\mu-P^h\big)^2\Big), \qquad (6)$$

and

$$\frac{\log P_M}{T} < -\Big((K_2+1)\big(P^{hm}-\nu\big)^2 + K_1\big(\mu-P^{hm}\big)^2\Big). \qquad (7)$$
2) Detectability: Obviously, when $P^{hm} \ne P^h$, the malicious user can be detected with probability 1 as $T \to \infty$, if the thresholds are properly chosen. The miss-detection and false-alarm probabilities for the attacker detection decrease exponentially with respect to $T$ when $T$ is sufficiently large.
We need to answer the first question in the introduction section, i.e., is the malicious user always detectable? From the above analysis, the malicious user is non-detectable when $P^{hm} = P^h$, which is equivalent to

$$\rho_1 = f(P_f, P_m)\,\rho_2, \qquad (8)$$

where

$$f(P_f, P_m) = \frac{\pi_0 P_f (1-2P_f) - \pi_1 (1-P_m)(1-2P_m)}{\pi_0 (1-P_f)(1-2P_f) - \pi_1 P_m (1-2P_m)}. \qquad (9)$$

When $f(P_f, P_m) \ge 0$, the attacker can set $\rho_1$ and $\rho_2$ according to (8) such that the attacker cannot be detected by the DSND algorithm. However, we can still try to find another clue to catch the malicious user. We notice that honest secondary users share the same probability of reporting 1 due to the assumption of identical observation distributions. If the malicious user reports 1 with a different probability, the fusion center can always detect the malicious user by computing the frequencies of reporting 1. The algorithm is summarized in Procedure 2.
Procedure 2 Procedure of Frequency Check Algorithm
1: Set threshold $\delta$.
2: for each spectrum sensing period do
3:   Accumulate the times of reporting 1 for each secondary user.
4: end for
5: Compute the frequency of reporting 1, denoted by $f_n^1$ for secondary user $n$.
6: If $\big| f_n^1 - \frac{1}{N}\sum_{m=1}^{N} f_m^1 \big| > \delta$, claim that secondary user $n$ is malicious.
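Procedure 2 amounts to comparing each user's empirical rate of 1-reports against the population average; a sketch (the threshold `delta` plays the role of the threshold in step 1):

```python
import numpy as np

def frequency_check(reports, delta):
    """Frequency check: reports is an (N, T) 0/1 matrix. Flag users whose
    frequency of reporting 1 deviates from the average frequency by more
    than delta."""
    freqs = reports.mean(axis=1)  # f_n^1 for each user n
    return [int(i) for i in np.flatnonzero(np.abs(freqs - freqs.mean()) > delta)]

reports = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [1, 1, 1, 1]])            # user 2 reports 1 far too often
print(frequency_check(reports, delta=0.2))   # → [2]
```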
To avoid this check, the total number of 1s changed to 0 should be the same as the total number of 0s changed to 1. This requires

$$\rho_1 = g(P_f, P_m)\,\rho_2, \qquad (10)$$

where

$$g(P_f, P_m) = \frac{\pi_0 P_f + \pi_1 (1-P_m)}{\pi_0 (1-P_f) + \pi_1 P_m}. \qquad (11)$$

Therefore, the malicious user can avoid both the DSND detection and the frequency check only when

$$f(P_f, P_m) = g(P_f, P_m). \qquad (12)$$
For the attacker, the hope of avoiding both the DSND detection and the frequency check is eliminated by the following proposition. The proof is given in Appendix B.
Proposition 2: The attacker can avoid the DSND detection and the frequency check only when

$$P_f = P_m = 0.5. \qquad (13)$$

Obviously, the condition $P_f = P_m = 0.5$ is impossible for practical systems since it means a completely random guess for the spectrum sensing. Therefore, the attacker will always be detected by the system.
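Proposition 2 can be checked numerically from first principles, without the closed forms: compute $P^{hm}$ directly, solve for the $\rho_1$ that equalizes it with $P^h$, and test whether that same $\rho_1$ also balances the expected 0-to-1 and 1-to-0 flips. A sketch with illustrative parameters (function names are ours):

```python
def report_stats(pf, pm, pi0, rho1, rho2):
    """Return (p_diff, net): the probability that the attacker's report differs
    from an honest user's, and the expected 0->1 minus 1->0 flip rate."""
    pi1 = 1 - pi0
    p_diff, net = 0.0, 0.0
    for pi, p1 in ((pi0, pf), (pi1, 1 - pm)):   # p1 = prob. local decision is 1
        q = p1 * (1 - rho2) + (1 - p1) * rho1   # prob. attacker reports 1
        p_diff += pi * (p1 * (1 - q) + (1 - p1) * q)
        net += pi * ((1 - p1) * rho1 - p1 * rho2)
    return p_diff, net

def check(pf, pm, pi0=0.5, rho2=0.1):
    pi1 = 1 - pi0
    ph = 2 * pi0 * pf * (1 - pf) + 2 * pi1 * pm * (1 - pm)
    # solve p_diff(rho1) = ph; p_diff is affine in rho1
    d0, _ = report_stats(pf, pm, pi0, 0.0, rho2)
    d1, _ = report_stats(pf, pm, pi0, 1.0, rho2)
    if abs(d1 - d0) < 1e-12:
        rho1 = rho2          # degenerate case: any rho1 works; pick the balanced one
    else:
        rho1 = (ph - d0) / (d1 - d0)
    _, net = report_stats(pf, pm, pi0, rho1, rho2)
    return rho1, net         # net == 0 iff the frequency check is also evaded

print(check(0.1, 0.1))   # rho1 < 0: infeasible, so the attacker is detectable
print(check(0.5, 0.5))   # rho1 = rho2 and net = 0: the undetectable corner case
```

For $P_f = P_m = 0.1$ the only distance-preserving solution has $\rho_1 < 0$, which is not a valid probability; only the degenerate point $P_f = P_m = 0.5$ evades both checks, matching Proposition 2.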
3) Detection Time: Now, we answer the second question in the introduction: how fast is the detection procedure? For simplicity of analysis, we assume that the fusion center knows the miss-detection and false-alarm probabilities and uses the stationary threshold. It is difficult to analyze the case of small or medium $T$. For insight, we consider the case of large $T$, which implies that the time needed to detect the attacker is large and the error probability of attacker detection is very small.
Suppose that the attacker is secondary user 1. For sufficiently large $T$, we have

$$\frac{d_{1n}}{T} = P^{hm} + o(1), \qquad (14)$$

with large probability, according to the strong law of large numbers. When the threshold is reached, i.e.,

$$\frac{d_{1n}}{T} = \mu + o(1), \qquad (15)$$

we obtain that the time needed to detect the attacker is given by

$$T = \frac{c^2}{(P^{hm} - P^h)^2} + o(c^2), \qquad (16)$$

since $\mu \approx P^h + c/\sqrt{T}$. Then, the time needed to detect the attacker is proportional to $c^2$. A small $c$ can reduce the detection time but increases the error probabilities. The detection time is also inversely proportional to the squared difference between $P^{hm}$ and $P^h$, which is determined by the behavior of the malicious secondary user.
B. Dependent Attack
1) Performance Analysis: Now, we assume that the malicious user knows the reports of all other secondary users, based on which it decides its report. We first observe that a malicious user can launch an attack only when all other secondary users report 0, i.e., when there is no primary user. Otherwise, there will be a secondary user reporting 1, i.e., an alarm, and the fusion center will make a decision 1 regardless of how the malicious user changes its report. Therefore, an intelligent malicious user launches attacks only when all honest secondary users report 0. The fusion center should check only the rounds in which at least $N-1$ secondary users report 0, because the malicious user does not attack in other cases. Hence, in the following discussion, we ignore all other cases of reports.
Note that this dependent attack is valid only when the OR rule is used at the fusion center. If the fusion center uses a majority voting rule, i.e., the fusion center takes the decision of the majority (suppose that there is an odd number of secondary users), then the decision does not change even if the attacker swaps its decision. However, when the majority rule is used, the attacker can change its strategy in the dependent attack, e.g., swapping its decision when there is a tie among all other secondary users. A more detailed study is beyond the scope of this paper.
Denote by $k_n$ the number of rounds in which secondary user $n$ reports 1 while all other secondary users report 0 during $T$ spectrum-sensing periods. Without loss of generality, we assume that, in all the considered rounds, only one secondary user reports 1. Then, it is easy to verify (the verification is omitted due to limited space)

$$d_{mn} = k_m + k_n. \qquad (17)$$
When $T \to \infty$, the normalized distance converges to

$$\frac{d_{mn}}{T} \to \Pr(n \text{ reports } 1, \text{all others report } 0) + \Pr(m \text{ reports } 1, \text{all others report } 0), \qquad (18)$$

with $T$ counting only the rounds under consideration. When both secondary users $m$ and $n$ are honest, (18) is equivalent to

$$\frac{d_{mn}}{T} \to 2\Pr(n \text{ reports } 1 \mid \text{all others report } 0) = \frac{2\big(\pi_0 P_f (1-P_f)^{N-1} + \pi_1 (1-P_m) P_m^{N-1}\big)}{\pi_0 (1-P_f)^{N-1} + \pi_1 P_m^{N-1}}, \qquad (19)$$

where the numerator (without the factor 2) equals $\Pr(n \text{ reports } 1, \text{all others report } 0)$ while the denominator equals $\Pr(\text{all others report } 0)$. For example, $\pi_0 P_f (1-P_f)^{N-1}$ is the probability that the spectrum is actually idle and secondary user $n$ makes a false alarm while all other secondary users report the correct result. The other terms follow the same argument. Note that when an honest secondary user reports 1, the malicious user does not attack and sends only its real decision.
Slightly abusing the notation, we denote by $\rho_1$ and $\rho_2$ the probabilities of changing the malicious user's decision from 0 to 1 and from 1 to 0, respectively. Then, if secondary user $n$ is malicious, we have

$$\frac{d_{mn}}{T} \to \frac{\pi_0 (1-P_f)^{N-1}\big[P_f(1-\rho_2) + (1-P_f)\rho_1\big] + \pi_1 P_m^{N-1}\big[(1-P_m)(1-\rho_2) + P_m \rho_1\big] + \pi_0 P_f (1-P_f)^{N-1} + \pi_1 (1-P_m) P_m^{N-1}}{\pi_0 (1-P_f)^{N-1} + \pi_1 P_m^{N-1}}, \qquad (20)$$

where the first two terms of the numerator correspond to the rounds in which the malicious user $n$ reports 1 and the last two terms to the rounds in which the honest user $m$ reports 1.
Similarly to the independent-attack case, if (19) and (20) are different, the fusion center can always detect the malicious user. We can obtain approximations of the false-alarm and miss-detection probabilities similarly to (6) and (7).
2) Detectability: To avoid the detection, the malicious user can equalize (19) and (20), which requires

$$\rho_1 = \frac{\pi_0 P_f (1-P_f)^{N-1} + \pi_1 (1-P_m) P_m^{N-1}}{\pi_0 (1-P_f)^{N} + \pi_1 P_m^{N}}\,\rho_2. \qquad (21)$$

Obviously, it is easy to choose $\rho_1$ and $\rho_2$ satisfying (21). Particularly, when $P_m$ is very small, which is reasonable for cognitive radio systems, we have

$$\rho_1 \approx \frac{P_f}{1-P_f}\,\rho_2. \qquad (22)$$
Then, the malicious user can avoid the DSND detection by setting $\rho_1$ and $\rho_2$ according to (21). It can maximize $\rho_1$ and $\rho_2$ to maximize the performance damage of collaborative spectrum sensing, under the constraint that the probabilities must be smaller than or equal to 1. Now, we wonder whether we can apply Procedure 2, namely the frequency check, again to detect the malicious user, similarly to the independent-attack case. Unfortunately, the answer is no. We notice that (21) is equivalent to

$$\rho_1 \big(\pi_0 (1-P_f)^{N} + \pi_1 P_m^{N}\big) = \rho_2 \big(\pi_0 P_f (1-P_f)^{N-1} + \pi_1 (1-P_m) P_m^{N-1}\big). \qquad (23)$$
It is easy to verify that the left-hand side of (23) is equal to the expected number of changes from 0 to 1, while the right-hand side of (23) equals the expected number of changes from 1 to 0. Therefore, the average numbers of 0s and 1s are unchanged, and the fusion center is unable to detect the malicious user. Thus, we call the dependent attack using (21) a balanced dependent attack. Moreover, the fusion center cannot distinguish the deliberate swap of reports by the malicious user from its false alarms and miss detections. Even if the fusion center knows the strategy of the attacker (both $\rho_1$ and $\rho_2$ are known), the following proposition states that the attacker still cannot be detected (the proof is given in Appendix C). Hence, the malicious user can completely disguise its attacks in the randomness of detection and can never be detected by the fusion center.
Proposition 3: When the attacker applies the balanced dependent attack, the a posteriori probabilities of being an attacker, i.e., $\Pr(\text{secondary user } n \text{ is an attacker} \mid \text{all report history})$, are the same for all secondary users. Therefore, the fusion center cannot distinguish the attacker from the other secondary users.
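The balance condition (21)/(23) can be verified numerically: compute the balanced $\rho_1$ from $\rho_2$, then check that the attacker's probability of sending 1 in the considered rounds matches an honest user's, and that the expected 0-to-1 and 1-to-0 flips cancel. A sketch with illustrative parameters:

```python
def balanced_rho1(pf, pm, pi0, n_users, rho2):
    """rho1 satisfying the balance condition (21) for the dependent attack."""
    pi1 = 1 - pi0
    num = pi0 * pf * (1 - pf) ** (n_users - 1) + pi1 * (1 - pm) * pm ** (n_users - 1)
    den = pi0 * (1 - pf) ** n_users + pi1 * pm ** n_users
    return rho2 * num / den

pf, pm, pi0, N, rho2 = 0.1, 0.1, 0.8, 5, 0.5
pi1 = 1 - pi0
rho1 = balanced_rho1(pf, pm, pi0, N, rho2)

# P(a given user's decision is 1, all other N-1 users' decisions are 0)
p_one = pi0 * pf * (1 - pf) ** (N - 1) + pi1 * (1 - pm) * pm ** (N - 1)
# P(all N users' decisions are 0)
p_zero = pi0 * (1 - pf) ** N + pi1 * pm ** N

# attacker's probability of *sending* 1 in the considered rounds
p_attack = p_one * (1 - rho2) + p_zero * rho1
print(abs(p_attack - p_one) < 1e-12)              # distance statistic matches an honest user
print(abs(p_zero * rho1 - p_one * rho2) < 1e-12)  # 0->1 flips balance 1->0 flips
```

Note that `p_zero` and `p_one` are exactly the two probabilities the attacker needs, and both can be estimated empirically without knowing $P_f$, $P_m$, $\pi_0$, or $\pi_1$.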
One hope for detecting the malicious user comes from the fact that the malicious user does not know P_f and P_m and is therefore unable to set α_1 and α_2 according to (21). It is even impossible for the malicious user to estimate P_f and P_m, since the true hypothesis, i.e., whether primary users exist or not, is never revealed to the cognitive radio network. However, the malicious user need not estimate P_f and P_m. It is sufficient to estimate π_0 (1 - P_f)^N + (1 - π_0) P_m^N and π_0 P_f (1 - P_f)^{N-1} + (1 - π_0)(1 - P_m) P_m^{N-1} directly. The former is equal to the probability that all secondary users report 0, while the latter equals the probability that all secondary users report 0 except one reporting 1. Both probabilities can be estimated from experience. Therefore, the malicious user can estimate both probabilities by refraining from attacking for a sufficiently long period of time. Then, it sets α_1 and α_2 according to (21) and begins its attacks. Thus, the fusion center is still unable to detect the malicious user. When P_f is small, an alternative simple approach for the attacker, requiring no estimation, is the swap conservation attack, whose algorithm is given in Procedure 3 below. The principle is to switch all decision 1s to 0s and then swap the same number of decision 0s to 1s. Both switches are carried out only when all other reports are 0.
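The balance property can be checked with a small Monte Carlo experiment. The sketch below uses illustrative parameter values (not the paper's simulation settings), computes α_1 from α_2 via the balance condition (23), and confirms that the attacker's long-run report frequency matches an honest user's:

```python
import random

# Illustrative parameters (not the paper's simulation settings).
N, pi0, p_f, p_m = 10, 0.9, 0.05, 0.05
random.seed(1)

def decisions():
    """Local decisions of all N users in one sensing round."""
    idle = random.random() < pi0
    p_one = p_f if idle else 1.0 - p_m     # prob of deciding 1
    return [1 if random.random() < p_one else 0 for _ in range(N)]

# Balance condition (23): a1 * P(all 0) = a2 * P(own 1, others 0).
p_all0 = pi0 * (1 - p_f)**N + (1 - pi0) * p_m**N
p_own1 = (pi0 * p_f * (1 - p_f)**(N - 1)
          + (1 - pi0) * (1 - p_m) * p_m**(N - 1))
a2 = 1.0                       # attack as often as possible
a1 = a2 * p_own1 / p_all0      # balanced 0 -> 1 swap probability

T = 100000
ones_honest = ones_attacked = 0
for _ in range(T):
    x = decisions()
    ones_honest += x[0]
    r = x[0]
    if sum(x[1:]) == 0:        # attack only when all others report 0
        if r == 0 and random.random() < a1:
            r = 1
        elif r == 1 and random.random() < a2:
            r = 0
    ones_attacked += r

# The attacker's report frequency stays statistically indistinguishable.
print(ones_honest / T, ones_attacked / T)
```

The two printed frequencies agree to within Monte Carlo noise, illustrating why the frequency check fails against the balanced dependent attack.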
Based on the above discussions, we can draw the following conclusion: if the malicious user is able to monitor the reports of all other secondary users and make decisions based on these reports, it can attack the collaborative spectrum sensing without being detected by the fusion center using the detection algorithm proposed in this paper. This motivates cognitive radio networks to protect the reports of all secondary users, e.g., using encryption, so that the report of a secondary user cannot be decrypted by other secondary users.
Procedure 3 Procedure of Swap Conservation Attack
1: Set counter c = 0.
2: for each time slot do
3:   if all other secondary users report 0 then
4:     if its own decision is 1 then
5:       Report 0.
6:       c = c + 1.
7:     end if
8:     if its own decision is 0 then
9:       if c > 0 then
10:        Report 1.
11:        c = c - 1.
12:      end if
13:    end if
14:  end if
15: end for
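As a concrete rendering, Procedure 3 can be sketched in Python; the per-slot interface (own_decision, other_reports) is a hypothetical simplification of the report exchange, and only the counter logic is taken from the procedure:

```python
def swap_conservation_step(own_decision, other_reports, state):
    """One time slot of Procedure 3 (swap conservation attack).

    own_decision  : this user's local decision (0 or 1)
    other_reports : the other users' reports in this slot
    state         : dict holding the swap counter c
    Returns the report sent to the fusion center.
    """
    if any(r == 1 for r in other_reports):
        return own_decision          # attack only when all others report 0
    if own_decision == 1:
        state["c"] += 1              # suppress a 1, remember the debt
        return 0
    if state["c"] > 0:
        state["c"] -= 1              # pay the debt back: swap a 0 to a 1
        return 1
    return own_decision

state = {"c": 0}
reports = [swap_conservation_step(d, [0, 0, 0], state)
           for d in [1, 0, 1, 0, 0]]
print(reports)  # [0, 1, 0, 1, 0]: every suppressed 1 is re-emitted later
```

Note how the counter guarantees that the numbers of 0s and 1s are conserved over time, which is exactly what defeats the frequency check.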
C. Optimal Attacking
Now, we answer the third question: what is the optimal attacking strategy of the malicious user? We first analyze the performance degradation of the collaborative spectrum sensing due to the attacks. Then, we obtain the attacking strategy by maximizing the performance damage. For simplicity of analysis, we consider only the independent attack.
1) Performance Degradation of Spectrum Sensing: To maximize the damage, the attacker needs to know how much damage it brings when not being detected. Attacks increase both false-alarm and miss-detection probabilities. It is easy to verify that the increases of both probabilities are given by
δP_F = (1 - P_f)^{N-1} ((1 - P_f) α_1 - P_f α_2),
δP_M = P_m^{N-1} ((1 - P_m) α_2 - P_m α_1). (24)
We know that the original false-alarm and miss-detection
probabilities are given by
P_F = 1 - (1 - P_f)^N,
P_M = P_m^N. (25)
Therefore, the relative increases of both error probabilities
are given by
δP_F / P_F = (1 - P_f)^{N-1} ((1 - P_f) α_1 - P_f α_2) / (1 - (1 - P_f)^N) ≈ α_1 / (N P_f),
δP_M / P_M = ((1 - P_m) α_2 - P_m α_1) / P_m ≈ α_2 / P_m - α_1, (26)
where the approximations hold for the case of very small P_f and P_m. When the original false-alarm and miss-detection probabilities are very small, the attack may cause a substantial relative increase of the errors.
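For concreteness, the expressions above can be evaluated numerically; the sketch below uses the parameter values from the later simulation section (N = 20, P_f = 0.01, P_m = 0.05) with illustrative switching probabilities α_1 = α_2 = 0.2:

```python
# Evaluate (24)-(26) under the OR fusion rule.
N, p_f, p_m = 20, 0.01, 0.05
a1, a2 = 0.2, 0.2        # 0->1 and 1->0 switching probabilities (illustrative)

P_F = 1 - (1 - p_f)**N   # original false-alarm probability, (25)
P_M = p_m**N             # original miss-detection probability, (25)

dP_F = (1 - p_f)**(N - 1) * ((1 - p_f) * a1 - p_f * a2)   # (24)
dP_M = p_m**(N - 1) * ((1 - p_m) * a2 - p_m * a1)         # (24)

print(f"relative false-alarm increase:    {dP_F / P_F:.2f}")
print(f"relative miss-detection increase: {dP_M / P_M:.2f}")
```

With these values the miss-detection probability grows by a factor of several (3.6 relative increase), illustrating how damaging even a moderate switching probability is when the original error rates are small.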
2) Maximizing the Damage: Based on the analysis of the performance damage and of the time needed to detect the attacker (assumed to be very large), the optimal probabilities α_1 and α_2 are given by
(α_1*, α_2*) = arg max_{α_1, α_2} (c_M δP_M(α_1, α_2) + c_F δP_F(α_1, α_2)) T(α_1, α_2), (27)

where c_M and c_F are the costs of miss detection and
false alarm, and T(α_1, α_2) denotes the expected number of attacks launched before being detected. The purpose of the maximization is to maximize the average total damage caused before the attacker is detected. When α_1 and α_2 increase, the probability of attacks is increased, while the period during which the malicious secondary user can launch attacks is reduced, since the detection time is also
Fig. 2. The evolution of metric m_n^1 in the single-attacker and independent-attack case.
Fig. 3. The evolution of metric m_n^2 in the single-attacker and independent-attack case.
reduced. Therefore, the attacker needs to find the optimal tradeoff between disguising its type and launching attacks more frequently.
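The tradeoff in (27) can be explored by exhaustive search over a grid. Since the closed-form expression for the expected number of attacks before detection is not reproduced here, the sketch below substitutes a simple placeholder model for T(α_1, α_2) (detection is assumed to speed up as the total switching probability grows), so the resulting numbers are purely illustrative:

```python
# Grid search for (27): maximize (c_M*dP_M + c_F*dP_F) * T(a1, a2).
N, p_f, p_m = 20, 0.01, 0.05
c_M, c_F = 10.0, 1.0          # illustrative costs

def damage(a1, a2):
    """Per-round damage terms from (24)."""
    dP_F = (1 - p_f)**(N - 1) * ((1 - p_f) * a1 - p_f * a2)
    dP_M = p_m**(N - 1) * ((1 - p_m) * a2 - p_m * a1)
    return c_M * dP_M + c_F * dP_F

def T(a1, a2):
    # Placeholder detection-time model (NOT the paper's expression):
    # more aggressive switching is assumed to be detected sooner.
    return 1.0 / (0.05 + a1 + a2)

grid = [i / 100 for i in range(101)]
best = max((damage(a1, a2) * T(a1, a2), a1, a2)
           for a1 in grid for a2 in grid)
print(best)   # (objective, a1, a2)
```

Under this placeholder model the search pushes α_1 toward its maximum while keeping α_2 at zero, because with these parameters the false-alarm damage term dominates; a different T(α_1, α_2) or cost pair would shift the optimum.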
VI. NUMERICAL RESULTS
In this section, we use numerical simulations to demonstrate the performance of the proposed detection algorithms. Unless stated otherwise, we assume that N = 20, i.e., there are 20 secondary users. We assume P_f = 0.01 and P_m = 0.05, and the two detection parameters are set to 5 and 15, respectively. We also assume that the probability of the channel being idle, i.e., π_0, is 0.9. We first discuss the independent attacks; then we provide an example illustrating the undetectability of dependent attacks. The thresholds in (1) and (2) are used for all simulations.
A. Single-Attacker and Independent Attacks
We first test the single-attacker case and consider independent attacks. Figures 2 and 3 show the evolution of the metrics m_n^1 and m_n^2 of the attacker and two honest secondary users, respectively. The switching probabilities α_1 and α_2 are both set to 0.2. We can observe the gap between the attacker and the two honest secondary users. We also notice that the metric m_n^1 of the attacker is already larger than that of the honest secondary users. Therefore, the metric m_n^2 is useless in this example
Fig. 4. The curves of the attacking time and false-alarm probability in the single-attacker and independent-attack case (α = 0.1, 0.2, 0.3).
Fig. 5. CDF curves of the attack times in the single-attacker and independent-attack case (α = 0.1, 0.2, 0.3).
for Procedure 1. However, as has been explained before, the metric m_n^2 can be used to prevent the copy-and-paste attack when the attacker knows an honest secondary user's reports.
The performance of the attacker detection is shown in Fig. 4. We use the adaptive threshold in (2) and test the cases of the coefficient in (2) ranging from 0.1 to 1, thus yielding a series of operating points. Two metrics are used for evaluating the performance, namely the number of attacks launched before being detected and the false-alarm rate. Obviously, a smaller false-alarm rate implies a longer time to detect the attacker, thus allowing more attacks. The larger the average attack time is, the more damage is caused to the cognitive radio system. The corresponding curves of the attacking time and false-alarm probability are shown in Fig. 4 for the cases of α_1 = α_2 = α = 0.1, 0.2, 0.3, respectively. Note that the attacking time does not mean the number of time slots before the attacker is detected; it is the number of valid attacks the attacker launches. To ensure reliability, we begin the attacker detection from the 10th spectrum-sensing period, so that the metrics are reasonably reliable. We observe that, for most cases, the average number of attacks before being detected increases as α increases. Therefore, a better strategy for the attacker is to use larger switching probabilities.
Figure 5 shows the cumulative distribution function (CDF) of the attack times when there is one attacker and α =
Fig. 6. The evolution of metric m_n^1 in the three-attacker and independent-attack case.
Fig. 7. The evolution of metric m_n^2 in the three-attacker and independent-attack case.
0.1, 0.2, 0.3, respectively. We observe that the variation is significant. Sometimes the attacker can launch more than 10 attacks before being detected when α = 0.1.
B. Multiple Attackers and Independent Attacks
We assume that there are three attackers carrying out independent attacks. In Figures 6 and 7, the evolutions of the metrics m_n^1 and m_n^2 are plotted, respectively. Again, we observe the gaps between the attackers and the honest secondary users.
The curves of the attacking time and false-alarm probability for the three-attacker case are shown in Fig. 8. The average number of attacks is defined as the average attack times of the first detected attacker. Again, we observe that a larger switching probability results in a larger average number of attacks, given a fixed false-alarm rate. Another observation is that the average number of attacks is smaller than that of the single-attacker case. This is because the attacker detection is faster in the multiple-attacker case, since more attackers mean more opportunities for attacker detection. Note that this conclusion is conditioned on the assumption that α_1 = α_2 = α.
Figure 9 shows the CDF of the attack times when there are three attackers and α = 0.1, 0.2, 0.3. We observe that
Fig. 8. The curves of the attacking time and false-alarm probability in the multiple-attacker and independent-attack case (α = 0.1, 0.2, 0.3).
Fig. 9. CDF curves of the average attack times in the multiple-attacker case (α = 0.1, 0.2, 0.3).
the variation is also significant. Figure 10 shows the average number of attacks, defined as in Fig. 8, for different numbers of attackers. We observe that the curves of the attacking time and false-alarm probability are very similar for different numbers of attackers when the false-alarm probability is large. When the false-alarm probability is small, the average attack time becomes smaller as the number of attackers increases.
C. Dependent Attacks
In Fig. 11, we show the evolution of the metric m_n^1 for a single attacker employing a dependent attack. In particular, we assume that the attacker uses the swap conservation policy. We observe that the metric of the attacker is indistinguishable from those of the honest secondary users. During the 2000 time slots, the attacker launched 692 attacks (half from 0 to 1 and half from 1 to 0). Simulation shows that the metric m_n^2 is also indistinguishable if the attacker adopts the swap conservation policy (the figure is omitted).
D. Optimal Attack
Figure 12 shows the optimal α_1 and α_2 obtained from (27) via exhaustive search. We set π_0 = 0.9 and the threshold coefficient in (2) to 0.1. We tested two sets of error probabilities: P_f = 0.03, P_m = 0.02,
Fig. 10. The curves of the attacking time and false-alarm probability with different numbers of attackers (M = 4, 5, 6).
Fig. 11. The evolution of metric m_n^1 in the dependent-attack case.
and P_f = 0.01, P_m = 0.005. From Fig. 12, we observe that, when the probabilities of false alarm and miss detection are large, α_1 decreases and α_2 increases as c_M increases (recall that c_M is the cost of miss detection; we fix the cost of false alarm, c_F, to 1). The trend implies that more 1s (primary users exist) are converted to 0s (no primary user), thus causing more miss detections, because c_M is increased. When P_f and P_m are small, the optimal attack strategy is to convert almost all 1s to 0s. Such an intensive attack is in sharp contrast to the slow attack obtained in Fig. 8 when α_1 = α_2 = α.
VII. CONCLUSIONS
We have proposed an abnormality-detection-based algo-
rithm for the detection of attackers in collaborative spectrum
sensing in cognitive radio systems. It does not assume any a
priori information about the strategy of attackers. A dynamic
threshold selection has been proposed. It has been shown that,
for the single-attacker independent attack, the attacker will
always be detected asymptotically. A surprising conclusion is that, for the dependent attack, an attacker can disguise its own identity within the randomness of reports, thus avoiding detection while launching attacks with a constant probability. This motivates the protection of the reports of secondary users.
Fig. 12. Optimal α_1 and α_2 for attacks, versus the cost of miss detection c_M, for (P_f, P_m) = (0.03, 0.02) and (0.01, 0.005).
APPENDIX A
PROOF OF PROP. 1
Proof: According to the Central Limit Theorem, the centered statistic computed between secondary users m and n converges to a Gaussian random variable with zero expectation and variance μ(1 - μ)/K, if secondary users m and n are both honest, where K is the number of spectrum-sensing rounds entering the average. Therefore, we can approximate the distribution of the statistic by a Gaussian distribution with expectation μ and variance μ(1 - μ)/K. Then, the probability that both statistics m_n^1 and m_n^2 of an honest secondary user stay below the thresholds η_1 and η_2, denoted by F(η_1, η_2), is approximated by
F(η_1, η_2) ≈ (1 - Q((η_1 - μ) √(K/(μ(1 - μ))))) (1 - Q((η_2 - μ) √(K/(μ(1 - μ))))), (28)
where Q(·) is the Q-function defined by

Q(x) = (1/√(2π)) ∫_x^∞ e^{-t²/2} dt, (29)

and φ(·) is the probability density function of the standard Gaussian random variable, given by

φ(x) = (1/√(2π)) e^{-x²/2}. (30)
By applying the following expansion of Q(x), given by

Q(x) = (φ(x)/x) (1 - 1/x² + 3/x⁴ - ...), (31)
F(η_1, η_2) can be asymptotically approximated by

F(η_1, η_2) ≈ 1 - e^{-t_1²/2}/(√(2π) t_1) - e^{-t_2²/2}/(√(2π) t_2) + e^{-(t_1² + t_2²)/2}/(2π t_1 t_2), (32)
where

t_1 = (η_1 - μ) √(K/(μ(1 - μ))), (33)

and

t_2 = (η_2 - μ) √(K/(μ(1 - μ))). (34)
Then, the false-alarm probability of honest secondary users is given by

P_fa(η_1, η_2) ≈ 1 - F(η_1, η_2). (35)
Substituting (32) into (35), we obtain (6). The detailed manipulation is omitted due to limited space.
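The accuracy of the expansion (31) can be checked numerically; the exact Q-function is expressible through the complementary error function as Q(x) = erfc(x/√2)/2:

```python
import math

def Q(x):
    """Exact Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_asym(x, terms=3):
    """Asymptotic expansion (31): phi(x)/x * (1 - 1/x^2 + 3/x^4 - ...)."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    series, coef = 0.0, 1.0
    for k in range(terms):
        series += coef / x**(2 * k)
        coef *= -(2 * k + 1)
    return phi / x * series

for x in (2.0, 4.0, 6.0):
    # The relative error shrinks rapidly as x grows.
    print(x, Q(x), Q_asym(x))
```

Already at x = 4 the three-term expansion is accurate to well under one percent, which justifies its use in the asymptotic approximation (32).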
Using a similar analysis, the probability that both statistics of the malicious user stay below the thresholds, denoted by G(η_1, η_2), is approximated by

G(η_1, η_2) ≈ (1 - Q((η_1 - ν) √(K/(ν(1 - ν))))) (1 - Q((η_2 - ν) √(K/(ν(1 - ν))))), (36)

where ν denotes the expectation of the statistics of the malicious user.
Therefore, the miss-detection probability of the attacker detection can be approximated by

P_md(η_1, η_2) ≈ G(η_1, η_2). (37)

The remainder of the proof is the same as that of (6). This concludes the proof.
APPENDIX B
PROOF OF PROP. 2
Proof: As argued in Section IV.A, a necessary condition for the attacker to avoid both the DSND detection and the frequency check is that the attacked report sequence exhibit the same cross-correlation with the honest reports as an honest sequence, i.e.,

Cov(X'_n, X_m) = Cov(X_n, X_m), (38)

where X_n and X'_n denote the reports of secondary user n before and after the attack, and X_m is the report of an honest secondary user m ≠ n. We substitute the definitions of both covariances into (38), which is equivalent to
(1 - α_1 - α_2) π_0 (1 - π_0) (1 - P_f - P_m)² = π_0 (1 - π_0) (1 - P_f - P_m)². (39)
After simplification of (39), and noting that α_1 + α_2 > 0 and 0 < π_0 < 1, we obtain

(1 - P_f - P_m)² = 0, (40)

which implies that P_f + P_m = 1. The fact that P_f ≤ 0.5 and P_m ≤ 0.5 then yields the equation

P_f = P_m = 0.5. (41)
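The key step of the proof is that flipping a report independently with total probability 2α scales the cross-user covariance by (1 - 2α). This can be verified by simulation with illustrative parameter values:

```python
import random

random.seed(7)
pi0, p_f, p_m, alpha = 0.8, 0.1, 0.1, 0.3   # illustrative values
T = 200000

def cov(xs, ys):
    """Sample covariance of two equal-length 0/1 sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

a, b, b_flip = [], [], []
for _ in range(T):
    idle = random.random() < pi0
    p1 = p_f if idle else 1.0 - p_m
    x = 1 if random.random() < p1 else 0          # honest user m
    y = 1 if random.random() < p1 else 0          # honest user n
    yf = 1 - y if random.random() < alpha else y  # user n after flipping
    a.append(x)
    b.append(y)
    b_flip.append(yf)

# The ratio is close to 1 - 2*alpha = 0.4.
print(cov(a, b_flip) / cov(a, b))
```

Since the honest covariance is proportional to (1 - P_f - P_m)², the only way the attenuated covariance can equal the original one (for a nonzero flipping probability) is for the covariance itself to vanish, which is exactly the degenerate case P_f = P_m = 0.5.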
APPENDIX C
NON-DETECTABILITY OF BALANCED DEPENDENT
ATTACK: PROOF OF PROP. 3
Proof: We assume that the attacker knows the reports of all other secondary users and thus launches a balanced dependent attack. Suppose that the fusion center knows that there is one and only one attacker, and knows the strategy of the attacker as well as its swapping probabilities α_1 and α_2. Then, the Bayesian approach can be used to compute the a posteriori probability of being an attacker for each secondary user. We denote by X_{nt} the report of secondary user n at spectrum-sensing period t. As we have explained, the fusion center considers only the spectrum-sensing periods in which there are at least N - 1 secondary users reporting 0. Therefore, without loss of generality, we assume that there are at least N - 1 secondary users reporting 0 in spectrum-sensing periods 1, 2, ..., T.
Then, the a posteriori probability of secondary user n being the attacker is given by

P(T_n = A | X) = P(X | T_n = A) P(T_n = A) / P(X), (42)

where T_n = A means that the type of secondary user n is attacker, and X is the set of all reports. Since the probabilities P(T_n = A) and P(X) are common to all secondary users, we need discuss only the probability P(X | T_n = A). It is easy to verify that
P(X | T_n = A) = λ_1^{#{t : X_{nt}=0, ∨_{m≠n} X_{mt}=0}} λ_2^{#{t : X_{nt}=0, ∨_{m≠n} X_{mt}=1}} λ_3^{#{t : X_{nt}=1}}, (43)

where #S means the cardinality of a set S, ∨_{m≠n} X_{mt} means the OR of all {X_{mt}}_{m≠n}, and
λ_1 = π_0 (1 - P_f)^{N-1} ((1 - P_f)(1 - α_1) + P_f α_2) + (1 - π_0) P_m^{N-1} (P_m (1 - α_1) + (1 - P_m) α_2), (44)

and

λ_2 = π_0 (1 - (1 - P_f)^{N-1}) (1 - P_f) + (1 - π_0) (1 - P_m^{N-1}) P_m, (45)

and

λ_3 = π_0 (1 - P_f)^{N-1} (P_f (1 - α_2) + (1 - P_f) α_1) + (1 - π_0) P_m^{N-1} ((1 - P_m)(1 - α_2) + P_m α_1). (46)
Obviously, λ_1 denotes the probability that all secondary users report 0, λ_2 is the probability that at least one other secondary user reports 1 while secondary user n reports 0, and λ_3 is the probability that secondary user n reports 1. Note that λ_2 is independent of α_1 and α_2 because, if secondary user n is the attacker, it will not launch the attack when another secondary user reports 1.
Since the attacker uses the balanced dependent attack, the probabilities α_1 and α_2 satisfy (21). Then, it is easy to verify that
λ_1 = π_0 (1 - P_f)^N + (1 - π_0) P_m^N, (47)

and

λ_3 = π_0 P_f (1 - P_f)^{N-1} + (1 - π_0)(1 - P_m) P_m^{N-1}. (48)
Then, we notice that P(X | T_n = A) is independent of α_1 and α_2. Therefore, the probability of generating the observations X is the same regardless of whether secondary user n is the attacker or honest. This implies that the reports provide no information about the identity of the attacker. Therefore, the attacker cannot be distinguished from the other secondary users. This concludes the proof.
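The cancellation behind (47) and (48) can be verified with plain arithmetic: under the balance condition, the attacker's λ_1 and λ_3 in (44) and (46) collapse to the honest values. The parameter values below are arbitrary:

```python
# Check that the balance condition (23) makes lambda_1 (44) and
# lambda_3 (46) equal the honest values (47) and (48).
N, pi0, p_f, p_m = 12, 0.85, 0.04, 0.06   # arbitrary values

p_all0 = pi0 * (1 - p_f)**N + (1 - pi0) * p_m**N                    # (47)
p_own1 = (pi0 * p_f * (1 - p_f)**(N - 1)
          + (1 - pi0) * (1 - p_m) * p_m**(N - 1))                   # (48)

a2 = 0.7
a1 = a2 * p_own1 / p_all0          # balance condition (23)

lam1 = (pi0 * (1 - p_f)**(N - 1) * ((1 - p_f) * (1 - a1) + p_f * a2)
        + (1 - pi0) * p_m**(N - 1) * (p_m * (1 - a1) + (1 - p_m) * a2))
lam3 = (pi0 * (1 - p_f)**(N - 1) * (p_f * (1 - a2) + (1 - p_f) * a1)
        + (1 - pi0) * p_m**(N - 1) * ((1 - p_m) * (1 - a2) + p_m * a1))

print(abs(lam1 - p_all0) < 1e-12, abs(lam3 - p_own1) < 1e-12)  # True True
```

Because λ_1, λ_2, and λ_3 then carry no dependence on α_1 and α_2, the likelihood (43) is the same whether or not user n is the attacker, which is the content of Proposition 3.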
REFERENCES
[1] S. Arkoulis, L. Kazatzopoulos, C. Delakouridis, and G. F. Marias,
Cognitive spectrum and its security issues, in Proc. 2nd International
Conference on Next Generation Mobile Applications, Services and
Technologies (NGMAST), 2008.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, Gambling in
a rigged casino: the adversarial multi-armed bandit problem, in Proc.
36th IEEE Annual Symposium on Foundations of Computer Science
(FOCS), 1995.
[3] R. Chen, J. M. Park, and K. Bian, Robust distributed spectrum sensing
in cognitive radio networks, in Proc. IEEE Conference on Computer
Communications (Infocom), 2008.
[4] T. X. Brown and A. Sethi, Potential cognitive radio denial-of-service
vulnerabilities and protection countermeasures: a multi-dimensional
analysis and assessment, in Proc. 2nd International Conference on
Cognitive Radio Oriented Wireless Networks and Communications
(CrownCom), May 2007.
[5] T. Clancy and N. Goergen, Security in cognitive radio networks: threats
and mitigation, in Proc. 3rd International Conference on Cognitive
Radio Oriented Wireless Networks and Communications (CrownCom),
May 2008.
[6] R. Chen and J.-M. Park, Ensuring trustworthy spectrum sensing in
cognitive radio networks, in Proc. 1st IEEE Workshop on Networking
Technologies for Software Defined Radio Networks, 2006.
[7] R. Chen, J.-M. Park, and J. H. Reed, Defense against primary user
emulation attacks in cognitive radio networks, IEEE J. Sel. Areas
Commun., vol. 26, no. 1, Jan. 2008.
[8] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd
edition. Wiley-Interscience, 2006.
[9] A. Ghasemi and E. S. Sousa, Collaborative spectrum sensing for op-
portunistic access in fading environments, in Proc. IEEE International
Symposium on New Frontiers in Dynamic Spectrum Access Networks
(DySPAN), 2005.
[10] A. Ghasemi and E. S. Sousa, Opportunistic spectrum access in fading
channels through collaborative sensing, Journal Commun., vol. 2, no.
2, pp. 71-82, Mar. 2007.
[11] Z. Han and K. J. R. Liu, Resource Allocation for Wireless Networks.
Cambridge University Press, 2008.
[12] E. Hossain, D. Niyato, and Z. Han, Dynamic Spectrum Access in
Cognitive Radio Networks. Cambridge University Press, 2009.
[13] P. J. Huber, Robust Statistics. New York: Wiley, 1981.
[14] K. B. Letaief and W. Zhang, Cooperative spectrum sensing, Cognitive
Wireless Communication Networks. Springer, 2007.
[15] C. Sun, W. Zhang, and K. B. Letaief, Cluster-based cooperative spec-
trum sensing in cognitive radio systems, in Proc. IEEE International
Conference on Communications (ICC), 2007.
[16] C. H. Lee and W. Wolf, Energy efficient techniques for cooperative
spectrum sensing in cognitive radios, in Proc. IEEE Consumer Com-
munications and Networking Conference, 2008.
[17] G. Ghurumuruhan and Y. (G.) Li, Cooperative spectrum sensing
in cognitive radiopart I: two user networks, IEEE Trans. Wireless
Commun., vol. 6, no. 6, pp. 2204-2213, June 2007.
[18] G. Ghurumuruhan and Y. (G.) Li, Cooperative spectrum sensing in
cognitive radiopart II: multiuser networks, IEEE Trans. Wireless
Commun., vol. 6, no. 6, pp. 2214-2222, June 2007.
[19] P. Kaligineedi, M. Khabbazian, and V. Bhargava, Secure cooperative
sensing techniques for cognitive radio system, in Proc. IEEE Interna-
tional Conference on Communications (ICC), 2008.
[20] H. Li and Z. Han, Dogfight in spectrum: jamming and anti-jamming
in cognitive radio systems, in Proc. IEEE Conference on Global
Communications (Globecom), 2009.
[21] H. Li and Z. Han, Blind dogfight in spectrum: combating primary
user emulation attacks in cognitive radio systems with unknown channel
statistics, submitted to IEEE International Conference on Communica-
tions (ICC), 2010.
[22] F. Liu, X. Cheng, and D. Chen, Insider attacker detection in wireless
sensor networks, in Proc. IEEE Conference on Computer Communica-
tions (Infocom), 2007.
[23] S. M. Mishra, A. Sahai, and R. W. Broderson, Cooperative sensing
among cognitive radios, in Proc. IEEE International Conference on
Communications (ICC), 2006.
[24] A. Sampath, H. Dai, H. Zheng, and B.Y. Zhao, Multi-channel jamming
attacks using cognitive radios, in Proc. IEEE Conference on Computer
Communications and Networks (ICCCN), 2007.
[25] P. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining.
Addison Wesley, 2006.
[26] W. Wang, H. Li, Y. Sun, and Z. Han, Attack-proof collaborative
spectrum sensing in cognitive radio networks, in Proc. Conference on
Information Sciences and Systems (CISS), 2009.
[27] W. Wang, H. Li, Y. Sun, and Z. Han, CatchIt: detect malicious nodes
in collaborative spectrum sensing, in Proc. IEEE Conference on Global
Communications (Globecom), 2009.
[28] W. Xu, P. Kamat, and W. Trappe, TRIESTE: a trusted radio infrastruc-
ture for enforcing specTrum etiquettes, in Proc. 1st IEEE Workshop on
Networking Technologies for Software Defined Radio Networks, 2006.
[29] W. Zhang, S. K. Das, and Y. Liu, A trust based framework for secure
data aggregation in wireless sensor networks, in Proc. IEEE Conference
on Sensor, Mesh and Ad hoc Communications and Networks (SECON),
2006.
Husheng Li (S'00-M'05) received the B.S. and M.S.
degrees in electronic engineering from Tsinghua
University, Beijing, China, in 1998 and 2000, re-
spectively, and the Ph.D. degree in electrical engi-
neering from Princeton University, Princeton, NJ, in
2005.
From 2005 to 2007, he worked as a senior engi-
neer at Qualcomm Inc., San Diego, CA. In 2007,
he joined the EECS department of the University of
Tennessee, Knoxville, TN, as an assistant professor.
His research is mainly focused on statistical signal
processing, wireless communications, networking and smart grid. Particularly,
he is interested in applying machine learning and artificial intelligence in
cognitive radio networks. Dr. Li is the recipient of the Best Paper Award of
EURASIP Journal of Wireless Communications and Networks, 2005 (together
with his PhD advisor: Prof. H. V. Poor).
Zhu Han (S'01-M'04-SM'09) received the B.S. de-
gree in electronic engineering from Tsinghua Uni-
versity, in 1997, and the M.S. and Ph.D. degrees in
electrical engineering from the University of Mary-
land, College Park, in 1999 and 2003, respectively.
From 2000 to 2002, he was an R&D Engineer
of JDSU, Germantown, Maryland. From 2003 to
2006, he was a Research Associate at the Univer-
sity of Maryland. From 2006 to 2008, he was an
assistant professor in Boise State University, Idaho.
Currently, he is an Assistant Professor in Electrical
and Computer Engineering Department at University of Houston, Texas. In
June-August 2006, he was a visiting scholar in Princeton University. In May-
August 2007, he was a visiting professor in Stanford University. In May-
August 2008, he was a visiting professor in University of Oslo, Norway
and Supelec, Paris, France. In July 2009, he was a visiting professor in
the University of Illinois at Urbana-Champaign. In June 2010, he visited the
University of Avignon, France. His research interests include wireless resource
allocation and management, wireless communications and networking, game
theory, wireless multimedia, and security.
Dr. Han is an NSF CAREER award recipient 2010. Dr. Han is an Associate
Editor of IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS since
2010. Dr. Han was the MAC Symposium vice chair of IEEE Wireless Com-
munications and Networking Conference, 2008. Dr. Han was the Guest Editor
for Special Issue on Cooperative Networking Challenges and Applications
(IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS) Fairness of
Radio Resource Management Techniques in Wireless Networks (EURASIP
Journal on Wireless Communications and Networking), and Special Issue
on Game Theory (EURASIP Journal on Advances in Signal Processing).
Dr. Han is the coauthor for the papers that won the best paper awards in
IEEE International Conference on Communications 2009 and 7th International
Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless
Networks (WiOpt'09).