1) Modeling Users' Mobility: To credibly imitate the mobility of users in LIA, we propose a CR-based mobility model, modeling users' mobility trajectories based on CRs instead of exact locations.
2) Computing Mobility Similarity: Given the users' mobility model, we propose a metric, the mobility similarity, to capture the correlation between the mobility of high-risk users and that of other users.
3) Generating the CR: Based on the mobility similarity, we assign each user a credibility value. We then propose a credibility-based k-cloaking algorithm, which cloaks the high-risk user with credible users and cloaks other users of approximately equal credibility in the same CR, so that all users' location privacy can be protected.

A. Mobility Model

We model users' mobility as a time-dependent first-order Markov chain on the set of locations. In this sense, the mobility profile P_u, L_u for a given high-risk user u (or P_a, L_a for a fake user) is a transition probability matrix of the Markov chain that characterizes the user's visiting probability distribution over his locations. Note that all users except high-risk users, i.e., trusted users and fake users, are regarded as fake users at first. We then distinguish trusted users from fake users with the help of the mobility similarity defined upon our proposed mobility models.

1) Mobility Model of High-Risk Users: Intuitively, we should model the mobility of high-risk users from the perspective of the attacker, in order to accurately capture the relationship between the locations of high-risk users and fake users. Since the attacker only has prior knowledge of the previous CRs and can infer the maximum movement boundaries (MMBs)² of high-risk users, we propose the following CR-based mobility model for high-risk users.

Denote by L_u the locations of a high-risk user u, and by T_u the times at which u issues LBS queries. The probability that the location of u is l_{u,i} (i ∈ {1, 2, ...}) at time t_i is denoted by P_{l_{u,i},t_i}. The probability that u locates at l_{u,i} at time t_i, given the location l_{u,i-1} at time t_{i-1}, is P^{l_{u,i},t_i}_{l_{u,i-1},t_{i-1}}. Then we can get

P_{l_{u,i},t_i} = P\{L_u = l_{u,i}, T_u = t_i\}  (1)

P^{l_{u,i},t_i}_{l_{u,i-1},t_{i-1}} = P\{L_u = l_{u,i}, T_u = t_i \mid L_u = l_{u,i-1}, T_u = t_{i-1}\}.  (2)

Similarly, given L_u = l_{u,i} at time t_i, the probability that the location L_u = l_{u,i-1} at time t_{i-1} is

P^{l_{u,i-1},t_{i-1}}_{l_{u,i},t_i} = \frac{P\{L_u = l_{u,i}, T_u = t_i;\; L_u = l_{u,i-1}, T_u = t_{i-1}\}}{P_{l_{u,i},t_i}}.  (3)

Next, we focus on computing the transition probabilities P^{l_{u,i},t_i}_{l_{u,i-1},t_{i-1}}, P^{l_{u,i-1},t_{i-1}}_{l_{u,i},t_i}, and P_{l_{u,i},t_i}. We first present the following results.

Lemma 1: Suppose the last location is l_{u,i-1} at time t_{i-1}; the probability that the current location is l_{u,i} at time t_i is 1/S_{MMB_{i-1,i}}, where S_{MMB_{i-1,i}} is the area of the MMB MMB_{i-1,i}. Conversely, given the current location, the probability that the last location is l_{u,i-1} at time t_{i-1} is 1/S_{R_{u,t_{i-1}}}, where S_{R_{u,t_{i-1}}} is the area of the CR R_{u,t_{i-1}} at time t_{i-1}.

Proof: When the high-risk user u issues an LBS query at time t_i, u must be in the MMB MMB_{i-1,i}, as shown in Fig. 4(a). That is to say, in the view of the attacker, u locates at any point in MMB_{i-1,i} with equal probability. Thus,

P^{l_{u,i},t_i}_{l_{u,i-1},t_{i-1}} = \frac{1}{S_{MMB_{i-1,i}}}.  (4)

Given the location of u at time t_i, u must locate in the maximum arrival boundary (MAB)³ MAB_{i-1,i} at time t_{i-1}, as shown in Fig. 4(b). Furthermore, u must locate in the CR R_{u,t_{i-1}} at time t_{i-1}, since the attacker has knowledge of the CR R_{u,t_{i-1}} at time t_{i-1}. So,

P^{l_{u,i-1},t_{i-1}}_{l_{u,i},t_i} = \frac{1}{S_{R_{u,t_{i-1}}}}.  (5)

To sum up, Lemma 1 holds.

According to Lemma 1, we get the following results.

Theorem 1: The probability that the high-risk user locates at l_{u,i} at time t_i is given by (S_{R_{u,t_1}}/S_{MMB_{1,2}}) ··· (S_{R_{u,t_{i-1}}}/S_{MMB_{i-1,i}})(1/S_{R_{u,t_1}}).

Proof: Combining (2) and (3), we can get

P^{l_{u,i},t_i}_{l_{u,i-1},t_{i-1}} P_{l_{u,i-1},t_{i-1}} = P^{l_{u,i-1},t_{i-1}}_{l_{u,i},t_i} P_{l_{u,i},t_i}.  (6)

Combining Lemma 1 and (6), we get

\frac{1}{S_{MMB_{i-1,i}}} P_{l_{u,i-1},t_{i-1}} = \frac{1}{S_{R_{u,t_{i-1}}}} P_{l_{u,i},t_i}.  (7)

Then we can deduce

\begin{cases}
\frac{1}{S_{MMB_{1,2}}} P_{l_{u,1},t_1} = \frac{1}{S_{R_{u,t_1}}} P_{l_{u,2},t_2} \\
\frac{1}{S_{MMB_{2,3}}} P_{l_{u,2},t_2} = \frac{1}{S_{R_{u,t_2}}} P_{l_{u,3},t_3} \\
\quad \vdots \\
\frac{1}{S_{MMB_{i-1,i}}} P_{l_{u,i-1},t_{i-1}} = \frac{1}{S_{R_{u,t_{i-1}}}} P_{l_{u,i},t_i}.
\end{cases}  (8)

Therefore, the probability that u locates at l_{u,i} at time t_i is

P_{l_{u,i},t_i} = \frac{S_{R_{u,t_1}}}{S_{MMB_{1,2}}} \frac{S_{R_{u,t_2}}}{S_{MMB_{2,3}}} \cdots \frac{S_{R_{u,t_{i-1}}}}{S_{MMB_{i-1,i}}} P_{l_{u,1},t_1}.  (9)

Since the CR R_{u,t_1} is known to the attacker, the probability P_{l_{u,1},t_1} that L_u = l_{u,1} when u issues an LBS query at the first time t_1 is P_{l_{u,1},t_1} = 1/S_{R_{u,t_1}}. Therefore, we get

P_{l_{u,i},t_i} = \frac{S_{R_{u,t_1}}}{S_{MMB_{1,2}}} \frac{S_{R_{u,t_2}}}{S_{MMB_{2,3}}} \cdots \frac{S_{R_{u,t_{i-1}}}}{S_{MMB_{i-1,i}}} \frac{1}{S_{R_{u,t_1}}}.  (10)

In summary, Theorem 1 holds.

It can be observed from (10) that a larger i results in a lower probability that u locates at l_{u,i} at time t_i, since S_{R_{u,t_{i-1}}} is obviously less than S_{MMB_{i-1,i}}. This is in accordance with the fact that the attacker is not aware of the exact location of u whenever u issues an LBS query, as he only has knowledge of the previous CRs of u.

² An MMB is a circle that extends the CR by a radius of v_max·Δt, where v_max is the maximum moving speed. The MMB MMB_{i-1,i} is a circle that extends the CR at time t_{i-1} by a radius of v_max(t_i − t_{i-1}).
³ The MAB MAB_{i-1,i} is a circle that extends the CR at t_i by a radius of v_max(t_i − t_{i-1}).
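Equation (10) is just a telescoping product of area ratios, so the attacker's belief can be evaluated directly from the areas of the observed CRs and the MMBs derived from them. The sketch below is our own illustration (the function name is hypothetical; the paper itself provides no code):

```python
import math

def theorem1_belief(cr_areas, mmb_areas):
    """Attacker's belief P_{l_{u,i},t_i} from (10).

    cr_areas[j]  is S_{R_{u,t_{j+1}}}, the area of the (j+1)-th observed CR;
    mmb_areas[j] is S_{MMB_{j+1,j+2}}, the area of the MMB derived from it.
    Starting from the uniform prior 1/S_{R_{u,t_1}}, each later query
    multiplies the belief by S_{R_{u,t_j}} / S_{MMB_{j,j+1}}.
    """
    belief = 1.0 / cr_areas[0]          # P_{l_{u,1},t_1} = 1 / S_{R_{u,t_1}}
    for s_r, s_mmb in zip(cr_areas[:-1], mmb_areas):
        belief *= s_r / s_mmb           # each ratio < 1, so the belief decays with i
    return belief
```

Since S_{R_{u,t_j}} < S_{MMB_{j,j+1}} for every j, each factor is below one and the belief shrinks as i grows, matching the remark after (10).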
ZHAO et al.: ILLIA: ENABLING k-ANONYMITY-BASED PRIVACY PRESERVING AGAINST LIAs IN CONTINUOUS LBS QUERIES 1037
Fig. 4. Illustration of computing the transition probability for high-risk users. (a) The probability that u locates in MMB_{i-1,i}, given the CR at t_{i-1}. (b) The probability that u locates in MAB_{i-1,i}, given the CR at t_i. (c) Legend.

Fig. 5. Illustration of computing the transition probability for attackers. (a) The effect of both MMB_{i-1,i} and MMB_{a,j-1,j} on the mobility of attackers. (b) The geometric formalization of the effect of MMB_{i-1,i} and MMB_{a,j-1,j}.
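Footnotes 2 and 3 and Fig. 4 fix the geometry behind (4) and (5). Assuming circular CRs (our simplification; the helper names are ours, not the paper's), the two uniform transition probabilities can be sketched as:

```python
import math

# Footnote 2: the MMB extends the CR at t_{i-1} by v_max * (t_i - t_{i-1});
# footnote 3: the MAB extends the CR at t_i by the same amount.
def extended_radius(cr_radius, v_max, dt):
    return cr_radius + v_max * dt

def forward_transition_prob(cr_radius_prev, v_max, dt):
    # Equation (4): uniform over the MMB built from the CR at t_{i-1}
    r_mmb = extended_radius(cr_radius_prev, v_max, dt)
    return 1.0 / (math.pi * r_mmb ** 2)

def backward_transition_prob(cr_radius_prev):
    # Equation (5): uniform over the CR R_{u,t_{i-1}} itself
    return 1.0 / (math.pi * cr_radius_prev ** 2)
```

Because the MMB strictly contains the CR it extends, the forward probability in (4) is always smaller than the backward probability in (5).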
2) Mobility Model of Attacker-Generated Fake Locations: Regardless of the way the attacker manipulates the fake locations, the trajectories of fake users consist of some plausible trajectory fragments that deliberately imitate a user's mobility pattern (e.g., meeting the constraints of physical factors and exhibiting a consistent lifestyle), and some obviously fake trajectory fragments. Fortunately, it has been proved in existing work [14], [24], [30] that the fake trajectory fragments are vulnerable to location inference attacks, which can easily filter out fake trajectories. In addition, as for the plausible trajectories, to breach the location privacy of high-risk users, the fake locations in plausible trajectories should be cloaked in the same CR as the locations of high-risk users. As a result, the previous CRs of high-risk users influence the plausible trajectories of fake users. In summary, we only need to model the plausible trajectories, considering the previous CRs of high-risk users and the physical factors, which is introduced in detail as follows.

Denote by P_{l_{a,j},t_i} the probability that L_a = l_{a,j} (j ∈ {1, 2, ...}) at time t_i. The probability that L_a = l_{a,j} at time t_i, given L_a = l_{a,j-1} and L_u = l_{u,i-1} at time t_{i-1}, is P^{l_{a,j},t_i}_{l_{a,j-1},l_{u,i-1},t_{i-1}}. Then we can get

P_{l_{a,j},t_i} = P\{L_a = l_{a,j}, T_a = t_i\}  (11)

P^{l_{a,j},t_i}_{l_{a,j-1},l_{u,i-1},t_{i-1}} = P\{L_a = l_{a,j}, T_a = t_i \mid L_a = l_{a,j-1}, T_a = t_{i-1};\; L_u = l_{u,i-1}, T_u = t_{i-1}\}.  (12)

Thereafter, we concentrate on computing the transition probability P^{l_{a,j},t_i}_{l_{a,j-1},l_{u,i-1},t_{i-1}}.

Lemma 2: The attacker can launch an LIA against a high-risk user iff he is in the overlapped region of his own MMB⁴ and the MMB of the high-risk user.

Proof: On one hand, the attacker must be in his own MMB to satisfy the maximum moving speed constraint. On the other hand, the attacker must be in the MMB of the high-risk user so that he can be cloaked with the high-risk user. In summary, Lemma 2 holds.

Theorem 2: The probability for an attacker to launch an LIA is given by

P^{l_{a,j},t_i}_{l_{a,j-1},l_{u,i-1},t_{i-1}} = \frac{1}{\arccos\!\left(\frac{\gamma_1+\gamma_2}{2\sqrt{\gamma_2}\,r_{u,t_{i-1}}}\right) r^2_{u,t_{i-1}} - \frac{\gamma_1+\gamma_2}{2\sqrt{\gamma_2}}\sqrt{r^2_{u,t_{i-1}} - \frac{(\gamma_1+\gamma_2)^2}{4\gamma_2}} + \arccos\!\left(\frac{\gamma_2-\gamma_1}{2\sqrt{\gamma_2}\,r_{a,t_{i-1}}}\right) r^2_{a,t_{i-1}} - \frac{\gamma_2-\gamma_1}{2\sqrt{\gamma_2}}\sqrt{r^2_{a,t_{i-1}} - \frac{(\gamma_2-\gamma_1)^2}{4\gamma_2}}}  (13)

where γ₁ = r²_{u,t_{i-1}} − r²_{a,t_{i-1}}, γ₂ = |l_{a,j-1} o_{u,t_{i-1}}|², o_{u,t_{i-1}} is the center of u's CR R_{u,t_{i-1}} at time t_{i-1}, |·| represents the Euclidean distance between two points, and r_{u,t_{i-1}} and r_{a,t_{i-1}} are the radii of MMB_{i-1,i} and MMB_{a,j-1,j}, respectively.

Proof: According to Lemma 2, to conduct an LIA, the fake location should locate in the overlapped region of MMB_{a,j-1,j} and MMB_{i-1,i}, namely the blurred region, as shown in Fig. 5(a). Thus,

P^{l_{a,j},t_i}_{l_{a,j-1},l_{u,i-1},t_{i-1}} = \frac{1}{S_{MMB_{j-1,j,i-1,i}}}  (14)

where S_{MMB_{j-1,j,i-1,i}} is the area of the blurred region.

According to Fig. 5(b), the areas of the sector fbc o_{u,t_{i-1}} and the triangle o_{u,t_{i-1}} fc are

S_{fbc\,o_{u,t_{i-1}}} = \arccos\!\left(\frac{\gamma_1+\gamma_2}{2\sqrt{\gamma_2}\,r_{u,t_{i-1}}}\right) r^2_{u,t_{i-1}}  (15)

S_{o_{u,t_{i-1}}fc} = \frac{\gamma_1+\gamma_2}{2\sqrt{\gamma_2}} \sqrt{r^2_{u,t_{i-1}} - \frac{(\gamma_1+\gamma_2)^2}{4\gamma_2}}.  (16)

Similarly, the areas of the sector fdc l_{a,j-1} and the triangle fc l_{a,j-1} are

S_{fdc\,l_{a,j-1}} = \arccos\!\left(\frac{\gamma_2-\gamma_1}{2\sqrt{\gamma_2}\,r_{a,t_{i-1}}}\right) r^2_{a,t_{i-1}}  (17)

S_{fc\,l_{a,j-1}} = \frac{\gamma_2-\gamma_1}{2\sqrt{\gamma_2}} \sqrt{r^2_{a,t_{i-1}} - \frac{(\gamma_2-\gamma_1)^2}{4\gamma_2}}.  (18)

So we can get the area of the shaded region

S_{MMB_{j-1,j,i-1,i}} = S_{fbc\,o_{u,t_{i-1}}} - S_{o_{u,t_{i-1}}fc} + S_{fdc\,l_{a,j-1}} - S_{fc\,l_{a,j-1}}.  (19)

To sum up, Theorem 2 holds.

Based upon Theorems 1 and 2, the different spatiotemporal behaviors of high-risk users and attackers (i.e., the probability of visiting a specific location deduced from Theorems 1 and 2, respectively) can be identified when LIAs are launched. Accordingly, we can take advantage of such spatiotemporal

⁴ The attacker's MMB MMB_{a,j-1,j} is a v_max(t_i − t_{i-1})-radius circle centered at his fake location l_{a,j-1}.
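Geometrically, the blurred region is the intersection of two circles, so Theorem 2 can be cross-checked numerically with the standard lens-area formula. The sketch below is our own (hypothetical helper names); it mirrors the sector-minus-triangle decomposition of (15)–(19), with d = √γ₂ the distance between the two MMB centers:

```python
import math

def blurred_region_area(r_u, r_a, d):
    """Area of the overlap of MMB_{i-1,i} (radius r_u) and MMB_{a,j-1,j}
    (radius r_a) whose centers are d apart, i.e., S_{MMB_{j-1,j,i-1,i}}."""
    if d >= r_u + r_a:
        return 0.0  # disjoint MMBs: by Lemma 2 no LIA is possible
    if d <= abs(r_u - r_a):
        return math.pi * min(r_u, r_a) ** 2  # one MMB lies inside the other
    gamma1 = r_u ** 2 - r_a ** 2
    gamma2 = d ** 2
    # arccos arguments as in Theorem 2: (g1+g2)/(2*sqrt(g2)*r_u), (g2-g1)/(2*sqrt(g2)*r_a)
    a_u = math.acos((gamma1 + gamma2) / (2.0 * math.sqrt(gamma2) * r_u))
    a_a = math.acos((gamma2 - gamma1) / (2.0 * math.sqrt(gamma2) * r_a))
    # circular segment on each side of the chord fc: sector minus triangle
    seg_u = r_u ** 2 * (a_u - math.sin(a_u) * math.cos(a_u))
    seg_a = r_a ** 2 * (a_a - math.sin(a_a) * math.cos(a_a))
    return seg_u + seg_a

def transition_probability(r_u, r_a, d):
    # Equation (14): uniform over the blurred region
    s = blurred_region_area(r_u, r_a, d)
    return 1.0 / s if s > 0.0 else 0.0
```

For two unit circles with centers one unit apart, this returns the well-known lens area 2π/3 − √3/2, which is what the right-hand side of (19) evaluates to in that case.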
1038 IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 2, APRIL 2018
propose to cloak users owning approximately equal credibility in the same CR.

In summary, Algorithm 1 shows the pseudocode of the credibility-based k-cloaking algorithm above.

D. Discussion

ILLIA can protect location privacy against LIA and guarantee the QoS in terms of the computation cost.

Theorem 3: ILLIA can protect users' location privacy against LIA, regardless of the way of manipulating fake locations that the attacker follows.

Proof: Regardless of the way of manipulating fake locations, the obviously fake trajectories are susceptible to location inference attacks, and thus are first filtered out by incorporating the state-of-the-art location inference attack [31]. In addition, the plausible trajectories that deliberately imitate a user's mobility pattern are distinguished based on the credibility deduced from the mobility similarity. In summary, ILLIA can identify fake users without requiring advance knowledge of how fake locations are manipulated.

Theorem 4: The computation cost of ILLIA is at most O(N), where N is the total number of users.

Proof: The first step is to model users' mobility. It incurs O(N(u)) computation cost to model the mobility of high-risk users, where N(u) is the number of high-risk users. Then we model the mobility of the other users, which incurs O(N(a) + N(t)) computation cost, where N(a) and N(t) are the numbers of fake users and trusted users.

The second step is to compute the mobility similarity to search for qualified users that can be cloaked with high-risk users. Thus the time complexity is O(Σ_{i=1}^{N(u)} k_i).

The last step is to generate the CR for users, which is shown in Algorithm 1. The most expensive operation is to search for the qualified users for high-risk users (lines 2–7) and other trusted users (lines 8–10), which takes at most O(Σ_{i=1}^{N(u)} k_i) and O(Σ_{i=1}^{N(a)+N(t)} k_i) computation cost, respectively.

In summary, the time complexity is at most O(N), since N(u) < N, N(a) < N, and N(t) < N.

Overall, Theorem 4 holds.

IV. PERFORMANCE EVALUATION

A. Evaluation Dataset

We employ the location-based social network dataset loc-Gowalla [32] to validate ILLIA. loc-Gowalla collects 6.4 million check-ins (i.e., locations) from Feb. 2009 to Oct. 2010, consisting of 196 591 nodes (i.e., users) and 950 327 edges (i.e., friendships).

The statistics of high-risk users, trusted users, and fake users are shown in Table I. First, we randomly select n_i users as the high-risk users, and the remaining users are regarded as the trusted users. Denote by m_{i,t} the number of friendships between high-risk users and trusted users. Then n_a locations are randomly chosen as the n_a fake users' initial locations. Thereafter, during each update of the fake users' locations, we randomly select n_a locations as the current locations of the fake users. Denote by l_i and l_a the numbers of high-risk users' locations and fake locations. Each fake user can randomly attack a high-risk user whenever updating his locations.

Other default settings are as follows: k is set randomly in the range [2, 12], A_min is set to [0.005, 0.01] percent of the space, ε = 0.05, Cre_0 = 0.1, and ξ is set to 0.05.

TABLE I
MOBILITY SIMILARITY IN VARIOUS DATA

B. Evaluation Metrics

To demonstrate the effectiveness and efficiency of ILLIA, we compare ILLIA with OPT and EPK. OPT cloaks users without considering LIA, while EPK straightforwardly enlarges the privacy parameter k to 2k. In addition, we focus on four metrics for performance evaluation: 1) the average attack success rate δ_a; 2) the average cloaking success rate δ_c; 3) the average processing time ω; and 4) the cumulative distribution function (CDF) of CRs. δ_a and δ_c are defined as follows.

Definition 4: A high-risk user or trusted user u suffers from a successful LIA when N − N(a) < k_u, where N is the total number of users in the CR of u, and N(a) is the number of fake locations in the CR.

Definition 5: Assume N_1(u) high-risk users and N_2(u) trusted users are cloaked, and n_1(u) high-risk users and n_2(u) trusted users suffer from a successful LIA; then the attack success rate is δ_a = (n_1(u) + n_2(u))/(N_1(u) + N_2(u)).

Definition 6: Assume N_1(u) high-risk users, N_2(u) trusted users, and N_3(a) attackers are cloaked, and the total number of users is N; then the cloaking success rate is δ_c = (N_1(u) + N_2(u) + N_3(a))/N.

C. Mobility Similarity Analysis

Before digging into the performance of ILLIA, we characterize the mobility similarity in the default settings, as shown in Table I. The mobility of high-risk users and trusted users is not strongly similar, with the mobility similarity sim_(u,t) less than 0.009. In contrast, high-risk users and attackers share a more similar mobility sim_(u,a).

D. Performance Varies With k and n_a

In this part, we investigate the impact of the privacy parameter k and the number of attackers n_a. We vary each user's k from 2 to 12. We set the number of attackers n_a = 131 061 (the number of fake locations l_a = 3 865 734) and n_a = 98 295 (l_a = 2 577 156), respectively.

1) δ_a Varies With k and n_a: Fig. 6(a) shows how the average attack success rate δ_a varies with k and n_a. It can be observed that ILLIA yields lower δ_a than the other two schemes, and the superiority of ILLIA is even more prominent as k and n_a increase. Specifically, δ_a in ILLIA is 2.83 times lower than that in OPT and 1.04 times lower than that in EPK at most. The
Fig. 6. δ_a, ω, and δ_c vary with k and n_a; σ_1 = 98 295 and σ_2 = 131 061 are the numbers of attackers (i.e., n_a). (a) δ_a varies with k and n_a. (b) ω varies with k and n_a. (c) δ_c varies with k and n_a.

Fig. 7. CDF of the CRs' area; 2 and 12 in the legend are the values of k; σ_1 = 98 295, σ_2 = 131 061. (a) CDF varies with n_a. (b) CDF varies with k. (c) CDF varies with ξ.
better performance of ILLIA benefits from the mobility similarity between the locations of attackers and high-risk users, and from the credibility-based k-cloaking algorithm. In contrast, OPT does not consider LIA, and EPK does not concern the credibility of users' locations. Additionally, it can be seen that δ_a in these algorithms increases with k and n_a. This is not much surprising, as a larger k or n_a means a higher probability of cloaking a fake location with the locations of trusted users or high-risk users, thus incurring a much higher δ_a.

2) ω Varies With k and n_a: The processing time ω is shown in Fig. 6(b). ω in ILLIA is less than that in EPK and more than that in OPT. One reason is that EPK enlarges k and has to cloak more locations, while OPT does not consider LIA. Also as expected, the overall trends in all cloaking algorithms are similar: ω increases with k and n_a. That is because increasing k means a more constrained privacy requirement, and a larger n_a means all cloaking algorithms have to cloak more locations of users.

3) δ_c Varies With k and n_a: In Fig. 6(c), it can be observed that δ_c in these algorithms decreases with k and n_a. This is not much surprising, as increasing both the privacy level k and the number of fake locations results in a longer ω, and therefore leads to more LBS queries expiring before being successfully cloaked. Besides, compared with OPT, at most a 10.13% success rate in ILLIA is sacrificed for protecting against LIA, while a 53.8% success rate in EPK is sacrificed. This is mainly because ω in EPK is much longer than that in OPT and ILLIA, incurring more expired LBS queries.

4) CDF Varies With k and n_a: In Fig. 7(a) and (b), we can see that the CRs' area in ILLIA is smaller than that in EPK but larger than that in OPT. In addition, the CRs' area is positively correlated with k [see Fig. 7(b)], as a larger k means the algorithms have to cloak farther users to complete location k-anonymity, thus resulting in larger CRs. Moreover, the size of CRs decreases with n_a [see Fig. 7(a)], because more users around
a specific user enable the algorithms to generate a smaller CR that still contains no less than (k − 1) other users.

E. Performance Varies With ξ

In this section, we study the impact of the mobility similarity threshold ξ, setting ξ to 0.05, 0.06, and 0.07, respectively. The other parameters follow the default settings.

1) δ_a Varies With ξ: We can see that δ_a in ILLIA increases with ξ, while δ_a in OPT and EPK stays nearly steady, as shown in Fig. 8(a). The reason is that more fake locations can be cloaked with the locations of high-risk users and trusted users when we enlarge ξ in ILLIA. In contrast, neither OPT nor EPK considers the credibility of locations, and thereby δ_a in the two algorithms is not affected by ξ. Additionally, ILLIA outperforms EPK, yielding a 25%–35% improvement. The reasons are as explained for Fig. 6(a).

2) ω Varies With ξ: ω in ILLIA decreases with increasing ξ, as shown in Fig. 8(b). In contrast, ω in OPT and EPK is not affected. In addition, we can clearly see that ω in ILLIA is much lower than that in EPK, but a bit higher than that in OPT, since OPT does not consider LIA and EPK enlarges k.

3) δ_c Varies With ξ: As expected, δ_c in OPT and EPK is not affected by ξ, because they do not consider the credibility of users' locations, as shown in Fig. 8(c). In contrast, δ_c in ILLIA increases with ξ, because a larger ξ results in a shorter processing time, and thus a higher cloaking success rate.

4) CDF Varies With ξ: As shown in Fig. 7(c), the CRs' area in ILLIA decreases with increasing ξ, but is not affected in OPT and EPK. Besides, the CRs' area in ILLIA is obviously less than that in EPK.

V. CONCLUSION

In this paper, we have presented ILLIA, the first work that can preserve location privacy against LIA in continuous LBS queries. ILLIA is appealing as it is a scalable and general solution that guarantees both location privacy and QoS simultaneously, requiring no advance knowledge of the way attackers manipulate fake locations. Extensive simulations on the loc-Gowalla dataset demonstrate the effectiveness and efficiency of ILLIA.

REFERENCES

[1] H. Lu, J. Li, and M. Guizani, "Secure and efficient data transmission for cluster-based wireless sensor networks," IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 3, pp. 750–761, Mar. 2014.
[2] J. Y. Koh, I. Nevat, D. Leong, and W.-C. Wong, "Geo-spatial location spoofing detection for Internet of Things," IEEE Internet Things J., vol. 3, no. 6, pp. 971–978, Dec. 2016.
[3] C. Wang, H. Lin, and H. Jiang, "CANS: Towards congestion-adaptive and small stretch emergency navigation with wireless sensor networks," IEEE Trans. Mobile Comput., vol. 15, no. 5, pp. 1077–1089, May 2016.
[4] S. Landau, "Control use of data to protect privacy," Science, vol. 347, no. 6221, pp. 504–506, 2015.
[5] K. Fawaz, H. Feng, and K. G. Shin, "Anatomization and protection of mobile apps' location privacy threats," in Proc. USENIX Security Symp., Washington, DC, USA, 2015, pp. 753–768.
[6] X. Wu et al., "Social network de-anonymization with overlapping communities: Analysis, algorithm and experiments," in Proc. IEEE INFOCOM, May 2018.
[7] W. Enck et al., "TaintDroid: An information-flow tracking system for realtime privacy monitoring on smartphones," ACM Trans. Comput. Syst., vol. 32, no. 2, pp. 1–29, 2014.
[8] Dark Net LinkedIn Sale Looks Like the Real Deal. Accessed: Dec. 12, 2016. [Online]. Available: http://www.theregister.co.uk/2016/05/18/linkedin/
[9] Recently Confirmed Myspace Hack Could Be the Largest Yet. Accessed: Feb. 27, 2017. [Online]. Available: https://techcrunch.com/2016/05/31/recently-confirmed-myspace-hack-could-be-the-largest-yet/
[10] S. Gisdakis, T. Giannetsos, and P. Papadimitratos, "Security, privacy, and incentive provision for mobile crowd sensing systems," IEEE Internet Things J., vol. 3, no. 5, pp. 839–853, Oct. 2016.
[11] B. Han, J. Li, J. Su, M. Guo, and B. Zhao, "Secrecy capacity optimization via cooperative relaying and jamming for WANETs," IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 4, pp. 1117–1128, Apr. 2015.
[12] J. Sun, R. Zhang, and Y. Zhang, "Privacy-preserving spatiotemporal matching for secure device-to-device communications," IEEE Internet Things J., vol. 3, no. 6, pp. 1048–1060, Dec. 2016.
[13] M. Gruteser and D. Grunwald, "Anonymous usage of location-based services through spatial and temporal cloaking," in Proc. ACM MobiSys, San Francisco, CA, USA, 2003, pp. 31–42.
[14] V. Bindschaedler and R. Shokri, "Synthesizing plausible privacy-preserving location traces," in Proc. IEEE S&P, San Jose, CA, USA, 2016, pp. 546–563.
[15] J.-D. Zhang and C.-Y. Chow, "REAL: A reciprocal protocol for location privacy in wireless sensor networks," IEEE Trans. Depend. Secure Comput., vol. 12, no. 4, pp. 458–471, Jul./Aug. 2015.
[16] Y. Zhang, W. Tong, and S. Zhong, "On designing satisfaction-ratio-aware truthful incentive mechanisms for k-anonymity location privacy," IEEE Trans. Inf. Forensics Security, vol. 11, no. 11, pp. 2528–2541, Nov. 2016.
[17] B. Niu, Q. Li, X. Zhu, G. Cao, and H. Li, "Achieving k-anonymity in privacy-aware location-based services," in Proc. IEEE INFOCOM, Toronto, ON, Canada, 2014, pp. 754–762.
[18] X. Liu, K. Liu, L. Guo, X. Li, and Y. Fang, "A game-theoretic approach for achieving k-anonymity in location based services," in Proc. IEEE INFOCOM, Turin, Italy, 2013, pp. 2985–2993.
[19] J. Zeng et al., "Mobile r-gather: Distributed and geographic clustering for location anonymity," in Proc. ACM MobiHoc, Chennai, India, 2017, Art. no. 7.
[20] G. Ghinita, M. L. Damiani, C. Silvestri, and E. Bertino, "Preventing velocity-based linkage attacks in location-aware applications," in Proc. ACM SIGSPATIAL GIS, Seattle, WA, USA, 2009, pp. 246–255.
[21] G. Ghinita, M. L. Damiani, C. Silvestri, and E. Bertino, "Protecting against velocity-based, proximity-based, and external event attacks in location-centric social networks," ACM Trans. Spatial Algorithms Syst., vol. 2, no. 2, pp. 1–36, 2016.
[22] FakeGPStracker. Accessed: May 7, 2017. [Online]. Available: http://fakegps.com/
[23] L. Jin, B. Palanisamy, and J. B. D. Joshi, "POSTER: Compromising cloaking-based location privacy preserving mechanisms with location injection attacks," in Proc. ACM SIGSAC CCS, Scottsdale, AZ, USA, 2014, pp. 1439–1441.
[24] R. Shokri, G. Theodorakopoulos, C. Troncoso, J.-P. Hubaux, and J.-Y. Le Boudec, "Protecting location privacy: Optimal strategy against localization attacks," in Proc. ACM CCS, Raleigh, NC, USA, 2012, pp. 617–627.
[25] G. Wang, T. Wang, W. Jia, M. Guo, and J. Li, "Adaptive location updates for mobile sinks in wireless sensor networks," J. Supercomput., vol. 47, no. 2, pp. 127–145, 2009.
[26] X. Pan, J. Xu, and X. Meng, "Protecting location privacy against location-dependent attacks in mobile services," IEEE Trans. Knowl. Data Eng., vol. 24, no. 8, pp. 1506–1519, Aug. 2012.
[27] H. Yin et al., "Joint modeling of user check-in behaviors for real-time point-of-interest recommendation," ACM Trans. Inf. Syst., vol. 35, no. 2, pp. 1–44, 2016.
[28] J. Habibi, D. Midi, A. Mudgerikar, and E. Bertino, "Heimdall: Mitigating the Internet of insecure things," IEEE Internet Things J., vol. 4, no. 4, pp. 968–978, Aug. 2017.
[29] Y. Afek, A. Bremler-Barr, and L. Shafir, "Network anti-spoofing with SDN data plane," in Proc. IEEE INFOCOM, Atlanta, GA, USA, 2017, pp. 1–9.
[30] L. Wang, J.-H. Cho, I.-R. Chen, and J. Chen, "PDGM: Percolation-based directed graph matching in social networks," in Proc. IEEE ICC, Paris, France, 2017, pp. 1–7.
[31] R. Shokri, G. Theodorakopoulos, G. Danezis, J.-P. Hubaux, and J.-Y. Le Boudec, "Quantifying location privacy: The case of sporadic location exposure," in Proc. Int. Symp. Privacy Enhanc. Technol., Waterloo, ON, Canada, 2011, pp. 57–76.
[32] J. Leskovec and A. Krevl. (Jun. 2014). SNAP Datasets: Stanford Large Network Dataset Collection. [Online]. Available: http://snap.stanford.edu/data

Fu Xiao received the Ph.D. degree in computer science and technology from the Nanjing University of Science and Technology, Nanjing, China. He is currently a Professor and Ph.D. Supervisor with the School of Computer, Nanjing University of Posts and Telecommunications, Nanjing. His current research interests include wireless networking and sensor networks.