
Purdue University
Purdue e-Pubs

Computer Science Technical Reports
Department of Computer Science

2006

SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems

Ahmet Burak Can

Bharat Bhargava
Purdue University, bb@cs.purdue.edu

Report Number:
06-016

Can, Ahmet Burak and Bhargava, Bharat, "SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems" (2006). Computer Science Technical Reports. Paper 1659.
http://docs.lib.purdue.edu/cstech/1659

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact epubs@purdue.edu for additional information.
SORT: A SELF-ORGANIZING TRUST MODEL FOR PEER-TO-PEER SYSTEMS

Ahmet Burak Can
Bharat Bhargava

Department of Computer Sciences
Purdue University
West Lafayette, IN 47907

CSD TR #06-016
August 2006
SORT: A Self-ORganizing Trust Model for Peer-to-peer Systems*

Ahmet Burak Can and Bharat Bhargava
Department of Computer Science, Purdue University
West Lafayette, IN 47907
{acan, bb}@cs.purdue.edu

August 14, 2006

Abstract

The anonymous nature of peer-to-peer (P2P) systems exposes them to malicious activity. Establishing trust among peers can mitigate attacks from malicious peers. This paper presents distributed algorithms used by a peer to reason about the trustworthiness of others based on the available local information, which includes past interactions and recommendations received from others. Peers collaborate to establish trust among each other without using a priori information. Trust decisions are adaptive to changes in trust among peers. A peer's trustworthiness in providing services and giving recommendations is evaluated in service and recommendation contexts. Defining trust metrics in separate contexts makes it possible to measure the trustworthiness of peers more precisely. A peer may be a good service provider and a bad recommender at the same time. Interactions among peers have varying importance, and an interaction loses its importance with time. These effects are considered along with the satisfaction of peers when evaluating an interaction. A recommendation contains the recommender's confidence in the information provided. This factor is considered along with the trustworthiness of the recommender when evaluating recommendations. A file sharing application is simulated to understand the advantages of the proposed algorithms in mitigating attacks related to services and recommendations. The results of several empirical studies are used to simulate peer, resource, and network parameters. This enables us to study the effects of external parameters on the algorithms and the evolution of trust relationships among peers. Individual, collaborative, and pseudonym-changing attack scenarios simulate nine different malicious behaviors. In most experiments, we find that malicious peers are isolated from other peers and their attacks are mitigated. There are cases where they obtain a high reputation, but their attacks are still contained.

1 Introduction

P2P systems rely on collaboration of peers to accomplish tasks. Peers trust each other to perform operations such as routing file search queries and downloading/uploading files. However, a malicious peer can use the trust of others to gain advantage and can harm the operation of a system. Detecting malicious behavior is difficult without collaboration. However, feedback from peers might be deceptive, and thus identifying a malicious peer with high confidence becomes a challenge [1]. In such an unreliable environment, the ability to reason about trust may help a peer in determining trustworthy peers [2]. Every peer can retain long-term trust information about peers it has interacted with. This reduces the risk and uncertainty in future interactions [3].
*This research is partially supported by NSF grants ANI 0219110, IIS 0209059 and IIS 0242840.

Interactions among peers and feedback of peers about each other provide means to establish trust among peers [4, 5, 6]. Aberer and Despotovic [4] use P-Grid [7] to provide decentralized and efficient access to trust information. The authors assume that trust usually exists among peers and malicious behavior is an exception. Peers file complaints about malicious peers. Trustworthiness of a peer is measured according to complaints about it. Eigentrust [5] uses transitivity of trust, which allows a peer to calculate a global trust value for peers. A distributed hash table (CAN [8]) is used to store and access global trust information efficiently. Trusted peers are used to leverage the establishment of trust among peers. Their experiments show the impacts of individual and collaborative malicious peers on a file sharing application and how trust can help to mitigate attacks. PeerTrust [6] defines community context and transaction context parameters to address application-specific features of interactions. P-Grid is used as an efficient access method to trust information. Four different trust calculation methods are discussed and studied in experiments. An important aspect of trust calculation is to be adaptive to context-dependent factors.
We propose a Self-ORganizing Trust model (SORT) that enables peers to create and manage trust relationships without using a priori information. Peers must be able to establish trust among each other without relying on trusted peers [5], because a trusted peer cannot observe all interactions among peers and might be a source of misleading information. In SORT, a peer assumes other peers are untrustworthy when it does not know about them. Assuming the pre-existence of trust among peers [4] does not distinguish a newcomer from a trustworthy peer, and makes it easy for a malicious peer to change its pseudonym to clear its bad history (Sybil attack [9]). A peer must contribute in order to gain the trust of another peer. Malicious behavior easily destroys an existing trust relationship [5, 10]. Thus, Sybil attacks become costly for malicious peers.
The main difficulty in trust models is measuring trust. Trust is a broad social concept and hard to explain with numeric metrics [11, 2, 12]. Classifying peers as either trustworthy or untrustworthy [4] may not be a sufficient metric. Trust metrics should have sufficient precision so peers can be ranked according to their trustworthiness [12, 13]. This makes it possible to select better candidates for some operations, e.g., selecting the most trustworthy peer when downloading a large file. SORT's trust metrics are normalized to take real values between 0 and 1, which is similar to Eigentrust's normalization operation. However, Eigentrust considers two peers equally trustworthy if they are assigned the same trust value, even though one has more past interactions. SORT makes this distinction and prefers the peer with more past interactions.
Using a service (e.g., downloading a file) from a peer is called a service interaction. A peer becomes an acquaintance of another peer after providing a service to it. All peers are strangers to each other at the start. A peer expands its set of acquaintances by using services from strangers. A peer requests recommendations about a stranger only from its acquaintances. A recommendation represents the acquaintance's trust information about the stranger. Recommendations from acquaintances are used to calculate a reputation value about a stranger. Reputation is the primary metric when deciding about strangers.
Measuring a peer's trustworthiness about different tasks with one metric [4, 5, 6, 10] may cause incorrect decisions about the peer. Providing services and giving recommendations are different tasks and should be considered in separate contexts. A peer can be a good service provider but may give misleading recommendations. For this reason, SORT defines two contexts of trust: service and recommendation contexts. To measure a peer's trustworthiness in these two contexts, service trust and recommendation trust metrics are defined. When a peer gives misleading recommendations, it loses the recommendation trust of others but its service trust remains the same. Similarly, a failed service interaction only decreases the value of the service trust metric.
Combining the information derived through interactions and recommendations in one metric may cause the loss of useful information [4, 5]. While interactions represent a peer's definite information about its acquaintances, recommendations represent suspicious information about them. A peer has to combine these two types of information and make independent decisions about others. Suppose a peer wants to get a service. If it has no acquaintance, it simply chooses to trust any stranger providing the service. If the peer has some acquaintances, it may select a stranger based on recommendations of acquaintances. As the peer gains more acquaintances, it becomes more selective. It may choose not to trust strangers if its acquaintances can deliver its service requests. Trust decisions about an acquaintance are based on its past interactions and the recommendations of other acquaintances. As more interactions happen with an acquaintance, the experience derived through interactions becomes more important.
Using all available information about interactions helps a more precise calculation of trust metrics. Assigning only a satisfactory/unsatisfactory rating to an interaction [4, 5] is not enough for a precise calculation. A peer should be able to express its level of satisfaction about an interaction in more detail [14]. Service interactions might have varying importance [6], e.g., downloading a large file is more important than downloading a small one when the network bandwidth is an issue. The effect of an interaction on trust calculation should fade as new interactions occur [15, 16]. Thus, a peer cannot take advantage of its past good interactions for a long time and has to continue to behave consistently.
A recommendation is evaluated according to the value of the recommendation trust metric about the recommender [5, 6, 10]. A recommendation makes a clear distinction between the recommender's own experience and the information collected from its acquaintances. Each recommendation affects the value of the recommendation trust metric about the recommender. A recommendation also contains the recommender's level of confidence in the information provided. If it has a low confidence, the recommendation is considered weak. A weak recommendation has less effect on the calculated reputation than a strong one. Furthermore, a peer is no more liable than its confidence in the recommendation. If a weak recommendation is false, the value of the recommendation trust metric about the recommender does not diminish quickly.
The main contributions of this research are outlined as follows:

- Trust metrics are defined in service and recommendation contexts. Two contexts of trust distinguish capabilities of peers based on services provided and recommendations given.

- Distributed algorithms have been defined to help peers with trust decisions based on trust metrics. A peer adaptively adjusts the necessary level of trust according to its trust relationships with acquaintances.

- A recommendation evaluation scheme is defined. Evaluation is based on the recommendation trust metric and the recommender's confidence in the provided information. This enables fair evaluation of recommendations and more accurate calculation of reputation.

- For service interactions, a sample evaluation scheme is defined on a file sharing application. Bandwidth, online/offline ratio, file size, and popularity are some specific parameters used to make a precise evaluation. Thus, a better classification of peers can be achieved according to serving capabilities.

- Simulation of SORT has been presented on a file sharing application. To observe the effects of external parameters on the proposed algorithms, peer capabilities (bandwidth, number of shared files), peer behavior (online/offline periods, waiting time for sessions) and resource distribution (file sizes, popularity of files) are simulated according to some empirical results [17, 18, 19]. Nine different malicious behaviors are studied. Attacks related to file upload/download operations are always mitigated. A malicious peer who performs collaborative attacks rarely but behaves honestly at other times is the hardest to counter. Attacks on recommendations are mitigated in most scenarios, except for a type of malicious peer which performs collaborative attacks against a small percentage of peers and stays honest to other peers.

The outline of the paper is as follows. Section 2 discusses the related research. The algorithms and formal definitions of SORT are explained in Section 3. Section 4 presents the simulation of SORT on a file sharing application. The future work opportunities to extend the trust model are discussed in Section 5. Conclusions are presented in Section 6.

2 Related Work

One of the first formal definitions of trust is given by Marsh [11]. He defines a formal model of trust based on sociological foundations and defines trust as a metric. In this model, an agent uses its own experiences when building trust and does not use the collective information of other agents. Abdul-Rahman and Hailes' trust model [2] evaluates trust as an aggregation of direct experience and recommendations of other parties. Discrete trust metrics are defined and recommendations are rated according to their semantic distance from the final reputation value. Zhong [13] proposes a dynamic trust concept based on McKnight's social trust model [12] and defines uncertain evidence as an input when building trust relations. Second-order probability and the Dempster-Shafer framework are used for the evaluation of uncertain evidence.
Reputation systems first appeared as a method of building trust in e-commerce communities. Resnick et al. [3] point out limitations and capabilities of reputation systems. Ensuring long-lived relationships, forcing feedbacks, and checking honesty of reports are some of the main difficulties in reputation systems. Dellarocas [1] explains two common attacks on reputation systems: unfairly high/low ratings and discriminatory seller behavior. He proposes controlled anonymity and cluster filtering methods as countermeasures. Despotovic and Aberer [20] study trust establishment in an online trade scenario among self-interested sellers and buyers. Trust-aware exchanges can increase economic activity since some exchanges may not happen without a trust establishment. Yu and Singh's model [21] propagates trust information through referral chains. Referrals are the primary method of developing trust in strangers. Terzi et al. [22] introduce an algorithm to classify users and assign roles to them based on trust relationships. Mui et al. [23] present a good bibliography search of trust from social life disciplines and propose a statistical model based on trust, reputation and reciprocity concepts. In this model, reputation can be propagated through multiple referral chains [21]. Jøsang et al. [24] discuss transitivity of trust and conclude that recommendations based on indirect trust relations may cause incorrect trust derivation. Thus, trust topologies should be evaluated carefully before propagating trust information.
Reputation-based trust models were applied to P2P systems after the appearance of the first examples. Some prominent ones, Aberer and Despotovic [4], Eigentrust [5], and PeerTrust [6], are already mentioned in the introduction. Cornelli et al. [25, 26] describe how to make a Gnutella [27] servent reputation-aware with a polling protocol. Basically, peers flood reputation queries throughout the network to learn about reputations of others. An enhanced version of the polling protocol also verifies the identity of replying peers in a reputation query [26]. Although the polling protocol is discussed in detail, a computational trust model and clear trust metrics are not defined. Selcuk et al. [10] present a vector-based trust model relying on interactions and reputation. If a peer has the necessary number of neighbors, only neighbors are contacted for a reputation query. Otherwise, the query is flooded throughout the network. Recommendations are evaluated according to the credibility of recommenders. They simulate five types of attackers on a file-sharing application. However, they do not present evaluations for deceptive reference attacks. Wang and Vassileva [14] propose a Bayesian network model to represent different aspects of interaction rating on a P2P file sharing system.
Most trust models require a query operation to learn about the reputation of peers. Ooi et al. [28] propose that each peer stores its own reputation using signed reputation certificates. Although this approach eliminates the need for reputation queries, it requires a public-key infrastructure. Additionally, timely update of trust information in a certificate is a problem. Similarly, NICE [29] uses signed cookies as a proof of a peer's good behavior. NICE forms trust groups and introduces trust-based pricing and trading policies to protect the integrity of groups.
Trust has been applied to other problems on P2P networks. Moreton and Twigg [30] aim to increase routing security by enforcing collaboration with a trust protocol. The same authors define a trust trading protocol to create an incentive mechanism for P2P networks [31]. Gupta and Somani [32] use reputation as a currency to get better quality of service. Peers dynamically form trust groups (TGrp) to protect themselves from malicious peers. TGrp peers watch each other closely and have a higher trust in each other. When contacting a peer from outside of a TGrp, the peer's own reputation and its TGrp's reputation are considered. Another interesting use of trust in [33] discusses trading privacy to gain more trust in pervasive systems.

3 A Computational Trust Model for P2P Systems

In this research, we have the following assumptions. A P2P system consists of peers with parity in terms of responsibility and computational power. There are no privileged, centralized, or trusted peers to manage trust relationships among peers. Peers are indistinguishable in computational power, network bandwidth and storage space. Although a small fraction of peers may behave maliciously, the majority of them are expected to behave honestly. Peers occasionally leave and join the network, provide services to others, and use services from others. For simplicity in discussion, one service operation, e.g., file request/download, is considered.
The ith peer is denoted by p_i. When p_i uses a service of p_j, e.g., downloads a file from p_j, this is a service interaction for p_i.¹ If p_i had no service interaction with p_j, p_j is a stranger to p_i. An acquaintance of p_i is one who has served p_i in a service interaction at least once. p_i's set of acquaintances is denoted by A_i. A peer stores a separate history of service interactions for each acquaintance. SH_ij denotes p_i's service history with p_j. Since service interactions are added to the end of a service history, SH_ij is a time-ordered linked list. sh_ij denotes the size (current number of interactions) of SH_ij.
After finishing or cancelling a service interaction, p_i evaluates the service quality of the provider. The evaluation result of the kth service interaction of p_i with p_j is denoted by 0 ≤ e^k_ij ≤ 1, where k is the sequence number of the interaction in SH_ij. A cancelled service interaction gets a 0 evaluation value. In a file sharing application, authenticity of the downloaded file, average download speed, average delay, retransmission rate of packets, and online/offline periods of the service provider are some of the parameters used to evaluate a service interaction.
Service interactions might have varying importance. In a file sharing application, downloading a large file is more important than downloading a small one due to consumed network bandwidth. A popular file is more valuable than an ordinary one. Each service interaction is assigned a weight to quantify the importance of interactions. The weight of the kth service interaction of p_i with p_j is denoted by 0 ≤ w^k_ij ≤ 1. The semantics to calculate e^k_ij and w^k_ij values depend on the application. In Section 4, we define some methods to calculate e^k_ij and w^k_ij for a file sharing application.
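The concrete semantics of e^k_ij and w^k_ij are deferred to Section 4. As a rough illustration only (the function names, parameters, and the equal split between file size and popularity are our assumptions, not SORT's actual scheme), a downloader might compute the two values like this:

```python
def evaluate_download(authentic: bool, avg_speed: float, peak_speed: float) -> float:
    # e^k in [0, 1]: an inauthentic (or cancelled) download is rated 0;
    # otherwise rate how close the average speed came to the peak speed.
    if not authentic or peak_speed <= 0:
        return 0.0
    return min(avg_speed / peak_speed, 1.0)

def interaction_weight(file_size: float, max_size: float, popular: bool) -> float:
    # w^k in [0, 1]: larger and popular files matter more; the 0.5/0.5
    # split between size and popularity is an illustrative choice only.
    size_part = 0.5 * min(file_size / max_size, 1.0)
    return size_part + (0.5 if popular else 0.0)
```

Any application-specific mix of the parameters listed above (delay, retransmission rate, online/offline periods) could be folded into such functions, as long as both results stay in [0, 1].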
A service interaction has a fading effect on the trust level as new interactions are added to the history. The fading effect of the kth service interaction between p_i and p_j is denoted by f^k_ij and calculated as follows:

¹Interactions are unidirectional. Thus, p_j does not keep any record about p_i's uploads.

Table 1: Notations related with service trust metrics

Notation   Description
p_i        a peer with identifier i
e^k_ij     evaluation result of the kth interaction of p_i with p_j
w^k_ij     weight of the kth interaction of p_i with p_j
f^k_ij     fading effect of the kth interaction of p_i with p_j
sh_ij      service history size between p_i and p_j
cb_ij      competence belief of p_i about p_j
ib_ij      integrity belief of p_i about p_j
st_ij      service trust value of p_i about p_j
r_ij       reputation value of p_i about p_j

    f^k_ij = k / sh_ij ,    1 ≤ k ≤ sh_ij        (1)

After adding (or deleting) a service interaction to SH_ij, p_i recalculates the f^k_ij values. The fading effect can also be defined as a function of time; however, f^k_ij would then have to be recalculated whenever its value is needed. ts^k_ij is a timestamp value which denotes the finishing time of the kth service interaction of p_i with p_j. A service interaction is deleted after a certain period past its timestamp value. The removal period should be determined according to the maximum size of histories and the rate of interactions among peers.
Let T^k_ij be a tuple representing the information about an interaction, defined as T^k_ij = (e^k_ij, w^k_ij, ts^k_ij). Then, we define SH_ij = {T^1_ij, T^2_ij, ..., T^{sh_ij}_ij}. SH_ij stores only a limited number of recent service interactions of p_i with p_j. sh_max denotes the upper bound of the service history size, known by all peers. When adding a new service interaction to SH_ij, T^1_ij is deleted if sh_ij = sh_max.
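The bounded, time-ordered history can be sketched in Python as follows (the class name and the sh_max value are assumed for illustration; the text above specifies only the behavior, not a data structure):

```python
from collections import deque

SH_MAX = 10  # sh_max: global upper bound on history size (value assumed here)

class ServiceHistory:
    """Time-ordered, bounded history SH_ij of (e, w, ts) tuples."""
    def __init__(self):
        # a deque with maxlen drops the oldest tuple T^1_ij once sh_ij = sh_max
        self.records = deque(maxlen=SH_MAX)

    def add(self, e, w, ts):
        self.records.append((e, w, ts))

    def fading(self, k):
        """f^k_ij = k / sh_ij (Equation 1), recomputed on demand."""
        return k / len(self.records)
```

Because f^k_ij depends only on an interaction's position and the current history size, it needs no stored state and is recomputed after every addition or deletion, exactly as described above.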
SORT defines several trust metrics. Three of them are important: reputation, service trust, and recommendation trust. The reputation metric is a value resulting from the evaluation of recommendations of acquaintances. The service trust metric represents a peer's trust in an acquaintance in the service context, based on past service interactions and reputation. Reputation and service trust values of p_i about p_j are denoted by 0 ≤ r_ij, st_ij ≤ 1 respectively. The service trust value is the primary metric when making decisions about a service provider. The recommendation trust metric is analogous to the service trust metric in the recommendation context and is used when selecting acquaintances for reputation queries and evaluating recommendations. Its value is calculated based on past recommendation interactions and reputation. The recommendation trust value of p_i about p_j is denoted by 0 ≤ rt_ij ≤ 1.
When p_j is a stranger to p_i, we define that SH_ij = ∅ and r_ij = st_ij = rt_ij = 0. If p_i is interested in a service provided by p_j, it sends a reputation query about p_j to its acquaintances. Then, it calculates the r_ij value based on the collected recommendations and makes a decision about p_j. If p_j has not interacted with any of p_i's acquaintances in the past, p_i does not get back any recommendations. Then, p_i still sets r_ij = 0. This is a protection against the Sybil attack [9], since changing pseudonyms does not give any advantage to malicious peers.

3.1 Calculating Service Trust Metric

This section describes the calculation of the service trust metric. Using service interactions in histories, a peer first calculates competence and integrity belief values about an acquaintance. Belief in an acquaintance's ability to successfully satisfy the needs of interactions on a particular task is called competence belief [12, 13, 34, 35]. The competence belief of p_i about p_j in the service context is denoted by cb_ij. Obtaining an average value over all parameters available about service interactions can be a way of measuring competence belief. Thus, an interaction's evaluation result, weight and fading effect values are considered in the calculation of competence belief. p_i calculates cb_ij as follows:
    cb_ij = (1/β_cb) Σ_{k=1}^{sh_ij} ( w^k_ij · f^k_ij · e^k_ij )        (2)

Here β_cb = Σ_{k=1}^{sh_ij} (w^k_ij · f^k_ij) is the normalization coefficient. If p_j has completed all service interactions perfectly (e^k_ij = 1 for all k), the coefficient β_cb ensures that cb_ij = 1. Additionally, cb_ij always takes a value between 0 and 1 since 0 ≤ e^k_ij, w^k_ij, f^k_ij ≤ 1.
f$. :S 1.
The level of confidence in the predictability of future interactions is called integrity belief [12, 13, 34, 35, 36]. ib_ij denotes the integrity belief of p_i about p_j in the service context. Competence of an acquaintance does not measure its consistency in terms of interaction quality. A high competence belief value does not reveal erratic behavior in interactions. Deviation from the average behavior can be a measure of integrity belief. Therefore, p_i calculates ib_ij as an approximation to the standard deviation of interaction parameters:

    ib_ij = sqrt( (1/sh_ij) Σ_{k=1}^{sh_ij} ( w̄_ij · f̄_ij · e^k_ij − cb_ij )² )        (3)

A smaller value of ib_ij means more predictable behavior of p_j in future service interactions. w̄_ij and f̄_ij are the means of the w^k_ij and f^k_ij values in SH_ij respectively. We can calculate f̄_ij as follows:

    f̄_ij = (1/sh_ij) Σ_{k=1}^{sh_ij} f^k_ij = (sh_ij + 1) / (2 · sh_ij)        (4)
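A corresponding sketch for Equations 3 and 4, under the same oldest-first (e, w) pair representation assumed above (the function name and calling convention are ours):

```python
import math

def integrity_belief(history, cb):
    """ib_ij per Equation 3, with f_bar from Equation 4.
    `history` is a list of (e, w) pairs, oldest first; `cb` is cb_ij."""
    sh = len(history)
    w_bar = sum(w for e, w in history) / sh      # mean weight
    f_bar = (sh + 1) / (2 * sh)                  # Equation 4: mean of k/sh
    # root-mean-square deviation of weighted evaluations from cb_ij
    return math.sqrt(sum((w_bar * f_bar * e - cb) ** 2 for e, w in history) / sh)
```

With a single perfect interaction, f̄ = 1 and ib_ij = 0; as the history grows, even perfectly consistent behavior yields a nonzero ib_ij because the fading effect discounts older interactions.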

p_i expects that future interactions with p_j will be at least as good as the average of past interactions. p_i needs to determine a confidence interval for the future interactions based on the cb_ij and ib_ij values. Assuming the evaluation results of interactions follow a normal distribution, cb_ij and ib_ij can be considered as approximations for the mean (μ) and standard deviation (σ) of evaluation results of interactions, respectively. According to the cumulative distribution function of the normal distribution, an interaction has an evaluation result less than cb_ij with Φ(0) = 0.5 probability. If p_i sets st_ij = cb_ij, this is an over-estimation with 0.5 probability for future interactions of p_j (assuming future interactions will follow the normal distribution). Selecting a lower st_ij value is a safer choice for p_i. Thus, p_i calculates st_ij as follows:

    st_ij = cb_ij − ib_ij / 2        (5)

In this case, a future interaction's evaluation result will be less than st_ij with Φ(−0.5) ≈ 0.3085 probability. Therefore, adding the integrity belief value into the calculation of the service trust value forces p_j to behave more consistently.
Equation 5 is not complete, since the reputation of p_j has not been considered. Reputation is especially important in the early phases of a trust relation. When there are no (or few) interactions with an acquaintance, a peer relies on the reputation of the acquaintance to decide about service provider selection. After more interactions happen with an acquaintance, first-hand experience derived through interactions becomes more important than reputation. Thus, sh_ij is a measure of p_i's first-hand experience with p_j. The confidence in the cb_ij and ib_ij values is proportional to sh_ij. Therefore, st_ij can be reformulated as follows:

    st_ij = (sh_ij / sh_max) (cb_ij − ib_ij / 2) + ((sh_max − sh_ij) / sh_max) r_ij        (6)

Table 2: Notations related with reputation and recommendation trust metrics

Notation   Description
er_ij      p_i's estimation for the reputation of p_j derived from recommendations
ecb_ij     p_i's estimation for the competence beliefs of p_j derived from recommendations
eib_ij     p_i's estimation for the integrity beliefs of p_j derived from recommendations
re^z_ik    evaluation of the zth recommendation interaction of p_i with p_k
rw^z_ik    weight of the zth recommendation interaction of p_i with p_k
rf^z_ik    fading effect of the zth recommendation interaction of p_i with p_k
rh_ik      recommendation history size between p_i and p_k
rcb_ik     competence belief of p_i about p_k in the recommendation context
rib_ik     integrity belief of p_i about p_k in the recommendation context
rt_ik      recommendation trust of p_i about p_k

Equation 6 balances the effects of interactions and reputation on the st_ij value. When p_j is a stranger to
p_i, sh_ij = 0 and st_ij = r_ij. As more interactions happen with p_j (sh_ij increases), the r_ij value loses its
effect on the st_ij value. When sh_ij = sh_max, the r_ij value has no effect on the st_ij value. This is the ultimate
level of first-hand experience between two peers.
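A minimal sketch of the service trust computation in Equations 5 and 6 (the function and parameter names are illustrative, not part of the model's specification):

```python
def service_trust(sh_ij: int, sh_max: int, cb_ij: float,
                  ib_ij: float, r_ij: float) -> float:
    """Blend first-hand experience with reputation (Equation 6).

    With no interactions (sh_ij == 0) trust equals reputation r_ij;
    at the history bound (sh_ij == sh_max) only experience matters.
    """
    experience = cb_ij - ib_ij / 2          # Equation 5
    alpha = sh_ij / sh_max                  # confidence in first-hand experience
    return alpha * experience + (1 - alpha) * r_ij

print(service_trust(0, 20, 0.0, 0.0, 0.6))   # stranger: pure reputation, 0.6
print(service_trust(20, 20, 0.9, 0.2, 0.1))  # full history: 0.9 - 0.1 = 0.8
```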

3.2 Calculating the Reputation Metric

This section describes the calculation of the reputation metric. In the following two sections, we
assume that p_j is a stranger to p_i and p_k is an acquaintance of p_i. To calculate r_ij, p_i starts a rep-
utation query about p_j. Before sending the query, p_i selects trustworthy acquaintances based on their
recommendation trust values. Thus, p_i may filter out some misleading recommendations from less
trustworthy peers. Algorithm 1 shows how p_i selects trustworthy acquaintances and requests their rec-
ommendations. η_max denotes the maximum number of recommendations that can be collected during
a reputation query. |·| represents the size of a set. To prevent excessive network traffic, the query stops
when η_max recommendations are collected or the required trust level drops under μ_rt - σ_rt.
Let T_i = {p_1, p_2, ..., p_ti} be the set of selected trustworthy acquaintances where t_i is the number
of peers in this set. p_i sends a reputation query about p_j to each trustworthy peer. If p_k ∈ T_i had at
least one service interaction with p_j, it replies with a recommendation. The recommendation contains the
following information:

cb_kj, ib_kj: These values are a measure of p_k's first-hand experience with p_j.

sh_kj: The history size is a measure of p_k's confidence in cb_kj and ib_kj values. If the sh_kj value is
large, p_k had many service interactions with p_j. Thus, cb_kj and ib_kj values are more credible for
p_i.

r_kj: If p_k had some service interactions with p_j, it should have already calculated a reputation
value about p_j. The r_kj value is a summary of the recommendations of p_k's acquaintances.

Algorithm 1 GETRECOMMENDATIONS(p_j)
 1: μ_rt ← (1/|A_i|) Σ_{p_k∈A_i} rt_ik
 2: σ_rt ← sqrt( (1/|A_i|) Σ_{p_k∈A_i} (rt_ik - μ_rt)^2 )
 3: th_high ← 1
 4: th_low ← μ_rt + σ_rt
 5: rset ← ∅
 6: while μ_rt - σ_rt ≤ th_low and |rset| < η_max do
 7:   for all p_k ∈ A_i do
 8:     if th_low ≤ rt_ik ≤ th_high then
 9:       rec ← RequestRecommendation(p_k, p_j)
10:       rset ← rset ∪ {rec}
11:     end if
12:   end for
13:   th_high ← th_low
14:   th_low ← th_low - σ_rt/2
15: end while
16: return rset
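Algorithm 1's threshold-lowering loop can be sketched as follows (a simplified Python rendering; the dict of trust values and the recommendation callback are assumed interfaces, not SORT's actual message format):

```python
import statistics

def get_recommendations(rt, request_recommendation, eta_max):
    """Query acquaintances in decreasing bands of recommendation trust.

    rt: dict mapping acquaintance id -> rt_ik value.
    request_recommendation: callable(peer_id) -> recommendation.
    """
    mu = statistics.mean(rt.values())
    sigma = statistics.pstdev(rt.values())
    th_high, th_low = 1.0, mu + sigma
    rset = []
    while mu - sigma <= th_low and len(rset) < eta_max:
        for pk, trust in rt.items():
            if th_low <= trust <= th_high and len(rset) < eta_max:
                rset.append(request_recommendation(pk))
        # lower the band: slightly less trusted peers are queried next
        th_high, th_low = th_low, th_low - sigma / 2
    return rset
```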

p_k's set of acquaintances is probably different from that of other peers. Thus, p_i can make a better approxi-
mation to the global reputation of p_j by aggregating reputation values from its acquaintances.

η_kj: This value represents the number of p_k's acquaintances which provided a recommendation during
the calculation of the r_kj value. It is a measure of p_k's confidence in the r_kj value. If the η_kj value
is close to η_max, the r_kj value is more credible.

Including sh_kj and η_kj values in the recommendation protects the credibility of p_k in p_i's view.
If p_k's knowledge about p_j is insufficient, p_i will figure this out from small sh_kj and η_kj values. Thus,
p_i will not judge p_k harshly if cb_kj, ib_kj, r_kj values are inaccurate compared to the recommendations of
other acquaintances.
p_i evaluates all the information according to the recommendation trust value of p_k, which is rt_ik.
The calculation of the recommendation trust value is explained later in Section 3.3. If p_k never had a
recommendation interaction with p_i, we set rt_ik = r_ik (since p_k is an acquaintance of p_i, r_ik should
already be computed). Later, p_i updates the rt_ik value for each recommendation of p_k.
p_i first calculates er_ij, an estimation of the reputation of p_j, by aggregating the reputation values in the
recommendations. A reputation value collected from a large set of peers is more credible since more
peers agree on it. Therefore, the r_kj value should be considered with respect to the η_kj value. With this
observation, er_ij can be calculated as in Equation 7. β_er = Σ_{p_k∈T_i} (rt_ik · η_kj) is the normalization
coefficient for er_ij.

er_ij = (1/β_er) Σ_{p_k∈T_i} rt_ik · η_kj · r_kj    (7)

Then, p_i calculates estimations of competence and integrity beliefs about p_j, which are denoted by
ecb_ij and eib_ij respectively. When calculating these values, an acquaintance's first-hand experience
with p_j should be considered since each acquaintance has a different level of experience. The sh_kj value
is a measure of p_k's level of first-hand experience with p_j. Thus, cb_kj and ib_kj values should be
evaluated in proportion to the sh_kj value. Equations 8 and 9 show the calculation of ecb_ij and eib_ij values.
β_ecb = Σ_{p_k∈T_i} (rt_ik · sh_kj) is the normalization coefficient.

ecb_ij = (1/β_ecb) Σ_{p_k∈T_i} rt_ik · sh_kj · cb_kj    (8)

eib_ij = (1/β_ecb) Σ_{p_k∈T_i} rt_ik · sh_kj · ib_kj    (9)

Now, p_i has two types of information to calculate the r_ij value. While ecb_ij and eib_ij represent p_i's
acquaintances' own experiences about p_j, er_ij represents their uncertain information. p_i calculates
μ_sh = (1/t_i) Σ_{p_k∈T_i} sh_kj, which is the average level of first-hand experience of p_i's acquaintances. If
μ_sh is close to the sh_max value, p_i's acquaintances had many service interactions with p_j and their first-
hand experience about p_j is very good. In this case, ecb_ij and eib_ij values should be given more
importance than the er_ij value. Otherwise, the er_ij value should be more important. With these observations,
r_ij is calculated in a similar way as st_ij:

r_ij = (μ_sh / sh_max) (ecb_ij - eib_ij / 2) + ((sh_max - μ_sh) / sh_max) er_ij    (10)
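The aggregation in Equations 7-10 can be sketched as below (the tuple layout for a recommendation is an assumed representation, not the paper's wire format):

```python
def estimate_reputation(recs, rt, sh_max):
    """Aggregate recommendations into r_ij (Equations 7-10).

    recs: list of (pk, r_kj, cb_kj, ib_kj, sh_kj, eta_kj) tuples.
    rt:   dict of recommendation trust values rt_ik.
    """
    beta_er  = sum(rt[pk] * eta for pk, _, _, _, _, eta in recs)
    beta_ecb = sum(rt[pk] * sh  for pk, _, _, _, sh, _ in recs)
    er  = sum(rt[pk] * eta * r for pk, r, _, _, _, eta in recs) / beta_er    # Eq. 7
    ecb = sum(rt[pk] * sh * cb for pk, _, cb, _, sh, _ in recs) / beta_ecb   # Eq. 8
    eib = sum(rt[pk] * sh * ib for pk, _, _, ib, sh, _ in recs) / beta_ecb   # Eq. 9
    mu_sh = sum(sh for _, _, _, _, sh, _ in recs) / len(recs)
    alpha = mu_sh / sh_max   # weight of the acquaintances' first-hand experience
    return alpha * (ecb - eib / 2) + (1 - alpha) * er                        # Eq. 10
```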

3.3 Calculating the Recommendation Trust Metric

After calculating the r_ij value, p_i should update the recommendation trust values of acquaintances according
to the accuracy of their recommendations. This section explains how p_i evaluates p_k's recommendation and
updates the rt_ik value.
The information about recommendation interactions is stored in recommendation histories. The
evaluation result of the z-th recommendation interaction of p_i with p_k is denoted by 0 ≤ ref^z_ik ≤ 1. Similar
to service interactions, rw^z_ik, rf^z_ik and rts^z_ik denote the weight, fading effect, and timestamp of the z-th
recommendation interaction of p_i with p_k. The tuple (ref^z_ik, rw^z_ik, rf^z_ik, rts^z_ik) represents
the information about the z-th recommendation interaction of p_i with p_k. The recommendation history of p_i
with p_k, denoted by RH_ik, is the set of these tuples. rh_ik is the size of RH_ik and rh_max denotes the
upper bound of the recommendation history size known by all peers.

To calculate the ref^z_ik value, r_kj, cb_kj and ib_kj values should be compared with er_ij, ecb_ij and eib_ij
values. Therefore, p_i calculates ref^z_ik as follows:

(11)

p_k should be accountable according to the importance of its recommendation, which we represent
with the rw^z_ik value. From Equations 7, 8 and 9, p_k's recommendation affects the r_ij value in proportion to
sh_kj and η_kj values. Additionally, the effect of sh_kj and η_kj values on the r_ij value is proportional to μ_sh
due to Equation 10. Thus, p_i calculates the importance of p_k's recommendation as follows:

rw^z_ik = (μ_sh / sh_max) (sh_kj / sh_max) + ((sh_max - μ_sh) / sh_max) (η_kj / η_max)    (12)

If sh_kj and η_kj values are small, the rw^z_ik value will be small. The μ_sh value balances the effects of sh_kj
and η_kj values on the rw^z_ik value. If μ_sh is large, the sh_kj value is given more importance. Otherwise, η_kj
is more important.

[Figure 1: Operations during recommendation and service interactions. (1) p_i sends a recommendation
request about p_j to p_k; (2) p_k returns a recommendation for p_j; (3) p_i sends a service request to p_j;
(4) p_j provides the service. After step 2, p_i updates RH_ik, rt_ik and r_ij; after step 4, p_i updates
SH_ij and st_ij.]

rt_ik is calculated in the same way as st_ik. The competence and integrity beliefs of p_i about p_k in
the recommendation context are denoted by rcb_ik and rib_ik respectively. With these parameters, p_i
calculates rt_ik as follows:

rcb_ik = (1/β_rcb) Σ_{z=1}^{rh_ik} rw^z_ik · rf^z_ik · ref^z_ik    (13)

rib_ik = sqrt( (1/rh_ik) Σ_{z=1}^{rh_ik} (rw̄_ik · rf̄_ik · ref^z_ik - rcb_ik)^2 )    (14)

rt_ik = (rh_ik / rh_max) (rcb_ik - rib_ik / 2) + ((rh_max - rh_ik) / rh_max) r_ik    (15)

rw̄_ik and rf̄_ik are the means of the rw^z_ik and rf^z_ik values in RH_ik respectively. β_rcb = Σ_{z=1}^{rh_ik} (rw^z_ik · rf^z_ik)
is the normalization coefficient for rcb_ik. If p_i had no recommendation interaction with p_k, rt_ik = r_ik
according to Equation 15.
Fig. 1 depicts the whole scenario briefly. p_j is a probable service provider for p_i's request. As-
suming p_j is a stranger to p_i, p_i needs to start a reputation query to learn about p_j's reputation. p_i
sends a reputation query to all trustworthy acquaintances. Assume that p_k is an acquaintance of p_i
and had some interactions with p_j. Next, p_k sends back a recommendation to p_i. After collecting all
recommendations, p_i calculates the r_ij value. Then, p_i evaluates p_k's recommendation, stores the results
in RH_ik, and updates the rt_ik value. Assuming p_j is trustworthy enough, p_i requests a service from p_j.
After having the service, p_i evaluates the service interaction, stores the results in SH_ij, and updates the
st_ij value.

3.4 Selecting Service Providers

When p_i queries the network for a particular service, it gets a list of service providers. p_i selects
one or several service providers according to their trustworthiness. The service trust metric is the primary
criterion for this selection process. For the rest of this section, considering a file sharing application,
p_i is assumed to download a file from an uploader.

Selecting the best service provider. p_i usually can not check the authenticity of a file until its down-
load finishes. If p_i prefers to download from several uploaders, p_i can not blame a particular uploader for an
inauthentic file, because p_i can not determine whether the whole file or only some parts downloaded from a ma-
licious uploader are inauthentic. To prevent such situations, p_i may prefer to select one service provider.

p_i sorts the service trust values of uploaders by using the comparison function given in Algorithm 2 and
selects the uploader with the highest trust value. Sometimes, a stranger to p_i might be selected due to its
good reputation. Suppose p_m is such a stranger. p_i sets st_im = r_im due to Equation 6. Thus, p_m can
be compared with other peers by using Algorithm 2.

Algorithm 2 COMPARE_ST(p_m, p_n)
 1: if st_im > st_in then
 2:   return GREATER
 3: else if st_im < st_in then
 4:   return LESS
 5: else if st_im = st_in then
 6:   if sh_im > sh_in then
 7:     return GREATER
 8:   else if sh_im < sh_in then
 9:     return LESS
10:   else if sh_im = sh_in then
11:     if cb_im - ib_im/2 > cb_in - ib_in/2 then
12:       return GREATER
13:     else if cb_im - ib_im/2 < cb_in - ib_in/2 then
14:       return LESS
15:     else if cb_im - ib_im/2 = cb_in - ib_in/2 then
16:       if cb_im > cb_in then
17:         return GREATER
18:       else if cb_im < cb_in then
19:         return LESS
20:       end if
21:     end if
22:   end if
23:   if p_m.uploadSpeed > p_n.uploadSpeed then
24:     return GREATER
25:   else if p_m.uploadSpeed < p_n.uploadSpeed then
26:     return LESS
27:   end if
28: end if
29: return EQUAL
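Algorithm 2 is a sequence of tie-breakers, which can be sketched compactly (the dict keys are illustrative field names):

```python
def compare_st(a, b):
    """Compare two uploaders as in Algorithm 2: service trust first, then
    history size, cb - ib/2, cb alone, and finally upload speed.
    Returns 1 (GREATER), -1 (LESS) or 0 (EQUAL)."""
    criteria = (lambda p: p["st"],
                lambda p: p["sh"],
                lambda p: p["cb"] - p["ib"] / 2,
                lambda p: p["cb"],
                lambda p: p["upload_speed"])
    for key in criteria:
        if key(a) > key(b):
            return 1
        if key(a) < key(b):
            return -1
    return 0

pm = {"st": 0.5, "sh": 3, "cb": 0.8, "ib": 0.2, "upload_speed": 256}
pn = {"st": 0.5, "sh": 2, "cb": 0.8, "ib": 0.2, "upload_speed": 512}
print(compare_st(pm, pn))  # 1: equal trust, but p_m has the longer history
```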

Selecting several service providers. One uploader may cause slow downloads and high loads on
reputable peers. For a faster download, p_i may prefer to select multiple uploaders². In this case, p_i
selects all uploaders whose service trust value is larger than a threshold value. With a high threshold, p_i
may never select an uploader and can not start a download. Thus, p_i can not build trust relationships
with others. On the other hand, a low threshold value may lead to the selection of possibly malicious peers.
p_i needs to adjust threshold values according to its set of acquaintances. If p_i has a low trust in its
acquaintances, it sets a low threshold value. This helps p_i to start interactions with strangers so its set
of acquaintances can grow. When p_i has a high trust in its acquaintances, it sets higher threshold values

²To check the integrity of downloaded files, complex methods that use Merkle hashes [37] or secure hashes and
cryptography [38] may be applied. Otherwise, a naive method may be as follows. If p_i downloads an inauthentic file, it may
request hashes of the file segments from all or several trustworthy uploaders. According to their responses, p_i may identify
the malicious uploader. In this paper, we do not study integrity checking in detail since it is beyond the scope of this research.

so the probability of selecting malicious uploaders drops. Algorithm 3 shows an adaptive selection
method.

Algorithm 3 SELECTMULTIPLEUPLOADERS(U)
 1: μ_st ← (1/|A_i|) Σ_{p_j∈A_i} st_ij
 2: σ_st ← sqrt( (1/|A_i|) Σ_{p_j∈A_i} (st_ij - μ_st)^2 )
 3: S ← ∅
 4: th ← μ_st + σ_st
 5: while |S| < max_uploader and th > 0 do
 6:   S ← S ∪ {p_m ∈ U : st_im ≥ th}
 7:   th ← th - σ_st/2
 8: end while
 9: if S = ∅ then
10:   S ← S ∪ {p_m ∈ U : sh_im = 0}
11: end if
12: return S
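A sketch of Algorithm 3's adaptive threshold (with a small guard added for the degenerate case σ_st = 0, which the pseudocode leaves implicit):

```python
def select_multiple_uploaders(uploaders, st, sh, max_uploaders):
    """Select uploaders above an adaptively lowered trust threshold.

    uploaders: candidate peer ids.
    st, sh: service trust values and history sizes over p_i's
    acquaintances (a stranger has st_im = r_im and sh_im = 0).
    """
    values = list(st.values())
    mu = sum(values) / len(values)
    sigma = (sum((v - mu) ** 2 for v in values) / len(values)) ** 0.5
    selected, th = set(), mu + sigma
    step = sigma / 2 or th          # guard: avoid looping forever when sigma == 0
    while len(selected) < max_uploaders and th > 0:
        selected |= {pm for pm in uploaders if st.get(pm, 0.0) >= th}
        th -= step
    if not selected:                # fall back to strangers, as in lines 9-11
        selected = {pm for pm in uploaders if sh.get(pm, 0) == 0}
    return selected
```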

U is the set of uploaders that provide the service requested by p_i. S is the set of selected uploaders.
If none of the trustworthy uploaders is selected (S = ∅), strangers in U are selected. In this way, new
peers can join the network and build trust relations. Such critical trust decisions may require human
intervention. Thus, the user interface of a P2P application should be able to provide input for critical
operations.

3.5 Further Issues

Storage space for histories. Storing histories does not have an excessive storage cost. Assume that
sh_max = rh_max = 20 and the size of a history entry is 40 bytes. If a peer has 2000 acquaintances,
2 · 20 · 2000 · 40 = 3200 KB is needed for both service and recommendation histories. Having 2000
acquaintances is a rare case [17, 18] and the history size with each acquaintance will generally be less than
sh_max. Therefore, storage requirements are negligible compared to the benefits of the trust model.
Repeating reputation queries. A peer updates the reputation values of its acquaintances by repeating
reputation queries periodically. Updated reputation values may help to increase confidence in good
peers and to identify malicious peers before being attacked. If an acquaintance always stays honest, its
reputation increases with time or stays at a high level. The reputation of malicious peers decreases as
long as they continue to attack. Knowing how an acquaintance behaves with others may help a peer
to understand possible threats from the acquaintance. For example, assume that p_i is a very reputable
peer and p_j is one of its acquaintances. p_j attacks all peers but p_i. Thus, p_i knows p_j as a good
peer and gives good recommendations about p_j. By taking advantage of p_i's reputation and good
recommendations, p_j can attract new victims. This also decreases other peers' recommendation trust
in p_i since its recommendations are misleading. If p_i periodically repeats reputation queries, p_i can
learn about p_j's bad reputation. Thus, p_i's recommendations can at least inform others about p_j's bad
reputation.
Pseudonyms. A peer selects an arbitrary pseudonym which is the identity of the peer known by
others. Peers have a level of privacy through the selection of their own pseudonyms. However, a malicious
peer may try to use a reputable peer's pseudonym so it can use that reputation to attract more victims. This
can be prevented by associating a pseudonym with a public/private key pair so a {pseudonym, public
key} pair becomes the identity of a peer. Peers exchange {pseudonym, public key} pairs before an
interaction and run a challenge-response protocol to make sure that the other peer has the corresponding
private key. Thus, no peer can use the pseudonym of another peer. Two peers may select the same
pseudonym as long as their public/private key pairs are different. Selection of the same public/private
key pair has a low probability considering the size of the key space. To guarantee uniqueness, pseudonyms
might be registered at a central peer.
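The challenge-response step can be sketched as below. Only the protocol shape is shown: `sign` and `verify` stand in for a real public-key signature scheme (e.g., RSA or Ed25519 from a cryptographic library) and are assumed interfaces rather than part of SORT:

```python
import os

def owns_pseudonym(claimed_identity, sign, verify):
    """Check that the remote peer holds the private key matching the
    public key it presented alongside its pseudonym.

    claimed_identity: (pseudonym, public_key) pair from the remote peer.
    sign: the remote peer's signing operation (performed on its side).
    verify: local verification under the presented public key.
    """
    pseudonym, public_key = claimed_identity
    nonce = os.urandom(16)          # fresh challenge, prevents replay
    signature = sign(nonce)         # the remote peer answers the challenge
    return verify(public_key, nonce, signature)
```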
Reacting to attacks. Assume that p_i is a good peer and p_j is malicious. When p_i receives a
recommendation from p_j which deviates a lot from the other recommendations, it can not determine if p_j is telling
the truth or trying to mislead. Thus, p_i does the normal trust calculation. As p_j continues to give
misleading recommendations, p_i slowly loses its trust in p_j. If p_j uploads a virus-infected file, p_i can
detect this right after the download completes. There are several methods to counter this situation:

p_i can create a permanent interaction which has an infinite life period and is never deleted from
the history. If p_j continues to upload virus-infected files, p_i will add more permanent interactions
to p_j's history and lose its trust in p_j slowly. However, p_j may perform more attacks until p_i
loses its trust completely.

After adding a permanent interaction, p_i may delete p_j's past interactions so its trust in p_j drops
quickly. However, when p_i gives a recommendation about p_j to another peer, the recommenda-
tion will be considered weak since p_i does not have many interactions with p_j.

p_i can add a permanent interaction and set the evaluation results of p_j's interactions to zero. In this
case, p_i's recommendations about p_j reflect its past experience with p_j and will be considered
stronger than in the previous case.

The experiments in Section 4 use the last method to counter inauthentic/infected file uploads.
Sending complaints to acquaintances. As a method of warning other peers, a peer may send com-
plaints about a malicious peer to its acquaintances [4]. However, this can become a type of attack, because
a malicious peer may start to blackmail others after gaining a good reputation. Reputation queries
are safer than complaints as a way to learn about the trustworthiness of other peers. A query may
result in a misleading reputation value if the majority of recommenders maliciously collaborate to give
deceptive information about the queried peer. The probability of such a collaboration is smaller than the
probability that a peer individually sends a malicious complaint. Thus, querying all acquaintances
is more resilient to attacks.

4 Experiments and Analysis

Objectives of experiments. Experiments will be performed to understand how successful SORT is
in mitigating attacks on a file sharing application. The distribution of trust metrics will be examined to
understand if malicious peers are isolated from other peers. If a malicious peer is successful in a
scenario, the reasons will be investigated. How recommendations are (or are not) helpful in correctly
identifying malicious peers is a question to be studied.
Method. The simulation program is implemented in the Java programming language. Simulation
parameters are generated according to the findings of several empirical studies [17, 18, 19]. This
enables us to make more realistic observations on the evolution of trust relationships and to understand the
effects of network-specific parameters on the proposed methods.

Since a file sharing application is simulated, uploading a file is a service interaction. A peer shar-
ing files for others is called an uploader. A peer downloading a file from an uploader is called a
downloader. The set of peers with which a peer is an acquaintance is called the downloaders of the peer.

In the simulation, a file search request returns all online uploaders in the network. As discussed in
Section 3.4, downloading a file from multiple uploaders requires a complex integrity checking method
[37, 38]. A peer is assumed to download a file from one uploader. If a simulation experiment does not
use SORT, a downloader selects the uploader with the highest bandwidth. If SORT is used, selection
is based on the trustworthiness of uploaders. An uploader might reject an incoming upload request if it
has reached its maximum number of upload sessions. In this case, other uploaders are checked until a
suitable one is found. A peer can check the integrity of a file only after finishing its download. A peer
is assumed to have antivirus software so it can detect if a file is infected. At the start of each cycle, the
simulator does the following:
The ongoing download sessions are checked. For each session, the file segments downloaded in the
previous cycle are calculated and added to the previously completed file segments.

Each finished download session is recorded as an interaction. The downloader may decide to
share the file. If the file is not shared, it is recorded as a past download. This prevents
downloading the same file again.

For each peer, the online/offline status is determined for the current cycle. According to the new status
of peers, ongoing sessions might be paused or a paused session might be resumed.

Online peers may create new download sessions. When creating a download session, a random
file is selected. The network is searched for an online uploader of the selected file.

A completed download session is evaluated based on the following parameters:

Agreed bandwidth. Before starting a session, the downloader and uploader make a bandwidth
agreement. The average bandwidth during the whole session is compared with the agreed band-
width to evaluate the reliability of the uploader in terms of the bandwidth provided.

Online/offline ratio. If an uploader goes offline frequently, the download will take more time.
The ratio of online and offline periods is a parameter for the availability of the uploader.

According to these parameters, p_i can evaluate its k-th interaction with p_j as follows:

e^k_ij = ( Online / (Online + Offline) + AverageBwidth / AgreedBwidth ) / 2    (16)

Each download has a different weight (importance) according to the following parameters:

File size. Downloading a movie file generally consumes more bandwidth and time than a text
file. Thus, a large file is more important than a small one. However, files over a certain
size should be considered the same. A 100 Mb file size is set as the threshold; files larger than 100
Mb are considered the same.

Popularity. Some files might be popular and peers are more willing to download them. We
assume that the number of uploaders is an indication of the popularity of a file. To understand how
popular a file is, the number of uploaders of a file is compared with the file shared by the largest number
of uploaders.

Let Uploader_max be the largest number of uploaders for the most popular file and f_size be the size of a
downloaded file. p_i can calculate the weight of its k-th interaction with p_j as follows:

w^k_ij = ( f_size / 100Mb + #Uploaders / Uploader_max ) / 2    if f_size < 100 Mb
w^k_ij = ( 1 + #Uploaders / Uploader_max ) / 2                 if f_size ≥ 100 Mb    (17)
Table 3: Some parameters used in simulation setup

  Number of Peers                           1000
  Number of Resources                       10000
  Number of Cycles                          50000
  Minutes in a Cycle                        10
  Number of Runs                            5
  Reputation Query Cache Expire (cycles)    2000
  Reputation Update Period (cycles)         5000
  Report Period (cycles)                    1000
  Maximum Simultaneous Downloads            5
  Maximum Simultaneous Uploads              5
  Maximum Interaction History               10
  Maximum Recommendation History            20
  Maximum Reputation Query Size             20

Input parameters. Table 3 shows some important input parameters of the simulation experiments.
Each experiment is run five times and the results of these runs are averaged and stored as final values.
An experiment contains 1000 peers and 10000 unique files which are identified by ID numbers. At
the start of an experiment, no peer has any acquaintances. Time is simulated as cycles where each
cycle represents a 10-minute period. An experiment runs for 50000 cycles. Peers cache the results
of reputation queries for 2000 cycles to reduce network traffic. The caching strategy is explained in
Section 4.1. Peers repeat reputation queries every 5000 cycles to update the reputation values of their
acquaintances. Some statistics about peer interactions are reported every 1000 cycles. For each peer,
the number of simultaneous upload (or download) sessions is limited to a maximum number between 0
and 5. Additionally, a peer can start at most two downloads in a day period.
Table 4 shows the distribution of some parameters used to model peer and resource characteristics.
Since the experiments do not simulate as many peers and resources as in a real-life scenario, these
distributions are approximations to empirical results. File sizes are assigned according to Table 4(a).
For example, 75% of all files have a size between 1000 and 10000 kilobytes. A file is shared by
multiple peers. While popular files are shared by many peers, others will be shared by a few peers.
Table 4(b) shows the popularity distribution of files. 5% of all files are most popular and are shared by
51-100 source peers. Table 4(c) shows the distribution of download and upload bandwidths of peers.
Each peer may stay online for a different period of time as shown in Table 4(d). A peer becomes online
once in a day. After going offline, a peer becomes online the next day. 50% of all peers stay online
only 60 minutes in a day. At the start of a simulation, peers are assigned a number of shared files
according to the distribution given in Table 4(e). Since a peer downloads and shares files from others,
its number of shared files changes continuously. 25% of all peers never share a file but they download
files. A download session is suspended if the uploader goes offline. The downloader may catch the
uploader online later and complete the session. This prevents unnecessary session cancellations. If
an uploader does not become online in a period of time, the suspended session is deleted. Table 4(f)
shows the maximum waiting periods for suspended sessions. For example, the waiting period for a file
size between 100 and 1000 kb is 5 cycles. After 5 cycles, the downloader restarts the session with
another uploader and records the terminated session as a failed interaction. A peer tends to wait longer
for a larger file [17].
Different types of malicious peers will be simulated in the experiments. The behavior of a malicious
peer is an input to the experiments.

Attacker model. Two types of attacks are defined: service-based and recommendation-based.
Uploading a virus-infected or inauthentic file is called a service-based attack. Giving misleading

Table 4: Simulation parameters to represent peer and resource characteristics

(a) File Size Distribution
  File Size (kb)        Ratio
  100 - 1000            0.10
  1001 - 10000          0.75
  10001 - 100000        0.10
  100001 - 1000000      0.05

(b) Initial Uploader Distribution
  Initial Uploaders     Ratio
  1 - 10                0.60
  11 - 30               0.20
  31 - 50               0.15
  51 - 100              0.05

(c) Bandwidth Distribution of Peers
  Download-Upload Bandwidth (kbps)    Ratio
  128 - 64                            0.10
  512 - 128                           0.10
  1024 - 256                          0.40
  3036 - 768                          0.20
  10240 - 5120                        0.15
  102400 - 10240                      0.05

(d) Uptime Distribution of Peers
  Uptime (min)    Ratio
  1 - 60          0.50
  61 - 120        0.20
  121 - 180       0.10
  181 - 240       0.05
  241 - 360       0.05
  361 - 600       0.05
  601 - 720       0.05

(e) Shared File Distribution of Peers
  Shared Files    Ratio
  0               0.25
  1 - 10          0.20
  11 - 100        0.30
  101 - 300       0.10
  301 - 500       0.05

(f) Maximum Waiting Times During a Download Session
  File Size (kb)        Max Waiting Period (cycles)
  100 - 1000            5
  1000 - 10000          20
  10000 - 100000        100
  100000 - 1000000      1000

recommendations is called a recommendation-based attack. There are two types of misleading
recommendations [1]: (i) Unfairly high recommendation: giving a positively-biased trust value about
the recommended peer, where the r, cb, ib values are set to 1. (ii) Unfairly low recommendation: giving
a negatively-biased trust value about the recommended peer, where the r, cb, ib values are set to 0. A fair
recommendation is the recommender's unbiased trust information about a peer.
A malicious peer may upload infected/inauthentic files to others and give misleading recommen-
dations. A good peer always uploads authentic files and gives fair recommendations. A non-malicious
network consists of only good peers. A malicious network contains both good and malicious peers. In
our experiments, a malicious network is assumed to have 10% malicious and 90% good peers. Ma-
licious peers are assumed to be more powerful: they are assigned longer online periods than good
peers. Therefore, the actual ratio of online malicious peers to online good peers is nearly 20% during
the experiments.
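The three recommendation types defined above can be illustrated in a few lines. The `Recommendation` container and function names here are hypothetical, assuming r, cb, and ib denote the reputation, competence belief, and integrity belief components of the paper's trust metrics.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    # Field names follow the paper's r, cb, ib notation; this
    # container type itself is an illustrative assumption.
    r: float   # reputation component
    cb: float  # competence belief component
    ib: float  # integrity belief component

def unfairly_high():
    """Positively-biased recommendation: all components forced to 1."""
    return Recommendation(r=1.0, cb=1.0, ib=1.0)

def unfairly_low():
    """Negatively-biased recommendation: all components forced to 0."""
    return Recommendation(r=0.0, cb=0.0, ib=0.0)

def fair(own_trust_info):
    """A fair recommendation reports the recommender's unbiased
    local trust information unchanged."""
    return Recommendation(*own_trust_info)
```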
Malicious peers are classified according to their capability of collaboration. If malicious peers
do not know about each other and perform attacks independently, they are called individual attackers.
An individual attacker may attack other malicious peers since it cannot identify them. If malicious
peers know about each other and coordinate in launching attacks, they are called collaborators. Based
on the classification of attack behavior, there are three types of individual attackers:

1. Naive. An attacker always uploads infected/inauthentic files and gives unfairly low recommen-
dations to others [1].

2. Discriminatory. An attacker selects a group of victims and always uploads infected/inauthentic
files to them [1, 10]. It uploads authentic files to all other peers. Additionally, it gives unfairly
low recommendations about victims and fair recommendations about others.

3. Hypocritical. An attacker generally uploads authentic files and gives fair recommendations.
With x% probability, it behaves maliciously by uploading infected/inauthentic files and giving
unfairly low recommendations [5, 10].
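The three individual attacker behaviors reduce to a small decision function over upload requests. This is a hedged sketch of the taxonomy above, not the simulator's code; the function name and parameters (with x defaulting to the 0.3 probability used in the later experiments) are assumptions.

```python
import random

def serves_inauthentic(attacker_type, target, victims=(), x=0.3):
    """Whether an individual attacker uploads an infected/inauthentic
    file to `target`, following the taxonomy in the text."""
    if attacker_type == "naive":
        return True                     # always attacks everyone
    if attacker_type == "discriminatory":
        return target in victims        # attacks only its chosen victims
    if attacker_type == "hypocritical":
        return random.random() < x      # attacks with probability x
    return False                        # good peers never attack
```

The same branching governs whether the peer gives an unfairly low recommendation about the target.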

Collaborators always upload authentic files to each other. When a collaborator requests a recom-
mendation from another collaborator, it always receives a fair recommendation. Collaborators always
give unfairly high recommendations about each other when requested by a good peer. Thus, they try
to convince the good peer to download files from any one of them. All collaborators behave the same
in the situations described above. Three types of collaborators are defined according to their attack
behavior:

1. Naive collaborator. A collaborator always uploads infected/inauthentic files to good peers and
gives unfairly low recommendations about them.

2. Discriminatory collaborator. A collaborator always uploads infected/inauthentic files to a group
of selected victims and gives unfairly low recommendations about them. It behaves with peers
other than a collaborator or a victim in a fair manner, i.e., with authentic files and fair recom-
mendations.

3. Hypocritical collaborator. A collaborator uploads infected/inauthentic files to good peers or
gives unfairly low recommendations about them with x% probability. At other times, it behaves
with them fairly.
A trust model should be resistant to Sybil attacks [9] since changing pseudonyms is easy in a P2P
system. A malicious peer which changes its pseudonym periodically to escape identification is called
a pseudospoofer. We assume it is hard to achieve collaboration among pseudospoofers since tight
synchronization and coordination would be needed. Thus, a pseudospoofer is assumed to follow one
of the individual attacker behaviors, e.g., naive, hypocritical, or discriminatory, even though it changes
its pseudonym periodically.
Output Parameters. The number of service-based attacks with respect to time is the most impor-
tant output parameter. This value measures how successful SORT is in mitigating service-based
attacks. Additionally, the rate of successful downloads will be observed to understand how much
mitigating attacks has a positive impact on download operations.
The number of recommendation-based attacks with respect to time is a measure of SORT's success
in mitigating recommendation-based attacks. If many recommendation-based attacks are happening,
malicious peers are able to give misleading recommendations and to affect the decisions of other
peers.
The distribution of values of reputation, service trust, and recommendation trust will be observed.
This helps to understand if good peers assign fair trust values to each other and malicious peers are
isolated from them. These distributions also give insights about the evolution of trust relationships.
These insights are very helpful in explaining the effects of malicious behavior on the success of attacks.
Message communication during reputation queries is a measure of the overhead of SORT. The number
of recommendation requests and the number of answers to them are among the parameters to be observed.

4.1 Experiment 1: Understanding the parameters effective on trust

In this section, several parameters affecting the evolution of trust relationships are analyzed. To
isolate the effects of service- and recommendation-based attacks, experiments are performed on a non-
malicious network topology with and without using SORT. Analysis from these experiments will help
us to understand the evolution of trust relationships under attack scenarios.
Download Rate. Fig. 2 shows the number of successful downloads in every 1000 cycles with
respect to time. Interestingly, there is a small decrease in the number of downloads with SORT. This is
due to the uploader selection method. Without using SORT, uploaders are selected based on their net-
work bandwidth. An uploader with higher bandwidth is always preferred. With SORT, selection is

[Figure 2: Successful downloads with respect to time in a non-malicious network]
[Figure 3: Reputation values in a non-malicious network with respect to bandwidth and online time]

[Figure 4: Distribution of reputation values in a non-malicious network. (a) Reputation vs. Number of Shared Files; (b) Reputation vs. Number of Downloaders]

based on the trustworthiness of uploaders. An acquaintance is always preferred over a stranger. The
download rate decreases due to this selection, since an acquaintance with low bandwidth might be
selected even though there is a stranger with high bandwidth.
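The two selection policies contrasted above can be sketched in a few lines. The candidate dictionaries, the acquaintance set, and the service-trust table are illustrative assumptions rather than SORT's actual data structures.

```python
def select_uploader_no_sort(candidates):
    """Baseline policy: the highest-bandwidth uploader always wins."""
    return max(candidates, key=lambda p: p["bandwidth"])

def select_uploader_sort(candidates, acquaintances, service_trust):
    """SORT-style sketch: prefer any acquaintance (ranked here by
    service trust) over strangers; fall back to bandwidth when only
    strangers are available."""
    known = [p for p in candidates if p["id"] in acquaintances]
    if known:
        return max(known, key=lambda p: service_trust.get(p["id"], 0.0))
    return max(candidates, key=lambda p: p["bandwidth"])
```

The sketch makes the trade-off visible: a low-bandwidth acquaintance can beat a high-bandwidth stranger, which is exactly the source of the small download-rate decrease.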
Reputation vs. Bandwidth and Online Period. SORT's design minimizes this disadvantage in
the download rate. A peer is more capable of completing file upload requests if it has a high bandwidth
and stays online longer. Such a peer is more likely to get high evaluation results from its downloaders
based on Equation 16. Additionally, it can complete more large file upload requests than a
peer with low bandwidth and short online periods.³ According to Equation 17, uploading a large file
has a more positive effect on reputation than uploading a small file. Fig. 3 justifies that a peer with high
bandwidth and a long online period tends to have a higher reputation value among its downloaders. The
abnormal values in Fig. 3 are due to other parameters such as the number of shared files.
Reputation vs. Shared Files. A peer's reputation also depends on its number of shared files. Fig.
4(a) shows the average reputations of peers among their downloaders. A peer sharing a large number of files
(a file-rich peer) gets more download requests and has more downloaders. Reputation queries about a
file-rich peer generally return many recommendations due to its large set of downloaders. According
³Limiting the number of simultaneous uploads helps to maintain this behavior, because a peer does not overload itself
by accepting more download requests than it can handle.

[Figure 5: Service and recommendation trust values with respect to reputation values in a non-malicious network. (a) Service Trust vs. Reputation; (b) Recommendation Trust vs. Reputation]

[Figure 6: Service and recommendation trust values with respect to history sizes in a non-malicious network. (a) Service Trust vs. Average Service History Size; (b) Recommendation Trust vs. Average Recommendation History Size]

to Equation 7, the number of recommenders is a parameter in reputation calculation. Thus, a file-rich peer
generally has a good reputation. Fig. 4(b) shows how the size of the set of downloaders affects reputation.
A peer with a large set of downloaders tends to have a good reputation. Being known by 10% of all peers
is enough to build a good reputation for most peers.
Reputation vs. Service and Recommendation Trust Values. Service trust values have a high
correlation with reputation values, as shown in Fig. 5(a). Due to their calculation method in Equation
6, this correlation is expected. However, this strong correlation is not observed between recommen-
dation trust and reputation values, as shown in Fig. 5(b). The reason can be clarified from Fig. 6(a)
and 6(b). In most cases, a peer performs only one service interaction with an acquaintance. The
probability that a peer downloads two or more files from an acquaintance is low, since the acquaintance
may not have a requested file or might be offline at the time of the download request. Very few service
interactions do not disrupt the strong correlation between reputation and service trust values. However,
a peer generally requests recommendations from an acquaintance before every file download. A high
number of recommendation interactions causes a small deviation of recommendation trust values from
reputation values.

[Figure 7: Overhead of reputation queries in terms of network packets for a non-malicious network. (a) Number of Network Packets vs. Time; (b) Number of Network Packets vs. Time (with Caching)]

Overhead of SORT. The main overhead of SORT comes from the reputation queries. Before starting
a download session, a peer queries its acquaintances about each possible uploader and gets back rec-
ommendations. Fig. 7(a) shows the average number of recommendation requests for a download session
with respect to time. Since a peer obtains more acquaintances with time, the average number of
recommendation requests monotonically increases. At the 50000th cycle, a peer makes more than 400
requests for each download session, but only 20% of them get back a recommendation.
Caching reputation queries. A caching strategy can reduce the reputation query traffic. In a
download session, only one uploader is selected, and the reputation values calculated about unselected
uploaders are deleted. A peer may instead cache the reputation values about unselected uploaders. The cached
reputation values can be used in a future download, so some of the reputation queries can be prevented.
A cache entry is deleted after 2000 cycles. Fig. 7(b) shows the effect of caching on reputation query
traffic. Since a peer's set of acquaintances grows with time, the number of cache entries and the cache hit
ratio increase. Caching reduces the reputation query traffic by half compared to Fig. 7(a).
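The caching strategy described above might be sketched as follows, assuming a simple per-peer store with the 2000-cycle expiry; the class and method names are illustrative, not taken from the paper.

```python
class ReputationCache:
    """Cache of reputation values computed for unselected uploaders.
    An entry expires 2000 cycles after it was stored."""
    TTL = 2000  # cycles, as stated in the text

    def __init__(self):
        self._entries = {}  # peer id -> (reputation, cycle stored)

    def put(self, peer_id, reputation, now):
        self._entries[peer_id] = (reputation, now)

    def get(self, peer_id, now):
        entry = self._entries.get(peer_id)
        if entry is None:
            return None          # cache miss: a reputation query is needed
        rep, stored = entry
        if now - stored > self.TTL:
            del self._entries[peer_id]
            return None          # expired: query again
        return rep               # cache hit: no query traffic
```

A hit avoids one round of recommendation requests, which is the source of the roughly halved query traffic in Fig. 7(b).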

4.2 Experiment 2: Analysis about individual attackers

This section presents experiments on individual attackers. SORT has to mitigate service- and recommendation-
based attacks in a malicious network. However, mitigating service-based attacks is more important
since they consume more bandwidth than recommendation-based attacks. For each individual attacker
behavior, a separate network topology is created in which 10% of all peers have been assigned the
same behavior. The attack probability is set to 0.3 for hypocritical attackers. Each discriminatory attacker
selects a separate set of victims which covers 20% of all peers.
Service-based attacks. Fig. 8(a) shows the number of service-based attacks in every 1000 cycles
for naive attackers. SORT mitigates the attacks by more than 50% after the 10000th cycle. After the 20000th
cycle, more than 80% of the attacks have been prevented. The attacks do not stop completely since
not all good peers know about all malicious peers. Due to the decrease in service-based attacks, the
average number of successful downloads increases by nearly 25%, as shown in Fig. 8(b).
Fig. 9 presents the number of service-based attacks under hypocritical and discriminatory behavior.
Good peers start to learn about attackers from the beginning of the simulation, and 80% of service-based
attacks are mitigated after the 20000th cycle. The number of successful downloads increases around 4-5%
with SORT. The increase is not as substantial as in the naive attacker case. The reason is that naive attackers
try to attack every peer, and the number of their attacks is much higher than in the case of hypocritical
attackers. Similarly, discriminatory peers only attack a selected group of victims, and their attacks are

[Figure 8: Service-based attack and successful download statistics for naive attackers. (a) Service-based attacks vs. Time; (b) Successful downloads vs. Time]

[Figure 9: Service-based attacks with respect to time for hypocritical and discriminatory attackers]

[Figure 10: Reputation values with respect to number of downloaders for naive and hypocritical attackers. (a) Naive attackers; (b) Hypocritical attackers]

[Figure 11: Recommendation trust with respect to number of downloaders for discriminatory attackers]
[Figure 12: Recommendation-based attacks with respect to time for individual attackers]

in small numbers compared to naive attackers.


Reputation. At the end of the experiment, all naive attackers have zero reputation among other peers,
as shown in Fig. 10(a). Naive attackers cannot gain the trust of good peers since they never successfully
complete a service interaction. However, some naive attackers having a large number of shared files
succeed in attacking 600-700 downloaders.
Fig. 10(b) shows the average reputation values for hypocritical attackers. Compared to naive
attackers, these attackers have succeeded in building up some reputation. The low number of their attacks
affects their reputation. Their average reputation is lower than that of most good peers.
Discriminatory attackers have slightly larger average reputation values than hypocritical ones. The
reason is that they attack only victims, and non-victim good peers assign them good reputation values.
Their average reputation is still lower than that of good peers. The victims learn about them with time, and
eventually their attacks can be mitigated, as shown in Fig. 9.
Recommendation Trust. Recommendation trust values need to be studied to understand if all
recommendations are being evaluated correctly. Fig. 11 shows that discriminatory attackers have lower
recommendation trust values than good peers. The recommendations of good peers are more credible
than those of discriminatory attackers. For hypocritical attackers, recommendation trust values have a

[Figure 13: Service-based attacks with respect to time for hypocritical and discriminatory attackers]
[Figure 14: Successful downloads with respect to time for hypocritical attackers]

lower average than discriminatory attackers. Naive attackers have a zero average for recommendation
trust values since they have zero reputation. Since all attacker types have lower recommendation trust
values than good peers, recommendation-based attacks are mitigated, as shown in Fig. 12.
Another interesting observation from the data in Fig. 11 is that having a large set of downloaders
might have a negative effect on recommendation trust values. A peer with many downloaders will
get more recommendation requests. A peer giving many recommendations is more prone to giving
inaccurate information since any recommendation has some uncertainty. This might reduce a peer's
average recommendation trust value among its downloaders.

4.3 Experiment 3: Analysis about collaborative attackers

In this section, the effects of collaborative attacks on trust relationships will be studied. Collaboration
among peers makes the detection of malicious peers more difficult. For each collaborative behavior,
a separate malicious network topology is created. As in the case of individual attackers, hypocritical
collaborators attack with 0.3 probability. All discriminatory collaborators agree on the same group of
victims, which contains 20% of all peers.
Service-based attacks. With naive collaborators, more than 80% of attacks are stopped after the 20000th
cycle, and thus successful downloads increase by 25%. Since naive collaborators upload only
infected/inauthentic files, good peers quickly identify their intention and assign a zero reputation value
to them. Since good peers do not request their recommendations, collaborators cannot amplify each
other's reputation. Thus, naive collaborators do not benefit from collaboration.
Fig. 13 shows the successful attacks with respect to time for hypocritical and discriminatory
collaborators. For the first 15000 cycles, hypocritical collaborators take advantage of recommendations
to amplify each other's reputation and attract more good peers to get services from them. After the 15000th
cycle, good peers start to identify some collaborators, and the rate of attacks starts to fall. The
situation in hypocritical behavior is not observable in discriminatory behavior, since victims start to
figure out collaborators from the very beginning of the experiment. Thus, the attack rate monotonically
decreases in discriminatory behavior.
The situation for hypocritical collaborators in the first 15000 cycles affects the successful download
rate, as shown in Fig. 14. Before the 20000th cycle, the rate of successful downloads is lower with SORT.
Good peers continue to download from collaborators due to their fake amplified reputation in the first
20000 cycles. As more collaborators are identified, the rate of successful downloads starts to increase.

[Figure 15: Reputation values with respect to number of downloaders for hypocritical and discriminatory collaborators. (a) Hypocritical collaborators; (b) Discriminatory collaborators]

[Figure 16: Recommendation trust values with respect to number of downloaders for hypocritical and discriminatory collaborators. (a) Hypocritical collaborators; (b) Discriminatory collaborators]

Reputation. Naive collaborators always have zero reputation values due to the reasons explained
earlier. Fig. 15(a) shows the reputation values for hypocritical collaborators at the end of the experiment.
Collaborators gain very low reputation although they disseminate amplified recommendations about
each other. The situation at the 25000th cycle is very similar to Fig. 15(a). Considering the observations
from Fig. 13 and 14, it can be concluded that good peers identify collaborators between the 15000th and
25000th cycles and reduce their reputation quickly.
Discriminatory collaborators are successful in gaining a good reputation, as shown in Fig. 15(b).
Since they attack only victims, their reputation among non-victim good peers remains high. Further-
more, their unfairly low recommendations decrease the reputation of victims. When these recommen-
dations are gathered, they substantially affect the results of reputation calculation, and victims cannot
gain high reputation. This situation is not observed in individual discriminatory behavior. Individual
discriminatory attackers randomly select different sets of victims. Any good peer can be a victim for
an attacker. Thus, unfairly low recommendations are almost evenly distributed over all good peers.
Recommendation trust. Naive collaborators always have zero recommendation trust values due
to their zero reputation values. Recommendation trust values of hypocritical collaborators are given

[Figure 17: Recommendation-based attacks with respect to time for collaborators]

in Fig. 16(a). Interestingly, hypocritical collaborators have smaller recommendation trust values than
individual hypocritical attackers. This might appear to be an abnormality, since collaborators praise
each other with unfairly high recommendations. However, a collaborator loses the recommendation trust
of good peers after giving unfairly high recommendations. The reason is that an unfairly high rec-
ommendation substantially deviates from the fair recommendations of good peers. Hence, hypocritical
collaborators lose the recommendation trust of good peers faster than individual hypocritical attackers.
They also pollute the pool of recommendations with unfairly high ones. Fair recommendations of
good peers get relatively low evaluations due to these unfairly high recommendations. Eventually,
good peers have slightly lower trust values than in the individual attacker scenario.
Fig. 16(b) shows the recommendation trust values of discriminatory collaborators. Since they give
misleading recommendations collaboratively, good peers believe in the trustworthiness of their recom-
mendations. Good peers develop a high recommendation trust in collaborators. Since the recommenda-
tions of collaborators are considered to be true, good peers lose recommendation trust in each other.
Victims give low recommendations about collaborators due to their attacks. Good peers are never
attacked by collaborators, so they think that victims are giving misleading recommendations about col-
laborators. Thus, good peers also lose recommendation trust in victims. This abnormality does not
cause a problem when preventing service-based attacks. Victims can identify collaborators quickly
and protect themselves, as discussed for Fig. 13. However, discriminatory collaborators can continue to
give misleading recommendations due to their high recommendation trust values. As shown in Fig. 17,
SORT was able to stop the misleading recommendations of naive and hypocritical collaborators, but not
discriminatory ones.

4.4 Experiment 4: Adapting Hypocritical and Discriminatory Behavior
In this section, the attack probability in hypocritical behavior and the size of the set of victims in discriminatory behavior will be changed to observe the effects of attacks on trust relationships. Individual and collaborative attackers are studied separately.
Individual hypocritical attackers. Figure 18(a) shows the service-based attack rate for hypocritical attackers when the attack probability is changed to 10% and 20%. Good peers cannot identify malicious peers as quickly as in the case with 30% attack probability. After the 20000th cycle, the attack rate dropped 50% and 70% for 10% and 20% attack probability, respectively.
Recommendation-based attacks present a different distribution, as shown in Figure 18(b). When the attack probability is 20%, the rate of misleading recommendations is higher in the first 10000 cycles. Since attackers are identified faster in the case with 20% probability, the attack rate cannot be

Figure 18: Attack statistics for individual hypocritical attackers with 10% and 20% attack probability. (a) Service-based attacks vs. Time. (b) Recommendation-based attacks vs. Time.

Figure 19: Attack statistics for individual discriminatory attackers in the 100 and 400 victims cases. (a) Service-based attacks vs. Time. (b) Recommendation-based attacks vs. Time.

maintained and dropped. Thus, the attack rate is higher for 10% attack probability after the 10000th cycle.
Individual discriminatory attackers. Figure 19(a) shows the service-based attack rate for individual discriminatory attackers when there are 100 and 400 victims. 80% of attacks are stopped after the 20000th cycle since victims identify attackers quickly.
Figure 19(b) shows the situation in recommendation-based attacks. In the 400 victims case, the attack rate made a peak at first and then dropped quickly. The reason is that misleading recommendations about 400 victims create a high conflict with the recommendations of good peers and victims. In the 100 victims case, the misleading recommendations are targeted at a small group of peers and do not create a high conflict with other peers. Thus, misleading recommendations drop gradually in the 100 victims case.
Hypocritical collaborators. Changing the attack probability causes interesting results for hypocritical collaborators. Figure 20(a) shows the service-based attack rate for hypocritical collaborators. As the attack probability decreases, detecting collaborators takes a longer time. Collaborators take advantage of SORT for a longer period compared to the 30% attack probability case shown in Figure 13. After the 30000th cycle, the attack rate in the 10% probability case is higher than in the 20% probability case, because collaborators stay undetected for a longer time in the 10% probability case.
This situation also affects the rate of recommendation-based attacks, as shown in Fig. 20(b). Before the 15000th cycle, good peers do not detect the collaborators and request more recommendations

Figure 20: Attack statistics for hypocritical collaborators with 10% and 20% attack probability. (a) Service-based attacks vs. Time. (b) Recommendation-based attacks vs. Time.

Figure 21: Attack statistics for discriminatory collaborators in the cases with 100 and 400 victims. (a) Service-based attacks vs. Time. (b) Recommendation-based attacks vs. Time.

from them. As more good peers start to figure out collaborators, they lose recommendation trust in the collaborators. Then, they request fewer recommendations from the collaborators. The attack rate in the 10% probability case is higher since the collaborators are not detected quickly.
Discriminatory collaborators. The service-based attack rate for discriminatory collaborators is shown in Figure 21(a). The attacks fall quickly from the beginning of the experiments. In the 100 victims case, good recommendations of non-victim good peers about collaborators motivate the victims to get services from collaborators. Thus, the attack rate drops slower than in the 400 victims case. The recommendation-based attacks are shown in Figure 21(b). Since the collaborators dominate 100 victims, they can continue to give misleading recommendations. In the 400 victims case, victims succeed in affecting the decisions of each other and of other good peers, so misleading recommendations can be contained at a certain level.
To understand the situation with discriminatory collaborators better, changes in the distribution of reputation and recommendation trust are studied. The 100 victims case presents a similar distribution to Figure 15(b), but victims gain a lower reputation than in the 200 victims case. Figure 22(a) shows the distribution of reputation values in the 400 victims case. Since the victims can counter bad recommendations of collaborators, they gained as much reputation as the collaborators. This has some interesting implications for recommendation trust values, as shown in Figure 22(b). Victims gain the highest recommendation trust values. There are two reasons for this situation: (i) Misleading recommendations

Figure 22: Reputation and recommendation trust values for discriminatory collaborators in the 400 victims case. (a) Reputation vs. Number of Downloaders. (b) Recommendation trust vs. Number of Downloaders.

Figure 23: Service-based attacks with respect to time for naive pseudospoofers.

Figure 24: Number of strangers selected with respect to time for naive pseudospoofers.

of 100 collaborators cannot dominate the fair recommendations of 400 victims. Thus, average trust values of collaborators drop and victims gain a high average trust among each other. (ii) Non-victim good peers give good recommendations about collaborators, which conflict with the recommendations of victims. Thus, the average trust value of non-victim good peers decreases.

4.5 Experiment 5: Analysis of Pseudospoofers
Changing its pseudonym is an easy way for a malicious peer to clear its bad history. In this section, the success rate of pseudospoofing attacks will be studied. The decrease in attack rate is a measure of resistance against Sybil attacks. In the experiments, a pseudospoofer changes its pseudonym after every 10000 cycles.
Service-based attacks. Fig. 23 shows that SORT reduces the service-based attack rate for naive pseudospoofers. After every 10000 cycles, the attack rate increases a little, but then it drops quickly. To understand how the attack rate drops, we can examine Fig. 24. This figure shows the number of strangers selected in each 1000 cycles. Since peers gain more acquaintances with time, they have less tendency to select strangers. Therefore, a pseudospoofer gets fewer service requests and the attack rate drops with time.
drops time.
attackers, a similar situation is observed from data in Fig. 25.
For hypocritical and discriminatory attackers,

29
900
800 1
2.5 , - - - - , - - - - - - , - - - , - - - - - . - - - - - - - - ,
A.
. "
naive
hypocritical --------
hypocritical ----&---
discriminatory - -a
discriminatory
-
-----++----

~ 700 2
-'" Q
E 600 i~ .
<:
- 1.5 :: !'iI;

"~
L

~
.~
500
400
300 -
' v-
".n
b9
hypo-wlo SORT
hypo-wlth SORT
dlac wlo SORT
d1.r-wnh SORT
-
A
- - 1 - i \
l~ '

0.>
u: 200
b v -

0.5
J \~
.~ ~,A'~'
AMA
~

o l!!!>f:pi~~l~. ~:~o~""~OO~Ef~\!!!!,..~_;;.A;;
..A~ '~'~'! '!.! i A' ' ' ~f! ' ' _!!!!!!~
100

0 10000 20000 30000 40000 50000 o


0 10000 20000 30000 40000 50000
Time (cycles)
(cycles) (cycles)
Time (c)'cles)

Figure
Figure 25:
25: Service-based attacks
attacks with respect Recommendation-based attacks
Figure 26: Recommendation-based
to
to time for
for hypocritical and discriminatory with respect to time for all pseudospoofers.
pseudospoofers.

The attack rate decreases with time since pseudospoofers are not selected by good peers. A slight increase in attack rate can still be observed after the pseudonym changes at the 10000th and 20000th cycles.
Recommendation-based attacks. The rate of recommendation-based attacks also decreases with time. Since pseudospoofers are considered strangers, they are not asked for any recommendations. Fig. 26 shows the decrease in misleading recommendations.

5 Discussion on Future Work
P2P system dynamics. Deletion of resources which lose popularity, addition of new peers and resources to an existing topology, multi-uploader sessions, and flash crowds [39] are some of the situations that may affect the evolution of trust relations. Studying such dynamics may help to design better trust models.
Privacy. Reputable service providers are good victims for DoS attacks. Protecting the privacy of a service provider is harder than protecting the privacy of a service requester. Promoting the reputation of a service provider and protecting its identity are adversarial tasks [40, 16]. A peer needs privacy when giving recommendations about malicious peers. Otherwise, it might become a target of malicious peers. SORT needs to be extended with a privacy scheme which protects the identities of service providers and recommenders. However, such a scheme needs an authentication method to prevent forged recommendations and spoofed peer identities.
Reputation storing/collection method. Collecting reputation information from acquaintances is a limiting factor in the proposed trust model. Broadcasting reputation queries may cause excessive network traffic. DHT structures may be used to access trust information efficiently [4, 5]. A trust holder is assigned for each peer. This approach requires that peers rely on trust holders instead of acquaintances, which may cause problems if trust holders behave maliciously. How a peer can develop trust in its trust holders is a question to be answered. Note that in SORT, a peer develops trust in its acquaintances through past interactions and recommendations. A compromise is needed between querying acquaintances and using DHT structures.
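For concreteness, a DHT-based approach could locate a peer's trust holder by hashing the peer's identifier onto a ring, in the spirit of [4, 5]. The sketch below only illustrates holder assignment; the hash function, ring size, and names are assumptions made here.

```python
import hashlib

RING_BITS = 16  # illustrative identifier space of 2^16 points

def ring_id(name: str) -> int:
    """Map a peer name onto the DHT identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:2], "big") % (1 << RING_BITS)

def trust_holder(subject: str, nodes: list) -> str:
    """The trust holder for `subject` is the live node whose ring id is
    the closest successor of hash(subject), wrapping around the ring."""
    key = ring_id(subject)
    ring = sorted(nodes, key=ring_id)
    for node in ring:
        if ring_id(node) >= key:
            return node
    return ring[0]  # wrap around to the first node on the ring
```

A reputation query for a peer then goes to its deterministic trust holder instead of being broadcast; the trade-off, as noted above, is that the querier must now trust that single holder.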
Incentives. SORT does not force a peer to provide services, such as sharing files, for others. Reputation can be used as a currency when exchanging services [29, 31, 32]. An incentive mechanism can increase the benefits of good peers so they would be willing to remain honest and continue to contribute services.
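One way such an incentive could work, in the spirit of [29, 31, 32], is to treat reputation as a spendable balance that is credited on uploads and debited on downloads. The class below is a toy sketch; the starting balance and prices are assumptions made here, not part of SORT.

```python
class ReputationBank:
    """Toy reputation-as-currency ledger: peers earn credit by serving
    others and spend it to consume, so free riders eventually cannot
    download."""

    def __init__(self, initial=5, upload_credit=2, download_cost=1):
        self.balance = {}
        self.initial = initial
        self.upload_credit = upload_credit
        self.download_cost = download_cost

    def _get(self, peer):
        return self.balance.setdefault(peer, self.initial)

    def record_upload(self, provider):
        # Serving a file earns the provider spendable reputation.
        self.balance[provider] = self._get(provider) + self.upload_credit

    def try_download(self, requester):
        """Allow the download only if the requester can pay for it."""
        if self._get(requester) < self.download_cost:
            return False
        self.balance[requester] -= self.download_cost
        return True
```

A free rider exhausts its balance and can no longer download, while a contributing peer keeps earning credit.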

6 Conclusion
A self-organizing trust model for P2P networks is presented in which a peer can develop trust relations without using any a priori information. Trust metrics defined on service and recommendation trust contexts help a peer to reason more precisely about the capabilities of other peers in providing services and giving recommendations. In a non-malicious network, the reputation of a peer is proportional to its capabilities, such as network bandwidth, average online period on the network, and number of shared resources. In a malicious network, service and recommendation-based attacks affect the reputation.
Three individual attacker, three collaborator, and three pseudospoofer behaviors are studied in experiments. SORT reduces service-based attacks in all scenarios. Among individual attackers, hypocritical ones take more time to identify. Identification of collaborators usually takes longer than identification of an individual attacker. At the start of the experiments, hypocritical collaborators succeeded in launching more attacks with SORT than in the case when SORT is not used. The reason is that they were able to take advantage of unfairly high recommendations in order to mislead and attract more good peers. Good peers eventually identify them and their attacks are mitigated. Discriminatory collaborators succeed in maintaining a better reputation than hypocritical ones since they do not attack 80% of the peers. However, their attacks are mitigated faster since victims identify them and do not download their files. They gain a better recommendation trust value than good peers. They also cause the victims to have a low recommendation trust value, so good peers put the victims into the liar class. Pseudospoofers become more isolated from good peers after each pseudonym change. Since good peers gain more acquaintances with time, they do not prefer to interact with strangers and leave pseudospoofers isolated.
Defining a context of trust and its related metrics increases a peer's ability to identify and mitigate attacks on the context-related tasks. Therefore, various contexts of trust can be defined to enhance the security of P2P systems for specific tasks. For example, a peer might use trust metrics in order to select better peers while routing P2P queries, checking the integrity of resources, and protecting the privacy of peers.

7 Acknowledgements
The authors thank Leszek Lilien at Western Michigan University and Sanjay Madria at the University of Missouri-Rolla. Their comments were a great help in improving this paper.

References

[1] C. Dellarocas, "Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior," in Proceedings of the 2nd ACM Conference on Electronic Commerce (EC), 2000.
[2] A. Abdul-Rahman and S. Hailes, "Supporting trust in virtual communities," in Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS), 2000.
[3] P. Resnick, K. Kuwabara, R. Zeckhauser, and E. Friedman, "Reputation systems," Communications of the ACM, vol. 43, no. 12, pp. 45-48, 2000.
[4] K. Aberer and Z. Despotovic, "Managing trust in a peer-2-peer information system," in Proceedings of the 10th International Conference on Information and Knowledge Management (CIKM), 2001.
[5] S. Kamvar, M. Schlosser, and H. Garcia-Molina, "The eigentrust algorithm for reputation management in p2p networks," in Proceedings of the 12th International Conference on World Wide Web (WWW), 2003.
[6] L. Xiong and L. Liu, "Peertrust: Supporting reputation-based trust for peer-to-peer ecommerce communities," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 7, pp. 843-857, 2004.
[7] K. Aberer, A. Datta, and M. Hauswirth, "P-grid: Dynamics of self-organization processes in structured p2p systems," Lecture Notes in Computer Science: Peer-to-Peer Systems and Applications, vol. 3845, 2005.
[8] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, "A scalable content addressable network," in Proceedings of the ACM SIGCOMM, 2001.
[9] J. Douceur, "The sybil attack," in Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS), 2002.
[10] A. A. Selcuk, E. Uzun, and M. R. Pariente, "A reputation-based trust management system for p2p networks," in Proceedings of the 4th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID), 2004.
[11] S. Marsh, Formalising Trust as a Computational Concept. PhD thesis, Department of Mathematics and Computer Science, University of Stirling, 1994.
[12] D. H. McKnight, "Conceptualizing trust: A typology and e-commerce customer relationships model," in Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS), 2001.
[13] Y. Zhong, Formalization of Dynamic Trust and Uncertain Evidence for User Authorization. PhD thesis, Department of Computer Science, Purdue University, 2004.
[14] Y. Wang and J. Vassileva, "Bayesian network trust model in peer-to-peer networks," in Proceedings of the 2nd International Workshop on Agents and Peer-to-Peer Computing at the Autonomous Agents and Multi Agent Systems Conference (AAMAS), 2003.
[15] M. Srivatsa, L. Xiong, and L. Liu, "Trustguard: Countering vulnerabilities in reputation management for decentralized overlay networks," in Proceedings of the 14th World Wide Web Conference (WWW), 2005.
[16] Y. Lu, W. Wang, D. Xu, and B. Bhargava, "Trust-based privacy preservation for peer-to-peer data sharing," in Proceedings of the Workshop on Secure Knowledge Management (SKM), 2004.
[17] S. Saroiu, P. Gummadi, and S. Gribble, "A measurement study of peer-to-peer file sharing systems," in Proceedings of Multimedia Computing and Networking, 2002.
[18] M. Ripeanu, I. Foster, and A. Iamnitchi, "Mapping the gnutella network: Properties of large-scale peer-to-peer systems and implications for system design," IEEE Internet Computing, vol. 6, no. 1, pp. 50-57, 2002.
[19] S. Saroiu, K. Gummadi, R. Dunn, S. D. Gribble, and H. M. Levy, "An analysis of internet content delivery systems," in Proceedings of the 5th USENIX Symposium on Operating Systems Design & Implementation (OSDI), 2002.
[20] Z. Despotovic and K. Aberer, "Trust-aware delivery of composite goods," in Proceedings of the 2nd International Workshop on Agents and Peer-to-Peer Computing at the Autonomous Agents and Multi Agent Systems Conference (AAMAS), 2002.
[21] B. Yu and M. Singh, "A social mechanism of reputation management in electronic communities," in Proceedings of the Cooperative Information Agents (CIA), 2000.
[22] E. Terzi, Y. Zhong, B. Bhargava, Pankaj, and S. Madria, "An algorithm for building user-role profiles in a trust environment," in Proceedings of the 4th International Conference on Data Warehousing and Knowledge Discovery (DaWaK), Lecture Notes in Computer Science, Vol. 2454, 2002.
[23] L. Mui, M. Mohtashemi, and A. Halberstadt, "A computational model of trust and reputation for e-businesses," in Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS), 2002.
[24] A. Jøsang, E. Gray, and M. Kinateder, "Analysing topologies of transitive trust," in Proceedings of the 1st International Workshop on Formal Aspects in Security and Trust (FAST), 2003.
[25] F. Cornelli, E. Damiani, S. D. C. di Vimercati, S. Paraboschi, and P. Samarati, "Implementing a reputation-aware gnutella servent," in Proceedings of the NETWORKING 2002 Workshops on Web Engineering and Peer-to-Peer Computing, 2002.
[26] F. Cornelli, E. Damiani, S. D. C. di Vimercati, S. Paraboschi, and P. Samarati, "Choosing reputable servents in a p2p network," in Proceedings of the 11th International Conference on World Wide Web (WWW), 2002.
[27] "Gnutella website," http://www.gnutella.com.
[28] B. Ooi, C. Liau, and K. Tan, "Managing trust in peer-to-peer systems using reputation-based techniques," in Proceedings of the 4th International Conference on Web Age Information Management, 2003.
[29] S. Lee, R. Sherwood, and B. Bhattacharjee, "Cooperative peer groups in nice," in Proceedings of the INFOCOM, 2003.
[30] T. Moreton and A. Twigg, "Enforcing collaboration in peer-to-peer routing services," in Proceedings of the 1st International Conference on Trust Management (iTrust), 2003.
[31] T. Moreton and A. Twigg, "Trading in trust, tokens, and stamps," in Proceedings of the 1st Workshop on Economics of Peer-to-Peer Systems, 2003.
[32] R. Gupta and A. Somani, "Reputation management framework and its use as currency in large-scale peer-to-peer networks," in Proceedings of the 4th IEEE International Conference on Peer-to-Peer Computing, 2004.
[33] B. Bhargava, L. Lilien, and M. Winslett, "Pervasive trust," IEEE Intelligent Systems, vol. 19, no. 5, pp. 74-76, 2004.
[34] K. Chopra and W. A. Wallace, "Trust in electronic environments," in Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS), 2003.
[35] S. Xiao and I. Benbasat, "The formation of trust and distrust in recommendation agents in repeated interactions: A process-tracing analysis," in Proceedings of the 5th International Conference on Electronic Commerce, 2003.
[36] A. Salam, L. Iyer, P. Palvia, and R. Singh, "Trust in e-commerce," Communications of the ACM, vol. 48, no. 2, pp. 72-77, 2005.
[37] A. Habib, D. Xu, M. Atallah, B. Bhargava, and J. Chuang, "A tree-based forward digest protocol to verify data integrity in distributed media streaming," IEEE Transactions on Knowledge and Data Engineering (TKDE), vol. 17, no. 7, pp. 1010-1014, 2005.
[38] G. Caronni and M. Waldvogel, "Establishing trust in distributed storage providers," in Proceedings of the 3rd IEEE International Conference on Peer-to-Peer Computing (P2P), 2003.
[39] T. Stading, P. Maniatis, and M. Baker, "Peer-to-peer caching schemes to address flash crowds," in Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS), 2002.
[40] S. Marti and H. Garcia-Molina, "Identity crisis: Anonymity vs. reputation in p2p systems," in Proceedings of the 3rd International Conference on Peer-to-Peer Computing, 2003.
