
DEPARTMENT ELECTRICAL AND COMPUTER ENGINEERING

Senior Design Project II

0402492

TRUST MODEL FOR AD HOC NETWORKS

Spring 2008/2009

By:

Osama Khaled Al-Koky

(20510076)

Waleed S. Hammad

(20510327)

Supervisor:

Dr. Ibrahim Kamel


Trust Model in Ad hoc Networks

Abstract

The fast-growing popularity of ad hoc networks has urged research into countering the security concerns arising from malicious nodes taking part in the construction of a trusted network. This project highlights some of the efforts that have been made to achieve a trustworthy ad-hoc network, and then illustrates how its proposed model differs from the related work. The project proposes a collaboration model that utilizes past interactions to identify malicious peers in ad-hoc networks. Peers may seek recommendations about an unknown node from other trusted peers before they decide to collaborate with it. The model takes into account oscillating peers, which exhibit honest behavior for a period of time and later become malicious. Finally, we simulate the proposed model and conduct several experiments to verify its robustness.


Table of Contents

Abstract  1
Table of Figures  4
Table of Equations  5
Table of Acronyms and Symbols  5
1. Introduction  6
1.1. Purpose and Motivation  6
1.2. Ad hoc Network  7
1.3. Centralized vs. Decentralized Trust  8
2. Problem definition  8
2.1. Scope  8
2.2. Attacks in ad-hoc networks  8
3. Related Work  10
3.1. Centralized Trust  10
3.2. Decentralized Trust  12
4. Model and Assumptions  14
4.1. Collaboration Model  14
4.2. Attack Model  15
4.3. Details of the Proposed Model  15
4.4. Bootstrapping  19
4.5. Redemption  19
5. Performance measures  19
5.1.1. Percentage of risky interactions  19
5.1.2. The speed of detecting a malicious node  20
6. Experiments  20
6.1. Assumptions and Relaxations  20
6.2. Simulation Scheme  20
6.3. Defenseless vs. fortified  21
6.4. Speed of detecting a malicious node  22
6.4.1. Speed of detecting a non-cooperating malicious node  22
6.4.2. Speed of detecting a cooperating malicious node  23
6.5. Speed of detecting a malicious oscillating node  24
6.5.1. Speed of recovering an honest oscillating node  25
6.5.2. IDS error rate vs. number of risky interactions  26
6.5.3. Weight of second hand experience vs. percentage of risky interactions  27
References  30
Appendix I: Simulator code  32
Class ToT  32
Class node  33
Class network  39
Class Main  50


Table of Figures

Figure 1: Collaboration between different devices in a public place  6
Figure 2: Use of Ad-hoc networks in disaster recovery  7
Figure 3: Categorization of trust models  10
Figure 4: Table of Trust  16
Figure 5: Collaboration flow diagram  19
Figure 6: Defenseless vs. fortified  22
Figure 7: Speed of detecting a malicious node  23
Figure 8: Speed of detecting a collaborating malicious node  24
Figure 9: Speed of detecting an oscillating malicious node  25
Figure 10: Speed of recovering an honest oscillating node  26
Figure 11: IDS error rate vs. percentage of risky interactions  27
Figure 12: Weight of second hand experience vs. percentage of risky interactions  28


Table of Equations

Equation (1)  17
Equation (2)  18
Equation (3)  18

Table of Acronyms and Symbols

PIE     Past Interactions Experience
T_acc   Trust Acceptable
IDS     Intrusion Detection System
ToT     Table of Trust
α       Weight of Second Hand Experience
δ       Initialization Distance


1. Introduction

1.1. Purpose and Motivation

People in public places such as airports and train stations can share resources with each other using their portable PCs, PDAs or any other portable or stationary communication device in the area. For example, a person whose mobile phone has no internet access could request information from another person whose PDA has internet coverage, or a person with a PDA needing an operation that requires heavy processing power (e.g., video editing or heavy image editing) can ask another person with a laptop to carry out the operation (see Figure 1). This type of environment is called an ad-hoc network; a brief description follows in the next section.


Figure 1: Collaboration between different devices in a public place

The problem arising with an ad-hoc network is the need to deal with strangers or anonymous people in open public places in the absence of central systems that can govern those interactions. This may result in privacy breaches or may infect systems with viruses. Still, the ease of forming a network makes ad-hoc networking a popular method for interacting with others, urging research to provide better methods to make ad-hoc networks a safe communication environment.


1.2. Ad hoc Network

An ad-hoc (or "spontaneous") network is a local area network, especially one with wireless or temporary plug-in connections, in which some of the network devices are part of the network only for the duration of a communication session or, in the case of mobile or portable devices, while in close proximity to the rest of the network. In Latin, ad hoc literally means "for this"; a further meaning is "for this purpose only", and thus usually temporary [1]. The network is created, operated and managed by the nodes themselves, without any existing network infrastructure or centralized administration. The nodes assist each other by passing data and control packets from one node to another, often beyond the wireless range of the original sender. This union of nodes forms an arbitrary topology. The nodes are free to move randomly and organize themselves arbitrarily; thus, the network's wireless topology may change rapidly and unpredictably. The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes cannot be relied on, and may improve scalability compared to wireless managed networks. Such a network may operate in a standalone fashion, or may be connected to the larger Internet. Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like disaster recovery (see Figure 2) or military conflicts. The operation and survival of an ad-hoc network is highly dependent upon the cooperative and trusting nature of its nodes.


Figure 2: Use of Ad-hoc networks in disaster recovery


1.3. Centralized vs. Decentralized Trust

In centralized trust we need a party that is trusted by all nodes in the network, and this party is responsible for calculating the trust of each and every node. Systems like eBay and Amazon can use a centralized trust authority since users are stationary and have fixed accounts. This scheme has some disadvantages. First, there is only a single authority, whose failure means the failure of the whole system. Second, trust is relative: the trust in a certain node may differ from one node to another. Moreover, since ad-hoc networks are established on the fly, it is difficult to create a centralized authority for them.

In decentralized trust, on the other hand, nodes themselves have to organize the network, and each node computes the trust of its neighbors by monitoring their behavior and taking recommendations from other nodes that have experience with the node it is seeking to collaborate with.

2. Problem definition

In ad-hoc networks nodes do not necessarily know each other, so they cannot know whether they are collaborating with a legitimate node or a malicious one. This leads to the need for a trust model that organizes the network without a centralized unit.

2.1. Scope

In our model, the main concern is to provide a trustworthy community that can find and eliminate malicious nodes. We also focus on the level of services provided or required by a node rather than on forwarding or receiving packets. Thus, our model does not consider attacks on the routing and network layers, but rather attacks at the service level.

2.2. Attacks in ad-hoc networks

There is a wide variety of attacks that target the weaknesses of ad-hoc networks. Attacks in ad-hoc networks can be classified into two major categories according to the means of attack: passive attacks and active attacks. A passive attack obtains data exchanged in the network without disrupting the operation of the communications, while an active attack involves information interruption, modification, or fabrication.


Examples of passive attacks include eavesdropping, traffic analysis, and traffic monitoring. Examples of active attacks include jamming, impersonation, modification, denial of service (DoS), and message replay [13].

Attacks can also be either external or internal. External attacks are carried out by nodes that do not belong to the domain of the network. Internal attacks come from compromised nodes that are actually part of the network. Internal attacks are more severe than external ones, since the insider knows valuable and secret information and possesses privileged access rights.

More sophisticated and subtle attacks have been identified in recent research papers; the black hole, Byzantine, and wormhole [14] attacks are typical examples. These attacks target the network layer, and thus routing, which is outside the scope of our trust model. Rather, attacks on the application layer are the sort of attacks our model is concerned with.

The application layer is vulnerable in terms of security compared with other layers. It contains user data and normally supports many protocols, such as HTTP, SMTP, TELNET, and FTP, which provide many vulnerabilities and access points for attackers [13].

Application layer attacks are attractive to attackers because the information they seek ultimately resides within the application, and it is straightforward for them to make an impact and reach their goals.

There are mainly two types of attacks on the application layer: malicious code attacks and repudiation attacks. Malicious code, such as viruses, worms, spyware and Trojan horses, can attack both operating systems and user applications. Such malicious programs usually can spread themselves through the network and cause computer systems and the network to slow down or even be damaged.

In the network layer, firewalls can be installed to keep packets in or out. In the transport layer, entire connections can be encrypted end-to-end. But these solutions do not solve the authentication or non-repudiation problems of the application layer. Repudiation refers to a denial of participation in all or part of a communication. For example, a selfish node could deny conducting an operation on a credit card purchase, or deny any on-line bank transaction, which is the prototypical repudiation attack on a commercial system [13].


3. Related Work

A great deal of research has been carried out in the field of network trust. This research can be categorized based on network topology or on the implementation of trust. Based on network topology, there are trust models for peer-to-peer networks and trust models for ad hoc networks. Several papers have been published on P2P trust [11] [12]. However, these models have high overhead, which is not suitable for ad-hoc networks since mobile devices have limited energy and low processing capabilities. Based on the implementation of trust, we can categorize trust models as centralized and decentralized models, which is what we focus on in this project. A detailed discussion of centralized and decentralized trust follows in the next sections.


Figure 3: Categorization of trust models

3.1. Centralized Trust

In centralized trust we need a party that is trusted by all nodes in the network, and this party is responsible for calculating the trust of each and every node. Systems like eBay and Amazon use centralized trust authorities since users are stationary and have fixed accounts. In eBay, after each interaction the user rates the satisfaction with the interaction: (1) for positive, (-1) for negative and (0) for neutral. The decision whether to interact with a certain user or not has to be made by a human. This makes the system not fully automated, since there is human intervention in making a decision based on the reputation of a user. This scheme has some other disadvantages as well.


First, there is only a single authority, which suffers from a single point of failure. This might not be severe in an Internet environment, but it is more severe in ad-hoc networks, where nodes can be highly mobile and connect to and disconnect from the network frequently. Another problem of the centralized trust model is that there is a single global trust value for each node. However, trust is usually subjective [5]; that is, A may have a trust value for C that is different from that of B. For example, a malicious user A may not be interested in attacking a normal user B, which means that B will have a high trust value for A. On the other hand, A might be interested in attacking user C (e.g., a bank or financial institution), which means that C will have a low trust value for A.

In ad-hoc networks, on the other hand, nodes themselves have to organize the network, and each node computes the trust of its neighbors by monitoring their behavior and taking recommendations from other nodes that have experience with the node it is seeking to collaborate with. Centralized trust models are difficult to apply in ad-hoc networks, since the luxury of having a centralized unit is very difficult to achieve.

Trust models can also be categorized based on the application or network topology. There are three main categories which are: internet applications, peer-to-peer and ad hoc networks.

Peer-to-peer networks are very similar to ad-hoc networks in that both are self-organized and decentralized. Most peer-to-peer systems work on the wired internet [6], which makes the topology largely known. Since nodes are connected to the internet, most of them are static and have effectively unlimited power and processing capabilities, which makes a heavy trust model feasible. In peer-to-peer systems we can assume the existence of some pretrusted nodes that are trusted by most peers in the network. We can even have a central authority to keep users' profiles and manage the trust, since everything runs on the known internet.

A good example of internet applications is web services. Trust in web services can be viewed either from the point of view of the web service itself or from the point of view of the user of the service. Web services can evaluate users' trust based on their profiles and their behavior. They also use a third-party central authority for trust management. This model cannot be applied to ad hoc networks because of the use of user profiles: the dynamic nature of ad-hoc networks makes keeping user profiles difficult, since users can be part of several networks and the profile of each node needs to be stored at every other node in the network


due to the absence of a centralized unit. Users can trust web services using authentication techniques like X.509 [1].

3.2. Decentralized Trust

Pretty Good Privacy (PGP) encryption uses public-key cryptography and includes a system which binds public keys to a user name and/or an e-mail address. It is generally known as a web of trust, in contrast with the X.509 system [3], and became a popular replacement for centralized trust. However, for access-restricted ad-hoc networks, it has been shown to be unsuitable for the following reasons:

- It assumes a sufficient density of certificate graphs:
  o Problem at network initialization.
  o Time delay to set up.
- Certificate chains provide weak authentication: one or more compromised nodes in a certificate chain can lead to unsuccessful authentication [15].

The EigenTrust [6] mechanism aggregates trust information from peers by having them perform a distributed calculation approaching the eigenvector of the trust matrix over the peers. EigenTrust relies on a good choice of some pretrusted peers, which are supposed to be trusted by all peers. This assumption may be over-optimistic in a distributed computing environment, because pretrusted peers may not last forever. Once they score badly after some transactions, the EigenTrust system may not work reliably.

Prakash V. and Vikram L. [7] proposed a collaborative trust model for secure routing. Their model is based on monitoring neighbor nodes' routing actions and issuing a single intrusion detection (SID) upon observing a malicious action. The routing behaviors monitored to detect malicious behavior are:

- SIDs issued against a node.
- The difference between the number of beacons a node is expected to send and the number it actually sends.
- The difference between the number of acknowledgements a node is expected to send and the number it actually sends.


The monitoring mechanism is the one used in SCAN [8], which monitors the routing updates and the packet-forwarding behavior of neighbor nodes. When a node observes a malicious activity from one of its neighbors, it issues an SID against it. However, other nodes will not accept the SID blindly; instead, each checks the trust of the node that issued the SID, requests recommendations from other nodes, and, if the compromised node is in radio range, monitors it for a period and then decides whether to accept the SID or not. When a node wants to calculate the trust for a remote neighbor, it requests the trust value from its neighbors who are in range of that node. The node then calculates the trust value as a weighted average and computes the route as a shortest-path problem based on the trust values of the nodes. The shortcomings of this model are that it does not address the problem of oscillating nodes, and that its processing and memory overheads are very high.

In TOMS [5], Y. Ren and A. Boukerche established a trust management system that allows only trustworthy intermediate nodes to participate in the path construction process while also providing anonymity to the communicating parties. The trust model is distributed to each node in the network, and all nodes update their own assessments concerning other nodes accordingly. The routes set up in this way will traverse the most trustworthy nodes in each hop. The multicast mechanism used among nodes utilizes trust as a requirement in order to choose the most satisfactory neighbors for conveying messages. The main factors in calculating the trust of a node in this model are the time the node has spent in the community and its past activity record. In most trust computation models, the trust value is computed based on a linear function; in TOMS, however, they propose a trust model that updates the trust value based on different increase shapes.

The related work mentioned above is concerned with implementing trust through routing-related algorithms. This is where our model comes in with a different scope in mind: securing the services on the node from attack by malicious nodes.

The PowerTrust [9] system dynamically selects a small number of the most reputable power nodes using a distributed ranking mechanism. In this model, peer feedbacks follow a power-law distribution. By using a look-ahead random-walk strategy and leveraging the power nodes, PowerTrust significantly improves global reputation accuracy and aggregation speed. PowerTrust is adaptable to the dynamics of peers joining and leaving, and robust to disturbance by malicious peers.


The previous models suffer from several shortcomings which make them either weak or more suitable for peer-to-peer networks than for ad-hoc networks. The shortcomings of these models can be summarized in the following points. They assume the existence of some pretrusted nodes, which means they silently assume that the network already exists and has been running for some time. Also, they do not address the issue of bootstrapping the network, which is very crucial.

Our proposed model differs from previous models in several points. First, we implement our model on the application layer; that is, we care only about attacks on this layer. Second, we assume that there is no network already running, we do not rely on pretrusted nodes, and we consider the issue of bootstrapping the network. Finally, our biggest contribution is that we take care of nodes that exhibit oscillating behavior. The oscillating node issue can be very dangerous for the network. For example, a malicious node can join the network and behave honestly for a certain period of time until it gains a high trust value from most of the nodes, and then it starts issuing attacks. Another scenario is that of an honest node being compromised by a malicious node or software. This will make its trust go down until no one interacts with it anymore. Nevertheless, the node can recover, but since its trust is so low, other nodes will still deny future interactions. This brings the need for a mechanism to deal with oscillating nodes efficiently.

4. Model and Assumptions

Our model is based on the past interactions between nodes and the recommendations taken from neighbor nodes. We are not considering routing; however, the model can be used for any type of interaction.

4.1. Collaboration Model

We consider a wireless ad hoc network which consists of an unconstrained number of nodes. Nodes can be mobile phones, PCs, PDAs or any other portable communication devices. Nodes can join and leave the network at any point in time, and they can be stationary or mobile. We assume that each node has a unique MAC address that cannot be altered. The network may also have an unconstrained number of malicious nodes.

A node may ask a neighbor node for a certain service, such as using the internet, a shared printer, computing power, routing a packet to a certain destination, or any other distributed service. We consider the existence of a protection system on the nodes of the network


which can be an antivirus, an IDS, or other protection and detection software. We model this protection system as a probabilistic value, meaning that the system detects a malicious activity with a certain probability.
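Such a probabilistic protection system can be modeled as a simple Bernoulli detector. The sketch below is ours, not the report's simulator; the class name `IDS` and the `detection_rate` parameter are illustrative assumptions:

```python
import random

class IDS:
    """Models the protection system (antivirus, intrusion detection, etc.)
    as a Bernoulli detector: a malicious activity is caught only with a
    fixed probability."""

    def __init__(self, detection_rate, rng=None):
        self.detection_rate = detection_rate   # probability of catching an attack
        self.rng = rng or random.Random()

    def inspect(self, interaction_was_malicious):
        """Return True if the IDS flags the interaction as malicious."""
        if not interaction_was_malicious:
            return False                       # no false positives in this sketch
        return self.rng.random() < self.detection_rate

# A perfect IDS always detects; a blind one never does.
always = IDS(1.0)
never = IDS(0.0)
```

Intermediate detection rates then give the probabilistic behavior assumed by the model.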

4.2. Attack Model

In this work we consider attacks only on the application and transport layers. We are not concerned with attacks on the network layer (routing attacks). Nodes are divided into several categories according to their behavior. Selfish nodes are not willing to collaborate with other nodes. Careless nodes give random recommendations instead of true ones. Malicious nodes might attack other nodes and cause damage. Oscillating nodes have oscillating behavior, for example behaving well for some time to gain high trust and then starting to attack, or a trusted node which has been hacked or compromised by a virus or a malicious node. Trusted nodes behave exactly as others expect them to behave. The most important of all are the malicious nodes and the oscillating nodes, because they are the most dangerous. In our model we consider several activities as attacks:

- Sending malicious code to a node asking for collaboration.
- Overusing the collaborating node's resources.
- Using an unauthorized resource of the collaborating node.
- Breaching or otherwise compromising the collaborating node's privacy.

4.3. Details of the Proposed Model

This section describes the proposed trust model and provides the metrics for assessing trust in the network. The trust value computation is based on peer recommendations and past interaction experience. Past interaction experience is an aggregate measure of the quality of prior collaborations. The following discussion does not differentiate between service provider and service requester, because either of them can be malicious or honest at some point in time. Each peer i maintains a Table of Trust (ToT) in which each row corresponds to one peer that i interacted with in the past. The table stores:

- The peer ID.
- The Past Interaction Experience (PIE) value, which corresponds to the quality of past interactions with the hosting peer.
- The number of times N that peer i interacted with peer j.

ID        PIE     # Interactions
19AFC34   0.765   3
12DE542   0.456   17
345FAD1   0.823   9

Figure 4: Table of Trust
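A per-peer Table of Trust like the one in Figure 4 maps naturally onto a dictionary keyed by peer ID. The following sketch is a hypothetical rendering of the three stored columns; the class and field names are ours, not the report's `ToT` class:

```python
class TableOfTrust:
    """Per-node Table of Trust: one row per peer interacted with in the
    past, storing the PIE value and the number of interactions."""

    def __init__(self):
        self.rows = {}                         # peer_id -> {"pie": float, "n": int}

    def record(self, peer_id, pie, n):
        self.rows[peer_id] = {"pie": pie, "n": n}

    def pie(self, peer_id):
        return self.rows[peer_id]["pie"]

# The example rows from Figure 4.
tot = TableOfTrust()
tot.record("19AFC34", 0.765, 3)
tot.record("12DE542", 0.456, 17)
tot.record("345FAD1", 0.823, 9)
```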

The trust value of peer i in peer j is not necessarily the same as the trust value of peer j in peer i. To compute how much trust peer i has in peer j, peer i asks other peers about their trust in j. Each peer k submits its recommendation (opinion) as a trust value, and the collected values are weighted by how much i trusts the submitting peers. The trust table ToT is used by the hosting peer i in one of the following cases:

- If peer j offers a service that peer i requested, and the requested service is available from more than one service provider, peer i chooses the most trusted peer to deal with.
- If peer i receives a request from peer j to use the services available at peer i, peer i consults its ToT to decide whether to offer its service to peer j or not.
- If peer k requests advice from peer i on whether it should collaborate with peer j, peer i sends the trust value corresponding to peer j from its ToT.

The trust value is affected and updated when one of the following occurs:

After the completion of a service request from peer j, peer i might change the trust value of peer j depending on the experience. Peer i demotes its trust in peer j if system performance is affected during or right after the service, e.g., if one of the resources slows down or becomes unavailable. Peer i can have an IDS which detects unusual behavior or attempts to overuse or access resources without permission.

After peer i asks for service access from peer j, peer i's trust value in peer j is updated to reflect the quality of the collaboration. If the service was successful, the trust value for peer j is promoted. If the service was not completed or resulted in damage (e.g., a virus), that value is demoted. In a more formal way,


a peer i uses T_i(j) to decide whether to collaborate with peer j or not. T_i(j) is computed as follows:

T_i(j) = (1 − α) × PIE_i(j) + α × ( Σ_{k=1}^{M} T_i(k) × T_k(j) ) / ( Σ_{k=1}^{M} T_i(k) )        (1)

Where:

T_i(j) is a value that reflects the trust of peer i in peer j.

PIE_i(j) is a value that peer i maintains and reflects its past interactions with peer j.

α (the weight of second-hand experience) is a weight for others' recommendations; it is a real number between 0 and 1.

M is the number of peers contacted by peer i to give their recommendations. The parameter α can be tuned to give more or less importance to peer recommendations relative to past interactions. Note that T_i(j) is not stored in the ToT; it is used to update the value of PIE and to decide whether to collaborate or not. If T_i(j) is greater than a pre-defined threshold, T_acc, collaboration is accepted; otherwise it is rejected. Another important parameter is the initialization of the trust values in the ToT, i.e., the bootstrapping issue.
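Equation (1) can be read as a weighted blend of first-hand and second-hand experience. The sketch below is our illustrative rendering of it; the function names, the recommendation format, and the fallback when no recommendations are available are assumptions, not part of the report:

```python
def trust(pie_ij, alpha, recommendations):
    """Compute T_i(j): a (1 - alpha) share of peer i's own past-interaction
    experience plus an alpha share of the recommendations of M other peers,
    each weighted by how much i trusts the recommender.

    recommendations: list of (T_i(k), T_k(j)) pairs for the M contacted peers.
    """
    if not recommendations:
        return pie_ij                          # no second-hand experience available
    weighted = sum(t_ik * t_kj for t_ik, t_kj in recommendations)
    weights = sum(t_ik for t_ik, _ in recommendations)
    return (1 - alpha) * pie_ij + alpha * weighted / weights

def accept(pie_ij, alpha, recommendations, t_acc):
    """Collaborate only if T_i(j) exceeds the acceptance threshold T_acc."""
    return trust(pie_ij, alpha, recommendations) > t_acc
```

With α = 0 the formula reduces to peer i's own experience; with α = 1 it relies entirely on the weighted recommendations.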

The model initializes the PIE values for all peers to T_acc + δ, where δ is a positive parameter. After each collaboration, an Intrusion Detection System (IDS) determines whether the collaboration was successful. Since the IDS might fail to detect some attacks, it is modeled by a probabilistic function. After every collaboration, the value of PIE is updated using a reward-punishment heuristic to limit the impact of malicious peers and eventually expel them. The value of PIE_i(j), which is maintained by peer i and reflects the trust of peer i in peer j, is updated as follows:

Case of unsuccessful collaboration:

Let S and N denote the number of successful collaborations and the total number of collaborations between peer i and peer j, respectively. Let PIE_i(j)_old be the value that corresponds to all peer i's prior experiences with peer j. We define PIE_i(j)_old as the fraction S/N. In case of an unsuccessful collaboration, the number of successful collaborations S remains the same while the total number of collaborations N increases. This means that:

PIE_i(j)_new = S / (N + 1)

Hence, we have:

PIE_i(j)_new = (S/N) × N / (N + 1)

Finally, we get:

PIE_i(j)_new = PIE_i(j)_old × N / (N + 1)        (2)

Case of successful collaboration:

In this case, the number of successful collaborations and the total number of collaborations both increase, thus:

PIE_i(j)_new = (S + 1) / (N + 1)

This means that:

PIE_i(j)_new = (S/N) × N / (N + 1) + 1 / (N + 1)

Finally:

PIE_i(j)_new = PIE_i(j)_old × N / (N + 1) + 1 / (N + 1)        (3)

Note that after any kind of collaboration, N increases by 1. If T_i(j) < T_acc, then PIE_i(j) is demoted according to Equation (2), and no collaboration takes place between peer i and


peer j. The decision whether to demote or promote PIE is made by the intrusion detection system, depending on the quality of the collaboration. Note that, in general, T_acc can differ from one peer to another.
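Equations (2) and (3) reduce to one small update rule under the definition PIE = S/N. The sketch below is ours (the function name is an assumption):

```python
def update_pie(pie_old, n, successful):
    """Reward-punishment update of PIE_i(j) after a collaboration.

    pie_old is S/N, the fraction of successful collaborations so far.
    Unsuccessful (Eq. 2): S stays the same, N -> N + 1.
    Successful  (Eq. 3): S -> S + 1, N -> N + 1.
    Returns (pie_new, n_new).
    """
    if successful:
        pie_new = pie_old * n / (n + 1) + 1 / (n + 1)
    else:
        pie_new = pie_old * n / (n + 1)
    return pie_new, n + 1
```

For example, with S = 3 of N = 4 collaborations successful (PIE = 0.75), a fifth successful collaboration gives 4/5 = 0.8, while an unsuccessful one gives 3/5 = 0.6, matching the direct fractions.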

4.4. Bootstrapping

When a node joins the network without previous knowledge about any of the nodes in the network, it may face some bootstrapping issues. If it gives all other nodes a low PIE, it will not interact with anyone; if it gives all nodes a high PIE, it will be at high risk. To solve this problem we introduce an initialization parameter called delta (δ), which is used to initialize PIE. The PIE of all nodes is initialized to T_acc + δ, and δ has to be set carefully so that interactions will happen without the situation becoming too risky.

Figure 5: Collaboration flow diagram
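Bootstrapping can be sketched as initializing every PIE entry to T_acc + δ. The threshold and δ values below are example numbers, not values prescribed by the report; the interaction count starts at 1 and the success count at 0, as in the simulation scheme of Section 6.2:

```python
T_ACC = 0.5    # example acceptance threshold (assumed value)
DELTA = 0.1    # example initialization distance delta (assumed value)

def bootstrap_tot(peer_ids):
    """Initialize the trust table for a newcomer: every known peer starts
    at PIE = T_acc + delta, with one counted interaction and no successes."""
    return {pid: {"pie": T_ACC + DELTA, "n": 1, "s": 0} for pid in peer_ids}

table = bootstrap_tot(["19AFC34", "12DE542"])
```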

4.5. Redemption

If an honest node is compromised and then recovers, it will be very difficult to make other nodes trust it again. We propose a method to solve this issue: listening to the recommendations of the oscillating node without interacting with it. If the recommendations given by the oscillating node prove correct, its PIE is promoted; that is, if the oscillating node says someone is good and that node really is good, its PIE is promoted, and if it says that someone is bad and that node really is bad, its PIE is promoted as well.
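The redemption idea can be sketched as follows. The report does not specify the exact promotion rule, so this sketch assumes the same reward-punishment update as Equations (2) and (3); the function and parameter names are ours:

```python
def redeem(pie_old, n, said_good, is_good):
    """Redemption sketch: listen to an oscillating node's recommendation
    without collaborating with it, and promote its PIE whenever the
    recommendation matches the observed truth (good called good, or bad
    called bad); otherwise demote it. Returns (pie_new, n_new)."""
    if said_good == is_good:
        # correct recommendation: promote, as in a successful collaboration
        return pie_old * n / (n + 1) + 1 / (n + 1), n + 1
    # incorrect recommendation: demote, as in an unsuccessful collaboration
    return pie_old * n / (n + 1), n + 1
```

This lets a recovered node rebuild trust without exposing other nodes to the risk of direct interaction.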

5. Performance measures

The most important measure for such a model is how fast it can detect all the malicious nodes in the network. The other important measures are whether it can detect all the malicious nodes without flagging non-malicious nodes as malicious (false rejection rate) and without missing any malicious node (false acceptance rate). We consider the number of interactions with malicious nodes (risky interactions) as our measure of how fast the model detects malicious nodes. Ideally, all malicious nodes are detected with the minimum number of interactions with them, so that the network is as safe as possible.

5.1.1. Percentage of risky interactions

Risky interactions are defined as the number of interactions with malicious nodes until the detection of all malicious nodes.


The percentage of risky interactions is calculated as the ratio between the number of interactions with malicious nodes until all malicious nodes are detected and the total number of interactions until detection.
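This ratio can be written as a one-line helper; the names are illustrative.

```java
public class RiskyInteractions {
    /** Percentage of risky interactions as defined above (illustrative helper). */
    public static double percentRisky(int risky, int total) {
        return 100.0 * risky / total;
    }
}
```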

5.1.2. The speed of detecting a malicious node

We define the speed of detecting a malicious node as the slope of the change in its average PIE across all honest nodes. If we take one malicious node and observe the change of its PIE in all honest nodes, the average will eventually drop below T_acc. The slope of this change indicates how fast honest nodes can detect a malicious node.
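One simple way to estimate this slope is from the first and last samples of the average-PIE curve; this endpoint estimate is an illustrative helper, not part of the report's simulator.

```java
public class DetectionSpeed {
    /**
     * Detection speed as the slope of a malicious node's average PIE over
     * the number of interactions, estimated from the first and last samples.
     * A steeper negative slope means faster detection.
     */
    public static double slope(double[] avgPie) {
        return (avgPie[avgPie.length - 1] - avgPie[0]) / (avgPie.length - 1);
    }
}
```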

6. Experiments

A node is modeled as a data structure consisting of a table of trust (ToT) that stores PIEs, a table that stores the number of interactions with the corresponding nodes, and the node's ID. The network is modeled as a collection of nodes and is initialized such that no two nodes have interacted with each other before.

6.1. Assumptions and Relaxations

First of all, we do not consider node mobility in our experiments, as it does not affect the performance of trust at the application layer. We assume that every node has IDS software installed that can detect malicious activities with some probability; we therefore model the IDS as a probabilistic variable. The attacks we consider are those addressed by IDS systems, such as privacy compromise, sending malicious code, and unauthorized access.

6.2. Simulation Scheme

- Initialize all the nodes in the system:
  o Set the PIE for all nodes to T_acc + δ.
  o Set the number of interactions with all nodes to 1 (not zero, because zero would cause an error in the trust calculation).
  o Set the number of successes with all nodes to zero.
- Randomly choose m nodes to be malicious.
- Start the simulation:
  o Choose two nodes at random to interact with each other.
  o Update the trust:
    - Take advice and calculate the trust.
    - If the trust is less than T_acc, do not interact and demote the PIE.
    - If the trust is greater than T_acc, interact and then consult the IDS:
      - If the collaboration was successful, promote the PIE.
      - Else, demote the PIE.
  o Check for nodes whose PIE is less than T_acc according to all trusted nodes, and append them to an array.
  o If a caught node is really malicious, increment k.
  o Loop until k equals the number of malicious nodes in the system.
- Check for false positives.
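The scheme above can be condensed into a short sketch for a single peer. The promote/demote updates use the running-average form that also appears in the appendix code; peer selection, advice taking, and the IDS are stubbed out, and all names here are illustrative.

```java
// Condensed sketch of the simulation loop for one observed peer: each
// interaction folds a success (1) or failure (0) into the running-average
// PIE, starting from the bootstrapped value T_acc + δ.
import java.util.Random;

public class SimSketch {
    static final double T_ACC = 0.5, DELTA = 0.2;

    /** Promote: fold one successful interaction into the running average. */
    static double promote(double pie, int n) {
        return pie * n / (n + 1.0) + 1.0 / (n + 1.0);
    }

    /** Demote: fold one failed interaction into the running average. */
    static double demote(double pie, int n) {
        return pie * n / (n + 1.0);
    }

    public static void main(String[] args) {
        double pie = T_ACC + DELTA; // bootstrapped PIE for one peer
        int n = 1;                  // start at 1 to avoid division problems
        Random rand = new Random(42);
        for (int step = 0; step < 100; step++) {
            boolean success = rand.nextDouble() < 0.2; // a mostly-failing peer
            pie = success ? promote(pie, n) : demote(pie, n);
            n++;
        }
        System.out.println(pie); // drifts toward the peer's true success rate
    }
}
```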

6.3. Defenseless vs. fortified

In this experiment we examine how deploying the model in a network improves its performance. The measure used is the number of risky interactions; the defenseless network is stopped at the same number of total interactions at which the fortified network detects all malicious nodes. The IDS is responsible for deciding whether an interaction was successful. This experiment was done on a network with 50 nodes. The weight of the second-hand experience was 0.5, and the initialization distance was 0.5 as well. The IDS error rate was set to zero, which is not realistic, but we needed to isolate the effect of the IDS to focus on the detection rate.


[Figure 6 plot: % malicious nodes (y-axis) vs. risky interactions (x-axis), with and without our model]

Figure 6 Defenseless vs. fortified

6.4. Speed of detecting a malicious node

In this section two experiments were carried out. The first measures the speed of detecting a malicious node in an environment where malicious nodes do not cooperate; the second measures it in an environment where they do. Cooperating malicious nodes give their fellow malicious nodes high recommendations when asked for their opinions, and give honest nodes low recommendations.

6.4.1. Speed of detecting a non-cooperating malicious node

This experiment was done on a network with 50 nodes, 5 of which are malicious. T_acc was set to 0.5 and the weight of the second-hand experience was set to 0.5 as well. The PIEs in the table of trust of all nodes were initialized to T_acc + 0.2. The IDS error rate was set to 0, which is not realistic, but we need to isolate the other parameters to obtain a benchmark for the slope of the changing PIE. In this experiment we assume that malicious nodes attack but do not lie when asked for their recommendations.


[Figure 7 plot: average PIE (y-axis) vs. number of interactions (x-axis)]

Figure 7: Speed of detecting a malicious node

As seen from the graph above, after around 30 interactions the average PIE of the malicious node dropped below T_acc. Since 5 of the 50 nodes are malicious, there are 45 honest nodes, and only 30 of them had to interact with this malicious node in order for all of them to detect it.

6.4.2. Speed of detecting a cooperating malicious node


This experiment measures the speed of detecting a cooperating malicious node. The measure used is the average PIE of one of the cooperating malicious nodes. The malicious nodes cooperate by giving other malicious nodes high recommendations and giving honest nodes low recommendations. We define a parameter called the lying distance: malicious nodes give recommendations of T_acc + lying distance or T_acc − lying distance. This experiment was done on a network with 50 nodes, 5 of which are malicious. The weight of the second-hand experience was 0.5 and T_acc was 0.5 as well. The initialization distance was set to 0.2, the lying distance was set to 0.4, and the IDS error rate was zero.

[Figure 8 plot: average PIE of a malicious node (y-axis) vs. number of interactions (x-axis)]

As noticed from the graph above, the malicious node is detected after about 40 interactions. More interactions are needed than in the previous case because, when a node asks about a malicious node, it gets a higher rating thanks to that node's friends. However, even with this malicious collaboration, all malicious nodes are detected, and for all honest nodes to detect a malicious node, each honest node needs to collaborate with it only once.
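The lying-distance rule described above can be sketched as follows; the names are illustrative.

```java
public class LyingRecommender {
    /**
     * Sketch of the lying-distance rule: a cooperating malicious node
     * answers T_acc + lyingDistance when asked about a fellow malicious
     * node and T_acc - lyingDistance when asked about an honest one.
     */
    public static double recommend(boolean targetIsMalicious, double tAcc, double lyingDistance) {
        return targetIsMalicious ? tAcc + lyingDistance : tAcc - lyingDistance;
    }
}
```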

Figure 8: Speed of detecting a collaborating malicious node

6.5. Speed of detecting a malicious oscillating node

This experiment measures the speed of detecting an oscillating malicious node, i.e., a malicious node that acts honestly for a while and then starts attacking. It was modeled as a node that starts out honest and turns malicious once its average PIE goes above 0.8. The experiment was done on a network with 25 nodes, 1 of which is oscillating. The weight of the second-hand experience was set to 0.5 and T_acc was set to 0.5 as well. The initialization distance was 0.2 and the IDS error rate was zero.

[Figure 9 plot: average PIE (y-axis) vs. number of risky interactions (x-axis)]

Figure 9 Speed of detecting an oscillating malicious node

As noticed from the graph above, the oscillating node reached an average PIE above 0.8 after about 20 interactions. It then started acting maliciously and was detected after fewer than 40 interactions, which is very good given that it had built up a very high rating before attacking.

6.5.1. Speed of recovering an honest oscillating node

This experiment measures the speed of recovering an honest node that was compromised and then recovered. The speed of recovery is measured by the change in the node's average PIE from the point of view of all other honest nodes. This experiment was done on a network with 25 nodes, 1 of which was an oscillating node. The weight of the second-hand experience was set to 0.5 and T_acc was 0.5 as well. The initialization distance was 0.2 and the IDS error rate was zero.


[Figure 10 plot: average PIE (y-axis) vs. number of risky interactions (x-axis)]

Figure 10 Speed of recovering an honest oscillating node

As noticed from the graph above, the node's average PIE goes below T_acc after about 50 interactions. When the node then recovers, it needs about 30 interactions for the other nodes to trust it again.

6.5.2. IDS error rate vs. number of risky interactions

This experiment measures the effect of the IDS error rate on the number of risky interactions. Our model depends heavily on the accuracy of the IDS because it is the component that judges the quality of each interaction. This experiment was done on a network with 25 nodes, 4 of which are malicious. The weight of the second-hand experience was set to 0.5 and T_acc was 0.5 as well. The initialization distance was set to 0.2, and the IDS error rate was varied from 0 to 0.4.


[Figure 11 plot: risky interactions (y-axis) vs. IDS error rate (x-axis)]

Figure 11 IDS error rate vs. percentage of risky interactions

As noticed from the graph above, as the IDS error rate increases, the number of risky interactions increases exponentially. However, it stays reasonable up to an IDS error rate of 15%.

6.5.3. Weight of second hand experience vs. percentage of risky interactions

This experiment measures the effect of changing the weight of the second-hand experience on the percentage of risky interactions. It was done on a network with 50 nodes, 10 of which are malicious. The initialization distance was set to 0.2 and T_acc was set to 0.5. The IDS error rate was set to zero, and the weight of the second-hand experience was varied from 0 to 1 in increments of 0.05. The malicious nodes cooperate by answering T_acc + 0.4 when asked about their friends and T_acc − 0.4 when asked about others. If the malicious nodes did not lie when asked for their opinions, the weight of the second-hand experience would have no effect on the system, because all opinions would be honest and it would not matter how much weight is given to each one.
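The role of the weight a can be sketched as follows. This is a hedged reading with illustrative names, not the simulator's exact computeTrust: a = 0 uses only first-hand experience, a = 1 uses only the advisors' (possibly lying) opinions.

```java
public class TrustBlend {
    /**
     * Blend first-hand PIE with advisors' recommendations, each weighted by
     * how much we trust the advisor and normalized. A sketch under stated
     * assumptions, not a definitive implementation.
     */
    public static double trust(double ownPie, double[] advisorPie, double[] advisorRec, double a) {
        double weighted = 0, weightSum = 0;
        for (int i = 0; i < advisorPie.length; i++) {
            weighted += advisorPie[i] * advisorRec[i];
            weightSum += advisorPie[i];
        }
        double secondHand = weightSum > 0 ? weighted / weightSum : ownPie;
        return (1 - a) * ownPie + a * secondHand;
    }
}
```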


[Figure 12 plot: % risky interactions (y-axis) vs. weight of second hand experience (x-axis)]

Figure 12 Weight of second hand experience vs. percentage of risky interactions

As noticed from the graph above, when a node relies more on others' opinions while many of them are lying, the percentage of risky interactions increases. In such an environment it is therefore better to choose a lower weight for the second-hand experience.

7. Conclusion

In this project, we developed a trust model for ad-hoc networks. The main concern of this model is to provide a trustworthy community that can find and eliminate malicious nodes. We also focus our efforts on the level of services provided by or required by a node, rather than on forwarding or receiving packets. This makes our work different from the mainstream of trust models found in the literature.


Our model encourages honest collaboration and has a very high detection rate of malicious nodes. Through several experiments, we were able to demonstrate the robustness of our model. Our main contribution to the literature is our method of detecting oscillating nodes and collaborative attackers.


References

[1] http://searchmobilecomputing.techtarget.com/sDefinition/0,,sid40_gci213462,00.html
[2] C. Adams and S. Farrell, "Internet X.509 Public Key Infrastructure: Certificate Management Protocols," March 1999.
[3] P. R. Zimmerman, "Pretty Good Privacy," in The Official PGP User's Guide, MIT Press, April 1995.
[4] G. Zacharia and P. Maes, "Trust Management through Reputation Mechanisms," Applied Artificial Intelligence, vol. 14, no. 8, 2000.
[5] Y. Ren and A. Boukerche, "Modeling and Managing the Trust for Wireless and Mobile Ad Hoc Networks," in Proc. IEEE International Conference on Communications (ICC), 2008.
[6] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina, "The EigenTrust Algorithm for Reputation Management in P2P Networks," in Proc. 12th International World Wide Web Conference, pp. 640–651, 2003.
[7] P. Veeraraghavan and V. Limaye, "Trust in Mobile Ad Hoc Networks," in Proc. IEEE International Conference on Telecommunications, 2007.
[8] H. Yang, H. Shu, X. Meng, and S. Lu, "SCAN: Self-Organized Network-Layer Security in Mobile Ad Hoc Networks," IEEE J. Selected Areas in Communications, 24(2):261–273, Feb. 2006.
[9] R. Zhou and K. Hwang, "PowerTrust: A Robust and Scalable Reputation System for Trusted P2P Computing," IEEE Trans. on Parallel and Distributed Systems, 2007.
[10] T. Anantvalee and J. Wu, "A Survey on Intrusion Detection in Mobile Ad Hoc Networks," in Wireless/Mobile Network Security, Springer, 2006, Ch. 7, pp. 170–196.
[11] K. Aberer and Z. Despotovic, "Managing Trust in a Peer-2-Peer Information System," in Proc. Tenth International Conference on Information and Knowledge Management, October 05–10, 2001, Atlanta, Georgia, USA.
[12] R. Chen and W. Yeager, "Poblano: A Distributed Trust Model for Peer-to-Peer Networks," Technical report, Sun Microsystems, Santa Clara, CA, USA, February 2003.
[13] B. Wu, J. Chen, J. Wu, and M. Cardei, "A Survey of Attacks and Countermeasures in Mobile Ad Hoc Networks," in Wireless Network Security, Y. Xiao, X. Shen, and D. Z. Du (eds.), Springer, Network Theory and Applications, Vol. 17, 2006, ISBN: 0-387-28040-5.
[14] Y. Hu, A. Perrig, and D. Johnson, "Packet Leashes: A Defense Against Wormhole Attacks in Wireless Ad Hoc Networks," in Proc. IEEE INFOCOM, 2002.
[15] J. van der Merwe, D. Dawoud, and S. McDonald, "A Survey on Peer-to-Peer Key Management for Military Type Mobile Ad Hoc Networks," in Proc. Military Information and Communications Symposium of South Africa (MICSSA'05), 2005.


Appendix I: Simulator code

Class ToT

import java.util.Hashtable;

public class ToT {

    private class pair {
        double PIE;
        int numOfInteractions;

        public pair(double PIE, int inter) {
            this.PIE = PIE; // 'this.' is required: the parameter shadows the field
            numOfInteractions = inter;
        }
    }

    private Hashtable<Integer, pair> table;

    public ToT() {
        table = new Hashtable<Integer, pair>();
    }

    public void add(int id, double p, int in) {
        table.put(id, new pair(p, in));
    }

    public double getPIE(int ID) {
        return table.get(ID).PIE;
    }

    public int getNumOfInteractions(int ID) {
        return table.get(ID).numOfInteractions;
    }

    public void update(int id, double newPIE, int newNumOfInt) {
        table.remove(id);
        table.put(id, new pair(newPIE, newNumOfInt));
    }
}


Class node

import java.util.Random;

public class node {

    /** weight of the second hand experience */
    private static double a;

    /** table of trust as a hashtable where the keys are the nodes' IDs */
    private ToT ToT;

    /** number of neighbors */
    private static int N;

    /** minimum accepted trust */
    private static double Tacc;

    /** flag marking malicious nodes */
    private boolean mal;

    /**
     * mode of lying: n - no lying, r - random recommendations,
     * d - distance lying (+ for friends, - for others)
     */
    private static char lying;

    /** distance of lying in case of lying mode d */
    private static double dl;

    /** initialization distance */
    private static double d;

    /** the node's ID */
    private int ID;

    /** IDS error rate */
    private static double IDSError;

    /** honest oscillating: an honest node that was compromised */
    private boolean osHon;

    /** malicious oscillating: acts well to gain trust, then attacks */
    private boolean osMal;

    public node(int id) {
        ID = id;
        ToT = new ToT();
    }

    /**
     * initializes the table of trust of the node
     *
     * @param neighbors
     */
    public void initializeToT(node[] neighbors) {
        Random rand = new Random(System.currentTimeMillis());
        int i = 0;
        if (mal) {
            if (lying == 'n') {
                for (i = 0; i < neighbors.length; i++)
                    if (neighbors[i].getID() != this.ID)
                        ToT.add(neighbors[i].getID(), Tacc + d, 1);
            } else if (lying == 'r') {
                for (i = 0; i < neighbors.length; i++) {
                    if (neighbors[i].getID() != this.ID)
                        ToT.add(neighbors[i].getID(), rand.nextDouble(), 1);
                }
            } else if (lying == 'd') {
                for (i = 0; i < neighbors.length; i++) {
                    if (neighbors[i].getID() != this.ID) {
                        if (!neighbors[i].isMal()) {
                            ToT.add(neighbors[i].getID(), Tacc + dl, 1);
                        } else {
                            ToT.add(neighbors[i].getID(), Tacc - dl, 1);
                        }
                    }
                }
            }
        } else {
            for (i = 0; i < neighbors.length; i++)
                if (neighbors[i].getID() != this.ID)
                    ToT.add(neighbors[i].getID(), Tacc + d, 1);
        }
    }

    /**
     * interact with node i and consult IDS
     *
     * @param i
     * @return true if interaction was successful and false otherwise
     */
    public boolean interact(node i) {
        if (i.getID() == this.ID) {
            System.err.println("Error: cannot interact with self");
            return false;
        }
        // double IDS;
        // Random rand = new Random(System.currentTimeMillis());
        // IDS = rand.nextDouble();
        /*
         * if (i.isMal()) { if (IDS >= (1 - IDSError)) return true; else return
         * false; } else
         */
        return true;
    }

    /**
     * promote node i, increment the number of interactions and update ToT
     *
     * @param i
     */
    private void promote(node i) {
        double newPIE;
        int newNumOfInt;

        newNumOfInt = this.ToT.getNumOfInteractions(i.getID());
        newPIE = this.ToT.getPIE(i.getID())
                * ((double) newNumOfInt / (double) (newNumOfInt + 1));
        newPIE += (1.0 / (newNumOfInt + 1));
        newNumOfInt++;
        ToT.update(i.getID(), newPIE, newNumOfInt);
    }

    /**
     * demote node i, increment the number of interactions if there was one,
     * and update ToT
     *
     * @param i
     * @param interacted
     */
    private void demote(node i, boolean interacted) {
        double newPIE;
        int newNumOfInt;

        newNumOfInt = this.ToT.getNumOfInteractions(i.getID());
        newPIE = this.ToT.getPIE(i.getID())
                * ((double) newNumOfInt / (double) (newNumOfInt + 1));
        if (interacted)
            newNumOfInt++;
        ToT.update(i.getID(), newPIE, newNumOfInt);
    }

    /**
     * compute trust, interact with node x if trust > Tacc, consult IDS and
     * update ToT
     *
     * @param x
     * @return true if an interaction occurred and false otherwise
     */
    public boolean interactAndUpdateTrust(node x, node[] neighbors) {
        if (x.getID() == this.ID) {
            System.err.println("Error: cannot interact with self");
            return false;
        }
        double trust;
        double IDS;
        int i;
        Random rand = new Random(System.currentTimeMillis());

        trust = this.computeTrust(x, neighbors);
        // System.out.println("trust: " + trust);

        if (trust < node.Tacc) {
            if (this.getRecomendation(x) > node.Tacc)
                this.demote(x, false);
            return false;
        } else {
            if (x.isMal())
                IDS = rand.nextDouble();
            else
                IDS = 1;

            if (IDS >= (1 - IDSError)) {
                this.promote(x);
                for (i = 0; i < node.N; i++) {
                    if (neighbors[i].getID() != this.getID()
                            && this.getRecomendation(neighbors[i]) < node.Tacc)
                        if (neighbors[i].getRecomendation(x) >= node.Tacc)
                            this.promote(neighbors[i]);
                }
                return true;
            } else {
                this.demote(x, true);
                for (i = 0; i < node.N; i++) {
                    if (neighbors[i].getID() != this.getID()
                            && this.getRecomendation(neighbors[i]) < node.Tacc)
                        if (neighbors[i].getRecomendation(x) < node.Tacc)
                            this.promote(neighbors[i]);
                }
                return true;
            }
        }
    }

    public double computeTrust(node j, node neighbors[]) {
        double adviceSum = 0;
        double trustSum = 0;
        double trust;
        int i;
        for (i = 0; i < neighbors.length; i++) {
            if (neighbors[i].getID() != ID && neighbors[i].getID() != j.getID()) {
                adviceSum += ((ToT.getPIE(neighbors[i].getID()))
                        * (neighbors[i].getRecomendation(j)));
                trustSum += ToT.getPIE(j.ID);
            }
        }
        trust = (1.0 - node.a) * (this.ToT.getPIE(j.getID())) + node.a
                * (adviceSum / trustSum);
        return trust;
    }

    public static void setTacc(double acc) {
        Tacc = acc;
    }

    public static void setN(int n) {
        N = n;
    }

    public static void setAlpha(double r) {
        a = r;
    }

    public static void setLying(char x) {
        lying = x;
    }

    public static void setLyingDist(double x) {
        dl = x;
    }

    public static void setInitDist(double x) {
        d = x;
    }

    public boolean isMal() {
        return this.mal;
    }

    public boolean isOsMal() {
        return this.osMal;
    }

    public boolean isOsHon() {
        return this.osHon;
    }

    public void setMal() {
        mal = true;
    }

    public void unSetMal() {
        mal = false;
    }

    public void setOsMal() {
        osMal = true;
    }

    public void setOsHon() {
        osHon = true;
    }

    public int getID() {
        return ID;
    }

    public void setID(int id) {
        ID = id;
    }

    public static void setIDSError(double error) {
        IDSError = error;
    }

    public static double getTacc() {
        return Tacc;
    }

    public static double getDist() {
        return d;
    }

    public double getRecomendation(node i) {
        if (getID() != i.getID())
            return ToT.getPIE(i.getID());
        else
            return 0.0;
    }

    public int getNumOfInteractions(node i) {
        return ToT.getNumOfInteractions(i.getID());
    }
}


Class network

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Arrays;
import java.util.Random;

public class network {

    private int s; // size of the network
    private node[] net;
    private int malicious;
    private int oscillatingMal;
    private int oscillatingHon;
    private int falsePositive;
    private int rejected_healthy;
    private char mode;
    private char oscMode;
    private final int max = 100000000; // maximum number of allowed interactions
    private Random rand;
    private int testingNodeMal; // the ID of the malicious node that is tested
    private double avgTrustMal; // the average PIE of the malicious node that is tested
    private boolean penalize;   // penalize nodes that give wrong recommendations or not
    private int trials;         // total number of tried collaborations
    private boolean defenseless;

    public network(String FileName, int n, int m, double a, double tac,
            double de, int iter, char mod, double tol, char LM, double dis,
            boolean pen, boolean def, char osmod, int osc) throws IOException {

        BufferedWriter out = new BufferedWriter(new FileWriter(FileName + ".xls"));
        double[] rr;
        int[] mm;
        int i;
        int iterations = 0;
        int tes = 0;
        int div = iter;
        rejected_healthy = 0;

        rand = new Random(System.currentTimeMillis());
        penalize = pen;
        defenseless = def;
        mode = mod;
        oscMode = osmod;

        out.write("\t\t\tnodes\tmalicious\ta\tTacc\tdelta\titerations\t"
                + "mode\tIDS error\tlying mode\tlying dist\tpenalize\n");
        out.write("\t\t\t" + n + "\t" + m + "\t" + a + "\t" + tac + "\t" + de
                + "\t" + iter + "\t" + mode + "\t" + tol + "\t" + LM + "\t"
                + dis + "\t" + penalize + "\n");

        // change alpha
        if (mode == 'a') {
            rr = generateArray(0, 1, 0.05);
            // double[] rr = {0.5};
            out.write("weight of 2nd hand\tinteractions\tfalsePositive"
                    + "\trejected healthy\ttrials\n");
            for (double r : rr) {
                System.out.println(r);
                falsePositive = 0;
                iterations = 0;
                div = iter;
                rejected_healthy = 0;
                trials = 0;
                for (i = 0; i < iter; i++) {
                    initializeNetwork(n, r, tac, LM, m, dis, de, tol, osc);
                    tes = testNetwork();
                    if (tes < max)
                        iterations += tes;
                    else
                        div--;
                }
                iterations /= div;
                System.out.println(div);
                out.write(1 - r + "\t" + iterations + "\t"
                        + ((double) falsePositive / (double) div) / s + "\t"
                        + (double) rejected_healthy / (double) (trials - iterations)
                        + "\t" + ((double) trials / (double) div) + "\n");
            }
        }

        // change Tacc
        else if (mode == 't') {
            rr = generateArray(0.1, 1, 0.01);
            out.write("Tacc\tinteractions\tfalsePositive\trejected"
                    + " healthy\ttrials\n");
            for (double r : rr) {
                System.out.println(r);
                falsePositive = 0;
                iterations = 0;
                div = iter;
                rejected_healthy = 0;
                for (i = 0; i < iter; i++) {
                    this.initializeNetwork(n, a, r, LM, m, dis, de, tol, osc);
                    tes = testNetwork();
                    if (tes < max)
                        iterations += tes;
                    else
                        div--;
                }
                iterations /= div;
                out.write(r + "\t" + iterations + "\t" + (double) falsePositive
                        / (double) div / s + "\t" + (double) rejected_healthy
                        / (double) (trials - iterations) + "\t"
                        + ((double) trials / (double) div) + "\n");
            }
        }

        // change number of nodes
        else if (mode == 'n') {
            mm = generateArray(10, n, 1);
            out.write("#nodes\tinteractions\tfalsePositive\trejected"
                    + " healthy\ttrials\n");
            for (int r : mm) {
                System.out.println(r);
                falsePositive = 0;
                iterations = 0;
                div = iter;
                rejected_healthy = 0;
                trials = 0;
                for (i = 0; i < iter; i++) {
                    this.initializeNetwork(r, a, tac, LM, m, dis, de, tol, osc);
                    tes = testNetwork();
                    if (tes < max)
                        iterations += tes;
                    else
                        div--;
                }
                iterations /= div;
                out.write(r + "\t" + iterations + "\t" + (double) falsePositive
                        / (double) div / s + "\t" + (double) rejected_healthy
                        / (double) trials + "\t"
                        + ((double) trials / (double) div) + "\t" + malicious
                        + "\n");
            }
        }

        // change number of malicious nodes
        else if (mode == 'm') {
            mm = generateArray(5, m, 5);
            out.write("#malicious\tinteractions\tfalsePositive\trejected"
                    + " healthy\ttrials\n");
            for (int r : mm) {
                System.out.println((int) (n * (r / 100.0)));
                falsePositive = 0;
                iterations = 0;
                div = iter;
                rejected_healthy = 0;
                trials = 0;
                for (i = 0; i < iter; i++) {
                    // System.out.println(i);
                    this.initializeNetwork(n, a, tac, LM, r, dis, de, tol, osc);
                    tes = testNetwork();
                    if (tes < max)
                        iterations += tes;
                    else
                        div--;
                }
                iterations /= div;
                trials /= div;
                out.write(r + "\t" + ((double) iterations / (double) trials)
                        * 100.0 + "\t" + (double) falsePositive / (double) div
                        + "\t" + (double) rejected_healthy
                        / (double) (trials - iterations) + "\t"
                        + ((double) trials / (double) div) + "\t" + div + "\n");
            }
        }

        // change delta
        else if (mode == 'd') {
            rr = generateArray(0.0, de, 0.05);
            out.write("distance\tinteractions\tfalsePositive\trejected"
                    + " healthy\ttrials\n");
            for (double r : rr) {
                System.out.println(r);
                falsePositive = 0;
                iterations = 0;
                div = iter;
                rejected_healthy = 0;
                trials = 0;
                for (i = 0; i < iter; i++) {
                    this.initializeNetwork(n, a, tac, LM, m, dis, 0.5 - r, tol, osc);
                    tes = testNetwork();
                    if (tes < max)
                        iterations += tes;
                    else
                        div--;
                }
                iterations /= div;
                out.write(r + "\t" + iterations + "\t" + (double) falsePositive
                        / (double) div / s + "\t" + (double) rejected_healthy
                        / (double) (trials - iterations) + "\t"
                        + ((double) trials / (double) div) + "\n");
            }
        }

        // change IDS error rate
        else if (mode == 'i') {
            malicious = m;
            rr = generateArray(0.0, tol, 0.05);
            out.write("IDS error\tinteractions\tfalsePositive\trejected"
                    + " healthy\ttrials\n");
            for (double r : rr) {
                System.out.println(r);
                falsePositive = 0;
                iterations = 0;
                div = iter;
                rejected_healthy = 0;
                trials = 0;
                for (i = 0; i < iter; i++) {
                    this.initializeNetwork(n, a, tac, LM, m, dis, de, r, osc);
                    tes = testNetwork();
                    if (tes < max)
                        iterations += tes;
                    else
                        div--;
                }
                iterations /= div;
                out.write(r + "\t" + iterations + "\t" + falsePositive + "\t"
                        + (double) rejected_healthy
                        / (double) (trials - iterations) + "\t"
                        + ((double) trials / (double) div) + "\n");
            }
        }

        // test avg PIE for a malicious node
        else if (mode == '-') {
            iterations = 0;
            div = iter;
            rejected_healthy = 0;
            trials = 0;
            out.write("interactions\tfalsePositive\trejected"
                    + " healthy\ttrials\n");
            // System.out.println("********************************\n\n\n\n");
            this.initializeNetwork(n, a, tac, LM, m, dis, de, tol, osc);
            tes = testNetwork();
            if (tes < max)
                iterations += tes;
            else {
                // break;
                div--;
            }
            iterations /= div;
            out.write(iterations + "\t" + (double) falsePositive / (double) div
                    / s + "\t" + (double) rejected_healthy / (double) trials
                    + "\t" + ((double) (trials - iterations) / (double) div)
                    + "\n");
            System.out.println();
            // System.out.println(div);
        }
        out.close();
    }

    public void initializeNetwork(int n, double al, double tacc, char lMode,
            int mal, double lDist, double initDist, double IDSError, int os) {

        int i;
        net = new node[n];
        s = n;
        malicious = (int) (n * (mal / 100.0));
        oscillatingMal = (int) (n * (os / 100.0));
        oscillatingHon = (int) (n * (os / 100.0));
        node.setAlpha(al);
        node.setTacc(tacc);
        node.setLying(lMode);
        node.setInitDist(initDist);
        node.setLyingDist(lDist);
        node.setIDSError(IDSError);

        // creating nodes and giving them IDs
        for (i = 0; i < s; i++) {
            if (net[i] == null)
                net[i] = new node(i);
        }

        // initialize nodes' ToTs
        for (i = 0; i < s; i++) {
            net[i].initializeToT(net);
        }

        // setting malicious nodes
        int r1;
        if (oscMode != 'm' && oscMode != 'h') {
            for (i = 0; i < malicious; i++) {
                r1 = rand.nextInt(n);
                if (!net[r1].isMal()) {
                    if (mode == '-') // set the malicious node used to test
                                     // the speed of detection
                        testingNodeMal = r1;
                    net[r1].setMal();
                } else
                    i--;
            }
        } else if (oscMode == 'm') {
            for (i = 0; i < oscillatingMal; i++) {
                r1 = rand.nextInt(n);
                if (!net[r1].isOsMal()) {
                    testingNodeMal = r1;
                    net[r1].setOsMal();
                } else
                    i--;
            }
        } else if (oscMode == 'h') {
            for (i = 0; i < oscillatingHon; i++) {
                r1 = rand.nextInt(n);
                if (!net[r1].isOsHon()) {
                    testingNodeMal = r1;
                    net[r1].setOsHon();
                    net[r1].setMal();
                } else
                    i--;
            }
        }
        // System.out.println("finished initialization");
    }

/**

*

Test Network

*

*

@param size

*

@param tol

45

* @param A

* @return

* @throws IOException

*/ public int testNetwork() throws IOException {

BufferedWriter out = new BufferedWriter(new FileWriter( "Trust_change.xls")); out.write("avg PIE\n");

// ***************************************************** // initialize network // ***************************************************** int inter = 0; int i, j; int[] mali = new int[s];

int k = 0;// number of detected nodes inter = 0; int one; int two;

boolean interacted = false; int x, y, z, l; boolean bool;

z

l

= 0;

= 0;

for (j = 0; j < s; j++) { mali[j] = s + 1;

}

		// *******************************************************
		// start interaction
		// *******************************************************
		// System.out.println("Starting interaction.");
		while (k < malicious && z <= max) {
			one = rand.nextInt(s);
			two = rand.nextInt(s);
			interacted = false;
			if (one != two) {
				trials++;
				if (!defenseless) {
					interacted = net[one].interactAndUpdateTrust(net[two], net);
					// if (net[two].isMal())
					//     System.out.println("trust updated: " + one + " " + two
					//             + " trust " + net[one].getRecomendation(net[two]));
				} else {
					interacted = net[one].interact(net[two]);
					// System.out.println("interacted.");
				}

				// flag node x as malicious once every honest node's
				// recommendation for it falls below the threshold Tacc
				for (x = 0; x < s; x++) {
					bool = true;
					for (y = 0; y < s; y++) {
						if (y != x && !net[y].isMal()) {
							if (net[y].getRecomendation(net[x]) < node.getTacc()) {
								bool = bool & true;
							} else {
								bool = false;
							}
						}
					}

					if (bool) {
						if (Arrays.binarySearch(mali, x) < 0) {
							mali[l] = x;
							l++;
							if (net[x].isMal()) {
								k++;
							}
						}
					}

					Arrays.sort(mali);
				}

				z++;

				// ************************************************************
				// Trust Change Test
				// ************************************************************
				if (mode == '-' && oscMode != 'm' && oscMode != 'h') {
					avgTrustMal = 0;
					if (two == testingNodeMal) {
						for (j = 0; j < s; j++) {
							if (j != testingNodeMal && !net[j].isMal())
								avgTrustMal += net[j]
										.getRecomendation(net[testingNodeMal]);
						}
						avgTrustMal /= (s - malicious);
						out.write(avgTrustMal + "\n");
					}
				} else if (oscMode == 'm') {
					int num = 0;
					for (i = 0; i < s; i++) {
						if (net[i].isOsMal()) {
							num = 0;
							avgTrustMal = 0;
							for (j = 0; j < s; j++) {
								if (!net[j].isOsMal() && !net[j].isMal()) {
									avgTrustMal += net[j]
											.getRecomendation(net[i]);
									num++;
								}
							}
							avgTrustMal /= num;
							if (two == testingNodeMal)
								out.write(inter + "\t" + avgTrustMal + "\n");
							if (avgTrustMal >= 0.8)
								net[i].setMal();
						}
					}
				} else if (oscMode == 'h') {
					int num = 0;
					k = 0;
					for (i = 0; i < s; i++) {
						if (net[i].isOsHon()) {
							num = 0;
							avgTrustMal = 0;
							for (j = 0; j < s; j++) {
								if (!net[j].isOsHon() && !net[j].isMal()) {
									avgTrustMal += net[j]
											.getRecomendation(net[i]);
									num++;
								}
							}
							avgTrustMal /= num;
							if (two == testingNodeMal)
								out.write(inter + "\t" + avgTrustMal + "\n");
							if (avgTrustMal < 0.3)
								net[i].unSetMal();
							if (avgTrustMal > 0.7 && !net[i].isMal())
								z = max + 1;
						}
					}
				}

				if (interacted && net[two].isMal()) {
					// System.out.print(i++ + "\r");
					inter++;
				}
			}
		}

		/*
		 * check for false positives
		 */
		// System.out.println("Number of iterations until detection: " + inter);
		System.out.println("malicious nodes detected: ");
		try {
			for (j = 0; j < s; j++) {
				System.out.print(mali[j] + " ");
				if (mali[j] < s) {
					if (!net[mali[j]].isMal())
						falsePositive++;
				}
			}
		} catch (ArrayIndexOutOfBoundsException e) {
			System.out.println("error");
			System.out.println(e.getMessage());
			// return max;
		}

		out.close();

		return inter;
	}
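The nested loop above encodes the detection rule: a node is flagged as malicious once every honest node's recommendation for it falls below the acceptance threshold Tacc. A minimal standalone sketch of that rule follows; the class name, the trust matrix, and the threshold value 0.5 are hypothetical, chosen only to illustrate the logic, and are not part of the simulator.

```java
import java.util.Arrays;

// Standalone sketch of the detection rule: node x is flagged when every
// honest peer y rates it below the acceptance threshold Tacc.
public class DetectionSketch {
	static final double TACC = 0.5; // hypothetical acceptance threshold

	// trust[y][x] = recommendation node y holds about node x
	static boolean[] detect(double[][] trust, boolean[] knownMalicious) {
		int s = trust.length;
		boolean[] flagged = new boolean[s];
		for (int x = 0; x < s; x++) {
			boolean allBelow = true;
			for (int y = 0; y < s; y++) {
				if (y != x && !knownMalicious[y]) {
					// a single honest peer trusting x at or above Tacc clears it
					if (trust[y][x] >= TACC)
						allBelow = false;
				}
			}
			flagged[x] = allBelow;
		}
		return flagged;
	}

	public static void main(String[] args) {
		// 3 nodes; node 2 is distrusted by both honest peers 0 and 1
		double[][] trust = {
				{ 0.0, 0.9, 0.2 },
				{ 0.8, 0.0, 0.1 },
				{ 0.7, 0.6, 0.0 } };
		boolean[] mal = { false, false, true };
		System.out.println(Arrays.toString(detect(trust, mal)));
		// prints [false, false, true]
	}
}
```

The simulator additionally keeps the flagged indices in the sorted `mali` array so that `Arrays.binarySearch` can test membership; the sketch omits that bookkeeping.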

	/**
	 * Function to fill an array of type double with values between start and
	 * end, incremented by step.
	 *
	 * @param start
	 * @param end
	 * @param step
	 * @return array
	 */
	public static double[] generateArray(double start, double end, double step) {
		double[] array;
		int i = 0;
		int size = (int) ((end - start) / step) + 1;
		array = new double[size];
		array[0] = start;
		for (i = 1; i < size; i++)
			array[i] = array[i - 1] + step;
		return array;
	}

	/**
	 * Function to fill an array of type int with values between start and end,
	 * incremented by step.
	 *
	 * @param start
	 * @param end
	 * @param step
	 * @return array
	 */
	public static int[] generateArray(int start, int end, int step) {
		int[] array;
		int i = 0;
		int size = (end - start) / step + 1;
		array = new int[size];
		array[0] = start;
		for (i = 1; i < size; i++)
			array[i] = array[i - 1] + step;
		return array;
	}
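Both overloads follow the same pattern: the array size is derived from the range and step, and each element is the previous one plus the step. As a quick check of that arithmetic, here is a self-contained copy of the double version (the demo class name is ours, added only for illustration):

```java
import java.util.Arrays;

// Self-contained copy of the generateArray(double, double, double) helper,
// reproduced here only to demonstrate its output.
public class GenerateArrayDemo {
	public static double[] generateArray(double start, double end, double step) {
		int size = (int) ((end - start) / step) + 1;
		double[] array = new double[size];
		array[0] = start;
		for (int i = 1; i < size; i++)
			array[i] = array[i - 1] + step;
		return array;
	}

	public static void main(String[] args) {
		// 0.25 is exactly representable in binary, so the sequence is exact
		System.out.println(Arrays.toString(generateArray(0.0, 1.0, 0.25)));
		// prints [0.0, 0.25, 0.5, 0.75, 1.0]
	}
}
```

Note that for steps that are not exactly representable (e.g. 0.1), the repeated addition accumulates rounding error, so the last element may differ slightly from `end`.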

	public static void main(String[] args) {
		// the network constructor below consumes args[0]..args[14],
		// so 15 arguments are required (the original check against 13
		// would have thrown ArrayIndexOutOfBoundsException)
		if (args.length != 15) {
			System.err.println("Specify args: fileName #nodes, #malicious, "
					+ "alpha, Tacc, delta, iteration, mode, "
					+ "IDS error rate, lying Model, lying distance "
					+ "value, penalize");
			System.err.println("----------------------------------------------");
			System.err.println("Modes: a \tfix all and change alpha");
			System.err.println("       t \tfix all and change Tacc");
			System.err.println("       n \tfix all and change number of nodes");
			System.err.println("       m \tfix all and change number of "
					+ "malicious nodes");
			System.err.println("       d \tfix all and change distance from "
					+ "Tacc for initializing PIE");
			System.err.println("       i \tfix all and change IDS error rate");
			System.err.println("       - \tfix all");
			System.err.println("-----------------------------------------------");
			System.err.println("Lying Modes: r \tmalicious nodes give random"
					+ " recommendations");
			System.err.println("             d \tmalicious nodes give "
					+ "Tacc+distance to other malicious "
					+ "nodes\n\t\tand Tacc-distance to Honest nodes");
			System.err.println("             n \tno lying");
			System.err.println("\nif you want to take advice from all nodes"
					+ " set advice_all to 1\nif you want it from"
					+ " trusted nodes only set it to 0");
			System.err.println("\nif you want to penalize nodes that give high"
					+ " trust for malicious\npeers set penalize to"
					+ " 1; if you don't, set it to 0");
		} else {
			try {
				network Net = new network(args[0], Integer.parseInt(args[1]),
						Integer.parseInt(args[2]), Double.parseDouble(args[3]),
						Double.parseDouble(args[4]),
						Double.parseDouble(args[5]),
						Integer.parseInt(args[6]), args[7].charAt(0),
						Double.parseDouble(args[8]), args[9].charAt(0),
						Double.parseDouble(args[10]), args[11].equals("1"),
						args[12].equals("1"), args[13].charAt(0),
						Integer.parseInt(args[14]));
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
}

Class Main

import java.awt.BorderLayout;
import java.awt.Dimension;
import java.awt.Font;
import java.awt.Rectangle;
import java.awt.event.ActionEvent;
import java.io.IOException;

import javax.swing.ButtonGroup;
import javax.swing.JButton;
import javax.swing.JCheckBox;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JProgressBar;