
Dynamic Reconfiguration of Network

Based on Security Events

A Project Report Presented to


The faculty of the Department of Electrical Engineering
San José State University

In Partial Fulfillment
Of the Requirements for the Degree
Master of Science
By
Ram Gandhi Arumugaperumal
008851466
ram.gandhi@sjsu.edu

Vignesh Goutham Ganesh


009278555
vignesh.goutham.ganesh@sjsu.edu

512-788-1579

669-222-9542

05/04/2015

05/04/2015

Department
Approval

______________________________________
_________________
Dr. Greg Bernstein

Date

Project Advisor

______________________________________
_________________
Dr. Nader Mir

Date

Project Co-Advisor

______________________________________
_________________
Dr. Thuy T. Le

Date

Graduate Advisor

Department of Electrical Engineering


Charles W. Davidson College of Engineering
San José State University
San Jose, CA 95192-0084

ACKNOWLEDGEMENT
We would like to express our deepest appreciation to all those who made it possible for us to complete this report. We are grateful to our final-year project advisors, Dr. Greg Bernstein and Dr. Nader Mir, whose stimulating suggestions and encouragement helped us coordinate this project. Dr. Bernstein gave us the opportunity to explore this area of networking and shared his expertise through his guidance. We express our gratitude to the Electrical Engineering department for providing the resources and regular guidelines needed to complete the report. We would like to thank our family members for their support and understanding.
Finally, we thank all the engineers whose contributions make society better.

ABSTRACT
This project aims to create a secure, robust, and proactive network environment that can counteract ever-evolving cyber threats more efficiently than current methods. To achieve this goal we use Software Defined Networking (SDN) technology, adding a security control application on top of the SDN network controller. Our SDN security application takes evasive actions (reconfiguration) on its own rather than placing all responsibility for security actions on the user.
The main attacks taken into consideration are DoS, DDoS, and other prevalent cyber-security threats. A key part of our security controller application is that reconfiguration takes place automatically: various parameters of the network have been defined which, when they exceed their threshold values, force the controller to reconfigure the network to make it more secure.
The ability to create queues dynamically and to change the policy of the switch/router in the SDN network so that matching flows are routed via QoS-scheduled queues is exploited in this project to achieve security from attacks, such as DoS, that abuse bandwidth.

TABLE OF CONTENTS
TABLE OF FIGURES
1. INTRODUCTION
   1.1 Background
   1.2 Why SDN?
   1.3 The attack we defend
   1.4 Proposed Work
   1.5 Overview of working
2. PROJECT IMPLEMENTATION
   2.1 Working
   2.2 Project Architecture
       2.2.1 In the POX controller
       2.2.2 In the Open vSwitch
   2.3 Detailed Workflow
       2.3.1 Monitoring for congestion
       2.3.2 Defining flows
       2.3.3 Queue creation
       2.3.4 Directing Flows to queues
   2.4 Testing and Results
   2.5 State of the Art
   2.6 Tools used
3. Schedule
4. CONCLUSION
5. REFERENCE

TABLE OF FIGURES

Figure 1 Block representation of a Software Defined Network
Figure 2 Block representation of the proposed SDN Network
Figure 3 Experiment setup
Figure 4 Inside the Open vSwitch
Figure 5 Flow of packets
Figure 6 Bandwidth graph with and without DRoN
Figure 7 HTTP response time with and without DRoN

1. INTRODUCTION
This project focuses on building an intelligent network that senses an incoming security attack and adjusts the SDN policies on the fly so that the network remains stable and functional.
There are many different kinds of attacks that are possible on a computer network, and the motive of a network attack differs with the type of attack. Some attacks aim to steal, and sometimes modify, the data flowing between a particular source and destination, e.g. a man-in-the-middle attack. Other attacks focus on making the components of the network believe that the attacker is someone else, so as to gain access to restricted areas, e.g. IP spoofing. The last and most common type of attack makes a resource unavailable so that it cannot provide its service to its intended users, e.g. DoS. The project's prime focus is to identify attacks that aim to max out a network resource and deny its service to those who ask for it, and to apply appropriate measures so that service is still granted to the legitimate clients residing in the network. This is achievable using bandwidth monitoring and control at the flow level. More about this is discussed in the later sections.
Our algorithm includes a method to calculate the hardware resources available, monitor the traffic bandwidth per flow, and fix a threshold bandwidth per flow above which a particular traffic flow is flagged as possibly malicious. When the threshold is breached, we create a queue for each flow and direct each flow to its respective queue. On the whole, our hypothesis is that a proactive network monitor can analyze the incoming traffic and optimize the amount of traffic the network actually handles without creating congestion. The project uses an SDN architecture on which the security algorithm works.
1.1 BACKGROUND:
Software Defined Networking is a rapidly advancing concept that involves the decoupling of the data plane and the control plane. It is this separation of functional planes that makes the network more robust and proactive. The layout gives us the opportunity to have a centralized control plane, which results in more dynamic control over the whole network, and the functionality of each device can be customized according to the needs of the particular situation the network might be put under [1][2]. SDN provides us with the ease of controlling all the components in the network with a centralized controller rather than individually controlling [3] each and every device present in the network.

Figure 1 Block representation of a Software Defined Network [21]

SDN Controller:
The SDN controller is a centralized controller that possesses a global view of the complete network. It hosts the network's software intelligence: the controller is the brain of an SDN network. Because the controller knows all the paths, it optimizes each flow by choosing the ideal route, which in turn helps optimize the use of the network hardware. Controllers these days have graphical user interfaces that give users a pictorial representation of the entire network the administrator manages.

Logical layers of SDN:


Typically, SDN is said to have a three-tiered logical layering [4]. At the lowest level is the switching layer, which is connected to the hardware. The middle layer consists of the controller. The layer at the top contains the applications; these applications trigger and produce the spark for the flow of packets in the network.

Southbound API:
The Southbound API, or Southbound Protocol, is essentially a set of APIs and protocols that mediate between the lowest and middle levels of the SDN layers [5]. The main focus of the Southbound API is to provide an effective communication channel for the controller at the middle level to install control-plane decisions, thereby controlling the data plane.

Northbound API:
The Northbound API is a bundle of APIs and protocols that facilitate the interaction between the applications (the top layer of the SDN) and the middle layer containing the controller. It is through the Northbound API that an application sends a required command to the controller [6], which adjusts the various parameters of the network needed to service that application's request.
Governments and organizations, in their quest to reduce the cost of building and maintaining a network, have always looked for innovative ways to get the advantages of a high-powered network without owning one. One of the most promising innovations in this direction is Software Defined Networking (SDN).

1.2 Why SDN?


Software Defined Networks have many advantages when compared to traditional networks. Since SDN separates the data plane and the control plane, it opens the possibility of centralized and dynamic control over the entire network architecture. Also, because it is software based, we can program it to listen for many kinds of events and execute handler functions in response [7]. The advantage we exploit in this project is the capability to listen for specific events and create queues in the switches, thereby directing specific flows to specific queues. This dynamic queue creation while the router or switch is live in the network is not possible in a traditional network.
One way SDN helps our project is in large topologies: with only one SDN switch reporting, the controller can look up all the routers sending packets to the congested interface and perform bandwidth control (akin to end-to-end congestion control). The way it is done now, each SDN switch reports congestion, so control is applied across the whole path.
One of the major concerns when maintaining the sanctity of a network is that most existing security attacks are based on creating congestion in the network, making networking components such as switches, routers, and firewalls vulnerable. Another issue with these types of attacks is that, in a scenario where there is a huge load or heavy congestion on a particular path carrying legitimate traffic, existing security measures will not differentiate between the different flows [8], which in turn makes the congestion more severe. The network components themselves cannot be relied on during an attack; for example, a Layer 2 switch can easily be subjected to a MAC flooding attack, which essentially turns the switch into a hub, broadcasting traffic on all its ports rather than on a single one as mandated for an L2 switch [9]. This can be overcome by using SDN, where the traffic flow can be adjusted by the controller and customized according to the needs of a network manager. Since we use SDN, we can avoid this issue by asking the controller to redirect flows that are identified as malicious [10].

1.3 The attack we defend:


When a network is subjected to unusual amounts of traffic, more than what it can handle in an ideal situation, the network becomes congested: the network components are flooded with traffic, and consequently legitimate requests coming into the network are dropped or serviced at a very slow pace.
Attackers make use of this fact and try to target a network by mounting Denial-of-Service (DoS) or Distributed Denial-of-Service (DDoS) attacks [18]. The aim is to flood a network with humongous amounts of traffic, overload all the interfaces present in the network, and render the network unsafe and unusable. The attacker's malicious traffic blankets all the legitimate traffic coming into the network, which will most probably be dropped owing to the overpowering traffic the network is handling.
To prevent this, we can define an optimal bandwidth value, require all traffic to stay below this value, and impose a penalty on the service time of any traffic that exceeds it.

1.4 PROPOSED WORK

[Figure 2 (diagram): a Windows host running VMware Workstation contains an Ubuntu VM with the POX controller (POX port 6635; ports 1234 and 2345 monitored) and a Linux Mint VM with the Open vSwitch (eth0; ports 1234 and 2345 monitored). Two Ubuntu VMs act as Host 1 and Host 2, attached through vPort1 and vPort2.]

Figure 2 Block representation of the proposed SDN Network

To create the required SDN topology, we depend heavily on virtual machines (VMs). We create two virtual machines. One is an Ubuntu virtual machine containing the POX controller, which takes care of all the bandwidth calculations and the routing mechanisms. The interface eth0 on this VM is bound to port 6635 for the POX controller to send control messages [14]. In addition, we use two other ports, 1234 and 2345, to send and receive additional control messages that are not OpenFlow messages.
Since the interface (eth0) of the virtual machine was proving difficult to access from outside, we decided to create a separate Linux Mint virtual machine running the Open vSwitch, on which we simulate two live hosts. Again, the traffic on this machine primarily uses the two ports 1234 and 2345.
The threshold value, above which traffic is termed rogue, is determined in the controller. This value is set by the controller depending on the total load the controller is expected to handle at any given time; it also depends on the resources at hand, such as the number and capacity of the switches. This is a sensitive task, since a wrong value may completely defeat the purpose of the project. A large value may allow all the traffic to pass through the network, and ultimately the project will not be able to make any significant impact [15]. A low value will cause the controller to consider all traffic, including legitimate traffic, as malicious, and as a result all traffic will have a lower processing rate; this is particularly dangerous when the network is handling time-sensitive data.

1.5 Overview of working


1. The SDN switch monitors the output interface and reports congestion
2. The controller receives the notification and issues an OpenFlow message to request the switch flow statistics
3. The SDN switch replies with the statistics
4. The controller classifies the flows:
(a) A well behaved flow is one whose bandwidth usage is less than Ct/n, where Ct is the total interface capacity and n is the number of flows
(b) A badly behaved flow is one whose bandwidth usage exceeds the Ct/n limit
5. Well behaved flows receive their requested bandwidth
6. Badly behaved flows receive a penalty on their bandwidth usage; this penalty is proportional to the difference between the fair-use rate and their reported bandwidth
7. Well behaved flows receive an equal share of the remaining bandwidth. Now, to implement the bandwidth control, the SDN switch should create a queue for each flow
8. The controller sends a list of dictionaries, one dictionary per flow, indicating src_ip, dst_ip, and flow bandwidth
9. The switch notifies the controller when the queues are ready
10. The controller sends the flow_mod command to redirect each flow to its queue
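The classification in step 4 can be sketched in Python (a simplified illustration, not the actual DRoN module; the flow ids and the bandwidth map are hypothetical names):

```python
def classify_flows(flow_bw, capacity):
    """Split flows into well behaved and badly behaved around the
    fair share Ct/n (step 4). flow_bw maps a flow id to its reported
    bandwidth; capacity is the total interface capacity Ct."""
    fair_share = capacity / len(flow_bw)  # Ct / n
    good = {f: bw for f, bw in flow_bw.items() if bw <= fair_share}
    bad = {f: bw for f, bw in flow_bw.items() if bw > fair_share}
    return fair_share, good, bad
```

Steps 5 to 7 then operate on the two groups: the bad flows are penalized while the good flows share the leftover capacity.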

2. PROJECT IMPLEMENTATION
2.1 Working
The project uses a bandwidth control algorithm that allows for the detection of the good and bad flows in the network, after which certain resource restrictions are placed on the different flows, thereby controlling the illegitimate traffic [16]. The following provides a step-by-step explanation of the architecture, the different resources the algorithm needs, and finally the working of the algorithm and therefore of the project.
2.2 Project Architecture
This project was developed on a Software Defined Networking (SDN) based network architecture. The hosts in the SDN are virtual machines running Ubuntu distributions. The controller in use is a POX controller, which is based on Python, and an Open vSwitch is also used. The whole setup is virtual-machine based, i.e. separate VMs host the different components of the project on top of a Windows machine. An Ubuntu VM, created using VMware Workstation, hosts the POX controller. Another VM running Linux Mint hosts the Open vSwitch. The Open vSwitch creates a virtual bridge [17] to which the eth0 internet interface is connected. Along with this, two other virtual interfaces, vport1 and vport2, are also connected. These vports are tun/tap ports created to provide attachment points for the VMs. Inside this Linux Mint VM, two further VMs running Ubuntu are created and connected using the bridged architecture.

Figure 3 Experiment setup

The POX instance running on the VM is also made to run the forwarding.l2_learning module. This module helps detect any OpenFlow switches in the network.

The controller handles the connectionUp event, which is triggered whenever a switch comes up. This is what enables the controller to learn of any switches that come up in the SDN network. It also floods the network to discover any switches present.
The bandwidth control algorithm works on both the controller end and the Open vSwitch end. On the controller side, the Python module DRoN Algorithm is used; it integrates with the POX controller and implements the needed functionality. Its counterpart runs on the Open vSwitch, sending and receiving notifications and messages and acting accordingly.
2.2.1 In the POX controller
DRoN Algorithm is the Python module made to run along with the POX controller. DRoN Algorithm has the IP address of the controller and uses two ports to create two sockets. Note that these are not the usual ports defined in the OpenFlow standards: the standard OpenFlow port 6633 is used by the controller to send and receive notifications and messages from the switches in the network, whereas these two ports carry data, notifications, and other stats between the controller, switch, and the other hosts. They are used only by the DRoN Algorithm and its counterpart in the vSwitch. The port used by the DRoN Algorithm to send messages to the switch is 1234, and the port used by the switch to send messages to the DRoN Algorithm is 2345. The socket API is used to retrieve the IP address and to bind the sockets to the ports.
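The two DRoN control sockets can be set up as below (a sketch assuming TCP, since the report does not state the transport; the helper name is ours):

```python
import socket

DRON_TO_SWITCH_PORT = 1234   # DRoN Algorithm -> switch messages
SWITCH_TO_DRON_PORT = 2345   # switch -> DRoN Algorithm messages

def make_listener(host, port):
    """Create a listening TCP socket bound to (host, port), the way
    DRoN binds its non-OpenFlow control channel to its ports."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(1)
    return s
```

The OpenFlow channel on port 6633 is managed by POX itself; only these two extra sockets belong to the DRoN modules.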
2.2.2 In the Open vSwitch
The Open vSwitch provides methods to create virtual bridges and to assign both existing and virtual ports to them. The following is the architecture that is created and deployed in the virtual switch.

Figure 4 Inside the Open vSwitch

A virtual bridge is created using the ovs-vsctl commands. eth0 is the default interface on a Linux machine, through which the machine's network stack communicates with the outside world. This eth0 is added as one of the interfaces in the vBridge. In the topology the project is implemented on, there are VMs created inside the VM in which the Open vSwitch is running, so in order for these level-2 VMs to get connected, two tun/tap ports [19] are created. Tun/tap ports are software-only ports; in other words, they are known only to the kernel and do not exist physically. After these tun/tap ports are created, they too are added to the vBridge. Once all these interfaces are added to the vBridge, the IP address of the eth0 port is flushed out and the vBridge is configured as a DHCP client, thereby getting an IP address; this is done using the dhclient vBridge command. After the vBridge gets an IP, the controller IP and port must be configured for that vBridge, which is also accomplished using the ovs-vsctl commands. The Open vSwitch does not know how to find its controller on its own; the POX controller running the l2_learning module, on the other hand, has the ability to send BPDU packets and learn of any switches in the network.
The commands to configure the Open vSwitch are as follows:

# Create internal and external bridges for each interface
ovs-vsctl add-br eth0br
ovs-vsctl add-port eth0br eth0

# Get interfaces up and with ip (edit /etc/network/interfaces)
ifconfig eth0br up

# Add tuntap ports
ip tuntap add dev vport1 mode tap
ip tuntap add dev vport2 mode tap

# Flush ip addresses from erroneous interfaces
ip address flush dev eth0

# Get back internal interfaces
ifconfig eth0 up
ifconfig vport1 up
ifconfig vport2 up

# Configure bridge to aim to the controller
ovs-vsctl set-controller eth0br tcp:10.1.4.1:2626

Output of ovs-vsctl in the SDN Switch 1


One bridge is configured for each data plane interface.

root@pdc-1:~# ovs-vsctl show
4fda1c9d-746f-4602-adff-c669a5bba7ed
    Manager "ptcp:8282"
    Bridge "eth0br"
        Controller "tcp:10.1.4.1:2626"
        Port "eth0"
            Interface "eth0"
        Port "vport1"
            Interface "vport1"
        Port "vport2"
            Interface "vport2"
        Port "eth0br"
            Interface "eth0br"
                type: internal
    ovs_version: "2.3.0"

Now that the vSwitch and the controller have been configured, the DRoN Algorithm runs and controls the bandwidth accordingly.

2.3 Detailed Workflow

Figure 5 Flow of packets

The SDN router indicated here is the Open vSwitch; likewise, the SDN controller is the POX controller running the DRoN Algorithm module. In addition to the Open vSwitch, the DRoN-switch module is run with the OVS (Open vSwitch). In the OVS, we first detect its capacity, specifically its bandwidth. This is done using the popular iperf tool: an iperf server is run on a separate machine that is not part of the SDN architecture (in our case the Windows machine is used for this purpose), and an iperf client is run in the Open vSwitch terminal. The bandwidth obtained from the iperf test is provided to the OVS DRoN-switch module.

After the resources have been measured, these values are stored as the capacity of the switch. This data is hardcoded in the OVS DRoN-switch module.

After this step, all the modules are started. First, the POX controller is started along with the DRoN Python module and the forwarding.l2_learning module. Once the controller and its DRoN module are up and running, we go to the OVS and run the OVS-Switch module in a similar way. This is where the flows and other bandwidth measurements are made.
Now that all the modules are running, the OVS-Switch module monitors the bandwidth of its vBridge interfaces. This is done by reading the /proc/net/dev file [20]. /proc/ is a kernel-based filesystem that exposes kernel information to user-space processes, and /proc/net/dev is a file that contains vital statistics for the configured devices. The OVS-Switch module takes a sample of the information available in /proc/net/dev and uses it for monitoring: for every physical and virtual interface, the file lists, among other counters, the number of bytes transmitted so far, which is what we use to track the bandwidth currently in use.

This information is stored in an array along with the interface name. The interface name is grepped out of the file, the output is trimmed with awk to the 10th column (the transmit-bytes counter), and the value is appended to the sample array.
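The grep/awk sampling step can be reproduced in Python as follows (a sketch; the function name is ours, and the 10th whitespace-separated field of a /proc/net/dev line, counting the interface name as the first, is the transmit-bytes counter):

```python
def tx_bytes(proc_net_dev_text, iface):
    """Return the transmit-bytes counter for iface, mimicking
    `grep <iface> /proc/net/dev | awk '{print $10}'`."""
    for line in proc_net_dev_text.splitlines():
        line = line.strip()
        if line.startswith(iface + ":"):
            # Normalize "eth0:1234" and "eth0: 1234" to the same shape.
            fields = line.replace(":", " ", 1).split()
            return int(fields[9])  # awk's $10: TX bytes (fields 2-9 are RX)
    raise ValueError("interface not found: %s" % iface)
```

In the real module this would be called once per sample on the contents of /proc/net/dev for each vBridge interface.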

After the transmit-byte information for all the interfaces has been sampled and saved, the OVS Switch-DRoN module checks whether any congestion is occurring on these interfaces. This is done by comparing the bandwidth obtained from iperf with the data obtained from /proc/net/dev: while the iperf data gives the maximum possible bandwidth, the /proc/ data gives the bandwidth currently in use.

2.3.1 Monitoring for congestion


Now that the data about the bandwidth has been obtained, we monitor the interfaces for congestion as follows. A sampling window of one second is defined, and within that one-second period 10 samples are taken from the /proc/net/dev file. A per-interface average is computed from these 10 samples. Two levels, the upper limit and the lower limit, are defined as 80% and 20% of the maximum bandwidth of that interface, respectively. We now have the average, upper limit, and lower limit for an interface. We bring in another variable here called the threshold, which is initially set to the upper limit of the particular interface. A flag variable, is_congested, is set to 1 when congestion is detected. A time window is also defined so that samples are captured within that time frame; this window is reset after a calculation has been completed. When monitoring happens, the algorithm checks whether the average for that interface has exceeded the threshold and whether the flag is not set. If both conditions are satisfied, the module has detected congestion: a congestion notification is sent to the DRoN module running with the POX controller over port 1234, the is_congested flag is set to true, and the threshold is set to the lower limit so as to implement a penalty [13].

Now that we know how congestion is detected, we also need to know when a detected congestion has stopped and the traffic is back to normal. When we sample and monitor all the interfaces in a loop, if the average bandwidth has dropped below the lower limit while the is_congested flag is true, the module sends a congestion-stopped notification to the DRoN module, the is_congested flag is set to false, and the threshold points back to the upper limit of that interface. The notifications are sent using the congestion_detected and congestion_stopped methods.
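The detect/stop logic described above condenses into a small state machine per interface (a simplified sketch; the names are ours, and sending the notification is represented by the returned event string):

```python
def make_congestion_monitor(max_bw):
    """Build a per-interface monitor. The upper limit is 80% of the
    interface's maximum bandwidth and the lower limit 20%; the
    threshold starts at the upper limit and drops to the lower limit
    as a penalty once congestion is detected."""
    upper, lower = 0.8 * max_bw, 0.2 * max_bw
    state = {"threshold": upper, "is_congested": False}

    def check(samples):
        """samples: the 10 readings taken in the one-second window.
        Returns the notification to send, or None."""
        avg = sum(samples) / len(samples)
        if avg > state["threshold"] and not state["is_congested"]:
            state["is_congested"] = True
            state["threshold"] = lower        # penalty
            return "congestion_detected"
        if avg < lower and state["is_congested"]:
            state["is_congested"] = False
            state["threshold"] = upper
            return "congestion_stopped"
        return None

    return check
```

Dropping the threshold to the lower limit while congested means the interface must fall well below capacity before it is declared healthy again, which prevents the monitor from oscillating around a single cutoff.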

This congestion notification is received by the DRoN module on port 1234. After the notification has been received, a stats request is sent to the OVS. The stats request is a standard request defined in the OpenFlow 1.0 specification; it is used to request the statistics of a network component, the OVS (Open vSwitch) in this case. Since this is an OpenFlow 1.0 standard, after the controller uses this method to request the flow stats, the switch replies with the stats, and the reply contains the details of each flow.

2.3.2 Defining flows


The DRoN module has received the flow stats from the request, and it now calculates the flows and classifies them as good and bad. Flows are defined in the usual standard way: traffic with the same source and destination IP addresses is defined as a flow. All the flow details are obtained from the switch's stats reply. The algorithm used to classify the flows as good or bad is as follows: the byte_count index of the flow_index list indicates the bandwidth being used by a flow. If the bandwidth used by a particular flow is greater than the interface maximum bandwidth (obtained by running iperf as the first step) divided by the number of flows, it is classified as a bad flow; otherwise it is classified as a good flow.

Now that we have classified the good and bad flows, the next step is to assign a computed, modified bandwidth to the flows in order for the security to work. A method checks the bandwidth of the good and bad flows. A good flow can be granted a maximum bandwidth of 900000000 bits per second, and a bad flow 30000000 bits per second. If any good or bad flow exceeds its respective limit, its upper limit is assigned as its computed bandwidth; otherwise its current bandwidth is assigned as the computed new bandwidth. A variable tot_comp_band stores the total computed bandwidth and is incremented by the computed new bandwidth each time a flow is assigned. If the bandwidth stored in this variable is less than the total maximum bandwidth the interface can support (obtained from iperf), the remaining bandwidth is divided by the number of good flows and the extra bandwidth per good flow is assigned to each good flow.
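The bandwidth computation described above can be sketched as follows (a simplified illustration of our reading of the text; the function and variable names are ours):

```python
GOOD_FLOW_MAX = 900_000_000  # bits/s cap for a good flow
BAD_FLOW_MAX = 30_000_000    # bits/s cap for a bad flow

def compute_bandwidth(good, bad, iface_max):
    """good/bad map flow ids to their current bandwidth; iface_max is
    the interface maximum measured with iperf. Returns the computed
    bandwidth per flow."""
    computed = {}
    tot_comp_band = 0
    for f, bw in good.items():
        computed[f] = min(bw, GOOD_FLOW_MAX)
        tot_comp_band += computed[f]
    for f, bw in bad.items():
        computed[f] = min(bw, BAD_FLOW_MAX)
        tot_comp_band += computed[f]
    # Leftover interface capacity is split equally among the good flows.
    if tot_comp_band < iface_max and good:
        extra = (iface_max - tot_comp_band) / len(good)
        for f in good:
            computed[f] += extra
    return computed
```

The effect is that bad flows are pinned at the 30 Mbit/s penalty rate while good flows absorb whatever capacity remains.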

2.3.3 Queue creation

After the new bandwidth has been computed for each flow, the next step is to create the queues that force the flows to comply with it. The queue parameters are populated into a list and sent to the OVS Switch-DRoN module: the dpid and the bandwidth for each queue are transmitted to the OVS over port number 1234. The OVS Switch-DRoN module uses this information to create the corresponding queues. On the controller side, a Python dictionary holds the queue variables; this dict is converted to JSON format and sent to the OVS.

Once the message is received by the OVS Switch-DRoN module, the queues are created per flow using ovs-vsctl commands. The command specifies a maximum rate of 900,000,000 bits per second, and the bandwidth assigned to a particular queue is taken from the queue dict received from the controller (DRoN). After the queues are created, a queue-ready method notifies the DRoN controller that queue creation is complete. To send this notification back, the same process used for the creation request is repeated: the required data (the dpid, the maximum bandwidth, and the queues-created notification) is put into a dict, converted to JSON format, and sent to the DRoN controller.
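The dict-to-JSON exchange over port 1234 could look roughly like the sketch below. The field names (dpid, queues), the helper function, and the host address are assumptions for illustration, not the project's actual message schema.

```python
# Package the queue parameters as JSON and ship them to the
# OVS Switch-DRoN module over TCP port 1234, as the text describes.
import json
import socket

def send_queue_request(dpid, queue_bandwidths, host="127.0.0.1", port=1234):
    """queue_bandwidths: {queue_id: bits_per_second} computed by DRoN."""
    msg = {"dpid": dpid, "queues": queue_bandwidths}
    payload = json.dumps(msg).encode()          # dict -> JSON bytes
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)                      # one message per request
    return payload
```

The queue-ready notification travels back the same way, with the notification flag added to the dict before serialization.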

2.3.4 Directing flows to queues

The DRoN controller running alongside POX has now received the notification that the queues have been created, each with its specific bandwidth restriction. At this point, however, only the queues exist; the traffic is still reaching its destination the default way (no queues, no QoS). The next step is therefore to send a flow-modification command that makes the traffic of each flow pass through its respective queue. This is done with a flow_mod packet, an OpenFlow standard message that the controller sends to the switch asking it to explicitly change its flow table in accordance with the data the flow_mod carries. Since this is an OpenFlow standard message, it can be sent by the controller over the usual OpenFlow port 6633.

A loop is run across all the flows, and each flow (identified by its network source and destination address) is subjected to its queue's conditions. The ofp_action_enqueue action from the OpenFlow protocol achieves this.

With the queues implemented and the flows directed to their respective queues, the bandwidth is now under control.
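The per-flow loop can be sketched as below. In the real module each entry would become a POX ofp_flow_mod whose action list contains of.ofp_action_enqueue; plain tuples stand in for the POX objects here so the mapping logic is runnable stand-alone, and the queue numbering and flow field names are assumptions.

```python
# Build one (match, queue) instruction per classified flow. In DRoN this
# loop would emit a POX of.ofp_flow_mod carrying
# of.ofp_action_enqueue(port=out_port, queue_id=...).
GOOD_QUEUE = 0   # queue created with the higher max-rate
BAD_QUEUE = 1    # queue created with the throttled max-rate

def build_enqueue_rules(flows):
    """flows: list of dicts with 'nw_src', 'nw_dst' and 'good' (bool).
    Returns one rule per flow: ((src, dst) match fields, queue id)."""
    rules = []
    for f in flows:
        queue_id = GOOD_QUEUE if f["good"] else BAD_QUEUE
        match = (f["nw_src"], f["nw_dst"])     # flow identified by src/dst
        rules.append((match, queue_id))
    return rules
```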

2.4 Testing and Results

Testing a network security solution means lodging an attack against the host running the defense mechanism and checking how the mechanism performs. As stated at the start of the project, we focus on attacks that make resources inaccessible to legitimate users, namely DoS and DDoS. A common DoS attack exploits the TCP three-way handshake used to open a session between a source and a destination: the attacker floods the destination with SYN packets, making the destination host open half-finished sessions with hosts on its network. This exhausts the resources on the server side, and legitimate hosts are consequently denied service. DDoS is the distributed version of the DoS attack: the attackers take control of zombie machines and launch DoS traffic from all the remote nodes at the same time, producing a much larger attack. Unfortunately, such an attack is difficult to simulate, so we use other resource-exhaustion traffic that triggers the congestion notification on the OVS Switch-DRoN module, which forwards it to the DRoN controller. The whole point of measuring the bandwidth with iperf and hard-coding the value is to make testing easy: if an iperf test is run after all the processes have started, the measurement traffic it generates is enough to cross the congestion threshold, which starts the whole process.

The testing of the algorithm was done using a series of methods, the most important being iperf and HTTPerf. An iperf server was hosted on a separate machine outside the current network, and the iperf clients were run in the level-2 host VMs. Each test was run for 40 seconds with a measurement window of 1 second, yielding one bandwidth sample per second over the 40-second run. The data sent exceeds the congestion threshold after a few seconds of iperf testing; this is more pronounced when more than 3 nodes (that is, more than 3 level-2 VMs running on the Open vSwitch) are tested. As soon as congestion is triggered and the notification reaches the controller, the appropriate actions take place to regulate the bandwidth.
ovs-vsctl -- set Port $1 qos=@DRoNqos \
  -- --id=@DRoNqos create QoS type=linux-htb other-config:max-rate=10000000 \
     queues:0=@queue0 queues:1=@queue1 \
  -- --id=@queue0 create Queue other-config:min-rate=1000000 other-config:max-rate=1000000 \
  -- --id=@queue1 create Queue other-config:min-rate=8000000 other-config:max-rate=8000000

The above is an example command that the Open vSwitch executes to set the queue bandwidths and thereby assign the QoS. The output does not shed much light when run with a 2-node setup, but when the setup is extended to 4 or 7 nodes, a considerable difference is seen in the bandwidth set. Comparing the iperf outputs with and without the DRoN Algorithm running shows that the bandwidth obtained by legitimate (well-behaved) traffic is much lower without the DRoN Algorithm module than with it. The summarized results for different numbers of nodes are below (bandwidth in Mbits/s):

Two nodes
  Bandwidth with defense    = [56.25, 28.05, 30.06]
  Bandwidth without defense = [20.62, 19.44, 19.76]
Four nodes
  Bandwidth with defense    = [20.23, 21.68, 19.94]
  Bandwidth without defense = [0.99, 1.21, 0.95]
Ten nodes
  Bandwidth with defense    = [14.52, 18.52, 13.39]
  Bandwidth without defense = [1.24, 1.98, 2.2]

A graph is plotted from the median of the above values for the bandwidth obtained by hosts with and without DRoN, while an iperf runs simultaneously to create congestion. The bandwidth consumed by the attackers forces the other (good) flows to use less bandwidth. In the graph below, the x-axis indicates the different test runs with different numbers of nodes, and the y-axis shows the bandwidth obtained; the per-second iperf bandwidth outputs, both with and without the DRoN module, were used to plot it. When the DRoN module is running, the normal users obtain increased bandwidth, whereas without it the bandwidth per flow decreases. This effect becomes clearer as the number of nodes in the network increases. The plotted values are taken from the averages of the test results.

[Bar chart: bandwidth in Mbits/s (y-axis, 0-60) for the 2-, 4- and 10-node tests, with and without defense.]

Figure 6 Bandwidth for legitimate and illegitimate clients

Similar to this, HTTPerf was also run in the 2-, 4- and 10-node setups.

Output with defense:
httperf --hog --timeout=7 --client=0/1 --server=192.168.146.139 --port=80 --uri=http://192.168.146.139/index.html --rate=10 --send-buffer=4096 --recv-buffer=16384 --num-conns=100 --num-calls=1
Maximum connect burst length: 1

Total: connections 100 requests 100 replies 100 test-duration 12.814 s

Connection rate: 7.8 conn/s (128.1 ms/conn, <=4 concurrent connections)


Connection time [ms]: min 9.9 avg 116.9 max 3014.1 median 41.5 stddev 344.3
Connection time [ms]: connect 105.1
Connection length [replies/conn]: 1.000

Request rate: 7.8 req/s (128.1 ms/req)


Request size [B]: 86.0

Reply rate [replies/s]: min 9.8 avg 9.8 max 9.8 stddev 0.0 (2 samples)
Reply time [ms]: response 4.9 transfer 0.0
Reply size [B]: header 255.0 content 11104.0 footer 0.0 (total 11359.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

CPU time [s]: user 3.14 system 9.64 (user 24.5% system 75.3% total 99.7%)
Net I/O: 87.2 KB/s (0.7*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0


Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Output without defense:

httperf --hog --timeout=7 --client=0/1 --server=192.168.146.139 --port=80 --uri=http://192.168.146.139/index.html --rate=10 --send-buffer=4096 --recv-buffer=16384 --num-conns=100 --num-calls=1
Maximum connect burst length: 1

Total: connections 100 requests 100 replies 100 test-duration 9.904 s

Connection rate: 10.1 conn/s (99.0 ms/conn, <=3 concurrent connections)


Connection time [ms]: min 2.4 avg 64.7 max 1000.6 median 3.5 stddev 204.1
Connection time [ms]: connect 42.8
Connection length [replies/conn]: 1.000

Request rate: 10.1 req/s (99.0 ms/req)


Request size [B]: 86.0

Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
Reply time [ms]: response 21.9 transfer 0.0
Reply size [B]: header 255.0 content 11104.0 footer 0.0 (total 11359.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

CPU time [s]: user 2.46 system 7.43 (user 24.8% system 75.0% total 99.8%)

Net I/O: 112.9 KB/s (0.9*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0


Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

The connection block and the reply block of the output shed light on the statistics that vary when the DRoN Algorithm module is used in the POX controller. A graph similar to the TCP iperf one was produced for HTTPerf, with the reply time on the y-axis and the number of nodes in the network on the x-axis. HTTPerf was run with --hog so as to use as many TCP connections to the HTTP server as possible and test the response time under maximum load; an iperf was run at the same time to exhaust the bandwidth of that interface and cause the QoS queues to be created. From the single-node test result above, the response time is 4.9 ms for httperf with DRoN and 21.9 ms without it. The results from the remaining node counts were used for the graph.
Following are the results obtained from HTTPerf when run with and without DRoN:

Number of Nodes | Response time with defense (ms) | Response time without defense (ms)
1               | 4.9                             | 19.7
                | 5.3                             | 23
                | 13                              | 40

[Bar chart: response time in ms (y-axis, 0-45) versus number of nodes (x-axis), with and without defense.]

Figure 7 HTTP response time with and without DRoN

2.5 State of the Art

A traditional network security solution includes an intrusion detection system (IDS), an intrusion prevention system (IPS), or both. The IDS detects a malicious packet flow or an attacker trying to gain access to an internal restricted system, while the IPS tries to prevent the attacker from succeeding. Typical prevention mechanisms include dropping the offending flow or routing it to a black hole once it has been identified as malicious. This project differs from the traditional methods in its approach to providing security: it looks for bandwidth abuse and uses it as the trigger to start the whole process. For whichever network DRoN is deployed on, an iperf test can be run to determine the congestion-threshold bandwidth.

In traditional networks, once an illegitimate packet flow is identified, that flow is dropped. Suppose, for example, that a DoS attack using SYN-packet flooding is taking place on an interface, while legitimate users are also trying to open TCP sessions to the server through that interface. In a traditional system, after the SYN flood is detected, the most practical options are either to scale the server (in the case of huge corporations like Google and Facebook, where even the legitimate users can look like a SYN flood) or to drop the packets (in all other cases). In a closed network, where the number of users varies infrequently, the optimal solution is to drop the packets [11]. This is because the attacker's malicious traffic blankets all the legitimate traffic coming into the network, which will most probably be dropped owing to the overpowering traffic being handled [12]. In this project, we do not drop packets. Instead, we classify flows as good or bad according to the percentage of the bandwidth they use, and a bad flow, instead of being dropped, is allocated a lower QoS, in other words a smaller portion of the maximum available bandwidth. The SYN flood therefore cannot exhaust the server's resources and crash it, while the legitimate users amidst the illegitimate users/attackers are still served their requests.

2.6 Tools used

Tool Name              | Used for
-----------------------|----------------------------------------------------------
OS: Ubuntu, Linux Mint | Running the hosts for the controller and Open vSwitch, as well as the other hosts
VMware Workstation     | Virtualizing the physical machine into multiple levels of nodes
POX controller         | Open-source OpenFlow controller
Open vSwitch           | Open-source OpenFlow switch
PyCharm                | IDE for Python development
JMeter                 | Testing and graphing network attributes such as bandwidth, delay and jitter
Iperf and HTTPerf      | Bandwidth and delay testing tools

3. Schedule

Week  | Date            | Tasks
------|-----------------|---------------------------------------------------------
1     | Oct 10 - 17     | Basic study about SDN and its features. Study about the different ways an SDN system can be built.
2-3   | Oct 20 - 31     | Get in-depth details about mininet and its features; mininet installation.
4-7   | Nov 3 - 28      | Play around with mininet and understand how to design a network topology using Python.
8-9   | Dec 1 - 12      | Research OpenDaylight and study its features.
10-11 | Dec 15 - 26     | Test the network designed in mininet and research how well it gels with the controller.
12-13 | Dec 29 - Jan 9  | (Project research time considerably decreases because of the semester break.)
14-16 | Jan 12 - 30     | Formulate various attack scenarios and think of security principles that can help the system prevent the attack.
17-18 | Feb 2 - 13      | Figure out all the possible routes and come up with the most efficient route for the given scenario.
19-22 | Feb 16 - Mar 13 | Implement all the findings and research on the designed SDN network and make sure the network runs without any hiccups.
23-25 | Mar 16 - Apr 3  | Simulate the necessary attacks and make sure the security principle is being followed by the network.
26-29 | Apr 6 - May 1   | Verify the results of the final design and check that it adheres to the initial goal of the project.
30    | May 4 - 8       | Prepare the report.

4. CONCLUSION

SDN systems have gradually begun to find applications in many fields where sensitive data is handled. The need of the hour is a stable and versatile security system with a brain of its own: one that can sense an attack as it takes place and reconfigure itself so that the attacker does not gain unauthorized access to the system. In this project, we detect congestion and malicious flows without even using an IDS; after detection, appropriate bandwidth-control mechanisms are applied to mitigate the attack. The advantage of this approach is that it works for almost any attack that involves bandwidth abuse. In the case of DoS, the attack is SYN flooding, which is again bandwidth abuse by a particular flow, so the solution applies to the major and most popular attack scenarios. Its limitation is that it works perfectly only when there is a single switch in the network; if more switches are used, the statistics of each switch must be tracked, which would lead to resource exhaustion in the controller.

5. REFERENCES
[1] Taejin Kim, Taekkyeun Lee, Ki-Hyung Kim, Honjin Yeh, and Manpyo Hong, "An efficient packet processing protocol based on exchanging messages between switches and controller in OpenFlow networks," Emerging Technologies for a Smarter World (CEWIT).
[2] M. S. Akbar, K. A. Khaliq, R. N. Bin Rais, and A. Qayyum, "Information-Centric Networks: Categorizations, challenges, and classifications," Wireless and Optical Communication Conference (WOCC), 2014.
[3] R. Ravindran, Xuan Liu, A. Chakraborti, Xinwen Zhang, and Guoqiang Wang, "Towards software defined ICN based edge-cloud services," Cloud Networking (CloudNet), 2013 IEEE 2nd International Conference on.
[4] Wun-Yuan Huang, Ta-Yuan Chou, Jen-Wei Hu, and Te-Lung Liu, "Automatical End to End Topology Discovery and Flow Viewer on SDN," Advanced Information Networking and Applications Workshops (WAINA), 2014.
[5] Cisco white paper: http://www.cisco.com/web/strategy/docs/gov/cis13090_sdn_sled_white_paper.pdf
[6] C. Gates and D. Becknel, "Host anomalies from network data," Information Assurance Workshop, 2005 (IAW '05), Proceedings from the Sixth Annual IEEE SMC.
[7] Thomas D. Nadeau and Ken Gray, SDN: Software Defined Networks, O'Reilly Media, 2013.
[8] S. Thanasegaran, Y. Tateiwa, Y. Katayama, and N. Takahashi, "Simultaneous Analysis of Time and Space for Conflict Detection in Time-Based Firewall Policies," Computer and Information Technology (CIT), 2010 IEEE 10th International Conference on, pp. 1015-1021, June 29 - July 1, 2010.
[9] Chun-Jen Chung, P. Khatkar, Tianyi Xing, Jeongkeun Lee, and Dijiang Huang, "NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems," IEEE Transactions on Dependable and Secure Computing, vol. 10, no. 4, pp. 198-211, July-Aug. 2013.
[10] P. Smith, A. Schaeffer-Filho, D. Hutchison, and A. Mauthe, "Management patterns: SDN-enabled network resilience management," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1-9, 5-9 May 2014.
[11] Kai Wang, Yaxuan Qi, Baohua Yang, Yibo Xue, and Jun Li, "LiveSec: Towards Effective Security Management in Large-Scale Production Networks," Distributed Computing Systems Workshops (ICDCSW), 2012 32nd International Conference on, pp. 451-460, 18-21 June 2012.
[12] S. J. Vaughan-Nichols, "OpenFlow: The Next Generation of the Network?" Computer, vol. 44, no. 8, pp. 13-15, 2011.
[13] Miki Yamamoto, "A Survey of Active Network," IEICE Trans., vol. J84-B, no. 8, pp. 1401-1412, Aug. 2001.
[14] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 3, pp. 105-110, Jul. 2008.
[15] S. Gutz, A. Story, C. Schlesinger, and N. Foster, "Splendid isolation: a slice abstraction for software-defined networks," in Proceedings of the first workshop on Hot topics in software defined networks (HotSDN '12), New York, NY, USA: ACM, 2012.
[16] P. Ayres, H. Sun, H. Chao, and W. Lau, "ALPi: A DDoS defense system for high-speed networks," IEEE Journal on Selected Areas in Communications, vol. 24, pp. 1864-1876, 2006.
[17] S. Hartman, D. Zhang, and M. Wasserman, "Security requirements in the software defined networking model," April 2013. [Online].
[18] Wentao Liu, "Research on DoS Attack and Detection Programming," Intelligent Information Technology Application, 2009 (IITA 2009), Third International Symposium on, vol. 1, pp. 207-210, 21-22 Nov. 2009.
[19] Backreference TUN/TAP interface tutorial: http://backreference.org/2010/03/26/tuntap-interface-tutorial/
[20] Linux Dev Center reference: http://www.onlamp.com/pub/a/linux/2000/11/16/LinuxAdmin.html
[21] Picture reference: http://en.wikipedia.org/wiki/Software-defined_networking