
EEET1246 Advanced Computer Network Engineering

Laboratory Assignment 2 Report

Professor: Andrew Jennins (andrew.jennis@rmit.edu.au)


Tutor: Piya Techateerawat (s3100479@student.rmit.edu.au)

Student: Xiaolin Zhang


Email: s3097029@student.rmit.edu.au

Student: Wilson Castillo Bautista


Email: s3143667@student.rmit.edu.au

Subject Code: EEET1246 Advanced Computer Network Eng.

Melbourne, October 2nd, 2006


NS Network Simulation and Differentiated Services Student: Xiaolin Zhang (s3097029)
Laboratory 2 Report Student: Wilson Castillo (s3143667)
Laboratory Report

Table of Contents

1 Aim......................................................................................................................................................4
2 Introduction ......................................................................................................................................4
3 Differentiated Services....................................................................................................................5
3.1 DiffServ Field Definition ...................................................................................................5
3.2 Traffic Classification.........................................................................................................6
3.2.1 Classifier.............................................................................................................................6
3.2.2 Multi-Field Classifier (MF) ................................................................................................6
3.2.3 Behaviour Aggregate Classifier (BA)............................................................................6
3.3 Meter..................................................................................................................................6
3.4 Marker................................................................................................................................7
3.5 Shaper ...............................................................................................................................7
3.6 Dropper .............................................................................................................................7
3.7 DiffServ Architecture .......................................................................................................8
3.7.1 Edge Router Responsibilities ..........................................................................................8
3.7.2 Core Router Responsibilities...........................................................................................8
3.8 Multiple RED Routers .......................................................................................................8
3.8.1 Introduction ......................................................................................................................8
3.8.2 Multiple RED Parameters................................................................................................9
4 DiffServ Simulation Using NS .........................................................................................................10
4.1.1 NS Architecture ..............................................................................................................10
4.1.2 DiffServ Support..............................................................................................................10
4.1.2.1 DiffServ Simulation Improvements...........................................................10
4.1.2.2 Defining policies .........................................................................................11
5 Results...............................................................................................................................................12
5.1 Types of traffic ................................................................................................................12
5.1.1 Premium ..........................................................................................................................12
5.1.1.1 Classifying and Marking............................................................................12
5.1.1.2 Metering .....................................................................................12
5.1.1.3 Shaping/Dropping .....................................................................................12
5.1.2 Gold .................................................................................................................................12
5.1.2.1 Classifying and Marking............................................................................12
5.1.2.2 Metering .....................................................................................13
5.1.2.3 Shaping and Dropping .............................................................................13
5.1.3 Best Effort.........................................................................................................................13
5.1.3.1 Classifying and Marking............................................................................13
5.1.3.2 Metering .....................................................................................13
5.1.3.3 Shaping and Dropping .............................................................................13
5.2 Simulation........................................................................................................................13
5.2.1 Simulation using PQ scheduler....................................................................................14
5.2.2 Simulation using LLQ scheduler...................................................................................15
5.2.3 Simulation using WFQ scheduler.................................................................................17
5.2.4 Simulation using SCFQ scheduler ...............................................................................18
6 Problems that we overcame.......................................................................................................21
7 Conclusions .....................................................................................................................................21
8 References ......................................................................................................................................22

RMIT University © 2006 2 of 22


School of Electrical and Computer Engineering Melbourne, 2nd October, 2006

Table of Figures

Figure 1: Topology used in the Differentiated Services Simulation ................................................5


Figure 2: IPv4 header..............................................................................................................................6
Figure 3: IPv6 header..............................................................................................................................6
Figure 4: Traffic Conditioner in Differentiated Services ....................................................................7
Figure 5: Nam output resulting from simulation ...............................................................................13
Figure 6: Class Rate - PQ ......................................................................................................................14
Figure 7: Packet Loss - PQ ....................................................................................................................14
Figure 8: Queue Length - PQ...............................................................................................................14
Figure 9: Service Rate - PQ ..................................................................................................................14
Figure 10: Avg One-Way Delay for EF - PQ......................................................................................15
Figure 11: Virtual Queue Length - PQ ................................................................................................15
Figure 12: EF IPDV - PQ..........................................................................................................................15
Figure 13: Goodput (Telnet and FTP) - PQ ........................................................................................15
Figure 14: Class Rate – LLQ ..................................................................................................................16
Figure 15: Packet Loss - LLQ.................................................................................................................16
Figure 16: Queue Length - LLQ ...........................................................................................................16
Figure 17: Virtual Queue Length - LLQ...............................................................................................16
Figure 18: Service Rate - LLQ ...............................................................................................................16
Figure 19: EF IPDV - LLQ ........................................................................................................................16
Figure 20: Avg One-Way Delay for EF - LLQ.....................................................................................17
Figure 21: Goodput (Telnet and FTP) - LLQ .......................................................................................17
Figure 22: Class Rate – WFQ ................................................................................................................17
Figure 23: Packet Loss - WFQ...............................................................................................................17
Figure 24: Queue Length - WFQ .........................................................................................................17
Figure 25: Virtual Queue Length - WFQ.............................................................................................17
Figure 26: Service Rate - WFQ .............................................................................................................18
Figure 27: EF IPDV - WFQ ......................................................................................................................18
Figure 28: Avg One-Way Delay for EF - WFQ...................................................................................18
Figure 29: Goodput (Telnet and FTP) - WFQ .....................................................................................18
Figure 30: Class Rate - SCFQ................................................................................................................19
Figure 31: Packet Loss - SCFQ..............................................................................................................19
Figure 32: Queue Length - SCFQ ........................................................................................................19
Figure 33: Virtual Queue Length -SCFQ.............................................................................................19
Figure 34: Service Rate - SCFQ............................................................................................................19
Figure 35: EF IPDV - SCFQ .....................................................................................................................19
Figure 36: Avg One-Way Delay for EF - SCFQ .................................................................................20
Figure 37: Goodput (Telnet and FTP) - SCFQ....................................................................................20
Figure 38: Statistics - PQ........................................................................................................................20
Figure 39: Statistics - LLQ.......................................................................................................................20
Figure 40: Statistics - WFQ.....................................................................................................................20
Figure 41: Statistics – SCFQ ..................................................................................................................20
Figure 42: Statistics – WF2PQp .............................................................................................................21
Figure 43: EF IPDV - SFQ .......................................................................................................................21


NS Network Simulation and Differentiated Services Analysis

1 Aim

The aim of this lab is to investigate the impact of routing policy and traffic policing at the
edge and core routers of a common network and, above all, to understand the
Differentiated Services architecture. To achieve this aim, the behaviour of a network
topology is simulated in order to analyse how Quality of Service behaves under a variety
of Differentiated Services configurations.

2 Introduction

Nowadays the Internet is in fact one of the most important sources of information and of
communication between users around the world. The information sent through the
network is divided into packets which travel from one point to another with the same
treatment, i.e. without any differentiation between them. This is the basic Quality of
Service model that governs most networks today, called the Best Effort model. Under this
model all connections get the same treatment, with unpredictable delays and data loss,
and consequently it cannot support real-time applications.

To solve this issue the Internet Engineering Task Force (IETF) created the Differentiated
Services architecture, which deals with traffic management to provide scalable service
differentiation on the Internet. As stated in its opening paragraph:

“Differentiated services enhancements to the Internet protocol are intended to
enable scalable service discrimination in the Internet without the need for per-flow
state and signaling at every hop. A variety of services may be built from a small,
well-defined set of building blocks which are deployed in network nodes.”
(www.ietf.org, 2006/10/01).

To investigate the real effects of differentiated services nodes in a network, it is important
to compare the variety of options that the differentiated services enhancements provide.
For this, the network behaviour of different scenarios must be simulated, which is done
using NS (Network Simulator), a discrete event simulator targeted at networking research
(www.isi.edu, 2006/10/01).

The network investigated in this lab consists of 1 core router and 2 edge routers with 5 user
hosts clustered around them. Two applications run on this network: each user host
connects randomly to any other user host for peer-to-peer file serving, using FTP over TCP
as the underlying transport protocol, and each user host interacts with its local core router
for web access using UDP. We chose this architecture because, by definition, the
intelligence of the differentiated services architecture is located at the edge router of the
network (Jimenez and Altman, 2006), in our example the e1 router:


Figure 1: Topology used in the Differentiated Services Simulation

3 Differentiated Services

Differentiated Services (DiffServ) is an IP QoS architecture based on marking packets at
the edge of the network according to user requirements. According to these marks,
packets are treated differently at the network's nodes using different parameters.

According to RFC 2474 (www.ietf.org, 2006/10/01), the marking of packets is governed by
an SLS (Service Level Specification), which is a combination of an SLA (Service Level
Agreement, between a provider and a customer) and a TCA (Traffic Conditioning
Agreement, the rules applied to the traffic).

3.1 DiffServ Field Definition

As described above, each packet that enters a DS network needs to be marked
according to certain conditions; this mark is called the Differentiated Services Code Point
(DSCP) (Andreozzi, 2001). The field is shown in the following graphs:

0               4               8                               16
Version (4) | Header Length | Type of Service | Total Length
Identification | Flags | Fragment Offset
Time to Live | Protocol | Header Checksum
Source Address
Destination Address
Options | PAD

Figure 2: IPv4 header

0               4                               12
Version (6) | Traffic Class | Flow Label
Payload Length | Next Header | Hop Limit
Source Address
Destination Address

Figure 3: IPv6 header
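In both headers the DSCP occupies the upper six bits of the old ToS byte (IPv4) or Traffic Class byte (IPv6), so the EF code point 46 appears on the wire as 0xB8. A small sketch of the conversion (illustrative Python, not part of the lab scripts):

```python
# The DSCP is the upper 6 bits of the IPv4 ToS / IPv6 Traffic Class byte;
# the remaining 2 bits are left for ECN/unused.

def dscp_from_tos(tos_byte):
    return tos_byte >> 2   # drop the two low-order bits

def tos_from_dscp(dscp):
    return dscp << 2       # place the DSCP in the upper six bits

print(tos_from_dscp(46))      # 184 (0xB8): the EF code point on the wire
print(dscp_from_tos(0xB8))    # 46
```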

3.2 Traffic Classification

Traffic classification is done by the following mechanisms: classifier, meter, marker and
shaper/dropper (Rodriguez, Gatrell, Karas and Peschke, 2001).

3.2.1 Classifier

The main function of the classifier is to discriminate packets according to their header.
Two kinds of classifiers are defined:

3.2.2 Multi-Field Classifier (MF)

They are able to classify according to a combination of several fields, such as IP source
address, IP destination address, source port and destination port.

3.2.3 Behaviour Aggregate Classifier (BA)

This classifier is only able to discriminate packets according to the DS field in the IP
packet, as described in Figure 2 and Figure 3.

3.3 Meter


The function of the meter is to analyse whether the incoming packet fits one of the
internal profiles configured in the router. Some of these meters are:

• Average Rate Meter
• EWMA (Exponentially Weighted Moving Average) Meter
• TSW2CM (Time Sliding Window Two Colour Marker)
• TSW3CM (Time Sliding Window Three Colour Marker)
• TB (Token Bucket)
• srTCM (Single Rate Three Colour Marker)
• trTCM (Two Rate Three Colour Marker)
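A minimal sketch of the token-bucket (TB) idea in Python (illustrative values, not ns-2 code): a packet is in-profile while enough byte tokens, replenished at the committed rate, remain in the bucket.

```python
# Sketch of a token-bucket meter: tokens (in bytes) refill at the
# committed information rate and are capped at the burst size.

class TokenBucketMeter:
    def __init__(self, cir_bps, cbs_bytes):
        self.cir = cir_bps / 8.0   # committed information rate, bytes/s
        self.cbs = cbs_bytes       # committed burst size (bucket depth)
        self.tokens = cbs_bytes    # bucket starts full
        self.last = 0.0            # time of the previous arrival

    def meter(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.cbs, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return "in-profile"
        return "out-of-profile"

m = TokenBucketMeter(cir_bps=500_000, cbs_bytes=1300)
print(m.meter(0.0, 1300))    # in-profile: the bucket starts full
print(m.meter(0.001, 1300))  # out-of-profile: only ~62 tokens have returned
```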

3.4 Marker

The function of the marker is to set the DS field according to a pattern.

3.5 Shaper

The function of the shaper is to delay some or all of the incoming packets.

3.6 Dropper

The dropper discards packets that do not fit any profile inside the router. Packets are
discarded according to one of the following algorithms:

• RIO Coupled (RIO-C)
• RIO De-coupled (RIO-D)
• Weighted RED (WRED)
• Drop on threshold

The previous concepts can be seen in the following graph:

Figure 4: Traffic Conditioner in Differentiated Services


Once a packet has been conditioned it must be scheduled for transmission to the
following node in the network. Several schedulers are defined in the standard; the
following is a short list of some of them:

• RR (Round Robin)
• WRR (Weighted Round Robin)
• WIRR (Weighted Interleaved Round Robin)
• PQ (Priority Queuing)
• WFQ (Weighted Fair Queuing), also known as packet-by-packet fair sharing
• WF2Q+
• SCFQ (Self-Clocked Fair Queuing)
• SFQ (Start-time Fair Queuing)
• LLQ (Low Latency Queuing)
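To illustrate two of these disciplines, the following sketch (illustrative Python, not ns-2 code) contrasts PQ, which always serves the highest-priority non-empty queue, with WRR, which visits queues in proportion to their weights:

```python
# PQ vs WRR in miniature. Queue 0 is the highest priority.
from collections import deque

def pq_dequeue(queues):
    """Priority Queuing: serve the lowest-numbered non-empty queue."""
    for q in queues:
        if q:
            return q.popleft()
    return None

def wrr_order(weights):
    """Weighted Round Robin: one service round, weights[i] slots for queue i."""
    return [i for i, w in enumerate(weights) for _ in range(w)]

queues = [deque(["EF1"]), deque(["AF1", "AF2"]), deque(["BE1"])]
print(pq_dequeue(queues))   # "EF1" always leaves first under PQ
print(wrr_order([3, 2, 1])) # [0, 0, 0, 1, 1, 2]
```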

3.7 DiffServ Architecture

The DiffServ architecture has three major components:

• The policy, which is specified for each edge and core device through the Tcl scripts,
determines which traffic receives a particular level of service in the network. This may
depend on the behaviour of the source of the flow, e.g. its average rate and its
burstiness.
• Edge routers, and
• Core routers.

3.7.1 Edge Router Responsibilities

• Examining incoming packets and classifying them according to the policy specified by
the network administrator.
• Marking packets with a code point that reflects the desired level of service.
• Ensuring that user traffic adheres to its policy specifications, by shaping and policing
traffic.

3.7.2 Core Router Responsibilities

• Examining incoming packets for the code point (DSCP) marked on the packet by the
edge routers.
• Forwarding incoming packets according to their markings. (Core routers provide a
reaction to the marking done by edge routers.)

3.8 Multiple RED Routers

3.8.1 Introduction


In Multiple RED routers, the DiffServ architecture provides QoS by dividing traffic into
different categories, marking each packet with a code point that indicates its category,
and scheduling packets according to their code points. In an NS DiffServ network, at most
four classes of traffic are defined, each of which has three drop precedences, resulting in
different treatment of traffic within a single class.

In order to differentiate between packets belonging to the same class, three virtual queues
are implemented in each of the four queues, one for each drop precedence. A packet
with lower drop precedence is given better treatment.

Therefore, each of the 12 combinations of the four flow classes and the three internal
priority levels within a flow corresponds to a code point that a packet is given when
entering the network. In practice, however, not all queues and all priority groups need to
be implemented.

The three virtual RED buffers in each physical queue allow its behaviour to be enhanced
through one of three MRED modes (Jimenez and Altman, 2006):

• RIO-C (coupled): the probability of dropping a low-priority packet is based on the
weighted average lengths of all virtual queues, while the probability of dropping a
high-priority packet is based only on the weighted average length of its own virtual
queue. By default, the MRED mode is set to RIO-C.
• RIO-D (de-coupled): the probability of dropping each packet is based on the size of its
own virtual queue.
• WRED (Weighted RED): all probabilities are based on a single queue length.
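As a rough illustration of the difference between the three modes, the sketch below (illustrative Python; RIO-C is simplified to a plain cumulative sum rather than ns-2's exact weighted averaging) picks the queue length that would feed the RED drop test for a packet in virtual queue v:

```python
# Which queue length does the RED drop test see for a packet in
# virtual queue v? (v = 0 is the highest priority / lowest drop precedence.)

def mred_length(mode, virtual_lengths, v):
    if mode == "WRED":
        # all probabilities based on a single (total) queue length
        return sum(virtual_lengths)
    if mode == "RIO-D":
        # each packet is judged on its own virtual queue only
        return virtual_lengths[v]
    if mode == "RIO-C":
        # lower-priority packets also see the higher-priority virtual
        # queues, coupling the queues (simplified cumulative sum)
        return sum(virtual_lengths[: v + 1])
    raise ValueError(mode)

lengths = [4, 7, 2]                       # average lengths of queues 0..2
print(mred_length("WRED", lengths, 1))    # 13
print(mred_length("RIO-D", lengths, 1))   # 7
print(mred_length("RIO-C", lengths, 1))   # 11
```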

3.8.2 Multiple RED Parameters

To set the DS RED parameters on the link from Edge1 to the Core, we use the command:

$qE1C set numQueues_ m

where m can take values between 1 and 4, and $qE1C identifies the edge router's queue
object.

To specify the number of virtual queues (drop precedence levels) of a physical queue,
the following command is used in the queue and precedence level settings:

$qE1C setNumPrec 0 2

(queue 0, two levels of precedence)

RED parameters are then configured for one virtual queue using the following command
in the shaping/dropping settings:

$qE1C configQ $queueNum $virtualQueueNum $minTh $maxTh $maxP

It thus has 5 parameters:
1. the queue number,
2. the virtual queue number,
3. min_th,
4. max_th, and
5. max_p.

For example, “$dsredq configQ 0 1 10 20 0.10” specifies that physical queue 0 / virtual
queue 1 has a min_th value of 10 packets, a max_th value of 20 packets, and a max_p
value of 0.10.
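The drop probability implied by these three RED parameters can be sketched as follows (textbook RED curve; ns-2's actual implementation also maintains an exponentially weighted average queue length, which is omitted here):

```python
# RED drop test for one virtual queue: never drop below min_th,
# always drop above max_th, and ramp linearly up to max_p in between.

def red_drop_prob(avg_len, min_th, max_th, max_p):
    if avg_len < min_th:
        return 0.0
    if avg_len >= max_th:
        return 1.0
    return max_p * (avg_len - min_th) / (max_th - min_th)

# Parameters of the "configQ 0 1 10 20 0.10" example above:
print(red_drop_prob(5, 10, 20, 0.10))   # 0.0  (below min_th)
print(red_drop_prob(15, 10, 20, 0.10))  # 0.05 (halfway between thresholds)
print(red_drop_prob(25, 10, 20, 0.10))  # 1.0  (above max_th)
```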

For the mean packet size (in bytes), this command is used in the shaping/dropping settings:
$qE1C meanPktSize 1300


In addition, commands are available which allow us to choose the scheduling mode
between queues, such as:

$qE1C setSchedularMode SFQ
$qE1C addQueueWeight 0 3

The above pair of commands sets the scheduling mode to SFQ and then sets the weight
of queue 0 to 3. SFQ stands for Start-time Fair Queuing.

4 DiffServ Simulation Using NS

4.1.1 NS Architecture

A simulation is defined by an OTcl script. Running a simulation involves creating and


executing a file with a “.tcl” extension, such as “example.tcl.”

A Tcl ns script:

 Defines a network topology (including the nodes, links, and scheduling and routing
algorithms of a network).
 Defines a traffic pattern (for example, the start and stop time of an FTP session).
 Collects statistics and outputs the results of the simulation. Results are usually written
to files, including files for Nam, the Network Animator program that comes with the
full ns download.

For example, the statement $ns at 0.5 “$tcp start” is translated into an event: at 0.5
seconds into the simulation, start up the TCP source.
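Underneath, ns is a discrete-event simulator: events such as this one sit in a time-ordered queue and are fired in timestamp order. A minimal illustration of the idea (Python, not ns code; the event names are illustrative):

```python
# Discrete-event scheduling in miniature: a heap keyed by timestamp.
import heapq

events = []
for t, action in [(0.5, "start TCP source"),
                  (0.1, "start simulation"),
                  (4.5, "stop TCP source")]:
    heapq.heappush(events, (t, action))

# Events fire in time order regardless of insertion order.
while events:
    t, action = heapq.heappop(events)
    print(f"{t:.1f}s: {action}")
```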

4.1.2 DiffServ Support

The NS module has some limitations when the user needs to simulate differentiated services
behaviour (Andreozzi, 2001):

• It is not possible to mark traffic on a per-packet-type basis.
• It is not possible to define a meter for an aggregate of flows.
• It is not possible to drop out-of-profile traffic.

4.1.2.1 DiffServ Simulation Improvements

In our simulation example we use the improvements made by Sergio Andreozzi
(http://www.cnaf.infn.it/~andreozzi/, 2006/10/01). The following changes were applied to
improve the router modelling capabilities of the DiffServ module (Andreozzi, 2001):


• Schedulers: the targets are to speed up the addition of new schedulers, by
encapsulating the scheduler mechanism in its own class, and to increase the number
of available schedulers.
• Marker and meter: the target is to enable marking on a per-packet basis and to
decouple marking from metering.
• Dropper: the target is to enable dropping of out-of-profile traffic on a per-drop-
precedence-level basis.

Moreover, the following set of functionalities for measurement and performance analysis
were added:

• One-Way Delay (OWD): enables end-to-end one-way delay computation;
instantaneous, average, minimum and frequency-distributed OWD is provided for
UDP-based traffic.
• IP Packet Delay Variation (IPDV): enables delay-variation computation at a
destination node for packets belonging to the same micro-flow; instantaneous,
average, minimum and frequency-distributed IPDV is provided for UDP-based traffic.
• Queue length: enables queue-length checking at the script-language level, on a
per-queue and per-drop-precedence-level basis.
• Maximum burstiness for queue 0: enables checking of the maximum number of
enqueued packets for queue 0; this queue is typically used for priority traffic.
• Departure rate, on a per-queue basis and on a per-queue and per-drop-precedence-
level basis.
• Received packets, transmitted packets, early-dropped and late-dropped packets on
a DSCP basis, both as absolute and percentage values.
• TCP goodput, instantaneous and frequency-distributed TCP round-trip time, and
instantaneous and frequency-distributed TCP window size, all on a DSCP basis: these
enable computation of performance parameters for TCP-based traffic, to understand
the level of differentiation at an aggregate level.

4.1.2.2 Defining policies

• All flows having the same source and destination are subject to a common policy.
• A policy specifies at least two code points. The choice between them depends on
the comparison between the flow's target rate and its current sending rate, and
possibly on policy-dependent parameters (such as burstiness).
• The policy specifies the meter types used for measuring the relevant input-traffic
parameters. A packet arriving at the edge device causes the meter to update the
state variables corresponding to the flow, and the packet is then marked according
to the policy.
• The packet has an initial code point corresponding to the required service level; the
marking can result in downgrading the service level with respect to the initially
required one.
• A policy table is used in ns to store the policy type of each flow. Not all entries are
actually used. To update the policy table, the ''addPolicyEntry'' command is used.


• An example is:

$edgeQueue addPolicerEntry [$n1 id] [$n8 id] trTCM 10 200000 1000 300000 1000

Here we add a policy for the flow that originates at $n1 and ends at $n8. If the TSW
policers are used, the TSW window length can be added at the end; if not added, it is
taken to be 1 s by default.
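To illustrate what the trTCM policer named in this entry does, the following sketch implements a simplified colour-blind two-rate three-colour marker (after RFC 2698; illustrative Python, not ns-2 code — the parameter values mirror the cir/cbs and pir/pbs above, and the colours correspond to drop precedences):

```python
# trTCM sketch: a committed bucket (cir/cbs) and a peak bucket (pir/pbs).
# A packet that exceeds the peak rate is red, one that exceeds only the
# committed rate is yellow, and an in-profile packet is green.

class TrTCM:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.pir = cir / 8.0, pir / 8.0  # rates in bytes/s
        self.cbs, self.pbs = cbs, pbs              # bucket depths in bytes
        self.tc, self.tp = cbs, pbs                # both buckets start full
        self.last = 0.0

    def mark(self, now, size):
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + dt * self.cir)
        self.tp = min(self.pbs, self.tp + dt * self.pir)
        if size > self.tp:
            return "red"       # exceeds peak rate: highest drop precedence
        self.tp -= size
        if size > self.tc:
            return "yellow"    # exceeds committed rate
        self.tc -= size
        return "green"         # in profile

m = TrTCM(cir=200_000, cbs=1000, pir=300_000, pbs=1000)
print(m.mark(0.0, 800))   # green: both buckets are full
print(m.mark(0.0, 800))   # red: the peak bucket has only 200 tokens left
```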

5 Results

In the following simulations, these parameters are defined:

• BW between the s(x) nodes and the e1 router: 100 MB
• BW between the e1 node and the core: 100 kB
• EF (Expedited Forwarding) CIR (committed information rate): 500 kB
• EF (Expedited Forwarding) CBS (committed burst size): 1300 bytes
• Number of queues: 3

5.1 Types of traffic

5.1.1 Premium
Queue number: 0
Queue size: 50
Virtual queues: 2

5.1.1.1 Classifying and Marking


Any traffic coming from node s(0).
DSCP (DiffServ Code Point) = 46.

5.1.1.2 Metering
Entry policy: Token Bucket 500000 1301.
Policer: Token Bucket.

5.1.1.3 Shaping/Dropping
DROP 0

5.1.2 Gold
Queue number: 1
Queue size: 150
Virtual queues: 3
AF11 for telnet
AF12, AF13 for ftp

5.1.2.1 Classifying and Marking


Any telnet traffic: DSCP (DiffServ Code Point) = 10
Any ftp traffic: DSCP = 12


5.1.2.2 Metering
telnet: Dumb policer (no policy for telnet).
ftp: TSW2CM 500000 (500 kB); when ftp traffic exceeds this value, packets are dropped.

5.1.2.3 Shaping and Dropping


RIO-C

5.1.3 Best Effort

Queue Number 2
Queue Size = 100
Virtual queues = 2

5.1.3.1 Classifying and Marking


No rules: all packets that do not fit any other profile are handled by the best-effort policy.

5.1.3.2 Metering
Entry policy: Token Bucket 500000 1301.
Policer: Token Bucket.

5.1.3.3 Shaping and Dropping


DROP 2
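The classification rules of section 5.1 can be summarised by a small sketch (illustrative Python; marking best-effort traffic with DSCP 0 is an assumption here, as unmatched traffic is simply left for the best-effort policy):

```python
# Classification rules of section 5.1: s(0) traffic is premium (EF),
# telnet is gold AF11, ftp is gold AF12, everything else is best effort.

def classify(src, app):
    if src == "s0":
        return 46   # premium: EF
    if app == "telnet":
        return 10   # gold: AF11
    if app == "ftp":
        return 12   # gold: AF12
    return 0        # best effort (DSCP 0 assumed for unmarked traffic)

print(classify("s0", "ftp"))      # 46: anything from s(0) is premium
print(classify("s3", "telnet"))   # 10
print(classify("s4", "other"))    # 0
```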

5.2 Simulation

The output of the simulation from nam is shown in the following graphic:

Figure 5: Nam output resulting from simulation.


The simulations were done varying the way the traffic is scheduled, i.e. the scheduling
algorithm was changed in each simulation. The following figures show the outputs for
each scheduler algorithm:

5.2.1 Simulation using PQ scheduler

Figure 6: Class Rate - PQ Figure 7: Packet Loss - PQ

Figure 8: Queue Length - PQ Figure 9: Service Rate - PQ


Figure 10: Avg One-Way Delay for EF - PQ Figure 11: Virtual Queue Length - PQ

Figure 12: EF IPVD - PQ Figure 13: Goodput (Telnet and FTP) - PQ

5.2.2 Simulation using LLQ scheduler


Figure 14: Class Rate – LLQ Figure 15: Packet Loss - LLQ

Figure 16: Queue Length - LLQ Figure 17: Virtual Queue Length - LLQ

Figure 18: Service Rate - LLQ Figure 19: EF IPVD - LLQ


Figure 20: Avg One-Way Delay for EF - LLQ Figure 21: Goodput (Telnet and FTP) - LLQ

5.2.3 Simulation using WFQ scheduler

Figure 22: Class Rate – WFQ Figure 23: Packet Loss - WFQ

Figure 24: Queue Length - WFQ Figure 25: Virtual Queue Length - WFQ


Figure 26: Service Rate - WFQ Figure 27: EF IPVD -WFQ

Figure 28: Avg One-Way Delay for EF - WFQ Figure 29: Goodput (Telnet and FTP) - WFQ

5.2.4 Simulation using SCFQ scheduler


Figure 30: Class Rate - SCFQ Figure 31: Packet Loss - SCFQ

Figure 32: Queue Length - SCFQ Figure 33: Virtual Queue Length -SCFQ

Figure 34: Service Rate - SCFQ Figure 35: EF IPVD - SCFQ


Figure 36: Avg One-Way Delay for EF - SCFQ Figure 37: Goodput (Telnet and FTP) - SCFQ

5.2.5 Statistics

Packets Statistics Packets Statistics


======================================= =======================================
CP TotPkts TxPkts ldrops edrops CP TotPkts TxPkts ldrops edrops
-- ------- ------ ------ ------ -- ------- ------ ------ ------
0 47522 0.21% 99.79% 0.00% 0 46717 28.22% 71.78% 0.00%
10 1097 100.00% 0.00% 0.00% 10 1132 100.00% 0.00% 0.00%
12 5439 91.29% 0.00% 8.71% 12 2745 89.14% 0.00% 10.86%
14 139 58.99% 0.00% 41.01% 14 31 93.55% 0.00% 6.45%
46 2816 100.00% 0.00% 0.00% 46 2802 100.00% 0.00% 0.00%
50 24614 0.00% 0.00% 100.00% 50 25277 0.00% 0.00% 100.00%
---------------------------------------- ----------------------------------------
All 81627 11.10% 58.10% 30.80% All 78704 24.90% 42.61% 32.50%

Figure 38: Statistics - PQ Figure 39: Statistics - LLQ

Packets Statistics Packets Statistics


======================================= =======================================
CP TotPkts TxPkts ldrops edrops CP TotPkts TxPkts ldrops edrops
-- ------- ------ ------ ------ -- ------- ------ ------ ------
0 48279 59.89% 40.11% 0.00% 0 46881 40.07% 59.93% 0.00%
10 1171 100.00% 0.00% 0.00% 10 1130 100.00% 0.00% 0.00%
12 2493 89.09% 0.00% 10.91% 12 4085 90.33% 0.00% 9.67%
14 10 100.00% 0.00% 0.00% 14 17 100.00% 0.00% 0.00%
46 2823 41.27% 0.00% 58.73% 46 2835 41.41% 0.00% 58.59%
50 24048 0.00% 0.00% 100.00% 50 25460 0.00% 0.00% 100.00%
---------------------------------------- ----------------------------------------
All 78824 42.47% 24.57% 32.96% All 80408 30.84% 34.94% 34.22%

Figure 40: Statistics - WFQ Figure 41: Statistics – SCFQ


Packets Statistics Packets Statistics


======================================= =======================================
CP TotPkts TxPkts ldrops edrops CP TotPkts TxPkts ldrops edrops
-- ------- ------ ------ ------ -- ------- ------ ------ ------
0 47946 57.99% 42.01% 0.00% 0 47593 59.45% 40.55% 0.00%
10 1204 100.00% 0.00% 0.00% 10 1094 100.00% 0.00% 0.00%
12 2495 88.82% 0.00% 11.18% 12 2601 89.39% 0.00% 10.61%
14 17 100.00% 0.00% 0.00% 14 10 100.00% 0.00% 0.00%
46 2829 41.22% 0.00% 58.78% 46 2819 41.26% 0.00% 58.74%
50 24143 0.00% 0.00% 100.00% 50 24474 0.00% 0.00% 100.00%
---------------------------------------- ----------------------------------------
All 78634 41.21% 25.62% 33.17% All 78591 41.85% 24.55% 33.60%

Figure 42: Statistics – WF2Q+ Figure 43: Statistics – SFQ

6 Problems that we overcame

The main problem we encountered in this lab was the lack of a complete NS manual. Mastering the NS software requires a considerable amount of time spent in trial and error.

The Internet proved to be a very valuable source of information.

7 Conclusions

The way traffic is treated in the network is directly affected by the chosen scheduler. As the simulations show, when PQ was chosen the service rate for EF traffic did not seem to be affected: 100% of the EF packets were passed from e1 to the core. In contrast, when the SCFQ scheduler was chosen, the proportion of EF traffic dropped by the network rose dramatically to 58.59%.

Another effect of changing the scheduler is the queue-length behaviour over the whole simulation. With the PQ scheduler the EF queue remained almost empty, but with SCFQ the queue length rose dramatically to almost 30 packets.

In our simulations it could be seen that the number of packets successfully delivered was largely independent of the CIR (Committed Information Rate).

Another important parameter that affects packet dropping is the CBS (Committed Burst Size). Traffic arriving in a burst larger than this value exhausts the token bucket, and the excess packets are dropped by the policer. This parameter can be useful when the network administrator faces tight bandwidth constraints.

The queue length also affects the probability of dropping packets; it is therefore important to keep the queue length small for every data flow.


8 References

Andreozzi S, 2001, DiffServ Simulations Using the Network Simulator: Requirements, Issues and Solutions, Master's Thesis.
Carpenter B and Nichols K, 2002, Differentiated Services in the Internet, Proceedings of the IEEE, vol. 90, no. 9, Sept. 2002.
Altman E, 2006, Simulating DiffServ (Differentiated Services), Lecture Notes, January-February 2006, INRIA.
Pieda P, Ethridge J, Baines M and Shallwani F, 2000, A Network Simulator Differentiated Services Implementation, Open IP, Nortel Networks.
Stevens W R, 2001, TCP/IP Illustrated, Volume 1, Addison-Wesley Professional Computing Series, Indianapolis.
Rodriguez A, Gatrell J, Kara J and Peschke R, 2001, TCP/IP Tutorial and Technical Overview, ibm.com/redbooks.
Hao J, Puliu Y and Delin X, 2003, A Dynamic-Weight RED Gateway, Wuhan University, Hubei.
Sahu S, Towsley D and Kurose J, 1999, A Quantitative Study of Differentiated Services for the Internet, Global Telecommunications Conference, Globecom '99, Massachusetts.

