
Bandwidth Allocation for a Distributed Algorithm

Abstract

Service prioritization among different traffic classes is an important goal for the
Internet. Conventional approaches to solving this problem consider the existing best-
effort class as the low-priority class, and attempt to develop mechanisms that provide
“better-than-best-effort” service.

We explore the opposite approach, and devise a new distributed algorithm to


realize a low-priority service (as compared to the existing best effort) from the network
endpoints. To this end, we develop TCP Low Priority (TCP-LP), a distributed algorithm
whose goal is to utilize only the excess network bandwidth as compared to the “fair
share” of bandwidth as targeted by TCP.

The key mechanisms unique to TCP-LP congestion control are the use of one-way
packet delays for early congestion indications and a TCP-transparent congestion
avoidance policy. Our project demonstrates the following:

1) TCP-LP is largely non-intrusive to TCP traffic.

2) Both single and aggregate TCP-LP flows are able to successfully utilize excess
network bandwidth; moreover, multiple TCP-LP flows share excess bandwidth fairly.

3) Substantial amounts of excess bandwidth are available to the low-priority class, even
in the presence of “greedy” TCP flows.

4) Despite their low-priority nature, TCP-LP flows are able to utilize significant amounts
of available bandwidth in a wide-area network environment.
Synopsis

We devise TCP-LP (Low Priority), an end-point protocol that achieves two-class


service prioritization without any support from the network. The key observation is that
end-to-end differentiation can be achieved by having different end-host applications
employ different congestion control algorithms as dictated by their performance
objectives.

Since TCP is the dominant protocol for best-effort traffic, we design TCP-LP to
realize a low-priority service as compared to the existing best effort service. Namely, the
objective is for TCP-LP flows to utilize the bandwidth left unused by TCP flows in a
non-intrusive, or TCP-transparent, fashion.

Moreover, TCP-LP is a distributed algorithm that is realized as a sender-side


modification of the TCP protocol. One class of applications of TCP-LP is low-priority
file transfer over the Internet. For network clients on low-speed access links, TCP-LP
provides a mechanism to retain faster response times for interactive applications using
TCP, while simultaneously making progress on background file transfers using TCP-LP.
Similarly, in enterprise networks, TCP-LP enables large file backups to proceed without
impeding interactive applications, a functionality that would otherwise require a multi-
priority or separate network. In contrast, TCP-LP allows low-priority applications to use
all excess capacity while also remaining transparent to TCP flows.

A second class of applications of TCP-LP is inference of available bandwidth for


network monitoring, end-point admission control, and performance optimization. Current
techniques estimate available bandwidth by making statistical inferences on
measurements of the delay or loss characteristics of a sequence of transmitted probe
packets.

In contrast, TCP-LP is algorithmic with the goal of transmitting at the rate of the
available bandwidth. Consequently, competing TCP-LP flows obtain their fair share of
the available bandwidth, as opposed to probing flows which infer the total available
bandwidth, overestimating the fraction actually available individually when many flows
are simultaneously probing. Moreover, as the available bandwidth changes over time,
TCP-LP provides a mechanism to continuously adapt to changing network conditions.
ANALYSIS

TRANSMISSION CONTROL PROTOCOL (TCP)

In the Internet protocol suite, TCP is the intermediate layer between the Internet
Protocol (IP) below it and an application above it. Applications often need reliable, pipe-
like connections to each other, whereas the Internet Protocol provides no such
streams, only unreliable datagrams. TCP performs the task of the transport layer in the
simplified OSI model of computer networks. The Transmission Control Protocol (TCP) is
one of the core protocols of the Internet Protocol suite.

Using TCP, applications on networked hosts can create connections to one another and
obtain reliable, in-order delivery of data from sender to receiver. TCP also distinguishes
data for multiple connections from concurrent applications (e.g., a web server and an
e-mail server) running on the same host. TCP supports many of the Internet’s most
popular application protocols and resulting applications, including the World Wide Web,
e-mail, and Secure Shell.

DATA TRANSFER IN TCP

Applications send streams of octets (8-bit bytes) to TCP for delivery through the
network, and TCP divides the byte stream into appropriately sized segments, usually
delineated by the maximum transmission unit (MTU) size of the data link layer of the
network to which the computer is attached. TCP then passes the resulting packets to the
Internet Protocol for delivery through the network to the TCP module of the entity at the
other end. TCP checks that no packets are lost by giving each packet a sequence number,
which is also used to ensure that the data are delivered to the entity at the other end in
the correct order. The TCP module at the far end sends back an acknowledgement for
packets that have been successfully received; a timer at the sending TCP causes a
timeout if an acknowledgement is not received within a reasonable round-trip time
(RTT), and the data are then re-transmitted. TCP checks that no bytes are damaged by
using a checksum; one is computed at the sender for each block of data before it is sent,
and checked at the receiver.
CONGESTION CONTROL

The final part of TCP is congestion throttling. Acknowledgements for data sent,
or the lack of them, are used by senders to implicitly interpret network
conditions between the TCP sender and receiver. Coupled with timers, TCP senders
and receivers can alter the behavior of the flow of data. This is generally referred to as
flow control, congestion control, and/or network congestion avoidance. TCP uses a
number of mechanisms to achieve high performance and avoid congesting the network.
Enhancing TCP to reliably handle loss, minimize errors, manage congestion and go fast
in very high-speed environments are ongoing areas of research and standards
developments.

PRIOR WORK

“TCP Vegas: End to End Congestion Avoidance on a Global Internet” was
proposed by Lawrence S. Brakmo [2]. One of TCP’s strengths lies in its adaptive
retransmission and congestion control mechanism. In TCP Vegas, Brakmo
attempted to go beyond this earlier work, to provide new insights into congestion
control, and to propose modifications to the implementation of TCP that exploit these
insights. Vegas is an implementation of TCP that achieves between 37% and 71% better
throughput on the Internet, with one-fifth to one-half the losses, as compared to the
implementation of TCP in the Reno distribution of BSD UNIX. Vegas does not involve
any changes to the TCP specification; it is merely an alternative implementation that
interoperates with any other valid implementation of TCP. TCP Vegas uses delay-
based congestion control in an effort to increase TCP throughput through a reduced
number of packet losses and timeouts and a reduced level of congestion over the path.

LIMITATIONS:

TCP/Vegas can improve TCP throughput over the Internet by avoiding packet
loss. However, these studies were based on Internet paths that existed in the early 1990s,
which generally involved at least one T1-speed link and consequently allowed any given
flow to consume a significant fraction of the available bandwidth. The studies also did
not isolate the impact of the congestion avoidance algorithm (i.e., CAM) from the
enhanced loss recovery mechanism.
In order to overcome the drawbacks of TCP Vegas, “The Incremental
Deployability of RTT-Based Congestion Avoidance for High Speed TCP Internet
Connections” [4] by Jim Martin, Arne Nilsson, and Injong Rhee was proposed.
It focuses on end-to-end congestion avoidance algorithms that use round-trip time
(RTT) fluctuations as an indicator of the level of congestion.

These algorithms are referred to as delay-based congestion avoidance, or DCA.
Due to the economics associated with deploying change within an existing network, the
authors were interested in an incrementally deployable enhancement to the TCP/Reno
protocol. TCP/Vegas, which is a DCA algorithm, has been proposed as such an
incremental enhancement. Requiring relatively minor modifications to a TCP sender,
TCP/Vegas was shown to increase end-to-end TCP throughput primarily by avoiding
packet loss.

They studied DCA in today’s best-effort Internet, where IP switches are subject to
thousands of TCP flows, resulting in congestion with time scales that span orders of
magnitude. Their results suggested that RTT-based congestion avoidance may not be
reliably incrementally deployed in this environment. Through extensive measurement
and simulation, it was found that when TCP/DCA (i.e., a TCP/Reno sender that is
extended with DCA) is deployed over a high-speed Internet path, the flow generally
experiences degraded throughput compared to an unmodified TCP/Reno flow. The study
showed that the congestion information contained in RTT samples is not sufficient to
reliably predict packet loss, and that the congestion reaction by a DCA flow, assuming
that the flow consumes a small fraction of the resources at the bottleneck, has minimal
impact on the congestion level over the path when the total DCA traffic at the bottleneck
consumes less than 10% of the bottleneck bandwidth.

LIMITATIONS:

1. The measurements represent a small sample of Internet dynamics.

2. The throughput analysis assumes that the analytic throughput model
used is accurate (at least to some degree).

3. Finally, the simulation models, especially the design of the background
traffic, rely in part on conjecture. The simulation analysis of
TCP/DCA and TCP/Vegas is more dependent on the end-to-end
characteristics of the path models, which the authors were able to validate to some
degree. Several of the other results, however, derived by looking within the simulated
network (e.g., queue levels), are more dependent on the accuracy of the models.

In the last decade a large body of work has been devoted to providing quality of
service to individual real-time flows. Admission control is the common element of these
Integrated Services (IntServ) architectures; that is, flows must request service from the
network and are accepted (or rejected) depending on the level of available resources.
Typically this involves a signaling mechanism to carry the reservation request to
all the routers along the path. While such architectures provide excellent quality of
service, they have significant scalability problems. In “Endpoint Admission Control:
Architectural Issues and Performance” [5], Lee Breslau, Edward W. Knightly, and Scott
Shenker examined the traditional approach to implementing admission control, as
exemplified by the Integrated Services proposal in the IETF, which uses a signaling
protocol to establish reservations at all routers along the path.

While providing excellent quality-of-service, this approach has limited scalability


because it requires routers to keep per-flow state and to process per-flow reservation
messages. In an attempt to implement admission control without these scalability
problems, several recent papers have proposed various forms of endpoint admission
control. In these designs, the hosts (the endpoints) probe the network to detect the level
of congestion; the host admits the flow only if the detected level of congestion is
sufficiently low. This paper was devoted to the study of endpoint admission control. They
first considered several architectural issues that guide (and constrain) the design of such
systems. Then they used simulations to evaluate the performance of endpoint admission
control in various settings. The modest performance degradation between traditional
router-based admission control and endpoint admission control suggests that a real-time
service based on endpoint probing may be viable.

LIMITATIONS:
1. Endpoint admission control certainly has its flaws. The
set-up delay is substantial, on the order of seconds, which may limit
its appeal for certain applications.
2. The utilization and loss rate can degrade somewhat under sufficiently high loads, even
with slow-start probing.
3. The quality of service is not predictable across settings.
While these performance problems are not insignificant, there are two far greater barriers
to adoption.
First, as of yet there is no proposed mechanism to enforce the uniformity
of the admission thresholds, or even to enforce the use of admission control at all in this
service class. That is, users could send packets with the appropriate admission-control DS
field without using admission control. A similar problem is faced by our current best-
effort congestion control paradigm, where users can currently send best-effort traffic
without using any congestion control. The authors contended that the real complexity
of out-of-band marking was the virtual queue, as one could easily achieve exactly the
same results doing out-of-band virtual dropping instead of out-of-band marking; this is
equivalent to using a threshold of 1, which is why they related it to the problem of
setting the thresholds uniformly.

Second, we must continue to explore how one could deploy endpoint
admission control incrementally. The ability to estimate cross-traffic is key to the
development of a better understanding of Internet dynamics, and can potentially be used
in the design of bandwidth-efficient transport protocols and rate-based clocking
methodologies. In “MULTIFRACTAL CROSS-TRAFFIC ESTIMATION” [6], Vinay
Ribeiro, Mark Coates, and Rudolf Riedi developed a novel model-based technique, the
DELPHI algorithm, for inferring the instantaneous volume of competing cross-traffic
across an end-to-end path. By using only end-to-end measurements, Delphi avoids the
need for data collection within the Internet. Unique to the algorithm are an efficient
exponentially spaced probing packet train and a parsimonious multifractal parametric
model for the cross-traffic that captures its multiscale statistical properties and queuing
behavior.

ADVANTAGES:
The algorithm is adaptive: it requires no a priori traffic statistics and effectively tracks
changes in network conditions. Network-simulator experiments revealed that Delphi
gives accurate cross-traffic estimates at higher link utilization levels.

LIMITATIONS:

Larger queues imply smaller uncertainty in the estimated cross-traffic
volume. At low utilizations, errors in the traffic estimates prevented Delphi from
tracking the cross-traffic statistics.
The huge expansion of the Internet coupled with the emergence of new (in
particular, multimedia) applications pose challenging problems in terms of performance
and control of the network. These include the design of efficient congestion control and
recovery mechanisms, and the ability of the network to offer good Quality of Service
(QoS) to the users. In the current Internet, there is a single-class best-effort service,
which does not promise anything to the users in terms of performance guarantees.

Many distributed applications can make use of large background transfers of data
that humans are not waiting for to improve availability, reliability, latency or consistency.
However, given the rapid fluctuations of available network bandwidth and changing
resource costs due to technology trends, hand tuning the aggressiveness of background
transfers risks (1) complicating applications, (2) being too aggressive and interfering
with other applications, and (3) being too timid and not gaining the benefits of
background transfers.

Our goal is for the operating system to manage network resources in order to
provide a simple abstraction of near zero-cost background transfers. “TCP NICE: A
MECHANISM FOR BACKGROUND TRANSFERS” [9] by Arun Venkataramani, Ravi
Kokku, and Mike Dahlin provably bounds the interference inflicted by background
flows on foreground flows in a restricted network model. Their microbenchmarks and
case-study applications suggest that in practice it interferes little with foreground flows,
reaps a large fraction of spare network bandwidth, and simplifies application construction
and deployment. For example, in their prefetching case-study application, aggressive
prefetching improves demand performance by a factor of three when Nice manages
resources, whereas the same prefetching hurts demand performance by a factor of six
under standard network congestion control. Nice dramatically reduces the interference
inflicted by background flows on foreground flows. It does so by modifying TCP
congestion control to be more sensitive to congestion than traditional protocols such as
TCP Reno or TCP Vegas: it detects congestion earlier, reacts to it more aggressively, and
allows much smaller effective minimum congestion windows.
SYSTEM DESIGN

We devise TCP-LP (Low Priority), an end-point protocol that achieves two-
class service prioritization without any support from the network. The key observation is
that end-to-end differentiation can be achieved by having different end-host applications
employ different congestion control algorithms as dictated by their performance
objectives. Since TCP is the dominant protocol for best-effort traffic, we design TCP-LP
to realize a low-priority service as compared to the existing best effort service. Namely,
the objective is for TCP-LP flows to utilize the bandwidth left unused by TCP flows in a
non-intrusive, or TCP-transparent, fashion. Moreover, TCP-LP is a distributed algorithm
that is realized as a sender-side modification of the TCP protocol.

IMPLEMENTATION PLAN

First, we develop a reference model to formalize the two design objectives: TCP-
LP transparency to TCP, and (TCP-like) fairness among multiple TCP-LP flows
competing to share the excess bandwidth. The reference model consists of a two-level
hierarchical scheduler in which the first level provides TCP packets with strict priority
over TCP-LP packets and the second level provides fairness among micro-flows within
each class. TCP-LP aims to achieve this behavior in networks with non-differentiated
(first-come-first-served) service, as sketched below.
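To make the reference model concrete, the following minimal Java sketch (class and method names are ours, not from the TCP-LP design) drains the high-priority TCP queue before any TCP-LP packet is served; FIFO order within each queue stands in for per-flow fairness:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Sketch of the two-level reference scheduler: strict priority between
    // classes (TCP over TCP-LP); FIFO within each class is a simplified
    // stand-in for fairness among micro-flows.
    public class ReferenceScheduler {
        private final Queue<String> tcpQueue = new ArrayDeque<>();   // high priority
        private final Queue<String> tcpLpQueue = new ArrayDeque<>(); // low priority

        public void enqueueTcp(String packet)   { tcpQueue.add(packet); }
        public void enqueueTcpLp(String packet) { tcpLpQueue.add(packet); }

        // Serve the next packet: TCP-LP is served only when no TCP packet waits.
        public String dequeue() {
            if (!tcpQueue.isEmpty()) return tcpQueue.poll();
            return tcpLpQueue.poll(); // null when both queues are empty
        }
    }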

Next, to approximate the reference model from a distributed end-point protocol,


TCP-LP employs two new mechanisms. First, in order to provide TCP-transparent low-
priority service, TCP-LP flows must detect oncoming congestion prior to TCP flows.
Consequently, TCP-LP uses inferences from one-way packet delays as early indications
of network congestion, rather than the packet losses used by TCP. We develop a simple
analytical model to show that, due to the non-linear relationship between throughput and
round-trip time, TCP-LP can maintain TCP-transparency even if TCP-LP flows have
larger round-trip times than TCP flows. Moreover, a desirable consequence of early
congestion inference via one-way delay measurements is that it detects congestion
only on the forward path (from the source to the destination) and prevents false early
congestion indications from reverse cross-traffic. TCP-LP’s second mechanism is a novel
congestion avoidance policy with three objectives:
(1) quickly back off in the presence of congestion from TCP flows;
(2) quickly utilize the available excess bandwidth in the absence of sufficient TCP
traffic; and
(3) achieve fairness among TCP-LP flows. To achieve these objectives, TCP-LP’s
congestion avoidance policy modifies the additive-increase multiplicative-
decrease policy of TCP via the addition of an inference phase and the use of a
modified back-off policy.

ALGORITHM DESCRIPTION

TCP-LP is a low-priority congestion control protocol that uses the excess
bandwidth on an end-to-end path, versus the fair rate utilized by TCP. We first devise a
mechanism for early congestion indication via inferences from one-way packet delays.
Next, we present TCP-LP’s congestion avoidance policy, which exploits available
bandwidth while being sensitive to early congestion indicators. Then we develop a
simple queueing model to study the feasibility of TCP-transparent congestion control
under heterogeneous round-trip times. Finally, we provide guidelines for TCP-LP
parameter settings.

EARLY CONGESTION INDICATION

To achieve low-priority service in the presence of TCP traffic, it is necessary for
TCP-LP to infer congestion earlier than TCP. In principle, the network could provide
such early congestion indicators. For example, TCP-LP flows could use a type-of-service
bit to indicate low priority, and routers could use Explicit Congestion Notification (ECN)
messages to inform TCP-LP flows of lesser congestion levels than TCP flows. However,
given the absence of such network support, we devise an endpoint realization of this
functionality by using packet delays as early indicators for TCP-LP, as compared to the
packet drops used by TCP. In this way, TCP-LP and TCP implicitly coordinate in a
distributed manner to provide the desired priority levels.

DELAY THRESHOLD:

TCP-LP measures one-way packet delays and employs a simple delay threshold-
based method for early inference of congestion. Denote di as the one-way delay of the
packet with sequence number i, and dmin and dmax as the minimum and maximum
one-way packet delays experienced throughout the connection’s lifetime. (As UDP flows
are non-responsive, they would also be considered high priority and multiplexed with the
TCP flows.) Thus, dmin is an estimate of the one-way propagation delay and
dmax - dmin is an estimate of the maximum queueing delay. Next, denote γ as the
delay smoothing parameter and sdi as the smoothed one-way delay, computed as

    sdi = (1 - γ) sd(i-1) + γ di.

An early indication of congestion is inferred by a TCP-LP flow whenever the smoothed
one-way delay exceeds a threshold within the range of the minimum and maximum
delay:

    sdi > dmin + (dmax - dmin) δ,

where 0 < δ < 1 is the threshold parameter.
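A minimal Java sketch of this detector follows; the class name is ours, and the gamma and delta values a caller supplies are tunable parameters, not settings prescribed by TCP-LP:

    // Sketch of TCP-LP's delay-threshold test: smooth each one-way delay
    // sample and flag early congestion when the smoothed delay crosses
    // dmin + (dmax - dmin) * delta.
    public class EarlyCongestionDetector {
        private final double gamma; // delay smoothing parameter
        private final double delta; // threshold parameter, 0 < delta < 1
        private double dMin = Double.POSITIVE_INFINITY;
        private double dMax = Double.NEGATIVE_INFINITY;
        private double smoothed = Double.NaN;

        public EarlyCongestionDetector(double gamma, double delta) {
            this.gamma = gamma;
            this.delta = delta;
        }

        // Feed one one-way delay sample d_i; true means early congestion.
        public boolean onDelaySample(double di) {
            dMin = Math.min(dMin, di);
            dMax = Math.max(dMax, di);
            smoothed = Double.isNaN(smoothed)
                    ? di
                    : (1 - gamma) * smoothed + gamma * di;
            return smoothed > dMin + (dMax - dMin) * delta;
        }
    }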

DELAY MEASUREMENT

TCP-LP obtains samples of one-way packet delays using the TCP timestamp
option. Each TCP packet carries two four-byte timestamp fields. A TCP-LP sender
timestamps one of these fields with its current clock value when it sends a data packet.
On the other side, the receiver echoes back this timestamp value and, in addition,
timestamps the ACK packet with its own current time. In this way, the TCP-LP sender
measures one-way packet delays. Note that the sender and receiver clocks do not have to
be synchronized, since we are only interested in the relative time difference. Moreover,
drift between the two clocks is not significant here, as resets of dmin and dmax on
timescales of minutes can be applied. Finally, we note that by using one-way packet
delay measurements instead of round-trip times, cross-traffic in the reverse direction does
not influence TCP-LP’s inference of early congestion. Minimum and maximum one-way
packet delays are initially estimated during the slow-start phase and are used after the
first packet loss, i.e., in the congestion avoidance phase.
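To make the measurement concrete, the sketch below (a hypothetical helper, continuing the detector sketched earlier) derives a relative one-way delay from the two timestamp fields; the unknown clock offset is constant per connection, so it cancels out of threshold comparisons on differences:

    // Relative one-way delay from the TCP timestamp fields (illustrative):
    // senderTs is the sender's clock when the data packet left; receiverTs
    // is the receiver's clock stamped into the returning ACK.
    public class OneWayDelay {
        static long relative(long senderTs, long receiverTs) {
            // Includes the constant sender/receiver clock offset, which
            // cancels in the dmin/dmax/threshold comparisons.
            return receiverTs - senderTs;
        }
    }

For example, each returning ACK would yield one sample: detector.onDelaySample(OneWayDelay.relative(tsSent, tsEchoed)).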
CONGESTION AVOIDANCE POLICY

Additive Increase Multiplicative Decrease (AIMD) is the dominant algorithm for


congestion avoidance and control in the Internet. The major goal of AIMD is to achieve
fairness and efficiency in allocating resources. In the context of packet networks, AIMD
attains its goal partially. We exploit here a property of AIMD-based data sources to share
common knowledge, yet in a distributed manner; we use this as our point of departure
to achieve better efficiency and faster convergence to fairness.

Our control model is based on the assumptions of the original AIMD algorithm; we show
that both efficiency and fairness of AIMD can be improved.
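For reference, here is a minimal sketch of the AIMD window update described above, using the classic TCP constants (increase of one segment per RTT, halving on congestion) purely for illustration:

    // Sketch of additive-increase multiplicative-decrease (AIMD).
    public class AimdWindow {
        private double cwnd = 1.0; // congestion window, in segments

        // Additive increase: roughly one segment per round-trip time.
        public void onRttWithoutCongestion() { cwnd += 1.0; }

        // Multiplicative decrease: halve the window on a congestion signal.
        public void onCongestion() { cwnd = Math.max(1.0, cwnd / 2.0); }

        public double window() { return cwnd; }
    }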

TCP Congestion Control


Fig. 1 shows a temporal view of the TCP/Reno congestion window behavior at different
stages, with points on the top indicating packet losses. Data transfer begins with the
slow-start phase, in which TCP increases its sending rate exponentially until it encounters
the first loss or the maximum window size. From this point on, TCP enters the congestion-
avoidance phase and uses an additive-increase multiplicative-decrease policy to adapt to
congestion. Losses are detected either via a timeout from non-receipt of an
acknowledgment or via receipt of duplicate ACKs. If loss occurs and fewer than three
duplicate ACKs are received, TCP reduces its congestion window to one segment and
waits for a retransmission timeout (RTO) period, after which the packet is resent. If
another timeout occurs before the packet is successfully retransmitted, TCP enters the
exponential-backoff phase and doubles the RTO until the packet is successfully
acknowledged. One objective of TCP congestion control is for each flow to transmit at
its fair rate at its bottleneck link.


OBJECTIVES

TCP-LP is an end-point algorithm that aims to emulate the functionality of the
reference scheduling model depicted in Figure 2. Consider for simplicity a scenario with
one TCP-LP and one TCP flow. The
reference strict priority scheduler serves TCP-LP packets only when there are
no TCP packets in the system. However, whenever TCP packets arrive, the
scheduler immediately begins service of higher priority TCP packets.
Similarly, after serving the last packet from the TCP class, the strict priority
scheduler immediately starts serving TCP-LP packets. Note that it is impossible
to exactly achieve this behavior from the network endpoints as TCP-LP operates
on timescales of round-trip times, while the reference scheduling model operates on time-
scales of packet transmission times. Thus, our goal is to develop a congestion control
policy that is able to approximate the desired dynamic behavior.

REACTING TO EARLY CONGESTION INDICATORS

TCP-LP must react quickly to early congestion indicators to achieve TCP-


transparency. However, simply decreasing the congestion window promptly to zero
packets after the receipt of an early congestion indication (as implied by the reference
scheduling model) unnecessarily inhibits the throughput of TCP-LP flows. This is
because a single early congestion indication cannot be considered as a reliable indication
of network congestion given the complex dynamics of cross traffic. On the other hand,
halving the congestion window of TCP-LP flows upon a congestion indication, as
recommended for ECN flows, would result in too slow a response to achieve TCP-
transparency. To compromise between the two extremes, TCP-LP employs the following
algorithm. After receipt of the initial early congestion indication, TCP-LP halves its
congestion window and enters an inference phase by starting an inference time-out timer.
During this inference period, TCP-LP only observes responses from the network, without
increasing its congestion window. If it receives another early congestion indication before
the inference timer expires, this indicates the activity of cross traffic, and TCP-LP
decreases its congestion window to one packet. Thus, with persistent congestion, it takes
two round-trip times for a TCP-LP flow to decrease its window to 1. Otherwise, after
expiration of the inference timer, TCP-LP enters the additive increase congestion
avoidance phase and increases its congestion

window by one per round-trip time (as with TCP flows in this phase). We observe
that, as with router-assisted early congestion indication, consecutive packets from the
same flow often experience a similar network congestion state. Consequently, as
suggested for ECN flows, TCP-LP reacts to a congestion indication event at most once
per round-trip time. Thus, in order to prevent TCP-LP from over-reacting to bursts of
congestion-indicated packets, TCP-LP ignores succeeding congestion indications if the
source has reacted to a previous delay-based congestion indication or to a dropped packet
within the last round-trip time. Finally, the minimum congestion window for TCP-LP
flows in the inference phase is set to 1. In this way, TCP-LP flows conservatively ensure
that an excess bandwidth of at least one packet per round-trip time is available before
probing for additional bandwidth.
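The reaction policy just described can be sketched as a small state machine; the sketch below is ours (timer handling reduced to timestamps, and the inference-period length left as a parameter), not the reference implementation:

    // Sketch of TCP-LP's reaction to early congestion indications: halve the
    // window and start an inference timer on the first indication; cut the
    // window to one packet on a second indication during the inference
    // period; resume additive increase after the timer expires; react at
    // most once per round-trip time.
    public class TcpLpPolicy {
        private double cwnd = 1.0;            // congestion window, in packets
        private long inferenceEndNanos = 0;   // end of the inference phase
        private long lastReactionNanos = -1;  // enforces one reaction per RTT
        private final long rttNanos;
        private final long inferenceNanos;

        public TcpLpPolicy(long rttNanos, long inferenceNanos) {
            this.rttNanos = rttNanos;
            this.inferenceNanos = inferenceNanos;
        }

        // Called when the delay threshold is crossed or a packet is dropped.
        public void onCongestionIndication(long now) {
            if (lastReactionNanos >= 0 && now - lastReactionNanos < rttNanos)
                return; // already reacted within the last round-trip time
            lastReactionNanos = now;
            if (now < inferenceEndNanos) {
                cwnd = 1.0; // second indication: persistent congestion
            } else {
                cwnd = Math.max(1.0, cwnd / 2.0); // first indication: halve
                inferenceEndNanos = now + inferenceNanos; // and observe
            }
        }

        // Called once per RTT when no congestion indication was received.
        public void onQuietRtt(long now) {
            if (now >= inferenceEndNanos) cwnd += 1.0; // additive increase
            // During the inference phase the window is held constant.
        }
    }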

PRESERVING TCP-TRANSPARENCY IN LARGE AGGREGATION


A key goal of TCP-LP is to achieve non-intrusiveness to TCP flows. Thus, TCP-
LP reduces its window size to one packet per RTT in the presence of TCP flows.
However, in scenarios with many TCP-LP flows, it becomes increasingly possible for
TCP-LP aggregates to impact TCP flows. For example, consider a scenario with a
hundred TCP-LP flows competing with TCP flows on a 10 Mb/s link. If the round-trip
time of the TCP-LP flows is 100 ms and the packet size is 1500 bytes, then this TCP-LP
aggregate utilizes 12% of the bandwidth, despite the fact that each flow sends only a
single packet per RTT. To mitigate this problem, TCP-LP decreases the packet size to
64 bytes whenever the window size drops below 5 packets. In this way, TCP-LP
significantly decreases its impact on TCP flows in high-aggregation regimes, yet is still
able to react quickly (after one RTT) to changes in congestion. In the above example, a
hundred TCP-LP flows would then utilize only 0.5% of the bandwidth in the presence of
TCP flows.
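The packet-size adaptation just described reduces to a one-line rule; the 5-packet cutoff and the 64/1500-byte sizes come from the text above, while the helper name is ours:

    // Shrink packets in high-aggregation regimes so that many one-packet-
    // per-RTT TCP-LP flows stay non-intrusive to TCP.
    public class PacketSizer {
        static final int FULL_SIZE = 1500; // bytes, normal MTU-sized segments
        static final int SMALL_SIZE = 64;  // bytes, high-aggregation regime

        static int packetSizeFor(double cwndPackets) {
            return (cwndPackets < 5) ? SMALL_SIZE : FULL_SIZE;
        }
    }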
[Figure: TCP-LP data flow from source to destination. The source selects a low-
priority file (e.g., e-mail) for TCP-LP and checks for congestion occurrence; on
congestion, the AIMD congestion avoidance policy is applied. TCP data transmission
proceeds at high priority while the background file transfer proceeds simultaneously
over TCP-LP.]
Module Description:

Early Congestion Indication.

This module is used to check whether congestion has occurred in the
network. This is achieved by sending a timestamp in the header of each TCP packet.
For each packet, we attach the information to be transferred along with the packet
header. Among these measurements, the timestamp is the key to identifying
congestion.

There are two approaches to identifying congestion: loss-based and delay-based.
The loss-based approach is used in TCP Reno; it detects congestion from the loss of
a packet during transmission.

The delay-based approach, by contrast, detects congestion without packet loss.
Instead, we take the round-trip queueing delay for the transmission and compare it
against the timestamp values defined in the header of each packet.

If the round-trip queueing delay is greater than the timestamp-based threshold, we
flag the congestion status.
Congestion Avoidance Policy.

There are a number of approaches to identifying congestion and preventing it; here we
consider AIMD (Additive Increase Multiplicative Decrease).


After detecting congestion at the endpoint, the AIMD algorithm is used to prevent it.
Under this policy, the packet length is decreased when congestion increases;
conversely, the packet size is increased when the congestion status is zero.
3. TCP-LP Data Transfer

Many applications require fast data transfer over high-speed and long-distance
networks. However, standard TCP fails to fully utilize the network capacity in such
networks due to its conservative congestion control (CC) algorithm. Some approaches
have been proposed to improve a connection’s throughput by adopting more aggressive
loss-based CC algorithms, which may severely decrease the throughput of regular TCP
flows sharing the network path. On the other hand, pure delay-based approaches may not
work well if they compete with loss-based flows.

Many distributed applications can make use of large background transfers of data
that humans are not waiting for, in order to improve service quality. For example, a broad
range of applications and services, such as data backup, prefetching, enterprise data
distribution, Internet content distribution, and peer-to-peer storage, can trade increased
network bandwidth consumption for improved service quality.

This is achieved by transferring the files on concurrent threads, which gives the
application the capability to transfer more than one file at a time. Here we transfer TCP
in one thread and TCP-LP in another thread, so TCP-LP occupies the unused bandwidth
left by the TCP transmission due to reverse-traffic delay.

Low-priority data is transferred through the unused bandwidth, as compared to the
“fair share” of bandwidth targeted by TCP. This module achieves both TCP and
TCP-LP communication; simultaneous transmission of TCP and TCP-LP utilizes the
maximum bandwidth, as sketched below.
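A minimal sketch of this two-thread arrangement follows; the host name, ports, and file names are placeholders, and a real TCP-LP sender would additionally pace the background thread by its congestion window:

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Sketch of the simultaneous-transfer module: the foreground (TCP) and
    // background (TCP-LP) transfers run in separate threads over separate
    // sockets to the same receiver.
    public class DualTransfer {
        static void send(String host, int port, String file) {
            try (Socket socket = new Socket(host, port);
                 OutputStream out = socket.getOutputStream()) {
                out.write(Files.readAllBytes(Paths.get(file)));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) {
            new Thread(() -> send("receiver", 9000, "interactive.dat")).start(); // TCP
            new Thread(() -> send("receiver", 9001, "backup.dat")).start();      // TCP-LP
        }
    }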
4. TCP-LP Performance Evaluation against TCP

For each new technique we propose, there is a comparison against the
existing system and its defects. Here we compare the two services, TCP
and TCP-LP, in terms of bandwidth and throughput. The bandwidth of the TCP and
TCP-LP transmissions is measured and the two are compared.

We show the output in a chart representation that compares the two services for
high priority and low priority.

The following requirements are determined:

• Bandwidth utilization is calculated for both TCP and TCP-LP.

• Bandwidth utilization of TCP is plotted in a graph.
• Bandwidth utilization of TCP-LP is plotted against the TCP share.
Software Description

WHAT IS JAVA?

Java is two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

Simple                 Architecture-neutral
Object-oriented        Portable
Distributed            High-performance
Interpreted            Multithreaded
Robust                 Dynamic
Secure

Java is also unusual in that each Java program is both compiled and interpreted.
With a compiler, you translate a Java program into an intermediate language called Java
bytecodes: platform-independent code that is then run on the computer.

Compilation happens just once; interpretation occurs each time the program is
executed. The figure illustrates how this works.
[Figure: a Java program is compiled once into bytecodes, which the Java interpreter
then runs each time the program executes.]

You can think of Java bytecodes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it’s a Java development
tool or a Web browser that can run Java applets, is an implementation of the Java VM.
The Java VM can also be implemented in hardware.

Java bytecodes help make “write once, run anywhere” possible. You can compile
your Java program into bytecodes on any platform that has a Java compiler. The
bytecodes can then be run on any implementation of the Java VM. For example, the
same Java program can run on Windows NT, Solaris, and Macintosh.

JAVA PLATFORM

A platform is the hardware or software environment in which a program
runs. The Java platform differs from most other platforms in that it’s a software-only
platform that runs on top of other, hardware-based platforms. Most other platforms
are described as a combination of hardware and operating system.
The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java
platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components


that provide many useful capabilities, such as graphical user interface (GUI) widgets.

The Java API is grouped into libraries (packages) of related
components. The next section, “What Can Java Do?”, highlights each area of
functionality provided by the packages in the Java API.

The following figure depicts a Java program, such as an application or
applet, that’s running on the Java platform. A special kind of application known
as a server serves and supports clients on a network. Examples of servers include
Web servers, proxy servers, mail servers, print servers, and boot servers. Another
specialized program is a servlet. Servlets are similar to applets in that they are
runtime extensions of applications. Instead of working in browsers, though, servlets
run within Java Web servers, configuring or tailoring the server.

How does the Java API support all of these kinds of programs? With
packages of software components that provide a wide range of functionality. The core
API is the API included in every full implementation of the platform.

The core API gives you the following features:

The Essentials: Objects, strings, threads, numbers, input and output, data structures,
system properties, date and time, and so on.

Applets: The set of conventions used by Java applets.

Networking: URLs, TCP and UDP sockets, and IP addresses.

Internationalization: Help for writing programs that can be localized for users
worldwide. Programs can automatically adapt to specific locales and be
displayed in the appropriate language.

JAVA PROGRAM

• Java Program

• Java API

• Java Virtual Machine

• Hardware

The API and Virtual Machine insulate the Java program from hardware
dependencies. As a platform-independent environment, Java can be a bit slower than
native code. However, smart compilers, well-tuned interpreters, and just-in-time
bytecode compilers can bring Java’s performance close to that of native code without
threatening portability.

WHAT CAN JAVA DO?

However, Java is not just for writing cute, entertaining applets for
the World Wide Web (WWW). Java is a general-purpose, high-level programming
language and a powerful software platform. Using the rich Java API, you can write
many types of programs.

The most common types of programs are probably applets and
applications, where a Java application is a standalone program that runs directly on the
Java platform.

Security:

Both low-level and high-level, including electronic signatures,
public/private key management, access control, and certificates.
Networking

Introduction

This article is about a client/server multi-threaded socket class. The
thread is optional, since the developer is still responsible for deciding whether to use
it. There are other socket classes here and elsewhere on the Internet, but
none of them can provide feedback (event detection) to your application
the way this one does. It provides you with the following event detection:
connection established, connection dropped, connection failed, and data
reception (including 0-byte packets).

Description

This article presents a new socket class which supports both TCP and
UDP communication, and which provides some advantages compared to other
classes that you may find here or in other socket programming
articles. First of all, this class doesn’t have limitations such as the need to
provide a window handle to be used. That limitation is bad if all you want is
a simple console application, so this library doesn’t have it.
It also provides threading support automatically for you, handling the
socket connection and disconnection to a peer. It also features some options
not yet found in other socket classes. It supports both
client and server sockets. A server socket can be referred to as a socket that
can accept many connections; a client socket is a socket that is
connected to a server socket. You may still use this class to communicate
between two applications without establishing a connection; in the latter
case, you will want to create two UDP server sockets (one for each
application). This class also helps reduce the coding needed to create chat-like
applications and IPC (inter-process communication) between two or more
applications (processes). Reliable communication between two peers is also
supported with TCP/IP, with error handling. You may want to use the smart
addressing operation to control the destination of the data being transmitted
(UDP only). The TCP operation of this class deals only with communication
between two peers.
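As a concrete illustration of the server-socket behavior described above (one listening socket accepting many connections, each handled in its own thread), here is a minimal Java sketch; the port number and echo behavior are illustrative only:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal threaded TCP echo server: the server socket accepts many
    // client connections and serves each on its own thread.
    public class ThreadedServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket client = server.accept(); // one socket per connection
                    new Thread(() -> {
                        try (InputStream in = client.getInputStream();
                             OutputStream out = client.getOutputStream()) {
                            byte[] buffer = new byte[1024];
                            int n;
                            while ((n = in.read(buffer)) != -1) out.write(buffer, 0, n);
                        } catch (Exception ignored) {
                            // connection dropped; thread exits
                        }
                    }).start();
                }
            }
        }
    }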

Analysis of Network

TCP/IP stack

The TCP/IP stack is shorter than the OSI one.


TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a
connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It
considers each datagram independently of the others. Any association between
datagrams must be supplied by the higher layers. The IP layer supplies a
checksum that includes its own header. The header includes the source and
destination addresses. The IP layer handles routing through an Internet. It is also
responsible for breaking up large datagrams into smaller ones for transmission
and reassembling them at the other end.
UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum


for the contents of the datagram and port numbers. These are used to give a
client/server model - see later.

TCP

TCP supplies logic to give a reliable connection-oriented protocol above IP.


It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an
address scheme for machines so that they can be located. The address is a 32-bit
integer which gives the IP address. This encodes a network ID and further
addressing. The network ID falls into various classes according to the size of the
network address.

Network address

Class A uses 8 bits for the network address, with 24 bits left over for other
addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network
addressing, and class D uses all 32 bits.
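These classes correspond to the leading bits of the first octet; a small Java helper (ours, for illustration, assuming a first octet in the range 0–255) makes the mapping explicit:

    // Classful category from the first octet of a dotted-quad address.
    public class AddressClass {
        static char classOf(int firstOctet) {
            if (firstOctet < 128) return 'A'; // leading bit 0: 8-bit network ID
            if (firstOctet < 192) return 'B'; // leading bits 10: 16-bit network ID
            if (firstOctet < 224) return 'C'; // leading bits 110: 24-bit network ID
            return 'D';                       // remaining addresses: class D
        }
    }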
Subnet address

Internally, the UNIX network is divided into subnetworks. Building 11 is
currently on one subnetwork and uses 10-bit addressing, allowing 1024
different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places a
limit of 256 machines that can be on the subnet.

Total address

The 32 bit address is usually written as 4 integers separated by dots.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit


number. To send a message to a server, you send it to the port for that service of
the host that it is running on. This is not location transparency! Certain of these
ports are "well known".
Real Time Application

• TCP-LP transfers a low-priority file simultaneously with normal TCP flows.
• Background file transfer proceeds after reducing congestion at the endpoint.
• Large volumes of data are transmitted by utilizing the bandwidth left unused by TCP.
• The physical channel is enabled to increase the data transfer rate.

Conclusion
• TCP-LP achieves low-priority service without the support of the network.
• TCP-LP is largely non-intrusive to TCP traffic while, at the same time, TCP-LP
flows can successfully utilize a large portion of the excess network bandwidth.
• TCP-LP bandwidth utilization is increased together with TCP utilization.
• File transfer times of best-effort web traffic are significantly reduced when long-
lived bulk data transfers use TCP-LP rather than TCP.

Future Enhancement

There is a saying that nothing is a failure until you stop trying,
so in the near future the following will be completed:

This project focused on end-to-end congestion avoidance and low-priority data
transfer. In order to utilize the entire unused bandwidth, it can be extended
world-wide by covering every terminal in the network.
TCP LP DATA FLOW DIAGRAM

[Figure: TCP-LP data flow. The source’s high-priority TCP transmission passes
through the congestion avoidance policy (AIMD) into the network, while the low-
priority (LP) transmission proceeds to the destination network.]
Unit testing

Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit and before integration. This is structural testing that relies on knowledge
of the unit’s construction and is invasive. Unit tests perform basic tests at component
level and test a specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected results.
Tests, Scripts and Cases
Unit Tests

Intake – TCP-LP Main Form

Test No. | Test Case                                 | Expected Result                                              | Pass
1        | Leave the Destination field blank         | Message stating that the receiver host name must be entered | Yes
2        | Give an invalid remote machine IP address | Message stating that the receiver should be an active one   | Yes
3        | The text area is empty                    | Message stating that the data should not be null            | Yes
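The first test case could be expressed in code roughly as follows; the validator method and its message are hypothetical stand-ins for the project’s main-form logic, using JUnit-style assertions:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical sketch of test no. 1: a blank destination field must
    // produce the "receiver host name must be entered" message.
    public class MainFormTest {
        static String validateDestination(String host) {
            if (host == null || host.trim().isEmpty())
                return "Receiver host name must be entered";
            return null; // no error
        }

        @Test
        public void blankDestinationIsRejected() {
            assertEquals("Receiver host name must be entered",
                         validateDestination(""));
        }
    }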
Integration Testing

Integration tests are designed to test integrated software components to determine
whether they actually run as one program. Testing is event-driven and is more concerned
with the basic outcome of screens or fields. Integration tests demonstrate that although
the components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically
aimed at exposing the problems that arise from the combination of components.

Software integration testing is the incremental integration testing of two or more


integrated software components on a single platform to produce failures caused by
interface defects.

The task of the integration test is to check that components or software
applications (e.g., components in a software system or, one step up, software
applications at the company level) interact without error.

Test case

Individual modules are executed separately, and finally all modules are executed
together. This project was applied in the network and checked for the LP service. If the
remote system did not have the LP application, a message indicated that the LP
application must be enabled at both ends.

If the port number is already in use, an “address already bound” error is shown. To
avoid this problem, we must make sure that our application does not reuse a port number
that is already in use.