
TCP Traffic Control

HIGH SPEED NETWORKS


MCS10 302(A) L T P C 3 1 0 4

Module 2 : TCP and ATM Congestion Control (14)

TCP Flow Control, TCP Congestion Control, Retransmission Timer
Management, Exponential RTO Backoff, Karn's Algorithm, Window
Management, Performance of TCP over ATM. Traffic and Congestion Control
in ATM: Requirements, Attributes, Traffic Management Framework, Traffic
Control, ABR Traffic Management, ABR Rate Control, RM Cell Formats, ABR
Capacity Allocations, GFR Traffic Management.

TCP Flow Control
Like most transport protocols, TCP uses a form of sliding-window
mechanism for flow control.
It decouples the acknowledgement of received data units from the
granting of permission to send additional data units.
In TCP, one side can acknowledge incoming data without granting
permission to send additional data.
The flow control mechanism used by TCP is known as a credit
allocation scheme.
In this scheme, each individual octet of data that is transmitted is
considered to have a sequence number.

TCP Flow Control
In addition to data, each transmitted segment includes three header
fields related to flow control:
Sequence Number (SN), Acknowledgement Number (AN), and Window (W).
When a TCP entity sends a segment, it includes the sequence number of
the first octet in the segment data field.
A TCP entity acknowledges an incoming segment with a segment that
includes (AN = i, W = j), meaning:
All octets through i - 1 have been received and are acknowledged.
Permission is granted to send an additional window of j octets of
data, i.e. the j octets from i through i + j - 1.
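As a small illustration (the function name is ours, not from the notes), the octet range granted by an acknowledgement (AN = i, W = j) can be computed directly:

```python
def granted_range(an, window):
    """Octet sequence numbers a sender may transmit after receiving
    an acknowledgement segment with (AN = an, W = window).

    AN = an means all octets through an - 1 were received and
    acknowledged; W = window grants credit for `window` further
    octets, i.e. octets an through an + window - 1."""
    return an, an + window - 1

# Example: (AN = 1001, W = 1400) permits octets 1001 through 2400.
```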



TCP Credit Allocation Mechanism
Note: the trailing edge advances each time A sends data; the leading
edge advances only when B grants additional credit.
Effect of TCP Window Size on Performance
W = TCP window size (octets)
R = data rate (bps) at TCP source
D = propagation delay (seconds) between source and destination
After the TCP source begins transmitting, it takes D seconds for the
first bits to arrive and D seconds for the acknowledgement to return
(RTT = 2D).
Per round trip, the TCP source could transmit at most 2RD bits, or
RD/4 octets (bytes): RD/4 octets x 8 bits/octet = 2RD bits.
Maximum Normalized Throughput S

S = 1,       when W >= RD/4
S = 4W / RD, when W < RD/4
Where:
W = window size (octets)
R = data rate (bps) at TCP source
D = propagation delay (seconds) between TCP source and destination;
2D = RTT
Note: RD (bits) is known as the rate-delay product.
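The piecewise formula above can be evaluated directly; this sketch (the function name is ours) returns S for a given window, rate, and one-way delay:

```python
def normalized_throughput(w_octets, r_bps, d_seconds):
    """Maximum normalized throughput S.

    With RTT = 2D, a window of W octets (8W bits) per round trip
    against a capacity of 2RD bits gives S = 8W / 2RD = 4W / RD,
    capped at 1 once W >= RD/4 octets."""
    rd = r_bps * d_seconds          # rate-delay product, in bits
    return min(1.0, 4 * w_octets / rd)

# 1 Mbps, 10 ms one-way delay: RD = 10,000 bits, so a 1000-octet
# window achieves S = 0.4, while any window >= 2500 octets gives S = 1.
```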
Complicating Factors
Multiple TCP connections are multiplexed over the same network
interface, reducing the data rate R seen by each connection
(S decreases).
For multi-hop connections, D is the sum of the delays across each
network plus the delays at each router, increasing D (S decreases).
If the data rate R at the source exceeds the data rate on one of the
hops, that hop becomes a bottleneck (S decreases).
Lost segments are retransmitted, reducing throughput; the impact
depends on the retransmission policy (S decreases).
Retransmission Strategy
TCP relies exclusively on positive acknowledgements and
retransmission on acknowledgement timeout.
There is no explicit negative acknowledgement.
Retransmission is required when:
1. A segment arrives damaged, as indicated by a checksum error,
causing the receiver to discard it.
2. A segment fails to arrive (an implicit detection scheme).
TCP Timers
A timer is associated with each segment as it is sent.
If the timer expires before the segment is acknowledged, the sender
must retransmit.
Key design issue: the value of the retransmission timer.
Too small: many unnecessary retransmissions, wasting network
bandwidth.
Too large: delay in handling a lost segment.
Two Strategies
The timer should be longer than the round-trip delay (send segment,
receive ACK), but the round-trip delay is variable.

Strategies:
1. Fixed timer
2. Adaptive timer
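The adaptive strategy is the one TCP actually uses. As a hedged sketch (this follows Jacobson's estimator as standardized in RFC 6298, which these notes only allude to), each new RTT sample updates a smoothed mean and a smoothed variation, and the RTO is derived from both:

```python
def update_rto(srtt, rttvar, sample, alpha=0.125, beta=0.25):
    """One update step of an adaptive retransmission timer.

    srtt   -- smoothed round-trip time estimate (seconds)
    rttvar -- smoothed RTT variation (seconds)
    sample -- newly measured round-trip time (seconds)

    Returns the new (srtt, rttvar, rto). The 4*rttvar term adds a
    safety margin against RTT variability."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto
```

A stream of stable samples shrinks rttvar, pulling the RTO down toward the RTT; a jittery stream inflates it, avoiding spurious retransmissions.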
TCP Implementation Policy Options
Send policy: data is buffered in the transmit buffer before sending.
Infrequent, large segments: low overhead but more delay.
Frequent, small segments: quick response but more overhead.
Deliver policy
Infrequent, large deliveries to the application: not prompt.
Frequent, small deliveries: more processing by the network software.

Accept policy
In order: accept only segments that arrive in order; discard the
rest.
In window: accept all segments that fall within the receive window.
Retransmit policy
First only (one timer), batch (one timer), or individual (a separate
RTO per segment).
Acknowledge policy
Immediate or cumulative.
TCP Congestion Control
Dynamic routing can alleviate congestion by spreading the load more
evenly, but it is only effective for unbalanced loads and brief
surges in traffic.
Congestion can only be effectively controlled by limiting the total
amount of data entering the network.
The ICMP Source Quench message is crude and not effective.
RSVP may help, but it is not widely implemented.
TCP Segment Pacing: Self-Clocking
(Figures: congestion control addresses a bottleneck in the network;
flow control addresses a bottleneck at the receiver.)
TCP Congestion Control
The first figure above shows a bottleneck in the Internet.
The configuration shows the connection between a source and a
destination; the thickness of each pipe is proportional to its data
rate.
The end systems are high-capacity nodes, but the intermediate link
creates the bottleneck.
The minimum segment spacing time pb is set by the slowest link. The
segment spacing time pr at the destination is equal to pb.

TCP Congestion Control
After the initial burst, the sender's segment rate will match the
arrival rate of the ACKs, so the sender's segment rate equals the
rate of the slowest link on the path.
TCP automatically senses the network bottleneck and regulates its
flow; this is referred to as TCP's self-clocking behaviour.
This mechanism works equally well if the bottleneck is at the
receiver.
The second diagram depicts that scenario: the source and the links
have ample data rates, but the destination is the narrow point.
In this case, ACKs are emitted at a rate equal to the destination's
capacity.

TCP Flow and Congestion Control
Potential Bottlenecks
Physical bottlenecks: physical capacity constraints
Logical bottlenecks: queuing effects due to load
The bottleneck along a round-trip path between source and destination
can occur in a variety of places and be either logical or physical.
In the example above, if the sender dedicates its entire LAN capacity
to a single TCP connection, it has a potential throughput of 10 Mbps.
But the 1.5 Mbps links between each router and the intervening
internet become the bottleneck; this is a physical bottleneck.
More often, however, the bottleneck is logical: a queuing effect at a
router, a network switch, or the destination. Because of this
behaviour, a steady-state flow cannot be achieved.


TCP Flow & Congestion Control
If TCP flows are too slow, the internet is underutilized and
throughputs are unnecessarily low.
If one or a few TCP sources use excessive capacity, other TCP flows
are crowded out.
Because of excessive transmission, some segments may be lost and must
be retransmitted, or ACKs may be delayed, forcing timeouts at the
source, which in turn trigger retransmission.
As more segments are retransmitted, congestion increases, so delay
increases and more segments are dropped.
Several techniques have therefore been introduced into TCP to
overcome these problems.

Window Management
Slow start
Dynamic window sizing on congestion
Fast retransmit
Fast recovery
ECN
Other Mechanisms
Slow Start
awnd = MIN[credit, cwnd]
where
awnd = allowed window in segments
cwnd = congestion window in segments
credit = amount of unused credit granted in the most recent ACK
(rcvwindow)
cwnd = 1 for a new connection; during slow start it is increased by
1 for each ACK received, up to a maximum.
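A minimal simulation of these rules (variable names follow the slide; the helper is ours) shows the exponential growth that gives slow start its somewhat misleading name:

```python
def allowed_window(credit, cwnd):
    """awnd = MIN[credit, cwnd], in segments."""
    return min(credit, cwnd)

def slow_start(rounds):
    """cwnd per round trip: starting at 1, every segment sent in a
    round is ACKed, and each ACK increments cwnd by 1, so cwnd
    doubles each round."""
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd += cwnd  # one +1 per ACK, cwnd ACKs per round
    return history

# slow_start(5) -> [1, 2, 4, 8, 16]
```

Note that the receiver's credit still caps the sending window: with credit = 10, awnd never exceeds 10 segments however large cwnd grows.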
Effect of TCP Slow Start
Dynamic Window Sizing on Congestion
A lost segment indicates congestion.
It is prudent (conservative) to reset cwnd to 1 and begin the
slow-start process.
This may not be conservative enough: it is easy to drive a network
into saturation but hard for the net to recover (Jacobson).
Instead, use slow start with linear growth in cwnd after reaching a
threshold value.
Slow Start and Congestion Avoidance
Illustration of Slow Start and Congestion Avoidance
Fast Retransmit (TCP Tahoe)
The RTO (Retransmission Time Out) is generally noticeably longer than
the actual RTT, so if a segment is lost, TCP may be slow to
retransmit.
TCP rule: if a segment is received out of order, an ACK must be
issued immediately for the last in-order segment.
Tahoe/Reno fast retransmit rule: if 4 ACKs are received for the same
segment (i.e. 3 duplicate ACKs), it is highly likely the segment was
lost, so retransmit immediately rather than waiting for a timeout.
Fast Retransmit
(Figure: triple duplicate ACK triggers the retransmission.)
Fast Recovery (TCP Reno)
When TCP retransmits a segment using fast retransmit, the segment was
assumed lost, so congestion avoidance measures are appropriate at
this point, e.g. the slow-start/congestion-avoidance procedure.
This may be unnecessarily conservative, since multiple duplicate ACKs
indicate that segments are actually getting through.
Fast recovery: retransmit the lost segment, cut the threshold in
half, set the congestion window to threshold + 3, then proceed with
linear increase of cwnd.
This avoids the initial slow start.
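The Reno reaction to a third duplicate ACK can be sketched as follows (the state dictionary and field names are ours, chosen for illustration; windows are counted in segments):

```python
def on_duplicate_ack(state):
    """Reno fast retransmit / fast recovery on a duplicate ACK.

    state holds 'cwnd', 'ssthresh', 'dupacks', 'retransmit'."""
    state["dupacks"] += 1
    if state["dupacks"] == 3:                         # triple duplicate ACK
        state["ssthresh"] = max(state["cwnd"] // 2, 2)  # cut threshold in half
        state["cwnd"] = state["ssthresh"] + 3          # window = threshold + 3
        state["retransmit"] = True                     # resend lost segment now
    return state
```

Tahoe, by contrast, would fall all the way back to cwnd = 1 and slow-start; Reno's threshold + 3 setting credits the three segments that the duplicate ACKs prove have left the network.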
Fast Recovery Example
(Figure: Reno fast recovery, simplified, versus Tahoe slow start.)
TCP/IP ECN Protocol
Hosts negotiate ECN capability during TCP connection setup: the SYN
carries ECE + CWR, and the SYN-ACK carries ECE.
The sender's IP layer marks data packets as ECN Capable Transport.
Routers mark ECN-capable IP packets (CE bits in the IP header) if
congestion is experienced.
The receiver's TCP ACKs packets with the ECN-Echo flag set if the CE
bits were set in the IP header.
Performance of TCP over ATM
How best to manage TCP's segment size, window management and
congestion control mechanisms at the same time as ATM's quality of
service and traffic control policies?
TCP may operate end-to-end over one ATM network, or there may be
multiple ATM LANs or WANs interconnected with non-ATM networks.
TCP/IP over AAL5/ATM
Protocol stack: TCP over IP over AAL5 (convergence sublayer and SAR
sublayer) over ATM.
CPCS trailer fields:
CPCS-UU indication
Common Part Indicator
PDU payload length
Payload CRC
Observations
If a single cell is dropped, the other cells in the same IP datagram
are unusable, yet the ATM network forwards these useless cells to the
destination.
A smaller buffer increases the probability of dropped cells.
A larger segment size increases the number of useless cells
transmitted if a single cell is dropped.
Partial Packet and Early Packet Discard
Both reduce the transmission of useless cells and work on a
per-virtual-channel basis.

Partial Packet Discard
If a cell is dropped, then drop all subsequent cells in that segment
(i.e., up to and including the first cell with the SDU type bit set
to one).
Early Packet Discard
When a switch buffer reaches a threshold level, preemptively discard
all cells of a new incoming segment.
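A switch's EPD decision reduces to a simple check at AAL5 frame boundaries; this sketch (parameter and variable names are ours) admits or refuses each arriving cell:

```python
def epd_admit(occupancy, threshold, first_cell_of_frame, frame_refused):
    """Early Packet Discard at an ATM switch buffer (sketch).

    At the first cell of each AAL5 frame, if buffer occupancy has
    reached the threshold, refuse the whole frame; later cells of a
    refused frame are dropped too, so no useless partial frame is
    forwarded. Returns (admit, frame_refused) so the caller can carry
    the per-VC refusal state to the next cell."""
    if first_cell_of_frame:
        frame_refused = occupancy >= threshold
    return (not frame_refused), frame_refused
```

Because the refusal is decided only at frame starts, a frame already being buffered is never cut mid-way, which is exactly the advantage over dropping individual cells.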
Performance of TCP over UBR
Traffic Control
Resource management using virtual paths
Connection admission control
Usage parameter control
Selective cell discard
Traffic shaping
Explicit forward congestion indication
Resource Management Using Virtual Paths
Allocate resources so that traffic is separated according to service
characteristics.
Virtual path connections (VPCs) are groupings of virtual channel
connections (VCCs).


Applications
User-to-user applications
VPC between a UNI pair
No knowledge of QoS for individual VCCs; the user checks that the
VPC can meet the VCCs' demands
User-to-network applications
VPC between a UNI and a network node
Network is aware of and accommodates the QoS of VCCs
Network-to-network applications
VPC between two network nodes
Network is aware of and accommodates the QoS of VCCs


Resource Management Concerns
Cell loss ratio
Max cell transfer delay
Peak-to-peak cell delay variation
All are affected by the resources devoted to the VPC.
If a VCC goes through multiple VPCs, performance depends on the
consecutive VPCs and on node performance.
VPC performance depends on the capacity of the VPC and the traffic
characteristics of its VCCs.
VCC-related functions depend on switching/processing speed and
priority.
VCCs and VPCs Configuration
Allocation of Capacity to VPC
Aggregate peak demand
May set VPC capacity (data rate) to the total of the VCC peak rates.
Each VCC can be given QoS to accommodate its peak demand.
VPC capacity may not be fully used.
Statistical multiplexing
VPC capacity >= average data rate of the VCCs but < aggregate peak
demand.
Greater CDV (Cell Delay Variation); may have greater CLR (Cell Loss
Ratio).
More efficient use of capacity.
Suited to VCCs requiring lower QoS.
Group VCCs with similar traffic together.
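The two sizing policies above can be contrasted numerically. This is a sketch (the function name and the 50% weighting in the statistical case are illustrative assumptions, not standardized rules):

```python
def vpc_capacity(peak_rates, avg_rates, statistical=False, weight=0.5):
    """VPC capacity under the two allocation policies, in cells/s.

    Aggregate peak demand: capacity = sum of VCC peak rates
    (peak QoS for every VCC, but capacity may sit unused).
    Statistical multiplexing: capacity somewhere between the sum of
    average rates and the sum of peak rates (better utilization, at
    the cost of greater CDV and possibly CLR)."""
    if not statistical:
        return sum(peak_rates)
    lo, hi = sum(avg_rates), sum(peak_rates)
    return lo + weight * (hi - lo)

# Two VCCs with peaks 100 and 200 cells/s, averages 40 and 60:
# vpc_capacity([100, 200], [40, 60])                    -> 300
# vpc_capacity([100, 200], [40, 60], statistical=True)  -> 200.0
```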
Connection Admission Control
The user must specify the service required in both directions:
Category
Connection traffic descriptor
Source traffic descriptor
Cell Delay Variation Tolerance (CDVT)
Requested conformance definition
QoS parameters requested and their acceptable values
The network accepts the connection only if it can commit the
resources to support the request.
Usage Parameter Control (UPC)
Monitors a connection for conformity to the traffic contract.
Protects network resources from overload on one connection.
Done at the VPC or VCC level; the VPC level is more important, since
network resources are allocated at this level.
Location of UPC Function
Peak Cell Rate Algorithm
How UPC determines whether the user is complying with the contract.
Controls the peak cell rate and Cell Delay Variation Tolerance
(CDVT): the flow complies if its peak does not exceed the agreed
peak, subject to CDV within agreed bounds.
Two equivalent formulations:
Generic cell rate algorithm
Leaky bucket algorithm
Leaky Bucket Algorithm
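The GCRA's virtual-scheduling form (equivalent to the leaky bucket) can be sketched as follows, with increment I = 1/PCR and limit L = CDVT; the function and variable names are ours:

```python
def gcra(arrival_times, increment, limit):
    """Generic Cell Rate Algorithm, virtual-scheduling formulation.

    A cell arriving at time t conforms if t >= TAT - L, where TAT is
    the theoretical arrival time; a conforming cell advances TAT by
    the increment I, while a non-conforming cell leaves TAT
    unchanged. Returns one True/False flag per cell."""
    tat = None
    flags = []
    for t in arrival_times:
        if tat is None:
            tat = t                      # first cell always conforms
        if t >= tat - limit:             # on time, within tolerance L
            tat = max(tat, t) + increment
            flags.append(True)
        else:                            # too early: non-conforming
            flags.append(False)
    return flags

# With I = 1 and L = 0.1, the cell at t = 1.2 arrives well before its
# theoretical time of 2 and is flagged non-conforming:
# gcra([0, 1, 1.2, 2], 1, 0.1) -> [True, True, False, True]
```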
Chapter 13: Traffic and Congestion Control in ATM Networks
Generic Cell Rate Algorithm
Selective Cell Discard
Selective discard comes into play when, at some point, the network is
beyond the UPC parameters.
Violating cells are tagged with CLP = 1.
The network then discards low-priority (CLP = 1) cells to protect
high-priority (CLP = 0) cells.
Traffic Shaping
Traffic policing occurs when packets or cells exceed the agreed
level.
Traffic shaping, in contrast, smooths the traffic flow and reduces
cell clumping.
This results in a fairer allocation of resources.
ABR Traffic Management
QoS for CBR and VBR is based on the traffic contract and UPC
described previously.
There is no congestion feedback to the source (open-loop control).
This is not suited to non-real-time applications, which instead use
best-effort or closed-loop control.
Characteristics of ABR
ABR connections share the available capacity:
They access instantaneous capacity unused by CBR/VBR, increasing
utilization without affecting CBR/VBR QoS.
The share used by a single ABR connection is dynamic, varying
between the agreed Minimum Cell Rate and Peak Cell Rate.
The network gives feedback to ABR sources, and ABR flow is limited to
the available capacity.
Buffers absorb excess traffic prior to the arrival of feedback.
Low cell loss: the major distinction from UBR.
Feedback Mechanisms (1)
Cell transmission rate is characterized by:
Allowed cell rate (ACR): the current rate
Minimum cell rate (MCR): the minimum for ACR; may be zero
Peak cell rate (PCR): the maximum for ACR
Initial cell rate (ICR)
Feedback Mechanisms (2)
Start with ACR = ICR, then adjust ACR based on feedback.
Feedback is carried in Resource Management (RM) cells; each cell
contains three fields for feedback:
Congestion Indication bit (CI)
No Increase bit (NI)
Explicit cell Rate field (ER)
Source Reaction to Feedback
If CI = 1:
Reduce ACR by an amount proportional to the current ACR, but not
below MCR.
Else if NI = 0:
Increase ACR by an amount proportional to PCR, but not above PCR.
If ACR > ER, set ACR <- max[ER, MCR].
RIF: Rate Increase Factor
RDF: Rate Decrease Factor
Variations in ACR
Flow of Data and RM Cells
(Figure: RM cells carry the CI bit, NI bit, and ER field between
source, switches, and destination.)
ABR Feedback vs TCP ACK
ABR feedback controls the rate of transmission (rate control).
TCP feedback controls the window size (credit control).
ABR feedback comes from switches or the destination; TCP feedback
comes from the destination only.
RM Cell Format
ABR Capacity Allocation
An ATM switch must perform:
Congestion control: monitor queue length.
Fair capacity allocation: throttle back connections using more than
their fair share.
ATM rate control signals are explicit; TCP's are implicit
(increasing delay and cell loss).
Guaranteed Frame Rate: Traffic Contract
Peak cell rate (PCR)
Minimum cell rate (MCR)
Maximum burst size (MBS)
Maximum frame size (MFS)
Cell delay variation tolerance (CDVT)
Mechanisms for Supporting Rate Guarantees
Tagging and policing
Buffer management
Scheduling

UPC: Usage Parameter Control
GCRA: Generic Cell Rate Algorithm
Components of the GFR Mechanism
Tagging and Policing
Tagging identifies frames that conform to the contract and those that
don't; CLP = 1 is set for those that don't.
It is set by the network element doing the conformance check, or by
the source to mark less important frames.
Tagged cells get lower QoS in buffer management and scheduling, and
can be discarded at the ingress to the ATM network or at a subsequent
switch.
Discarding is a policing function.
Buffer Management
Governs the treatment of cells in buffers, or cells arriving and
requiring buffering.
If congested (high buffer occupancy), tagged cells are discarded in
preference to untagged cells; a tagged cell may be discarded to make
room for an untagged cell.
Buffering may be per-VC, with discards based on per-queue thresholds.
Scheduling
Give preferential treatment to untagged cells.
Separate queues for each VC allow per-VC scheduling decisions, e.g.
FIFO modified to give CLP = 0 cells higher priority.
Scheduling between queues controls the outgoing rate of the VCs.
Individual VCs get a fair allocation while the traffic contract is
met.
Usage Parameter Control
Two levels may be requested by the user; priority for an individual
cell is indicated by the CLP bit in the header.
If two levels are used, traffic parameters for both flows are
specified:
High priority: CLP = 0
All traffic: CLP = 0 + 1
This may improve network resource allocation.