
Assignment # 1

Submitted by : Adnan Anjum

Q1: Compare and contrast Frame Relay, ATM and Carrier Ethernet in terms of
Services, Communication architecture, Complexity, Operational problems, and
QoS?

ATM

ATM is a connection-oriented “fast packet” technology. It has the following advantages:

• Provides dynamic bandwidth allocation for more efficient handling of traffic, utilizing
the bandwidth when needed for bursty data, predictable data and all other traffic types
including voice, video and image.
• Scales from T1 and N x T1 to 45 Mbps up to gigabit speeds.
• Scales in topology from local area networks to campus area networks to wide area
networks.
• Protects against technology obsolescence.
• Supports quantifiable “hard” quality of service (QoS) parameters for all traffic types.
• Efficiently supports voice over IP and fully featured voice through ATM adaptation layer
type 2 (AAL2), with connectivity to the public switched telephone network.
• Allows integrated network management.
• Can be deployed in public, private or hybrid networks

ATM transmits only fixed-size frames, called cells, not variable-sized frames as frame relay
and packet switching do. The standard for ATM cell relay is 53-byte cells (48 bytes of user
data, 5 bytes of header). With only fixed-size cells to process, cell relay switches can perform
at a significantly faster pace than frame relay switches. More important, fixed-size cells allow
ATM to support quantifiable QoS, which in turn allows it to handle delay sensitive traffic like
voice and video conferencing.

Frame Relay

Frame relay has the advantages of being widely supported, well understood, easily adopted
and highly cost-effective for a wide variety of data networks. In particular, frame relay is
better suited than ATM for data-only, medium-speed (56/64 Kbps, T1) requirements, such as
the following:

• LAN to LAN interconnection


• Access to the Internet
• IBM SNA traffic

For frame relay, the ratio of header size to frame size is typically much smaller than the
overhead ratio for ATM, which makes frame relay more efficient. In addition, frame relay
will likely be used into the future as an access protocol via service interworking for higher
speed ATM networks. Thus, frame relay and ATM are likely to be complementary rather
than directly competitive technologies for quite a while to come.
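
As a rough illustration of the overhead comparison (assuming a typical 2-byte frame relay
header): an ATM cell always carries 5 bytes of header per 48 bytes of payload, an overhead of
5/53 ≈ 9.4% regardless of payload size, while a frame relay frame carrying 1,500 bytes of
data incurs only about 2/1502 ≈ 0.13% header overhead.
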
Carrier Ethernet:

To create a market in Ethernet services, it is necessary to clarify and standardise the services
to be provided. Recognising this, the industry created the Metro Ethernet Forum, which has
played a key role in defining such services. The services defined are:
• E-line: a service connecting two customer Ethernet ports over a WAN.
• E-LAN: a multipoint service connecting a set of customer endpoints, giving the
appearance to the customer of a bridged Ethernet network connecting the sites.
• E-tree: a multipoint service connecting one or more roots and a set of leaves, but
preventing inter-leaf communication.

All these services provide standard definitions of such characteristics as bandwidth, resilience
and service multiplexing, allowing customers to compare service offerings and facilitating
service level agreements.

Carrier Ethernet is the use of high-bandwidth Ethernet technology for Internet access and for
communication among business, academic and government local area networks (LANs).
Carrier Ethernet can be deployed in three ways:
• Conventional or "pure" Ethernet
• Ethernet over Synchronous Digital Hierarchy (SDH)
• Ethernet over Multiprotocol Label Switching (MPLS)
Conventional Ethernet is the least expensive type of system but it can be difficult to modify
or expand. Ethernet over SDH can be an ideal solution in regions already having an SDH
infrastructure. However, most SDH-based systems are comparatively inflexible and may not
offer the desired level of bandwidth management when network communications volume
fluctuates rapidly and dramatically. Ethernet over MPLS offers superior scalability and
bandwidth management but is the most expensive technology of the three.
Carrier Ethernet circumvents bandwidth bottlenecks that can occur when a large number of
small networks are connected to a single larger network. Carrier Ethernet has minimal
configuration requirements and can accommodate individual home computers as well as
proprietary networks of all sizes. Most major network hardware vendors offer Carrier
Ethernet equipment.

Q2: Discuss how reliability is ensured in TCP. Assume an unreliable underlying layer.

TCP is a connection-oriented, end-to-end reliable protocol designed to fit into a layered
hierarchy of protocols which support multi-network applications. The TCP provides for
reliable inter-process communication between pairs of processes in host computers attached
to distinct but interconnected computer communication networks. Very few assumptions are
made as to the reliability of the communication protocols below the TCP layer. TCP assumes
it can obtain a simple, potentially unreliable datagram service from the lower level protocols.
In principle, the TCP should be able to operate above a wide spectrum of communication
systems ranging from hard-wired connections to packet-switched or circuit-switched
networks.

The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order
by the internet communication system. This is achieved by assigning a sequence number to
each octet transmitted, and requiring a positive acknowledgment (ACK) from the receiving
TCP. If the ACK is not received within a timeout interval, the data is retransmitted. At the
receiver, the sequence numbers are used to correctly order segments that may be received
out of order and to eliminate duplicates. Damage is handled by adding a checksum to each
segment transmitted, checking it at the receiver, and discarding damaged segments.

As long as the TCPs continue to function properly and the internet system does not become
completely partitioned, no transmission errors will affect the correct delivery of data. TCP
recovers from internet communication system errors.

In TCP, reliability is ensured by:

• Sequence Number
• Checksum Calculation
• Acknowledgment
• Retransmission

Sequence Number:

The sequence number identifies the order of the bytes sent from each computer so that the
data can be reconstructed in order, regardless of any fragmentation, disordering, or packet
loss that may occur during transmission. For every payload byte transmitted the sequence
number must be incremented. In the first two steps of the 3-way handshake, both computers
exchange an initial sequence number (ISN). This number can be arbitrary, and should in fact
be unpredictable to defend against TCP Sequence Prediction Attacks.
TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an
acknowledgment signifying that the receiver has received all data preceding the
acknowledged sequence number. Essentially, the first byte in a segment's data field is
assigned a sequence number, which is inserted in the sequence number field, and the receiver
sends an acknowledgment specifying the sequence number of the next byte it expects to
receive. For example, if computer A sends 4 bytes with a sequence number of 100
(conceptually, the four bytes would have sequence numbers 100, 101, 102 and 103
assigned), then the receiver would send back an acknowledgment of 104, since that is the
next byte it expects to receive.
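
The following minimal Python sketch shows how a receiver derives its cumulative
acknowledgment number from the segments it has seen. The function name and the
(sequence number, length) bookkeeping are assumptions for illustration, not part of any real
TCP stack:

def cumulative_ack(isn, received_segments):
    """Return the next expected byte, i.e. the cumulative ACK number.

    isn               -- initial sequence number agreed in the handshake
    received_segments -- iterable of (seq, length) pairs that arrived
    """
    next_expected = isn + 1  # the SYN itself consumes one sequence number
    for seq, length in sorted(received_segments):
        if seq <= next_expected:  # segment extends the contiguous prefix
            next_expected = max(next_expected, seq + length)
        # segments beyond a gap do not advance the cumulative ACK
    return next_expected

# Computer A sends 4 bytes starting at sequence number 100:
print(cumulative_ack(99, [(100, 4)]))  # -> 104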

TCP Checksum Calculation

The Transmission Control Protocol is designed to provide reliable data transfer between a
pair of devices on an IP internetwork. Much of the effort required to ensure reliable delivery
of data segments is of necessity focused on the problem of ensuring that data is not lost in
transit. But there's another important critical impediment to the safe transmission of data: the
risk of errors being introduced into a TCP segment during its travel across the internetwork.

Detecting Transmission Errors Using Checksums

If the data gets where it needs to go but is corrupted and we do not detect the corruption, this
is in some ways worse than the data never arriving at all. To provide basic protection against
errors in transmission, TCP includes a 16-bit Checksum field in its header. The idea behind a
checksum is very straight-forward: take a string of data bytes and add them all together. Then
send this sum with the data stream and have the receiver check the sum. In TCP, a special
algorithm is used to calculate this checksum by the device sending the segment; the same
algorithm is then employed by the recipient to check the data it received and ensure that there
were no errors.
The checksum calculation used by TCP is a bit different from a regular checksum algorithm.
A conventional checksum is performed over all the bytes that the checksum is intended to
protect, and can detect most bit errors in any of those fields. The designers of TCP wanted
this bit error protection, but also desired to protect against other types of problems, such as
misdelivered segments. For this reason the TCP checksum is computed not only over the
header and data but also over a pseudo-header containing the source and destination IP
addresses, the protocol number and the TCP segment length.
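
As a minimal sketch of the RFC 1071 style one's-complement sum that TCP uses (the
function name is an assumption, and a real stack would also prepend the pseudo-header
described above before summing):

import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF                         # complement of the sum

# The receiver recomputes the sum over the data plus the transmitted
# checksum; an undamaged segment verifies to zero:
segment = b"example payload!"
csum = internet_checksum(segment)
assert internet_checksum(segment + struct.pack("!H", csum)) == 0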

Acknowledgments

TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an
acknowledgment signifying that the receiver has received all data preceding the
acknowledged sequence number. The cumulative acknowledgment scheme employed by the
original TCP protocol can lead to inefficiencies when packets are lost. For example, suppose
10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during
transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it
received bytes 1,000 to 9,999 successfully but failed to receive the first packet, containing
bytes 0 to 999. Thus the sender may have to resend all 10,000 bytes. To address this, TCP
supports a selective acknowledgment (SACK) option, which allows the receiver to
acknowledge discontinuous blocks of packets that were received correctly, in addition to the
sequence number of the last contiguous byte received successfully, as in the basic TCP
acknowledgment. The acknowledgment can specify a number of SACK blocks, where each
SACK block is conveyed by the starting and ending sequence numbers of a contiguous range
that the receiver correctly received. In the example above, the receiver would send a SACK
with sequence numbers 1,000 and 9,999. The sender thus retransmits only the first packet,
bytes 0 to 999.

Out-of-order packet delivery can falsely indicate to the TCP sender that a packet was lost;
the sender then retransmits the suspected-to-be-lost packet and slows down its data delivery
to prevent network congestion. The sender undoes this slow-down, that is, recovers the
original pace of data transmission, upon receiving a duplicate SACK (D-SACK) that
indicates the retransmitted packet was a duplicate.
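
A small Python sketch of the receiver-side bookkeeping (the function name and the
tuple-based interface are assumptions; a real stack tracks this in its reassembly queue):

def ack_and_sack(received, max_blocks=3):
    """Compute the cumulative ACK and SACK blocks from received ranges.

    received -- (start, end) byte ranges that arrived, end exclusive
    Returns (cumulative_ack, [(block_start, block_end), ...]).
    """
    cum = 0
    blocks = []
    for start, end in sorted(received):
        if start <= cum:                            # extends in-order prefix
            cum = max(cum, end)
        elif blocks and start <= blocks[-1][1]:     # extends the last island
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
        else:                                       # new out-of-order island
            blocks.append((start, end))
    return cum, blocks[:max_blocks]

# 10 packets of 1,000 bytes each, the first one lost in transit:
segments = [(i * 1000, (i + 1) * 1000) for i in range(1, 10)]
print(ack_and_sack(segments))  # -> (0, [(1000, 10000)])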

TCP Adaptive Retransmission and Retransmission Timer Calculations

Whenever a TCP segment is transmitted, a copy of it is also placed on the retransmission
queue. When the segment is placed on the queue, a retransmission timer is started for the
segment, which starts from a particular value and counts down to zero. It is this timer that
controls how long a segment can remain unacknowledged before the sender gives up,
concludes that it is lost and sends it again.

The length of time we use for retransmission timer is thus very important. If it is set too low,
we might start retransmitting a segment that was actually received, because we didn't wait
long enough for the acknowledgment of that segment to arrive. Conversely, if we set the
timer too long, we waste time waiting for an acknowledgment that will never arrive, reducing
overall performance.
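
As a toy illustration of the mechanism (the channel interface is entirely hypothetical; real
TCP keeps many segments in flight at once and adapts the timer as described below):

def send_reliably(channel, segment, rto=1.0, max_tries=5):
    """Stop-and-wait sketch: keep a copy and retransmit until acknowledged.

    channel is an assumed interface offering send(segment) and
    wait_for_ack(timeout), the latter returning True if an ACK arrived
    before the timer expired.
    """
    for _ in range(max_tries):
        channel.send(segment)                  # (re)transmit the queued copy
        if channel.wait_for_ack(timeout=rto):  # timer ran out -> False
            return True                        # ACKed: drop from the queue
        rto *= 2                               # back off before retrying
    return False                               # give up after repeated losses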

Difficulties in Choosing the Duration of the Retransmission Timer

Ideally, we would like to set the retransmission timer to a value just slightly larger than the
round-trip time (RTT) between the two TCP devices, that is, the typical time it takes to send a
segment from a client to a server and the server to send an acknowledgment back to the client
(or the other way around, of course). The problem is that there is no such “typical” round-trip
time. There are two main reasons for this:

o Differences In Connection Distance: Suppose you are at work in the United States,
and during your lunch hour you are transferring a large file between your workstation
and a local server connection using 100 Mbps Fast Ethernet, at the same time you are
downloading a picture of your nephew from your sister's personal Web site—which is
connected to the Internet using an analog modem to an ISP in a small town near Lima,
Peru. Would you want both of these TCP connections to use the same retransmission
timer value? I certainly hope not!

o Transient Delays and Variability: The amount of time it takes to send data between
any two devices will vary over time due to various happenings on the internetwork:
fluctuations in traffic, router loads and so on. To see an example of this for yourself,
try typing “ping www.tcpipguide.com” from the command line of an Internet-
connected PC and you'll see how the reported times can vary.

Q3: Discuss why TCP congestion and flow controls are introduced in TCP
design. Show how these mechanisms evolved and how they were standardized?

The Transmission Control Protocol (TCP) provides end-to-end, reliable, congestion-controlled
connections over the Internet. The congestion control method includes four phases: slow start,
congestion avoidance, fast retransmit and fast recovery. Over wireless links, additional losses
due to link failures, independent as well as correlated, are expected.

When a TCP connection starts, or retransmits after a timeout, it enters the slow start phase.
In slow start, the packet sending rate gradually increases from a small to a large value as the
sender probes the network's available bandwidth. The congestion window (cwnd) grows
exponentially with each round-trip time (RTT). This exponential increase in the sending rate
can be so rapid that TCP performance declines when the buffer at the bottleneck link
overflows.

Congestion Control

Slow Start

It operates by observing that the rate at which new packets should be injected into the
network is the rate at which the acknowledgments are returned by the other end.

Slow start adds another window to the sender's TCP: the congestion window, called "cwnd".
When a new connection is established with a host on another network, the congestion
window is initialized to one segment (i.e., the segment size announced by the other end, or
the default, typically 536 or 512). Each time an ACK is received, the congestion window is
increased by one segment. The sender can transmit up to the minimum of the congestion
window and the advertised window. The congestion window is flow control imposed by the
sender, while the advertised window is flow control imposed by the receiver. The former is
based on the sender's assessment of perceived network congestion; the latter is related to the
amount of available buffer space at the receiver for this connection.

The sender starts by transmitting one segment and waiting for its ACK. When that ACK is
received, the congestion window is incremented from one to two, and two segments can be
sent. When each of those two segments is acknowledged, the congestion window is increased
to four. This provides an exponential growth, although it is not exactly exponential because
the receiver may delay its ACKs, typically sending one ACK for every two segments that it
receives.

At some point the capacity of the internet can be reached, and an intermediate router will start
discarding packets. This tells the sender that its congestion window has gotten too large.

Early implementations performed slow start only if the other end was on a different network.
Current implementations always perform slow start.

Congestion Avoidance
Congestion can occur when data arrives on a big pipe (a fast LAN) and gets sent out a smaller
pipe (a slower WAN). Congestion can also occur when multiple input streams arrive at a
router whose output capacity is less than the sum of the inputs. Congestion avoidance is a
way to deal with lost packets.

The assumption of the algorithm is that packet loss caused by damage is very small (much
less than 1%), therefore the loss of a packet signals congestion somewhere in the network
between the source and destination. There are two indications of packet loss: a timeout
occurring and the receipt of duplicate ACKs.

Congestion avoidance and slow start are independent algorithms with different objectives.
But when congestion occurs TCP must slow down its transmission rate of packets into the
network, and then invoke slow start to get things going again. In practice they are
implemented together.

Congestion avoidance and slow start require that two variables be maintained for each
connection: a congestion window, cwnd, and a slow start threshold size, ssthresh. The
combined algorithm operates as follows:

1. Initialization for a given connection sets cwnd to one segment and ssthresh to 65535 bytes.

2. The TCP output routine never sends more than the minimum of cwnd and the receiver's
advertised window.

3. When congestion occurs (indicated by a timeout or the reception of duplicate ACKs), one-
half of the current window size (the minimum of cwnd and the receiver's advertised window,
but at least two segments) is saved in ssthresh. Additionally, if the congestion is indicated by
a timeout, cwnd is set to one segment (i.e., slow start).

4. When new data is acknowledged by the other end, increase cwnd, but the way it increases
depends on whether TCP is performing slow start or congestion avoidance.

If cwnd is less than or equal to ssthresh, TCP is in slow start; otherwise TCP is performing
congestion avoidance. Slow start continues until TCP is halfway to where it was when
congestion occurred (since it recorded half of the window size that caused the problem in step
3), and then congestion avoidance takes over.

Slow start has cwnd begin at one segment, and be incremented by one segment every time an
ACK is received. As mentioned earlier, this opens the window exponentially: send one
segment, then two, then four, and so on. Congestion avoidance dictates that cwnd be
incremented by segsize*segsize/cwnd each time an ACK is received, where segsize is the
segment size and cwnd is maintained in bytes. This is a linear growth of cwnd, compared to
slow start's exponential growth. The increase in cwnd should be at most one segment each
round-trip time (regardless how many ACKs are received in that RTT), whereas slow start
increments cwnd by the number of ACKs received in a round-trip time.
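
A compact Python sketch of the cwnd updates in steps 1 through 4 above (per ACK, in
bytes; the function and variable names are assumptions, and timers, the advertised window
check and the actual send path are omitted):

def on_ack(cwnd, ssthresh, segsize):
    """Grow cwnd per received ACK: slow start below ssthresh,
    congestion avoidance (about one segment per RTT) above it."""
    if cwnd <= ssthresh:
        return cwnd + segsize                        # exponential opening
    return cwnd + max(1, segsize * segsize // cwnd)  # additive increase

def on_congestion(cwnd, advertised, segsize, timeout):
    """Step 3: record half the effective window in ssthresh (at least two
    segments); a timeout additionally restarts slow start from one segment."""
    ssthresh = max(min(cwnd, advertised) // 2, 2 * segsize)
    return (segsize if timeout else cwnd), ssthresh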

Fast Retransmit

TCP may generate an immediate acknowledgment (a duplicate ACK) when an out-of-order
segment is received. This duplicate ACK should not be delayed. The purpose of this duplicate
ACK is to let the other end know that a segment was received out of order, and to tell it what
sequence number is expected.

Since TCP does not know whether a duplicate ACK is caused by a lost segment or just a
reordering of segments, it waits for a small number of duplicate ACKs to be received. It is
assumed that if there is just a reordering of the segments, there will be only one or two
duplicate ACKs before the reordered segment is processed, which will then generate a new
ACK. If three or more duplicate ACKs are received in a row, it is a strong indication that a
segment has been lost. TCP then performs a retransmission of what appears to be the missing
segment, without waiting for a retransmission timer to expire.

Fast Recovery

After fast retransmit sends what appears to be the missing segment, congestion avoidance,
but not slow start is performed. This is the fast recovery algorithm. It is an improvement that
allows high throughput under moderate congestion, especially for large windows.

The reason for not performing slow start in this case is that the receipt of the duplicate ACKs
tells TCP more than just a packet has been lost. Since the receiver can only generate the
duplicate ACK when another segment is received, that segment has left the network and is in
the receiver's buffer. That is, there is still data flowing between the two ends, and TCP does
not want to reduce the flow abruptly by going into slow start.

The fast retransmit and fast recovery algorithms are usually implemented together as follows.

1. When the third duplicate ACK in a row is received, set ssthresh to one-half the current
congestion window, cwnd, but no less than two segments. Retransmit the missing segment.
Set cwnd to ssthresh plus 3 times the segment size. This inflates the congestion window by
the number of segments that have left the network and which the other end has cached.

2. Each time another duplicate ACK arrives, increment cwnd by the segment size. This
inflates the congestion window for the additional segment that has left the network. Transmit
a packet, if allowed by the new value of cwnd.

3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh (the value
set in step 1). This ACK should be the acknowledgment of the retransmission from step 1,
one round-trip time after the retransmission. Additionally, this ACK should acknowledge all
the intermediate segments sent between the lost packet and the receipt of the first duplicate
ACK. This step is congestion avoidance, since TCP is down to one-half the rate it was at
when the packet was lost.
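
The three steps above can be sketched as follows (a simplified Reno-style outline; the state
class is an assumption, and the caller supplies the retransmission hook):

class RenoState:
    def __init__(self, cwnd, ssthresh):
        self.cwnd, self.ssthresh, self.dup_acks = cwnd, ssthresh, 0

def on_dup_ack(state, segsize, retransmit):
    """Fast retransmit / fast recovery on each duplicate ACK.
    retransmit is a caller-supplied callable resending the missing segment."""
    state.dup_acks += 1
    if state.dup_acks == 3:                        # step 1
        state.ssthresh = max(state.cwnd // 2, 2 * segsize)
        retransmit()
        state.cwnd = state.ssthresh + 3 * segsize  # inflate: 3 segments left
    elif state.dup_acks > 3:                       # step 2
        state.cwnd += segsize                      # another segment has left

def on_new_ack(state):
    """Step 3: deflate cwnd to ssthresh and resume congestion avoidance."""
    state.dup_acks = 0
    state.cwnd = state.ssthresh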

Flow control

TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast
for the TCP receiver to receive and process it reliably. Having a mechanism for flow control
is essential in an environment where machines of diverse network speeds communicate. For
example, if a PC sends data to a hand-held PDA that is slowly processing received data, the
PDA must regulate data flow so as not to be overwhelmed.
TCP uses a sliding window flow control protocol. In each TCP segment, the receiver
specifies in the receive window field the amount of additional received data (in bytes) that it
is willing to buffer for the connection. The sending host can send only up to that amount of
data before it must wait for an acknowledgment and window update from the receiving host.

When a receiver advertises a window size of 0, the sender stops sending data and starts the
persist timer. The persist timer is used to protect TCP from a deadlock situation that could
arise if a subsequent window size update from the receiver is lost, and the sender cannot send
more data until receiving a new window size update from the receiver. When the persist timer
expires, the TCP sender attempts recovery by sending a small packet so that the receiver
responds by sending another acknowledgement containing the new window size.
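
A minimal sketch of the persist behaviour (the probe hook, the initial interval and the cap
are assumptions; real implementations derive the interval from the RTO with backoff):

def on_persist_timeout(interval, send_probe, max_interval=60.0):
    """Persist-timer expiry while the peer advertises a zero window:
    send a 1-byte probe so the receiver answers with a fresh window
    announcement, then back off exponentially up to a cap."""
    send_probe(1)                             # caller-supplied: emit one byte
    return min(interval * 2, max_interval)    # rearm the timer with this value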

If a receiver is processing incoming data in small increments, it may repeatedly advertise a
small receive window. This is referred to as the silly window syndrome, since it is inefficient
to send only a few bytes of data in a TCP segment, given the relatively large overhead of the
TCP header. TCP senders and receivers typically employ flow control logic to specifically
avoid repeatedly sending small segments.

SLIDING WINDOW

The sliding window serves several purposes:

(1) it guarantees the reliable delivery of data,

(2) it ensures that the data is delivered in order, and

(3) it enforces flow control between the sender and the receiver.

Reliable and ordered delivery

The sending and receiving sides of TCP interact in the following manner to implement
reliable and ordered delivery:
Each byte has a sequence number.

ACKs are cumulative.

Sending side

o LastByteAcked <= LastByteSent
o LastByteSent <= LastByteWritten
o bytes between LastByteAcked and LastByteWritten must be buffered.

Receiving side

o LastByteRead < NextByteExpected


o NextByteExpected <= LastByteRcvd + 1
o bytes between NextByteRead and LastByteRcvd must be buffered.

Flow Control

Sender buffer size: MaxSendBuffer

Receive buffer size: MaxRcvBuffer

Receiving side

o LastByteRcvd - NextByteRead <= MaxRcvBuffer


o AdvertisedWindow = MaxRcvBuffer - (LastByteRcvd - NextByteRead)

Sending side

o LastByteSent - LastByteAcked <= AdvertisedWindow


o EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
o LastByteWritten - LastByteAcked <= MaxSendBuffer
o Block the sender if (LastByteWritten - LastByteAcked) + y > MaxSendBuffer, where y is
the number of bytes the application is trying to write

Always send ACK in response to an arriving data segment

Persist when AdvertisedWindow = 0
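
These invariants translate directly into code; a minimal sketch (the parameter names follow
the notation above, everything else is an assumption for illustration):

def advertised_window(max_rcv_buffer, last_byte_rcvd, next_byte_read):
    """Receiver side: how much more data it is willing to buffer."""
    return max_rcv_buffer - (last_byte_rcvd - next_byte_read)

def effective_window(advertised, last_byte_sent, last_byte_acked):
    """Sender side: how much new data it may still put on the wire."""
    return advertised - (last_byte_sent - last_byte_acked)

# Example: a 4,096-byte receive buffer holding 1,024 unread bytes
adv = advertised_window(4096, last_byte_rcvd=1024, next_byte_read=0)
print(adv)                                 # -> 3072
print(effective_window(adv, 1024, 1024))   # -> 3072 bytes may be sent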

Q4: What is the importance of RTT calculation in TCP performance? Discuss in the context
of highly dynamic environments.

The TCP protocol was designed to operate reliably over almost any transmission medium
regardless of transmission rate, delay, corruption, duplication, or reordering of segments.
TCP performance depends not upon the transfer rate itself, but rather upon the product of the
transfer rate and the round-trip delay (RFC 1323).

One problem that must be resolved when using the Transmission Control Protocol (TCP) is
how to deal with timeouts and retransmissions. The round-trip delay time (RTD), or round-
trip time (RTT), is a big factor in deciding what to do in each case. RTT may also be used to
find the best possible route. That is why RTT calculation is considered the backbone of TCP
performance.

RTT is the length of time it takes for a signal to be sent plus the length of time it takes for an
acknowledgment of that signal to be received.

The RTT was originally estimated in TCP by RTT = (α * Old_RTT) + ((1 - α) *
New_Round_Trip_Sample). This was improved by the Jacobson/Karels algorithm, which
takes the standard deviation into account as well.

Once a new RTT is calculated, it is entered into the equation above to obtain an average RTT
for that connection, and the procedure continues for every new calculation. Fundamental to
TCP's timeout and retransmission is the measurement of the round-trip time (RTT)
experienced on a given connection. We expect this can change over time, as routes might
change and as network traffic changes, and TCP should track these changes and modify its
timeout accordingly.

First, TCP must measure the RTT between sending a byte with a particular sequence number
and receiving an acknowledgment that covers that sequence number. Normally there is not a
one-to-one correspondence between data segments and ACKs. For example, one RTT that
can be measured by the sender is the time between the transmission of a segment carrying
data bytes 1-1024 and the reception of the ACK covering bytes 1-2048, even though this
ACK is for an additional 1024 bytes. We'll use M to denote the measured RTT.

The original TCP specification had TCP update a smoothed RTT estimator (called R) using
the low-pass filter

R <- aR + (1-a)M

where a is a smoothing factor with a recommended value of 0.9. This smoothed RTT is
updated every time a new measurement is made. Ninety percent of each new estimate is from
the previous estimate and 10% is from the new measurement.

Given this smoothed estimator, which changes as the RTT changes, RFC 793 recommended
the retransmission timeout value (RTO) be set to RTO = R*b, where b is a delay variance
factor with a recommended value of 2.

[Jacobson 1988] details the problems with this approach, basically that it can't keep up with
wide fluctuations in the RTT, causing unnecessary retransmissions. As Jacobson notes,
unnecessary retransmissions add to the network load, when the network is already loaded. It
is the network equivalent of pouring gasoline on a fire. What's needed is to keep track of the
variance in the RTT measurements, in addition to the smoothed RTT estimator. Calculating
the RTO based on both the mean and variance provides much better response to wide
fluctuations in the round-trip times, than just calculating the RTO as a constant multiple of
the mean. Figures 5 and 6 in [Jacobson 1988] show a comparison of the RFC 793 RTO
values for some actual round-trip times, versus the RTO calculations we show below, which
take into account the variance of the round-trip times. As described by Jacobson, the mean
deviation is a good approximation to the standard deviation, but easier to compute.
(Calculating the standard deviation requires a square root.) This leads to the following
equations that are applied to each RTT measurement M.

Err = M - A

A <- A + gErr

D <- D + h(|Err| - D)

RTO = A + 4D

where A is the smoothed RTT (an estimator of the average) and D is the smoothed mean
deviation. Err is the difference between the measured value just obtained and the current RTT
estimator. Both A and D are used to calculate the next retransmission timeout (RTO). The
gain g is for the average and is set to 1/8 (0.125). The gain for the deviation is h and is set to
0.25. The larger gain for the deviation makes the RTO go up faster when the RTT changes.
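
In code, the estimator update is a direct transcription of the equations above (the function
name and the initial estimates in the example are assumptions in the spirit of typical
implementations):

def update_rto(measured_rtt, a, d, g=0.125, h=0.25):
    """Jacobson/Karels RTO update from one RTT measurement M.

    a -- smoothed RTT estimate (A); d -- smoothed mean deviation (D).
    Returns the updated (a, d, rto).
    """
    err = measured_rtt - a          # Err = M - A
    a = a + g * err                 # A <- A + g*Err
    d = d + h * (abs(err) - d)      # D <- D + h*(|Err| - D)
    return a, d, a + 4 * d          # RTO = A + 4D

# Feed a series of measurements through the estimator:
a, d = 1.0, 0.0                     # assumed starting estimates (seconds)
for m in [1.0, 1.2, 0.9, 3.0, 1.1]:
    a, d, rto = update_rto(m, a, d)
print(round(rto, 3))                # the RTO adapts to the variance seen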
