
JAYAM COLLEGE OF ENGINEERING AND TECHNOLOGY

(An ISO 9001:2000 Certified Institution)


DHARMAPURI-636813.

COURSE CODE : CS1302
COURSE NAME : COMPUTER NETWORKS
DEPT : CSE
ACADEMIC YEAR : 2006-2007
SEMESTER/YEAR : V/3

UNIT II
DATA LINK LAYER
CONTENTS

o Error detection and correction


o Parity
o LRC
o CRC
o Hamming code
o Flow control and Error control
o stop and wait
o go back-N ARQ
o selective repeat ARQ
o sliding window
o HDLC.
o LAN
o Ethernet IEEE-802.3, IEEE-802.4, IEEE-802.5, IEEE 802.11
o FDDI
o SONET
o Bridges.

Data Link Layer


The data link layer lies between the network layer and the physical layer in the Internet
model. It receives services from the physical layer and provides services to the network
layer. Figure 1 shows the position of the data link layer in the Internet model.
The data link layer is responsible for carrying a packet from one hop (computer or router)
to the next hop. Unlike the network layer, which has a global responsibility, the data link
layer has a local responsibility: its responsibility lies between two hops. In
other words, because LANs and WANs in the Internet are delimited by hops, we can say
that the responsibility of the data link layer is to carry a packet through a LAN or WAN.
The journey through a LAN or a WAN (between two hops) must preserve the integrity of
the packet; the data link layer must make sure that the packet arrives safe and sound. If the
packet is corrupted during the transmission, it must either be corrected or retransmitted. The
data link layer must also make sure that the next hop is not overwhelmed with data by
the previous hop; the flow of data must be controlled.
Access to a LAN or a WAN for the sending of data is also an issue. If several
computers or routers are connected to a common medium (link), and more than one wants to
send data at the same time, which has the right to send? What is the access method?

Duties
The duties of the data link layer include packetizing, addressing, error control, flow
control, and medium access control as shown in Figure 2.

Packetizing
The data link layer is responsible for moving data from one hop to the next. To get to the
next hop, the data must pass through a LAN or a WAN, each of which has its own
protocols. The packet coming from the upper layer must therefore be encapsulated in the
appropriate packet defined by the data link layer of the underlying LAN or WAN.
Different protocols have different names for the packet at the data link layer. However,
most LANs refer to the packet as a frame. The ATM WAN refers to a packet as a cell.
We see examples of packetizing in Chapters 12 to 18.
Addressing
We need an addressing mechanism at the data link layer. The data link layer addresses
are called physical addresses (or MAC addresses) and are used to find the address of the
next hop in hop-to-hop delivery. The physical address used by a LAN is totally different
from that used by a WAN. A LAN uses a next-hop address to carry a frame across the
LAN; a WAN normally uses a virtual circuit address. We discuss addressing mechanisms
in later chapters.
Error Control
In data communications, errors are inevitable. Using better equipment and more reliable
transmission media may reduce the severity or the frequency of occurrence, but it can
never eliminate errors. Networks must be able to transfer data from one device to another
with complete accuracy. As a prelude to error control, we discuss error detection in
Chapter 10. We then discuss error control in Chapter 11 as part of data link control.
Flow Control
Another responsibility of the data link layer is flow control. In most protocols, flow
control is a set of procedures that tells the sender how much data it can transmit before it
must wait for an acknowledgment from the receiver. The flow of data must not be
allowed to overwhelm the receiver. The receiving device must be able to inform the
sending device before some limit is reached and request that the transmitting device send
fewer frames or stop temporarily.
Medium Access Control
When computers use a shared medium (cable or air), there must be a method to control
access to the medium at any moment. To prevent this conflict or collision on a network,
there is a need for a medium access control (MAC) method. This method defines the
procedure a computer follows when it needs to send a frame or frames.
Local Area Networks
Local Area Networks (LANs) operate at the physical and data link layer. The obvious
place to discuss local area networks is after these layers have been discussed.

Error Detection and Correction


Networks must be able to transfer data from one device to another with complete
accuracy. A system that cannot guarantee that the data received by one device are
identical to the data transmitted by another device is essentially useless. Yet anytime data
are transmitted from one node to the next, they can become corrupted in passage. Many
factors can alter or wipe out one or more bits of a given data unit. Reliable systems must
have a mechanism for detecting and correcting such errors.
Data can be corrupted during transmission. For reliable communication,
errors must be detected and corrected.

TYPES OF ERRORS
Whenever bits flow from one point to another, they are subject to unpredictable
changes because of interference. This interference can change the shape of the signal. In a
single-bit error, a 0 is changed to a 1 or a 1 to a 0. In a burst error, multiple bits are
changed. For example, a 0.01-s burst of impulse noise on a transmission with a data rate
of 1200 bps might change all or some of 12 bits of information.
Single-Bit Error
The term single-bit error means that only one bit of a given data unit (such as a byte,
character, data unit, or packet) is changed from 1 to 0 or from 0 to 1.
In a single-bit error, only one bit in the data unit has changed.
Figure 10.1 shows the effect of a single-bit error on a data unit. To understand the impact
of the change, imagine that each group of 8 bits is an ASCII character with a 0 bit added
to the left. In the figure, 00000010 (ASCII STX) was sent, meaning start of text, but
00001010 (ASCII LF) was received, meaning line feed.

Single-bit errors are the least likely type of error in serial data transmission. To
understand why, imagine a sender sends data at 1 Mbps. This means that each bit lasts
only 1/1,000,000 s, or 1 μs. For a single-bit error to occur, the noise must have a duration
of only 1 μs, which is very rare; noise normally lasts much longer than this.
However, a single-bit error can happen if we are sending data using parallel transmission.
For example, if eight wires are used to send all 8 bits of 1 byte at the same time and one
of the wires is noisy, one bit can be corrupted in each byte. Think of parallel transmission
inside a computer, between CPU and memory, for example.

Burst Error
The term burst error means that 2 or more bits in the data unit have changed from 1 to 0
or from 0 to 1.
A burst error means that 2 or more bits in the data unit have changed.
The following figure shows the effect of a burst error on a data unit. In this case,
0100010001000011 was sent, but 0101110101000011 was received. Note that a burst error does
not necessarily mean that the errors occur in consecutive bits. The length of the burst is
measured from the first corrupted bit to the last corrupted bit. Some bits in between may
not have been corrupted.

Burst error is most likely to occur in a serial transmission. The duration of noise is
normally longer than the duration of one bit, which means that when noise affects data, it
affects a set of bits. The number of bits affected depends on the data rate and duration of
noise. For example, if we are sending data at 1 Kbps, a noise of 1/100 s can affect 10 bits;
if we are sending data at 1 Mbps, the same noise can affect 10,000 bits.
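The figures above follow directly from multiplying the data rate by the noise duration; a quick sketch (function name ours):

```python
def bits_affected(data_rate_bps, noise_duration_s):
    """Number of bit slots spanned by a noise burst."""
    return round(data_rate_bps * noise_duration_s)

print(bits_affected(1200, 0.01))         # -> 12, the burst-error example above
print(bits_affected(1000, 0.01))         # -> 10 bits at 1 kbps
print(bits_affected(1_000_000, 0.01))    # -> 10000 bits at 1 Mbps
```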
DETECTION
Although the goal of error checking is to correct errors, most of the time, we first need to
detect errors. Error detection is simpler than error correction and is the first step in the
error correction process.
Redundancy
One error detection mechanism would be to send every data unit twice. The receiving
device would then be able to do a bit-for-bit comparison between the two versions of the
data. Any discrepancy would indicate an error, and an appropriate correction mechanism
could be set in place. This system would be completely accurate (the odds of errors being
introduced onto exactly the same bits in both sets of data are infinitesimally small), but it
would also be insupportably slow. Not only would the transmission time double, but also
the time it takes to compare every unit bit by bit must be added.
The concept of including extra information in the transmission for error detection is a
good one. But instead of repeating the entire data stream, a shorter group of bits may be
appended to the end of each unit. This technique is called redundancy because the extra
bits are redundant to the information; they are discarded as soon as the accuracy of the
transmission has been determined.
Error detection uses the concept of redundancy, which means adding extra bits to
detect errors at the destination.
The following figure shows the process of using redundant bits to check the accuracy of
a data unit. Once the data stream has been generated, it passes through a device that
analyzes it and adds an appropriately coded redundancy check. The data unit, now
enlarged by several bits, travels over the link to the receiver. The receiver puts

the entire stream through a checking function. If the received bit stream passes the
checking criteria, the data portion of the data unit is accepted and the redundant bits are
discarded.
Three types of redundancy checks are common in data communications: parity check,
cyclic redundancy check (CRC), and checksum.
Detection methods

Parity Check
The most common and least expensive mechanism for error detection is the parity check.
Parity checking can be simple or two-dimensional.
Simple Parity Check
In this technique, a redundant bit, called a parity bit, is added to every data unit so that
the total number of 1s in the unit (including the parity bit) becomes even (or odd).
Suppose we want to transmit the binary data unit 1100001 [ASCII a (97)]; see Figure 10.5.
Even-parity concept
Adding the number of 1s gives us 3, an odd number. Before transmitting, we pass
the data unit through a parity generator. The parity generator counts the 1s and appends
the parity bit (a 1 in this case) to the end. The total number of 1s is now 4, an even
number. The system now transmits the entire expanded unit across the network link.
When it reaches its destination, the receiver puts all 8 bits through an even-parity
checking function. If the receiver sees 11000011, it counts four 1s, an even number, and
the data unit passes. But what if the data unit has been damaged in transit? What if,
instead of 11000011, the receiver sees 11001011? Then when the parity checker counts the
1s, it gets 5, an odd number. The receiver knows that an error has been introduced into the
data somewhere and therefore rejects the whole unit. Note that for the sake of simplicity,
we are discussing here even-parity checking, where the number of 1s should be an even
number. Some systems may use odd-parity checking, where the number of 1s should be
odd. The principle is the same.
In parity check, a parity bit is added to every data unit so that the total number of 1s is
even (or odd for odd-parity).
Example 1
Suppose the sender wants to send the word world. In ASCII (see Appendix A), the five
characters are coded as
1110111 1101111 1110010 1101100 1100100
w o r l d
Each of the first four characters has an even number of 1s, so the parity bit is a 0. The last
character (d), however, has three 1s (an odd number), so the parity bit is a 1 to make the
total number of 1s even. The following shows the actual bits sent (the parity bits are
underlined).
11101110 11011110 11100100 11011000 11001001
Example 2
Now suppose the word world in Example 1 is received by the receiver without being
corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4,
4). The data are accepted.
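The even-parity generator and checker described above can be sketched in a few lines (function names are ours):

```python
def add_even_parity(bits7):
    """Append a parity bit so the total number of 1s is even."""
    parity = bits7.count('1') % 2
    return bits7 + str(parity)

def passes_even_parity(bits8):
    """Receiver side: accept only if the count of 1s is even."""
    return bits8.count('1') % 2 == 0

# The word "world" from Examples 1 and 2:
for ch, code in zip('world', ['1110111', '1101111', '1110010', '1101100', '1100100']):
    sent = add_even_parity(code)
    print(ch, sent, passes_even_parity(sent))

# The damaged unit from the text: 11001011 has five 1s, so it is rejected.
print(passes_even_parity('11001011'))  # -> False
```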
Performance
Simple parity check can detect all single-bit errors. It can also detect burst errors
as long as the total number of bits changed is odd (1, 3, 5, etc.). Let's say we have an
even-parity data unit where the total number of 1s, including the parity bit, is 6:
1000111011. If any 3 bits change value, the resulting parity will be odd and the error
will be detected: 1111111011:9, 0110111011:7, 1100010011:5, all odd. The checker
would return a result of 1, and the data unit would be rejected. The same holds true for
any odd number of errors.
Suppose, however, that 2 bits of the data unit are changed: 1110111011:8,
1100011011:6, 1000011010:4. In each case the number of 1s in the data unit is still even. The
parity checker will add them and return an even number although the data unit contains
two errors. This method cannot detect errors where the total number of bits changed is
even. If any two bits change in transmission, the changes cancel each other and the data
unit will pass a parity check even though the data unit is damaged. The same holds true
for any even number of errors.
Simple parity check can detect all single-bit errors. It can detect burst errors only if the
total number of errors in each data unit is odd.
Two-Dimensional Parity Check
A better approach is the two-dimensional parity check. In this method, a block of
bits is organized in a table (rows and columns). First we calculate the parity bit for each
data unit. Then we organize them into a table. For example, as shown in Figure 10.6, we
have four data units shown in four rows and eight columns. We then calculate the parity
bit for each column and create a new row of 8 bits; they are the parity bits for the whole
block. Note that the first parity bit in the fifth row is calculated based on all first

Two-dimensional parity
bits; the second parity bit is calculated based on all second bits; and so on. We then attach
the 8 parity bits to the original data and send them to the receiver.
Example 4
Suppose the following block is sent:
10101001 00111001 11011101 11100111 10101010
However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the even-parity
rule and the whole block is discarded (the nonmatching bits are shown in bold).
10100011 10001001 11011101 11100111 10101010
(parity bits)
In two-dimensional parity check, a block of bits is divided into rows and a redundant
row of bits is added to the whole block.

Performance
Two-dimensional parity check increases the likelihood of detecting burst errors. As we
showed in Example 4, a redundancy of n bits can easily detect a burst error of n bits. A
burst error of more than n bits is also detected by this method with a very high
probability. There is, however, one pattern of errors that remains elusive. If 2 bits in one
data unit are damaged and two bits in exactly the same positions in another data unit are
also damaged, the checker will not detect an error. Consider, for example, two data units:
11110000 and 11000011. If the first and last bits in each of them are changed, making the
data units read 01110001 and 01000010, the errors cannot be detected by this method.
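Using the block from Example 4, the redundant row of column parities can be sketched as follows (helper name ours):

```python
def column_parities(rows):
    """Even parity over each bit position of the data rows."""
    return ''.join(str(sum(int(r[i]) for r in rows) % 2)
                   for i in range(len(rows[0])))

# The four data units from Example 4; the parity row is the fifth group sent.
data = ['10101001', '00111001', '11011101', '11100111']
parity_row = column_parities(data)
print(parity_row)  # -> 10101010

# Receiver: recomputing over the corrupted data units from Example 4 no
# longer matches the received parity row, so the block is discarded.
corrupted = ['10100011', '10001001', '11011101', '11100111']
print(column_parities(corrupted) == parity_row)  # -> False
```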
Cyclic Redundancy Check (CRC)
The third and most powerful of the redundancy checking techniques is the cyclic
redundancy check (CRC). Unlike the parity check which is based on addition, CRC is
based on binary division. In CRC, instead of adding bits to achieve a desired parity, a
sequence of redundant bits, called the CRC or the CRC remainder, is appended to the end
of a data unit so that the resulting data unit becomes exactly divisible by a second,
predetermined binary number. At its destination, the incoming data unit is divided by the
same number. If at this step there is no remainder, the data unit is assumed to be intact
and is therefore accepted. A remainder indicates that the data unit has been damaged in
transit and therefore must be rejected.
The redundancy bits used by CRC are derived by dividing the data unit by a
predetermined divisor; the remainder is the CRC. To be valid, a CRC must have two
qualities: It must have exactly one less bit than the divisor, and appending it to the end of
the data string must make the resulting bit sequence exactly divisible by the divisor.
Both the theory and the application of CRC error detection are straightforward. The only
complexity is in deriving the CRC. To clarify this process, we will start with an overview
and add complexity as we go. Figure 10.7 provides an outline of the three basic steps.

First, a string of n 0s is appended to the data unit. The number n is 1 less than the number
of bits in the predetermined divisor, which is n + 1 bits.
Second, the newly elongated data unit is divided by the divisor, using a process called
binary division. The remainder resulting from this division is the CRC.
Third, the CRC of n bits derived in step 2 replaces the appended 0s at the end of the data
unit. Note that the CRC may consist of all 0s.
The data unit arrives at the receiver data first, followed by the CRC. The receiver treats
the whole string as a unit and divides it by the same divisor that was used to find the CRC
remainder.
If the string arrives without error, the CRC checker yields a remainder of zero and the
data unit passes. If the string has been changed in transit, the division yields a nonzero
remainder and the data unit does not pass.
The CRC Generator
A CRC generator uses modulo-2 division. Figure 10.8 shows this process. In the first
step, the 4-bit divisor is subtracted from the first 4 bits of the dividend. Each bit of the
divisor is subtracted from the corresponding bit of the dividend without disturbing the
next-higher bit. In our example, the divisor, 1101, is subtracted from the first 4 bits of the
dividend, 1001, yielding 100 (the leading 0 of the remainder is dropped). The next
unused bit from the dividend is then pulled down to make the number of bits in the
remainder equal to the number of bits in the divisor. The next step, therefore, is
1000 - 1101, which yields 101, and so on.
In this process, the divisor always begins with a 1; the divisor is subtracted from a portion
of the previous dividend/remainder that is equal to it in length; the divisor can only be
subtracted from a dividend/remainder whose leftmost bit is 1. Anytime the leftmost bit
of the dividend/remainder is 0, a string of 0s, of the same length as the divisor, replaces
the divisor in that step of the process. For example, if the divisor is 4 bits long, it is
replaced by four 0s. (Remember, we are dealing with bit patterns, not with quantitative
values; 0000 is not the same as 0.) This restriction means that, at any step, the leftmost
subtraction will be either 0 - 0 or 1 - 1, both of which equal 0. So, after subtraction, the
leftmost bit of the remainder will always be a leading zero, which is dropped, and the
next unused bit of the dividend is pulled down to fill out the remainder. Note that only the
first bit of the remainder is dropped; if the second bit is also 0, it is retained, and the
dividend/remainder for the next step will begin with 0. This process repeats until the
entire dividend has been used.
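The three generator steps, and the checker that follows, can be sketched with XOR in place of modulo-2 subtraction (helper names are ours; the 6-bit data word 100100 and divisor 1101 match the example above):

```python
def crc_remainder(data, divisor):
    """Steps 1-2: append n zeros, then modulo-2 divide; the remainder is the CRC."""
    n = len(divisor) - 1                    # CRC length: one less than the divisor
    bits = list(data + '0' * n)
    for i in range(len(data)):
        if bits[i] == '1':                  # subtract (XOR) only when the leftmost bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-n:])

def crc_check(frame, divisor):
    """Checker: divide data + CRC by the same divisor; all-zero remainder = accept."""
    bits = list(frame)
    for i in range(len(frame) - len(divisor) + 1):
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return all(b == '0' for b in bits)

crc = crc_remainder('100100', '1101')
print(crc)                                  # -> 001
print(crc_check('100100' + crc, '1101'))    # -> True: zero remainder, frame accepted
print(crc_check('100101' + crc, '1101'))    # -> False: corrupted frame rejected
```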
The CRC Checker
A CRC checker functions exactly as the generator does. After receiving the data
appended with the CRC, it does the same modulo-2 division. If the remainder is all 0s,
the CRC is dropped and the data are accepted; otherwise, the received stream of bits is
discarded and the data are resent. Figure 10.9 shows the same process of division in the
receiver. We assume that there is no error. The remainder is therefore all 0s, and the data
are accepted.
Polynomials
The divisor in the CRC generator is most often represented not as a string of 1s and 0s,
but as an algebraic polynomial (see Fig. 10.10). The polynomial format is useful for two
reasons: It is short, and it can be used to prove the concept mathematically (which is
beyond the scope of this book).
The relationship of a polynomial to its corresponding binary representation is shown in
Figure
A polynomial should be selected to have at least the following properties:
• It should not be divisible by x.
• It should be divisible by x + 1.
The first condition guarantees that all burst errors of a length equal to the degree of the
polynomial are detected. The second condition guarantees that all burst errors affecting
an odd number of bits are detected (the proof is beyond the scope of this book).
Example 5
It is obvious that we cannot choose x (binary 10) or x^2 + x (binary 110) as the polynomial
because both are divisible by x. However, we can choose x + 1 (binary 11) because it is not
divisible by x, but is divisible by x + 1. We can also choose x^2 + 1 (binary 101) because it is
divisible by x + 1 (binary division).
Standard Polynomials
Some standard polynomials used by popular protocols for CRC generation are shown in
Table
Table Standard polynomials
Performance
CRC is a very effective error detection method. If the divisor is chosen according to the
previously mentioned rules,
1. CRC can detect all burst errors that affect an odd number of bits.
2. CRC can detect all burst errors of length less than or equal to the degree of the
polynomial.
3. CRC can detect, with a very high probability, burst errors of length greater than the
degree of the polynomial.
Example 6
The CRC-12 polynomial (x^12 + x^11 + x^3 + x^2 + x + 1), which has a degree of 12, will
detect all burst errors affecting an odd number of bits, will detect all burst errors with a
length less than or equal to 12, and will detect, 99.97 percent of the time, burst errors
with a length of 12 or more.
Checksum
The third error detection method we discuss here is called the checksum. Like the parity
checks and CRC, the checksum is based on the concept of redundancy:
Checksum Generator
In the sender, the checksum generator subdivides the data unit into equal segments of n
bits (usually 16). These segments are added using ones complement arithmetic (see
Appendices B and E) in such a way that the total is also n bits long. That total (sum) is
then complemented and appended to the end of the original data unit as redundancy bits,
called the checksum field. The extended data unit is transmitted across the network. So if
the sum of the data segments is T, the checksum will be -T.
The sender follows these steps:
• The unit is divided into k sections, each of n bits.
• All sections are added using ones complement to get the sum.
• The sum is complemented and becomes the checksum.
• The checksum is sent with the data.

Checksum Checker
The receiver subdivides the data unit as above, adds all segments, and complements the
result. If the extended data unit is intact, the total value found by adding the data
segments and the checksum field should be zero. If the result is not zero, the packet
contains an error and the receiver rejects it.
The receiver follows these steps:
• The unit is divided into k sections, each of n bits.
• All sections are added using ones complement to get the sum.
• The sum is complemented.
• If the result is zero, the data are accepted; otherwise, they are rejected.
Example 7
Suppose the following block of 16 bits is to be sent using a checksum of 8 bits.
10101001 00111001
The numbers are added using ones complement arithmetic (see Appendix E).
10101001
00111001
Sum 11100010
Checksum 00011101
The pattern sent is
10101001 00111001 00011101
(the last 8 bits are the checksum)
Example 8
Now suppose the receiver receives the pattern sent in Example 7 and there is no error.
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s, which, after complementing,
is all 0s and shows that there is no error.
10101001
00111001
00011101
Sum 11111111
Complement 00000000 means that the pattern is OK.
Example 9
Now suppose there is a burst error of length 5 that affects 4 bits.
10101111 11111001 00011101
When the receiver adds the three sections, it gets
10101111
11111001
00011101
Result 1 11000101
Carry 1
Sum 11000110
Complement 00111001 means that the pattern is corrupted.
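The 8-bit ones-complement arithmetic of Examples 7 to 9 can be sketched as follows (function names are ours):

```python
def ones_complement_sum(segments, bits=8):
    """Add binary segments, wrapping any carry back into the low end."""
    total = 0
    for seg in segments:
        total += int(seg, 2)
        if total >> bits:                       # end-around carry
            total = (total & ((1 << bits) - 1)) + 1
    return total

def checksum(segments, bits=8):
    """Complement of the ones-complement sum."""
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

data = ['10101001', '00111001']
cs = format(checksum(data), '08b')
print(cs)                                        # Example 7 -> 00011101

# Example 8: data plus checksum complements to zero, so the data are accepted.
print(checksum(data + [cs]))                     # -> 0

# Example 9: the corrupted pattern gives a nonzero result, so it is rejected.
print(format(checksum(['10101111', '11111001', '00011101']), '08b'))  # -> 00111001
```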
Performance
The checksum detects all errors involving an odd number of bits as well as most errors
involving an even number of bits. However, if one or more bits of a segment are damaged
and the corresponding bit or bits of opposite value in a second segment are also damaged,
the sums of those columns will not change and the receiver will not detect a problem. If
the last digit of one segment is a 0 and it gets changed to a 1 in transit, then the last 1 in
another segment must be changed to a 0 if the error is to go undetected. In two-
dimensional parity check, two 0s could both change to 1s without altering the parity
because carries were discarded. Checksum retains all carries; so although two 0s
becoming 1s would not alter the value of their own column, it would change the value of
the next-higher column. But anytime a bit inversion is balanced by an opposite bit
inversion in the corresponding digit of another data segment, the error is invisible.
ERROR CORRECTION
The mechanisms that we have discussed up to this point detect errors but do not correct
them. Error correction can be handled in several ways. The two most common are error
correction by retransmission and forward error correction.
Error Correction by Retransmission
In error correction by retransmission, when an error is discovered, the receiver can have
the sender retransmit the entire data unit. This type of error correction is discussed along
with flow and error control protocols in Chapter 11.
Forward Error Correction
In forward error correction (FEC), a receiver can use an error-correcting code, which
automatically corrects certain errors. In theory, it is possible to correct any errors
automatically. Error-correcting codes, however, are more sophisticated than error
detection codes and require more redundancy bits.
The concept underlying error correction can be most easily understood by examining the
simplest case: single-bit errors. As we saw earlier, single-bit errors can be detected by the
addition of a redundant (parity) bit. A single additional bit can detect single-bit errors in
any sequence of bits because it must distinguish between only two conditions: error or no
error. A bit has two states (0 and 1). These two states are sufficient for this level of
detection.
But what if we want to correct as well as detect single-bit errors? Two states are enough
to detect an error but not to correct it. An error occurs when the receiver reads a 1 bit as a
0 or a 0 bit as a 1. To correct the error, the receiver simply reverses the value of the
altered bit. To do so, however, it must know which bit is in error. The secret of error
correction, therefore, is to locate the invalid bit or bits.
For example, to correct a single-bit error in an ASCII character, the error correction code
must determine which of the 7 bits has changed. In this case, we have to distinguish
between eight different states: no error, error in position 1, error in position 2, and so on,
up to error in position 7. To do so requires enough redundancy bits to show all eight states.
At first glance, it seems that a 3-bit redundancy code should be adequate because
3 bits can show eight different states (000 to 111) and can therefore indicate the locations
of eight different possibilities. But what if an error occurs in the redundancy bits
themselves? Seven bits of data (the ASCII character) plus 3 bits of redundancy equal 10
bits. Three bits, however, can identify only eight possibilities. Additional bits are
necessary to cover all possible error locations.
To calculate the number of redundancy bits r required to correct a given number of data
bits m, we must find a relationship between m and r. With m bits of data and r bits of
redundancy added to them, the length of the resulting code is m + r.
If the total number of bits in a transmittable unit is m + r, then r must be able to indicate
at least m + r + 1 different states. Of these, one state means no error, and m + r states
indicate the location of an error in each of the m + r positions.
So m + r + 1 states must be discoverable by r bits; and r bits can indicate 2^r different
states. Therefore, 2^r must be equal to or greater than m + r + 1:
2^r >= m + r + 1
The value of r can be determined by plugging in the value of m (the original length of
the data unit to be transmitted). For example, if the value of m is 7 (as in the ASCII code),
the smallest r value that can satisfy this equation is 4:
2^4 >= 7 + 4 + 1
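The smallest r satisfying the inequality can be found by simple trial (function name ours):

```python
def redundancy_bits(m):
    """Smallest r with 2**r >= m + r + 1."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundancy_bits(7))  # 7 data bits (ASCII) -> 4 redundancy bits
print(redundancy_bits(4))  # 4 data bits -> 3 redundancy bits
```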

Hamming Code
Hamming provides a practical solution. The Hamming code can be applied to data units
of any length and uses the relationship between data and redundancy bits discussed
above. For example, a 7-bit ASCII code requires 4 redundancy bits that can be added to
the end of the data unit or interspersed with the original data bits. In Figure 10.14, these
bits are placed in positions 1, 2, 4, and 8 (the positions in an 11-bit sequence that are
powers of 2). For clarity in the examples below, we refer to these bits as r1, r2, r4, and r8.
Figure Positions of redundancy bits in Hamming code
In the Hamming code, each r bit is the parity bit for one combination of data bits, as
shown below:
r1: bits 1, 3, 5, 7, 9, 11
r2: bits 2, 3, 6, 7, 10, 11
r4: bits 4, 5, 6, 7
r8: bits 8, 9, 10, 11
Each data bit may be included in more than one calculation. In the sequences above, for
example, each of the original data bits is included in at least two sets, while the r bits are
included in only one.
Figure Redundancy bits calculation

Calculating the r Values. In the first step, we place each bit of the original character in its
appropriate position in the 11-bit unit. In the subsequent steps, we calculate the even
parities for the various bit combinations. The parity value for each combination is the
value of the corresponding r bit.
Error Detection and Correction Now imagine that by the time the above transmission is
received, the bit in position 7 has been changed from 1 to 0. The receiver takes the
transmission and recalculates 4 new parity bits, using the same sets of bits used by the
sender plus the relevant parity r bit for each set (see Fig. 10.17). Then it assembles the
new parity values into a binary number in order of r position (r8 r4 r2 r1). In our example,
this step gives us the binary number 0111 (7 in decimal), which is the precise location of
the bit in error.
Figure Error detection using Hamming code

Once the bit is identified, the receiver can reverse its value and correct the error. The
beauty of the technique is that it can easily be implemented in hardware and the code is
corrected before the receiver knows about it.
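The whole cycle can be sketched as follows. This is our own sketch: positions are numbered 1 to 11, a position belongs to parity set r exactly when its number has the r bit set, and the sample 7-bit data word is arbitrary (bit-ordering details differ from the figure).

```python
def hamming_encode(data7):
    """Place 7 data bits in the non-power-of-2 slots, then set r1, r2, r4, r8
    so that each parity set has an even number of 1s."""
    code = [0] * 12                       # index = bit position; slot 0 unused
    data = [int(b) for b in data7]
    for pos in range(1, 12):
        if pos not in (1, 2, 4, 8):
            code[pos] = data.pop(0)
    for r in (1, 2, 4, 8):
        code[r] = sum(code[p] for p in range(1, 12) if p & r and p != r) % 2
    return code

def hamming_correct(code):
    """Recompute the parities; reading them as a binary number (r8 r4 r2 r1)
    gives the position of the bad bit, which is then flipped."""
    error_pos = sum(r for r in (1, 2, 4, 8)
                    if sum(code[p] for p in range(1, 12) if p & r) % 2)
    if error_pos:
        code[error_pos] ^= 1
    return code, error_pos

unit = hamming_encode('1001101')
damaged = unit.copy()
damaged[7] ^= 1                           # the bit-7 error from the text
fixed, where = hamming_correct(damaged)
print(where, fixed == unit)               # -> 7 True
```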
Burst Error Correction Although the Hamming code cannot correct a burst error
directly, it is possible to rearrange the data and then apply the code. Instead of sending
all the bits in a data unit together, we can organize N units in a column and then send the
first bit of each, followed by the second bit of each, and so on. In this way, if a burst
error of M bits occurs (M ≤ N), then the error does not corrupt M bits of one single unit; it
corrupts only 1 bit of a unit. With the Hamming scheme, we can then correct the
corrupted bit in each unit. The following figure shows an example.
Figure Burst error correction example

In Figure we need to send six data units where each unit is a character with Hamming
redundant bits. We organize the bits in columns and rows. We send the first column, then
the second column, and so on. The bits that are corrupted by a burst error are shown in
squares. Five consecutive bits are corrupted during the actual transmission. However,
when these bits arrive at the destination and are reorganized into data units, each
corrupted bit belongs to one unit and is automatically corrected. The trick here is to let
the burst error corrupt only 1 bit of each unit.
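The column-by-column transmission described above can be sketched with two hypothetical helpers (illustrative names, assuming all units have equal length):

```python
def interleave(units):
    """Send column by column: the first bit of each unit, then the second
    bit of each unit, and so on."""
    return [unit[i] for i in range(len(units[0])) for unit in units]

def deinterleave(stream, n_units):
    """Reassemble the original units at the receiver: unit j is every
    n_units-th bit of the stream, starting at offset j."""
    return [stream[j::n_units] for j in range(n_units)]
```

A burst of up to n_units consecutive bit errors in the interleaved stream touches at most one bit of each unit, which a per-unit Hamming code can then correct.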
FLOW AND ERROR CONTROL
Flow and error control are the main functions of the data link layer. Let us informally
define each.
Flow Control
Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment and is one of the most important duties of the data link layer. In most
protocols, flow control is a set of procedures that tells the sender how much data it can
transmit before it must wait for an acknowledgment from the receiver. The flow of data
must not be allowed to overwhelm the receiver. Any receiving device has a limited speed
at which it can process incoming data and a limited amount of memory in which to store
incoming data. The receiving device must be able to inform the sending device before
those limits are reached and to request that the transmitting device send fewer frames or
stop temporarily. Incoming data must be checked and processed before they can be used.
The rate of such processing is often slower than the rate of transmission. For this reason,
each receiving device has a block of memory, called a buffer, reserved for storing
incoming data until they are processed. If the buffer begins to fill up, the receiver must be
able to tell the sender to halt transmission until it is once again able to receive.
Flow control refers to a set of procedures used to restrict the amount of data that
the sender can send before waiting for acknowledgment.
Error Control
Error control is both error detection and error correction. It allows the receiver to inform
the sender of any frames lost or damaged in transmission and coordinates the
retransmission of those frames by the sender. In the data link layer, the term error control
refers primarily to methods of error detection and retransmission. Error control in the data
link layer is often implemented simply: Anytime an error is detected in an exchange,
specified frames are retransmitted. This process is called automatic repeat request
(ARQ).
Error control in the data link layer is based on automatic repeat request, which is
the retransmission of data.
Flow and Error Control Mechanisms
In this section we introduce three common flow and error control mechanisms: Stop-and-
Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. Although these are sometimes
referred to as protocols, we prefer the term mechanisms.

STOP-AND-WAIT ARQ
Stop-and-Wait ARQ is the simplest flow and error control mechanism. It has the
following features:
• The sending device keeps a copy of the last frame transmitted until it receives an
acknowledgment for that frame. Keeping a copy allows the sender to retransmit
lost or damaged frames until they are received correctly.
• For identification purposes, both data frames and acknowledgment (ACK) frames
are numbered alternately 0 and 1. A data 0 frame is acknowledged by an ACK 1
frame, indicating that the receiver has received data frame 0 and is now expecting
data frame 1. This numbering allows for identification of data frames in case of
duplicate transmission (important in the case of lost acknowledgment or delayed
acknowledgment, as we will see shortly).
• A damaged or lost frame is treated in the same manner by the receiver. If the
receiver detects an error in the received frame, it simply discards the frame and
sends no acknowledgment. If the receiver receives a frame that is out of order (0
instead of 1 or 1 instead of 0), it knows that a frame is lost. It discards the out-of-
order received frame.
• The sender has a control variable, which we call S, that holds the number of the
recently sent frame (0 or 1). The receiver has a control variable, which we call R,
that holds the number of the next frame expected (0 or 1).
• The sender starts a timer when it sends a frame. If an acknowledgment is not
received within an allotted time period, the sender assumes that the frame was lost
or damaged and resends it.
• The receiver sends only positive acknowledgment for frames received safe and
sound; it is silent about the frames damaged or lost. The acknowledgment number
always defines the number of the next expected frame. . If frame 0 is received,
ACK 1 is sent; if frame 1 is received, ACK 0 is sent.
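The alternating S and R variables described above can be sketched in Python. The class names are illustrative; timers and the physical channel are omitted:

```python
class StopAndWaitSender:
    def __init__(self):
        self.s = 0                 # S: number of the frame being sent (0 or 1)
        self.unacked = None        # copy kept until the frame is acknowledged

    def send(self, data):
        self.unacked = (self.s, data)
        return self.unacked        # frame = (sequence number, data)

    def on_ack(self, ack):
        if ack == 1 - self.s:      # the ACK names the NEXT expected frame
            self.unacked = None
            self.s = 1 - self.s

class StopAndWaitReceiver:
    def __init__(self):
        self.r = 0                 # R: number of the next frame expected

    def on_frame(self, frame):
        seq, data = frame
        if seq == self.r:          # in-order frame: accept it and flip R
            self.r = 1 - self.r
        return self.r              # ACK always carries the next expected number
```

A duplicate frame (resent after a lost ACK) fails the `seq == self.r` test, so the receiver silently discards it and simply repeats the same acknowledgment.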
Operation
In the transmission of a frame, we can have four situations: normal operation, the frame
is lost, the acknowledgment is lost, or the acknowledgment is delayed.
Normal Operation
In a normal transmission, the sender sends frame 0 and waits to receive ACK 1. When
ACK 1 is received, it sends frame 1 and then waits to receive ACK 0, and so on. The
ACK must be received before the timer set for each frame expires. Figure 11.1 shows
successful frame transmissions.
Figure Normal operation

Lost or Damaged Frame


A lost or damaged frame is handled in the same way by the receiver; when the receiver
receives a damaged frame, it discards it, which essentially means the frame is lost. The
receiver remains silent about a lost frame and keeps its value of R. For example, in Figure
11.2, the sender transmits frame 1, but it is lost. The receiver does nothing, retaining the
value of R (1). After the timer at the sender site expires, another copy of frame 1 is sent.
Figure Stop-and-Wait ARQ, lost frame
Lost Acknowledgment
A lost or damaged acknowledgment is handled in the same way by the sender; if the
sender receives a damaged acknowledgment, it discards it. Figure 11.3 shows a lost ACK
0. The waiting sender does not know if frame 1 has been received. When the timer for
frame 1 expires, the sender retransmits frame 1. Note that the receiver has already
received frame 1 and is expecting to receive frame 0 (R = 0). Therefore, it silently
discards the second copy of frame 1.
Figure Stop-and-Wait ARQ, lost ACK frame

The reader may have discovered through this example the need to number frames.
If the frames were not numbered, the receiver, thinking that frame 1 is a new frame (not a
duplicate), would keep it.
In Stop-and-Wait ARQ, numbering frames prevents the retaining of duplicate
frames.
Delayed Acknowledgment
Another problem that may occur is delayed acknowledgment. An acknowledgment can
be delayed at the receiver or by some problem with the link. Figure 11.4 shows the delay
of ACK 1; it is received after the timer for frame 0 has already expired. The sender has
already retransmitted a copy of frame 0. However, the value of R at the receiver site is
still 1, which means that the receiver expects to see frame 1. The receiver, therefore,
discards the duplicate frame 0.
Figure Stop-and-Wait ARQ, delayed ACK

The sender has now received two ACKs, one that was delayed and one that was sent after
the duplicate frame 0 arrived. The second ACK 1 is discarded.
To understand why we need to number the acknowledgments, let us examine Figure 11.4
again. After the delayed ACK 1 reaches the sender, frame 1 is sent. However, frame 1 is
lost and never reaches the receiver. The sender then receives an ACK 1 for the duplicate
frame sent. If the ACKs were not numbered, the sender would interpret the second ACK
as the acknowledgment for frame 1. Numbering the ACKs provides a method to keep
track of the received data frames.
Numbered acknowledgments are needed if an acknowledgment is delayed and the
next frame is lost.
Bidirectional Transmission
The stop-and-wait mechanism we have discussed is unidirectional. However, we can
have bidirectional transmission if the two parties have two separate channels for full-
duplex transmission or share the same channel for half-duplex transmission. In this case,
each party needs both S and R variables to track frames sent and expected.
Piggybacking
Piggybacking is a method to combine a data frame with an acknowledgment. For
example, in Figure 11.5, Stations A and B both have data to send. Instead of sending
separate data and ACK frames, station A sends a data frame that includes an ACK.
Station B behaves in a similar manner.

Piggybacking can save bandwidth because the overhead from a data frame and an ACK
frame (addresses, CRC, etc.) can be combined into just one frame.

GO-BACK-N ARQ
In Stop-and-Wait ARQ, at any point in time for a sender, there is only one frame, the
outstanding frame, that is sent and waiting to be acknowledged. This is not a good use of
the transmission medium. To improve the efficiency, multiple frames should be in
transition while waiting for acknowledgment. In other words, we need to let more than
one frame be outstanding. Two protocols use this concept: Go-Back-N ARQ and
Selective Repeat ARQ. We discuss the first in this section and the second in Section
11.4.
In Go-Back-N ARQ, we can send up to W frames before worrying about
acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
This procedure requires additional features to be added to Stop-and-Wait ARQ.
Sequence Numbers
Frames from a sending station are numbered sequentially. However, because we need to
include the sequence number of each frame in the header, we need to set a limit. If the
header of the frame allows m bits for the sequence number, the sequence numbers range
from 0 to 2^m − 1. For example, if m is 3, the only sequence numbers are 0 through 7
inclusive. However, we can repeat the sequence. So the sequence numbers are 0, 1, 2,
3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, ...
Sender Sliding Window
At the sender site, to hold the outstanding frames until they are acknowledged, we use the
concept of a window. We imagine that all frames are stored in a buffer. The outstanding
frames are enclosed in a window. The frames to the left of the window are those that have
already been acknowledged and can be purged; those to the right of the window cannot
be sent until the window slides over them. The size of the window is at most 2^m − 1 for
reasons that we discuss later.
The size of the window in this protocol is fixed, although we can have a variable-size
window in other protocols such as TCP (see Chapter 22). The window slides to include
new unsent frames when the correct acknowledgments are received. The window is a
sliding window. For example, in Figure 11.6a, frames 0 through 6 have been sent. In part
b, the window slides two frames over because an acknowledgment was received for
frames 0 and 1.
Figure Sender sliding window

Receiver Sliding Window


The size of the window at the receiver site in this protocol is always 1. The receiver is
always looking for a specific frame to arrive in a specific order. Any frame arriving out
of order is discarded and needs to be resent. The receiver window also slides as shown in
Figure 11.7. In part a the receiver is waiting for frame 0. When that arrives, the window
slides over.
Figure Receiver sliding window
Control Variables
The sender has three variables, S, S_F, and S_L. The S variable holds the sequence number
of the recently sent frame; S_F holds the sequence number of the first frame in the window;
and S_L holds the sequence number of the last frame in the window. The size of the window
is W, where W = S_L − S_F + 1.
The receiver has only one variable, R, that holds the sequence number of the frame it
expects to receive. If the sequence number of the received frame is the same as the value
of R, the frame is accepted; if not, it is rejected. Figure 11.8 shows the sender and receiver
windows with their control variables.
Figure Control variables

Timers
The sender sets a timer for each frame sent. The receiver has no timers.
Acknowledgment
The receiver sends positive acknowledgments if a frame has arrived safe and
sound and in order. If a frame is damaged or is received out of order, the receiver is silent
and will discard all subsequent frames until it receives the one it is expecting. The silence
of the receiver causes the timer of the unacknowledged frame to expire. This, in turn,
causes the sender to go back and resend all frames, beginning with the one with the
expired timer. The receiver does not have to acknowledge each frame received. It can
send one cumulative acknowledgment for several frames.

Resending Frame
When a frame is damaged, the sender goes back and sends a set of frames starting from
the damaged one up to the last one sent. For example, suppose the sender has already sent
frame 6, but the timer for frame 3 expires. This means that frame 3 has not been
acknowledged, so the sender goes back and sends frames 3, 4, 5, 6 again. That is why the
protocol is called Go-Back-N ARQ.
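The sender side described above can be sketched as a hypothetical Python class. The names follow the text's variables (S_F for the first outstanding frame); all per-frame timers are collapsed into a single on_timeout call, and the channel is omitted:

```python
class GoBackNSender:
    """m-bit sequence numbers; the window size W must be at most 2**m - 1."""
    def __init__(self, m=3, window=7):
        self.mod = 2 ** m
        self.window = window
        self.sf = 0              # S_F: sequence number of first outstanding frame
        self.sn = 0              # next sequence number to use
        self.buffer = {}         # copies of outstanding frames, keyed by seq

    def outstanding(self):
        return (self.sn - self.sf) % self.mod

    def send(self, data):
        assert self.outstanding() < self.window, "window full"
        seq = self.sn
        self.buffer[seq] = data            # keep a copy until acknowledged
        self.sn = (self.sn + 1) % self.mod
        return (seq, data)

    def on_ack(self, ack):
        # Acknowledgments are cumulative: ACK n covers every frame before n.
        while self.sf != ack and self.buffer:
            del self.buffer[self.sf]       # purge acknowledged copies
            self.sf = (self.sf + 1) % self.mod

    def on_timeout(self):
        # Go back: resend every outstanding frame, starting from S_F.
        return [(s % self.mod, self.buffer[s % self.mod])
                for s in range(self.sf, self.sf + self.outstanding())]
```

For instance, after sending frames 0 through 3 and receiving ACK 2, a timeout resends only frames 2 and 3, which is exactly the "go back to 2" behavior in the example.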
Operation
Let us see what happens in various situations.
Normal Operation
Figure 11.9 shows a normal operation of this mechanism. The sender keeps track of the
outstanding frames and updates the variables and windows as the acknowledgments
arrive.
Figure Go-Back-N ARQ, normal operation

Damaged or Lost Frame


Now let us see what happens if a frame is lost. Figure 11.10 shows that frame 2 is
lost. Note that when the receiver receives frame 3, it is discarded because the receiver is
expecting frame 2, not frame 3 (according to its window). After the timer for frame 2
expires at the sender site, the sender sends frames 2 and 3 (it goes back to 2).
Figure Go-Back-N ARQ, lost frame
Damaged or Lost Acknowledgment
If an acknowledgment is damaged or lost, we can have two situations. If the next
acknowledgment arrives before the expiration of any timer, there is no need for
retransmission of frames because acknowledgments are cumulative in this protocol. ACK 4
means ACK 1 to ACK 4. So if ACK 1, ACK 2, and ACK 3 are lost, ACK 4 covers them.
However, if the next ACK arrives after the time-out, the frame and all the frames after
that are resent. Note that the receiver never resends an ACK. We leave the figure and
details as an exercise.
Delayed Acknowledgment
A delayed acknowledgment also triggers the resending of frames. Again, we leave the
figure and details as an exercise.

Sender Window Size


We can now show why the size of the sender window must be less than 2^m. As an
example, we choose m = 2, which means the size of the window can be 2^m − 1, or 3. Figure
11.11 compares a window size of 3 and 4.
Figure Go-Back-N ARQ: sender window size
If the size of the window is 3 (less than 2^2) and all three acknowledgments are
lost, the frame 0 timer expires and all three frames are resent. However, the window of
the receiver is now expecting frame 3, not frame 0, so the duplicate frame is correctly
discarded. On the other hand, if the size of the window is 4 (equal to 2^2) and all
acknowledgments are lost, the sender will send the duplicate of frame 0. However, this
time the window of the receiver expects to receive frame 0, so it accepts frame 0, not as a
duplicate, but as the first frame in the next cycle. This is an error.
In Go-Back-N ARQ, the size of the sender window must be less than 2^m; the size of the
receiver window is always 1.
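The modular-arithmetic reason can be checked directly with a small illustrative helper (not from the text):

```python
def duplicate_detected(window, m):
    """After `window` frames are delivered and every ACK is lost, the sender
    times out and resends frame 0.  The receiver detects the duplicate only
    if its next expected number, window % 2**m, differs from 0."""
    mod = 2 ** m
    return (window % mod) != 0
```

With m = 2, a window of 3 keeps the duplicate distinguishable, while a window of 4 wraps the sequence space so the resent frame 0 looks like a fresh frame.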

Bidirectional Transmission and Piggybacking


As in the case of Stop-and-Wait ARQ, Go-Back-N ARQ can also be bidirectional. We
can also use piggybacking to improve the efficiency of the transmission. However, note
that each direction needs both a sender window and a receiver window. We leave the
configuration of the windows as an exercise.
SELECTIVE REPEAT ARQ
Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps
track of only one variable, and there is no need to buffer out-of-order frames; they are
simply discarded. However, this protocol is very inefficient for a noisy link. In a noisy
link a frame has a higher probability of damage, which means the resending of multiple
frames. This resending uses up the bandwidth and slows down the transmission. For
noisy links, there is another mechanism that does not resend N frames when just one
frame is damaged; only the damaged frame is resent. This mechanism is called Selective
Repeat ARQ. It is more efficient for noisy links, but the processing at the receiver is more
complex.
Sender and Receiver Windows
The configuration of the sender and its control variables for Selective Repeat ARQ are
the same as those for Go-Back-N ARQ. However, for reasons to be discussed later, the
size of the window should be at most one-half of the value 2^m. The receiver window size
must also be this size. This window, however, specifies the range of the accepted
received frame. In other words, in Go-Back-N the receiver is looking for one specific
sequence number; in Selective Repeat, the receiver is looking for a range of sequence
numbers. The receiver has two control variables RF and RL to define the boundaries of
the window. Figure 11.12 shows the sender and receiver windows.
Figure Selective Repeat ARQ, sender and receiver windows
Selective Repeat ARQ also defines a negative acknowledgment (NAK) that reports the
sequence number of a damaged frame before the timer expires.

Operation
Let us show the operation of the mechanism with an example of a lost frame, as shown in
Figure .
Frames 0 and 1 are accepted when received because they are in the range specified by the
receiver window. When frame 3 is received, it is also accepted for the same reason.
However, the receiver sends a NAK 2 to show that frame 2 has not been received. When
the sender receives the NAK 2, it resends only frame 2, which is then accepted because it
is in the range of the window.
Figure Selective Repeat ARQ, lost frame
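The receiver behavior in this example can be sketched as follows. The class is hypothetical (m = 2, window of 2); only sequence numbers are tracked, and the NAK reports the first missing frame:

```python
class SelectiveRepeatReceiver:
    def __init__(self, m=2):
        self.mod = 2 ** m
        self.window = 2 ** m // 2     # at most one-half of 2**m
        self.rf = 0                   # R_F: start of the receive window
        self.buffer = {}              # out-of-order frames held for later

    def in_window(self, seq):
        return (seq - self.rf) % self.mod < self.window

    def on_frame(self, seq):
        """Accept any frame inside the window; deliver what is now in order
        and NAK the first frame still missing (if any)."""
        if self.in_window(seq):
            self.buffer[seq] = True
        delivered = []
        while self.rf in self.buffer:          # slide over in-order frames
            del self.buffer[self.rf]
            delivered.append(self.rf)
            self.rf = (self.rf + 1) % self.mod
        nak = self.rf if self.buffer else None
        return delivered, nak
```

Replaying the text's scenario: frames 0 and 1 are delivered in order; frame 3 is buffered and NAK 2 is reported; when the resent frame 2 arrives, frames 2 and 3 are delivered together.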
Lost and Delayed ACKs and NAKs
We leave lost and delayed ACKs and NAKs as exercises. Note that the sender also sets a
timer for each frame sent.
Sender Window Size
We can now show why the size of the sender and receiver windows must be at most one-
half of 2^m. For an example, we choose m = 2, which means the size of the window
should be 2^m/2, or 2. Figure 11.14 compares a window size of 2 with a window size of 3.
If the size of the window is 2 and all acknowledgments are lost, the timer for frame 0
expires and frame 0 is resent. However, the window of the receiver is now expecting
frame 2, not frame 0, so this duplicate frame is correctly discarded. When the size of the
window is 3 and all acknowledgments are lost, the sender sends a duplicate of frame 0.
However, this time, the window of the receiver expects to receive frame 0 (0 is part of the
window), so it accepts frame 0, not as a duplicate, but as the first frame in the next cycle.
This is clearly an error.
In Selective Repeat ARQ, the size of the sender and receiver windows must be at
most one-half of 2^m.
Figure Selective Repeat ARQ, sender window size
Bidirectional Transmission and Piggybacking
As in the case of Stop-and-Wait ARQ and the Go-Back-N ARQ, Selective Repeat ARQ
can also be bidirectional. We can use piggybacking to improve the efficiency of the
transmission. However, note that each direction needs both a sender window and a
receiver window. We leave the configuration of the windows as an exercise.
Bandwidth-Delay Product
A measure of the efficiency of an ARQ system is the product of the bandwidth (in bits
per second) and the round-trip delay (in seconds). If the link has an adequate bandwidth,
the sender will exhaust its window quickly and will wait for the acknowledgments
to come. If the delay is long, the sender can also exhaust its window while
waiting. So the product of these two factors can be used to define the efficiency of an
ARQ system. The bandwidth-delay product is a measure of the number of bits we
can send out of our system while waiting for news from the receiver.
The system can send 20,000 bits during the time it takes for the data to go from
the sender to the receiver and then back again. However, the system sends only 1000 bits.
We can say that the link utilization is only 1000/20,000, or 5%. For this reason, for a link
with high bandwidth or long delay, use of Stop-and-Wait ARQ wastes the capacity of the
link.
Example 2
What is the utilization percentage of the link in Example 1 if the link uses Go-Back-N
ARQ with a 15-frame sequence?
Solution
The bandwidth-delay product is still 20,000. The system can send up to 15 frames or
15,000 bits during a round trip. This means the utilization is 15,000/20,000, or 75
percent. Of course, if there are damaged frames, the utilization percentage is much less
because frames have to be resent.
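The arithmetic can be reproduced with a small helper. The 1 Mbps bandwidth and 20 ms round-trip delay are assumed figures, chosen only because they produce the 20,000-bit bandwidth-delay product used in the text:

```python
def utilization(frames_in_flight, frame_bits, bandwidth_bps, rtt_s):
    """Fraction of the link capacity used while waiting for acknowledgments:
    bits actually sent per round trip over the bandwidth-delay product."""
    bdp = bandwidth_bps * rtt_s            # bandwidth-delay product, in bits
    return (frames_in_flight * frame_bits) / bdp
```

With 1000-bit frames, Stop-and-Wait (one frame in flight) yields 1000/20,000 = 5 percent, while Go-Back-N with 15 outstanding frames yields 15,000/20,000 = 75 percent.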

Pipelining
In networking and in other areas, a task is often begun before the previous task has ended.
This is known as pipelining. There is no pipelining in Stop-and-Wait ARQ because we
need to wait for a frame to reach the destination and be acknowledged before the next
frame can be sent. However, pipelining does apply to Go-Back-N ARQ and Selective
Repeat ARQ because several frames can be sent before we receive news about the
previous frames.
Pipelining improves the efficiency of the transmission if the number of bits in transition
is large with respect to the bandwidth-delay product.
HDLC
High-level Data Link Control (HDLC) is an actual protocol designed to support both
half-duplex and full-duplex communication over point-to-point and multipoint links. It
implements the ARQ mechanisms we discussed in this chapter.
Configurations and Transfer Modes
HDLC provides two common modes of transmission: NRM and ABM.
NRM
In normal response mode (NRM), the station configuration is unbalanced. We have one
primary station and multiple secondary stations. A primary station can send commands;
a secondary station can only respond. The NRM is used for both point-to-point and
multipoint links. See Figure 11.15.
ABM
In asynchronous balanced mode (ABM), the configuration is balanced. The link is point-
to-point, and each station can function as a primary and a secondary, as shown in Figure

Figure NRM
Frame
To provide the flexibility necessary to support all the options possible in the modes and
configurations described above, HDLC defines three types of frames: information frames
(I-frames), supervisory frames (S-frames), and unnumbered frames (U-frames). Each
type of frame works as an envelope for the transmission of a different type of message.
I-frames are used to transport user data and control information relating to user data
(piggybacking). S-frames are used only to transport control information. U-frames are
reserved for system management. Information carried by U-frames is intended for
managing the link itself.

Frame Format
Each frame in HDLC may contain up to six fields, as shown in Figure 11.17: a
beginning flag field, an address field, a control field, an information field, a frame check
sequence (FCS) field, and an ending flag field. In multiple-frame transmissions, the
ending flag of one frame can serve as the beginning flag of the next frame.
Flag Field
The flag field of an HDLC frame is an 8-bit sequence with a bit pattern 01111110 that
identifies both the beginning and end of a frame and serves as a synchronization
Figure HDLC frame

pattern for the receiver. The flag field is discussed further in the section on data
transparency.
Address Field
The second field of an HDLC frame contains the address of the secondary station that is
either the originator or the destination of the frame (or the station acting as secondary in
the case of combined stations). If a primary station creates a frame, it contains a to
address. If a secondary creates the frame, it contains a from address. An address field can
be 1 byte or several bytes long, depending on the needs of the network. One byte can
identify up to 128 stations (because 1 bit is used for another purpose). Larger networks
require multiple-byte address fields.
If the address field is only 1 byte, the last bit is always a 1. If the address is more than 1
byte, all bytes but the last one will end with 0; only the last will end with 1. Ending each
intermediate byte with 0 indicates to the receiver that there are more address bytes to
come. The networks that do not use primary/secondary configuration, such as Ethernet
(see Chapter 14), use two address fields: the sender address and the receiver address.
Control Field
The control field is a 1- or 2-byte segment of the frame used for flow and error control.
The interpretation of bits in this field is different for different frame types. We discuss
this field when we discuss the frame types.
Information Field
The information field contains the user’s data from the network layer or network
management information. Its length can vary from one network to another but is always
fixed within each network.
FCS Field
The frame check sequence (FCS) is HDLC's error detection field. It can contain either a
2- or 4-byte ITU-T CRC.
Frame Type
HDLC defines three types of frames: the I-frame, S-frame, and U-frame, as shown in
Figure 11.18.
Figure HDLC frame types

I-Frame
I-frames are designed to carry user data from the network layer. In addition, they
can include flow and error control information (piggybacking). Figure 11.19 shows the
format of the control field for an I-frame.
Figure I-frame

The bits in the control field of the I-frame are interpreted as follows:
• If the first bit of the control field is 0, this means the frame is an I-frame.
• The next 3 bits, called N(S), define the sequence number of the frame in travel. Note
that with 3 bits, we can only define a sequence number between 0 and 7. The value of
this field corresponds to the value of control variable S, as discussed for the three ARQ
mechanisms.
• The next bit is called the P/F bit. The P/F field is a single bit with a dual purpose. It has
meaning only when it is set (bit = 1) and can mean poll or final. It means poll when the
frame is sent by a primary station to a secondary (when the address field contains the
address of the receiver). It means final when the frame is sent by a secondary to a primary
(when the address field contains the address of the sender).
• The next 3 bits, called N(R), correspond to the value of the ACK when piggybacking is
used.
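The bit layout above can be sketched as pack/unpack helpers. Treating the first transmitted bit as the least significant bit of the byte is an assumption made for this illustration; wire-level bit order varies between descriptions:

```python
def pack_i_control(ns, nr, pf=0):
    """Build a 1-byte I-frame control field.
    Bit 0 = 0 (marks an I-frame), bits 1-3 = N(S), bit 4 = P/F,
    bits 5-7 = N(R)."""
    assert 0 <= ns <= 7 and 0 <= nr <= 7 and pf in (0, 1)
    return (ns << 1) | (pf << 4) | (nr << 5)

def unpack_i_control(byte):
    """Return (N(S), N(R), P/F) from an I-frame control byte."""
    assert byte & 1 == 0, "first bit 1: not an I-frame"
    return (byte >> 1) & 7, (byte >> 5) & 7, (byte >> 4) & 1
```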
S-Frames
Supervisory frames are used for flow and error control whenever piggybacking is either
impossible or inappropriate (when the station either has no data of its own to send or
needs to send a command or response other than an acknowledgment). S-frames do not
have information fields. Figure 11.20 shows the format of the control field for an S-
frame.
Figure S-frame control field in HDLC

The bits in the control field are interpreted as follows:


• If the first 2 bits of the control field are 10, this means the frame is an S-frame.
• The next 2 bits of the control field of an S-frame are a code that defines the four types
of S-frames: receive ready (RR), receive not ready (RNR), reject (REJ), and selective
reject (SREJ).
a. Receive ready (RR). If the value of the code subfield is 00, it is an RR S-frame. This
kind of frame acknowledges a safe and sound frame or group of frames.
b. Receive not ready (RNR). If the value of the code subfield is 10, it is an RNR S-frame.
This kind of frame is the RR frame with additional duties. It acknowledges the receipt of
a frame or group of frames, and it announces that the
receiver is busy and cannot receive more frames. It acts as a kind of congestion control
mechanism by asking the sender to slow down.
c. Reject (REJ). If the value of the code subfield is 01, it is a REJ S-frame. This is a NAK
frame, but not like the one used for Selective Repeat ARQ. It is a NAK
that can be used in Go-Back-N ARQ to improve the efficiency of the process by
informing the sender, before the sender timer expires, that the last frame is lost or
damaged.
d. Selective reject (SREJ). If the value of the code subfield is 11, it is an SREJ S-frame.
This is a NAK frame used in Selective Repeat ARQ. Note that the HDLC protocol uses
the term selective reject instead of selective repeat.
• The fifth bit in the control field is the P/F bit, as discussed before.
• The next 3 bits, called N(R), correspond to the ACK or NAK value.
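Decoding the S-frame control field can be sketched with the eight bits listed in transmission order. This is an illustrative helper, not a full HDLC implementation:

```python
# Code subfield values in transmission order, per the list above.
S_TYPES = {(0, 0): "RR", (1, 0): "RNR", (0, 1): "REJ", (1, 1): "SREJ"}

def decode_s_control(bits):
    """bits: the 8 control-field bits in transmission order.
    An S-frame starts with 1, 0; the next 2 bits select the type;
    the fifth bit is P/F; the last 3 bits are N(R)."""
    assert bits[0] == 1 and bits[1] == 0, "not an S-frame"
    s_type = S_TYPES[(bits[2], bits[3])]
    pf = bits[4]
    nr = bits[5] * 4 + bits[6] * 2 + bits[7]   # N(R) as a 3-bit number
    return s_type, pf, nr
```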
U-Frames
Unnumbered frames are used to exchange session management and control information
between connected devices. Unlike S-frames, U-frames contain an information field, but
one used for system management information, not user data. As with S-frames, however,
much of the information carried by U-frames is contained in codes included in the control
field. U-frame codes are divided into two sections: a 2-bit prefix before the P/F bit and a
3-bit suffix after the P/F bit. Together, these two segments (5 bits) can be used to create
up to 32 different types of U-frames. Some of the more common combinations are
shown in Figure.
Figure U-frame control field in HDLC

The U-frame commands and responses listed in Table 11.1 can be used for different
purposes such as mode setting, exchanging unnumbered frames, and connection and
disconnection of the link.

Data Transparency
The data field of the HDLC frame can carry text as well as nontextual information such
as graphics, audio, video, or other bit sequences. Unfortunately, some message types can
create problems during transmission. For example, if the data field of an HDLC frame
contains a pattern that is the same as the sequence reserved for the flag field 01111110,
the receiver interprets that sequence as the ending flag. The rest of the bits are assumed to
belong to the next frame. This phenomenon is called a lack of data transparency. When
data are transparent, all data are recognized as data and all control information is
recognized as control information. There is no ambiguity as to which is which.
Bit Stuffing
To guarantee that the flag field sequence does not appear inadvertently anywhere else in
the frame, HDLC uses a process called bit stuffing. Every time a sender wants to transmit
a bit sequence having more than five consecutive 1s, it inserts (stuffs) one redundant 0
after the fifth 1. For example, the sequence 011111111000 becomes 0111110111000.
This extra 0 is inserted regardless of whether the sixth bit is another 1. Its presence tells
the receiver that the current sequence is not a flag. Once the receiver has seen the stuffed
0, the 0 is dropped from the data and the original bit stream is restored.
Bit stuffing s tI’e process of adthng one extn 41 whenever there are five consecutive
Is in the data so that the receiver does not m the data for a flag
Figure shows bit stuffing at the sender and bit removal at the receiver. Note that even if
we have a 0 after five Is, we still stuff a 0. The 0 will be removed by the receiver.
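The stuffing and removal steps can be sketched as two illustrative functions operating on bit lists:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s, regardless of
    what the next bit would have been."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)      # the stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Drop the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:               # this bit is the stuffed 0: discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out
```

Running the text's example, 011111111000 stuffs to 0111110111000, and unstuffing recovers the original stream exactly.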
Figure Bit stuffing and removal

With three exceptions, bit stuffing is required whenever five 1s occur consecutively. The
exceptions are when the bit sequence really is a flag, when the transmission is being
aborted, and when the channel is idle. The flowchart in Figure 11.25 shows the
Figure Bit stuffing in HDLC
IEEE 802 STANDARDS
The standards are divided into parts, each published as a separate book. The 802.1
standard gives an introduction to the set of standards and defines the inter-face primitives.
The 802.2 standard describes the upper part of the data link layer, which uses the LLC
(Logical Link Control) protocol.

Parts 802.3 through 802.5 describe the three LAN standards, the CSMA/CD,
token bus, and token ring standards, respectively. Each standard covers the physical layer
and MAC sub-layer protocol.

IEEE Standard 802.3 and Ethernet

The IEEE 802.3 standard is for a 1-persistent CSMA/CD LAN. To review the
idea, when a station wants to transmit, it listens to the cable. If the cable is busy, the
station waits until it goes idle; otherwise it transmits immediately.

If two or more stations simultaneously begin transmitting on an idle cable, they


will collide. All colliding stations then terminate their transmission, wait a random time,
and repeat the whole process all over again.

The Xerox Ethernet was so successful that Xerox, DEC, and Intel drew up a
standard for a 10-Mbps Ethernet. This standard formed the basis for 802.3. The
published 802.3 standard differs from the Ethernet specification in that it describes a
whole family of 1-persistent CSMA/CD systems, running at speeds from 1 to 10 Mbps
on various media.

Only one header field differs between the two (the 802.3 length field is used for
packet type in Ethernet). The initial standard also gives the parameters for a 10-Mbps
baseband system using 50-ohm coaxial cable. Parameter sets for other media and speeds
came later.

Three kinds of 802.3 cabling:

10Base5, 10Base2, 10Base-T

With 10Base2, the connection to the cable is just a passive BNC T-junction
connector. The transceiver electronics are on the controller board, and each station
always has its own transceiver.
With 10Base-T, there is no shared cable at all, just the hub (a box full of electronics).
Adding or removing a station is simpler in this configuration, and cable breaks can be
detected easily.
The disadvantage of 10Base-T is that the maximum cable run from the hub is
only 100 meters, maybe 150 meters if high-quality (category 5) twisted pairs are used.
Also, a large hub costs thousands of dollars.
A fourth cabling option for 802.3 is 10Base-F, which uses fiber optics. This
alternative is expensive due to the cost of the connectors and terminators, but it has
excellent noise immunity and is the method of choice when running between buildings or
widely separated hubs.
Cable Topologies
Linear, Spine, Tree, Segmented

Encoding:

Binary encoding, Manchester encoding, Differential Manchester encoding
Frame format:

Switched 802.3 LANs


When a station wants to transmit an 802.3 frame, it outputs a standard frame to
the switch. The plug-in card getting the frame checks to see if it is destined for one of the
other stations connected to the same card. If so, the frame is copied there. If not, the
frame is sent over the high-speed backplane to the destination station’s card. The
backplane typically runs at over 1Gbps using a proprietary protocol.
IEEE Standard 802.4: Token Bus

A simple system with a known worst case is a ring in which the stations take turns
sending frames. If there are n stations and it takes T sec to send a frame, no frame will
ever have to wait more than nT sec to be sent.

The factory automation people in the 802 committee liked the conceptual idea of
a ring but did not like the physical implementation because a break in the ring cable
would bring the whole network down.

Furthermore, they noted that a ring is a poor fit to the linear topology of most
assembly lines. As a result, a new standard was developed having the robustness of the
802.3 broadcast cable, but the known worst-case behavior of a ring.

Token Bus
The 802.4 MAC protocol is very complex, with each station having to maintain
different timers and more than two dozen internal state variables. The 802.4 standard is
much longer than 802.3, filling more than 200 pages.

The two standards are also quite different in style, with 802.3 giving the protocols as
Pascal procedures, whereas 802.4 gives them as finite state machines, with the actions
written in Ada®.
• The token bus frame format is shown below. It is unfortunately different from the
802.3 frame format.
• The preamble is used to synchronize the receiver’s clock, as in 802.3, except that
here it may be as short as 1 byte.
• The Starting delimiter and Ending delimiter fields are used to mark the frame
boundaries.
• Both of these fields contain analog encoding of symbols other than 0s and 1s, so
that they cannot occur accidentally in the user data.
• As a result, no length field is needed.

802.4 frame format

FDDI
Fiber distributed data interface (FDDI) is a local area network protocol standardized
by ANSI and the ITU-T (ITU-T X.3). It supports data rates of 100 Mbps and provides a high-speed
alternative to Ethernet and Token Ring.

When FDDI was designed, speeds of 100 Mbps required fiber-optic cable.
Today, however, comparable speeds are available using copper cable. The copper
version of FDDI is known as CDDI.

Access Method: Token Passing

In FDDI, access is limited by time. A station may send as many frames as it can
within its allotted access period, with the proviso that real-time data be sent first.

To implement this access mechanism, FDDI differentiates between two types of data
frames: synchronous and asynchronous.

§ Synchronous here refers to information that is real-time.


§ Asynchronous refers to information that is not.

These frames are usually called S-frames and A-frames.


Each station that captures the token is required to send S-frames first. In fact, it
must send its S-frames whether or not its time allotment has run out (see below). Any
remaining time may then be used to send A-frames.

To understand how this mechanism ensures fair and timely link access, it is
necessary to understand the FDDI time registers and timers.
Synchronous Allocation (SA): The SA register indicates the length of time allowed each
station for sending synchronous data. This value is different for each station and is
negotiated during initialization of the ring.

Target Token Rotation Time (TTRT): The TTRT register indicates the average time
required for a token to circulate around the ring exactly once (the elapsed time between a
token’s arrival at a given station and its next arrival at the same station). Because it is an
average, the actual time of any rotation may be greater or less than this value.

Absolute Maximum Time (AMT): The AMT register holds a value equal to twice the
TTRT. A token may not take longer than this time to make one rotation of the ring. If it
does, some station or stations are monopolizing the network and the ring must be
reinitialized.

FDDI operation

Token Rotation Timer (TRT): The TRT runs continuously and measures the actual time
taken by the token to complete a cycle. In our implementation, we use an incrementing
TRT for simplicity, although some implementations may use a decrementing timer.

Token Holding Timer (THT): The THT begins running as soon as the token is received.
Its function is to show how much time remains for sending asynchronous frames once
the synchronous frames have been sent.
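The capture-the-token rule described above can be sketched as follows. This is a deliberately simplified model of one token holding (S-frames always go first, even past the allotment; A-frames only while holding time remains), not the full ANSI timer state machine; the function name and frame-time units are illustrative.

```python
def on_token(s_frames, a_frames, tht_remaining):
    """Return the frames transmitted during one token holding.

    s_frames / a_frames: lists of per-frame transmission times
    tht_remaining:       time left on the token-holding timer
    """
    sent = []
    used = 0.0
    # S-frames are always sent, whether or not the allotment has run out
    for t in s_frames:
        sent.append(('S', t))
        used += t
    # A-frames are sent only while holding time remains
    for t in a_frames:
        if used + t > tht_remaining:
            break
        sent.append(('A', t))
        used += t
    return sent

# Two S-frames always go out; only one A-frame fits in the remaining time.
print(on_token([1.0, 1.0], [2.0, 2.0, 2.0], tht_remaining=5.0))
# [('S', 1.0), ('S', 1.0), ('A', 2.0)]
```

Because synchronous traffic is never deferred, real-time data gets through on every rotation; asynchronous traffic absorbs whatever time is left.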
FDDI Layers

Logical Link Control


The LLC layer is similar to that defined in the IEEE 802.2 protocols.
Media Access Control
The FDDI MAC layer is almost identical to that defined for Token Ring.
However, although the functions are similar, the FDDI MAC frame itself is different
enough to warrant an independent discussion of each field.

FDDI frame types


Frame Fields There are eight fields in the FDDI frame:

Start delimiter (SD): The first byte of the field is the frame’s starting flag. As in
Token Ring, these bits are replaced in the physical layer by the control codes (violations)
J and K.
Frame control (FC): The second byte of the frame identifies the frame type. The next
two fields are the destination and source addresses. Each address consists of two to six
bytes.
Data: Each data frame can carry up to 4500 bytes of data.
CRC: FDDI uses the standard IEEE four-byte cyclic redundancy check.
End delimiter (ED): This field consists of half a byte in the data frame or a full byte in
the token frame. It is changed in the physical layer to one T violation symbol in the
data/command frame or two T symbols in the token frame.
Frame status (FS): The FDDI FS field is similar to that of Token Ring. It is included
only in the data/command frame and consists of 1.5 bytes.
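The "standard IEEE four-byte cyclic redundancy check" in the CRC field uses the same 32-bit generator polynomial as Ethernet, which Python's `binascii.crc32` implements; a sketch of computing and verifying the frame check sequence (the payload bytes here are illustrative):

```python
import binascii

payload = b'FDDI data frame payload'

# Sender: compute the 4-byte CRC over the frame contents
fcs = binascii.crc32(payload) & 0xFFFFFFFF   # mask to an unsigned 32-bit value
print(f'FCS = {fcs:08x}')

# Receiver: recompute over the received bytes and compare with the FCS field
received_ok = (binascii.crc32(payload) & 0xFFFFFFFF) == fcs
print(received_ok)   # True
```

A single corrupted bit in the payload changes the recomputed CRC, so the comparison fails and the frame is discarded.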
Implementation: Physical Medium Dependent (PMD) Layer

The physical medium dependent (PMD) layer defines the required connections
and electronic components. Specifications for this layer depend on whether the
transmission medium used is fiber-optic or copper cable.
Data Ring
FDDI is implemented as a dual ring. In most cases, data transmission is confined
to the primary ring. The secondary ring is provided in case the primary fails.
FDDI Ring

The secondary ring makes FDDI self-healing. Whenever a problem occurs on the
primary ring, the secondary can be activated to complete data circuits and maintain
service.

FDDI ring after a failure

BRIDGES:
Bridges operate in both the physical and the data link layers of the OSI model.
Bridges can divide a large network into smaller segments. Unlike repeaters, however,
bridges contain logic that allows them to keep the traffic for each segment separate.
In this way, they filter traffic, a fact that makes them useful for controlling
congestion and isolating problem links. Bridges can also provide security through this
partitioning of traffic.
A bridge operates at the data link layer, giving it access to the physical addresses
of all stations connected to it.
When a frame enters a bridge, the bridge not only regenerates the signal but
checks the address of the destination and forwards the new copy only to the segment to
which the address belongs.
As a bridge encounters a packet, it reads the address contained in the frame and
compares that address with a table of all the stations on both segments. When it finds a
match, it discovers to which segment the station belongs and relays the packet only to
that segment.

A Bridge

A packet from station A addressed to station D arrives at the bridge. Station A is


on the same segment as station D; therefore, the packet is blocked from crossing into the
lower segment. Instead the packet is relayed to the entire upper segment and received by
station D.
A packet generated by station A is intended for station G. The bridge allows the
packet to cross and relays it to the entire lower segment, where it is received by station G.
Functions of a Bridge:

A Packet from A to D A Packet from A to G


Types of Bridges:
To select between segments, a bridge must have a look-up table that contains the
physical addresses of every station connected to it.
Simple Bridge:
Simple bridges are the most primitive and least expensive type of bridge. A
simple bridge links two segments and contains a table that lists the addresses of all the
stations included in each of them.
What makes it primitive is that these addresses must be entered manually. Before
a simple bridge can be used, an operator must sit down and enter the addresses of every
station.
Whenever a new station is added, the table must be modified. If a station is
removed, the newly invalid address must be deleted.
Multiport Bridge:
A multiport bridge can be used to connect more than two LANs. The figure
below shows that the bridge has three tables, each one holding the physical addresses of
stations reachable through the corresponding port.
Transparent Bridge:
A transparent, or learning, bridge builds its table of station addresses on its own as
it performs its bridge functions. When the transparent bridge is first installed, its table is
empty.
Multiport Bridge

As it encounters each packet, it looks at both the destination and the source
addresses. It checks the destination to decide where to send the packet.
If it does not yet recognize the destination address, it relays the packet to all of
the stations on both segments. It uses the source address to build its table.
As it reads the source address, it notes which side the packet came from and
associates that address with the segment to which it belongs.
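The learn-from-source, forward-by-destination behaviour just described can be sketched as a small class. This is an illustrative model (addresses and port numbers are made up), not a real bridge implementation:

```python
class LearningBridge:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}          # station address -> port it was last seen on

    def handle(self, src, dst, in_port):
        """Return the list of ports the frame is relayed to."""
        self.table[src] = in_port          # learn from the source address
        out = self.table.get(dst)
        if out is None:                    # unknown destination: flood
            return [p for p in range(self.num_ports) if p != in_port]
        if out == in_port:                 # same segment: filter (block)
            return []
        return [out]                       # known destination: forward

bridge = LearningBridge(num_ports=2)
print(bridge.handle('A', 'D', 0))   # [1]  D unknown, flooded to the other port
print(bridge.handle('D', 'A', 0))   # []   A already learned on port 0: filtered
print(bridge.handle('A', 'G', 0))   # [1]  G unknown, flooded again
```

The table starts empty, exactly as the text says, and fills in as a side effect of ordinary traffic: after the first frame from A, every frame addressed to A can be filtered or forwarded without flooding.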
Spanning Tree Algorithm: Bridges are normally installed redundantly, which means
that two LANs may be connected by more than one bridge.
In this case, if the bridges are transparent bridges, they may create a loop, which
means a packet may be going round and round, from one LAN to another and back again
to the first LAN. To avoid this situation, bridges today use what is called the spanning
tree algorithm.
Source Routing: Another solution to prevent loops in the LANs connected by bridges is
source routing. In this method, the source of the packet defines the bridges and the LANs
through which the packet should go before reaching the destination.
Bridges Connecting Different LANs:
Format: Frames sent by different LANs have different formats (compare an Ethernet
frame with a Token Ring frame).
Payload size: The size of the data that can be encapsulated in a frame varies from
protocol to protocol (compare the maximum payload size of an Ethernet frame with that
of a Token Ring).
Data rate: Different protocols use different data rates (compare the 10-Mbps data rate of
an Ethernet with the 16-Mbps data rate of a Token Ring); the bridge should buffer the
frame to compensate for this difference.
Address bit order: The bit order of addresses in different LAN protocols is not the same;
for example, a bridge should reverse an address if it is connecting an Ethernet LAN to a
Token Ring LAN.
Other issues: There are other issues that should be resolved, such as acknowledgment,
collision, and priority, which may be part of one LAN protocol but not the other.
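The address bit-order issue above arises because Ethernet transmits the least significant bit of each byte first while Token Ring transmits the most significant bit first, so a bridge joining the two must reverse the bits within every address byte. A sketch (the example address is made up):

```python
def reverse_bits(byte: int) -> int:
    """Reverse the 8 bits of a single byte."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)
        byte >>= 1
    return result

def convert_address(addr: bytes) -> bytes:
    """Reverse the bit order of each byte of a 6-byte address."""
    return bytes(reverse_bits(b) for b in addr)

mac = bytes.fromhex('02608c123456')
print(convert_address(mac).hex())   # 400631482c6a
```

Applying the conversion twice gives back the original address, so the same routine serves for frames crossing the bridge in either direction.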
