2. What are the three criteria necessary for an effective and efficient network?
The most important criteria are performance, reliability, and security.
Performance of the network depends on number of users, type of transmission
medium, the capabilities of the connected h/w and the efficiency of the s/w.
Reliability is measured by frequency of failure, the time it takes a link to recover
from the failure and the network’s robustness in a catastrophe.
Security issues include protecting data from unauthorized access and viruses.
10. What are the three fundamental characteristics that determine the effectiveness
of a data communication system?
The effectiveness of the data communication system depends on three fundamental
characteristics:
Delivery: The system must deliver data to the correct destination.
Accuracy: The system must deliver data accurately.
Timeliness: The system must deliver data in a timely manner.
7. For n devices in a network, what is the number of cable links required for a
mesh and ring topology?
Mesh topology: n(n-1)/2
Ring topology: n
10. Assume 6 devices are arranged in a mesh topology. How many cables are
needed? How many ports are needed for each device?
Number of cables = n(n-1)/2 = 6(6-1)/2 = 15
Number of ports per device = n-1 = 6-1 = 5
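The arithmetic above generalizes to any n; a quick sketch (function names are illustrative):

```python
def mesh_links(n):
    """Full mesh: every pair of the n devices gets a dedicated link."""
    return n * (n - 1) // 2

def mesh_ports_per_device(n):
    """Each device needs one I/O port per other device."""
    return n - 1

print(mesh_links(6))             # 15 cables
print(mesh_ports_per_device(6))  # 5 ports per device
```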
12. What are header and trailers and how do they get added and removed?
Each layer in the sending machine adds its own information to the message it
receives from the layer just above it and passes the whole package to the layer just
below it. This information is added in the form of headers or trailers. Headers are
added to the message at layers 6, 5, 4, 3, and 2. A trailer is added at layer 2. At the
receiving machine, the headers or trailers attached to the data unit at the
corresponding sending layers are removed, and actions appropriate to that layer are
taken.
13. The transport layer creates a communication between the source and
destination. What are the three events involved in a connection?
Creating a connection involves three steps: connection establishment, data transfer
and connection release.
16. Using HDB3, encode the bit stream 10000000000100. Assume the number of
1s so far is odd and the first 1 is positive.
The first 1 is sent as a positive pulse. The first run of four 0s follows an even
number of pulses since the last substitution (odd + the first 1), so it is replaced by
B00V (-00-); the second run again follows an even count, so it becomes B00V with
opposite polarity (+00+). The remaining bits encode normally:
+ -00- +00+ 00 - 00
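The substitutions can be checked mechanically. A minimal HDB3 encoder sketch; the default arguments encode the question's two assumptions (an odd count of 1s so far, and a prior pulse chosen so the first 1 comes out positive), and the function name and interface are illustrative:

```python
def hdb3_encode(bits, pulses_since_sub=1, last_pulse=-1):
    """HDB3: every run of four 0s is replaced by 000V (odd number of
    pulses since the last substitution) or B00V (even number).
    pulses_since_sub=1 encodes "number of 1s so far is odd";
    last_pulse=-1 makes the first 1 a positive pulse."""
    out, i = [], 0
    while i < len(bits):
        if bits[i] == "1":
            last_pulse = -last_pulse          # normal alternating pulse
            out.append(last_pulse)
            pulses_since_sub += 1
            i += 1
        elif bits[i:i + 4] == "0000":
            if pulses_since_sub % 2:          # odd count -> 000V
                out += [0, 0, 0, last_pulse]  # V repeats last polarity
            else:                             # even count -> B00V
                last_pulse = -last_pulse      # B is a normal pulse
                out += [last_pulse, 0, 0, last_pulse]
            pulses_since_sub = 0
            i += 4
        else:
            out.append(0)
            i += 1
    return out

# +1 = positive pulse, -1 = negative pulse, 0 = no pulse
print(hdb3_encode("10000000000100"))
# [1, -1, 0, 0, -1, 1, 0, 0, 1, 0, 0, -1, 0, 0]
```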
17. What are the functions of a DTE? What are the functions of a DCE?
Data terminal equipment is a device that is an information source or an
information sink. It is connected to a network through a DCE. Data circuit-
terminating equipment is a device used as an interface between a DTE and a
network.
19. Discuss the mode for propagating light along optical channels.
There are two modes for propagating light along optical channels: multimode
and single mode.
Multimode: Multiple beams from a light source move through the core in different
paths.
Single mode: Fiber with an extremely small diameter limits beams to a few angles,
resulting in an almost horizontal beam.
21. How do guided media differ from unguided transmission media?
Guided transmission media:
1. The medium is contained within a physical boundary.
2. Transmission takes place through a wire.
Unguided transmission media:
1. The medium does not have any physical boundary.
2. Transmission is wireless.
24. Give the relationship between propagation speed and propagation time.
The time required for a signal or a bit to travel from one point to another is
called the propagation time:
Propagation time = distance / propagation speed
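As a quick worked example (the 2 × 10^8 m/s figure is an assumed typical propagation speed for guided media, roughly two-thirds the speed of light):

```python
def propagation_time(distance_m, speed_m_per_s=2e8):
    """Propagation time = distance / propagation speed."""
    return distance_m / speed_m_per_s

# e.g., a 12,000 km link:
print(propagation_time(12_000_000))   # 0.06 s, i.e., 60 ms
```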
PART-B
1. Discuss the four basic network topologies and give the advantages and
disadvantages of each type. (A/M-2011)
Mesh Topology:
In a mesh topology, every device has a dedicated point-to-point link to every
other device, so a fully connected mesh of n devices requires n(n-1)/2 physical links.
To accommodate that many links, every device on the network must have n - 1
input/output (I/O) ports (see Figure 1.5) to be connected to the other n - 1 stations.
i. The use of dedicated links guarantees that each connection can carry its own data
load, thus eliminating the traffic problems that can occur when links must be shared
by multiple devices.
ii. A mesh topology is robust. If one link becomes unusable, it does not incapacitate
the entire system.
iii. There is the advantage of privacy or security.
iv. Point-to-point links make fault identification and fault isolation easy. Traffic can
be routed to avoid links with suspected problems.
Star Topology:
i. A drawback of the star topology is the dependency of the whole topology on one
single point, the hub. If the hub goes down, the whole system is dead.
ii. A star topology requires far less cable than a mesh; each node needs only one
link, to the central hub.
Bus Topology:
A bus topology, on the other hand, is multipoint. One long cable acts as a
backbone to link all the devices in a network (see Figure 1.7).
Nodes are connected to the bus cable by drop lines and taps. A drop line is a
connection running between the device and the main cable. A tap is a connector that
either splices into the main cable or punctures the sheathing of a cable to create a
contact with the metallic core. As a signal travels along the backbone, some of its
energy is transformed into heat. Therefore, it becomes weaker and weaker as it
travels farther and farther. For this reason there is a limit on the number of taps a bus
can support and on the distance between those taps.
Ring Topology:
In a ring topology, each device has a dedicated point-to-point connection with
only the two devices on either side of it. A signal is passed along the ring in one
direction, from device to device, until it reaches its destination.
Delay × Bandwidth Product:
If the delay is taken to be the channel's RTT rather than just its one-way
latency, then the sender can send up to two delay × bandwidths' worth of data before
hearing from the receiver that all is well. The bits in the pipe are said to be "in
flight," which means that if the receiver tells the sender to stop transmitting, it might
still receive up to a delay × bandwidth's worth of data before the sender manages to
respond. In our example above, that amount corresponds to 5.5 × 10^6 bits (671 KB)
of data. On the other hand, if the sender does not fill the pipe (send a whole delay ×
bandwidth product's worth of data before it stops to wait for a signal), the sender
will not fully utilize the network.
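The arithmetic can be sketched directly; the 100 ms RTT and 55 Mbps link rate below are assumptions chosen only to reproduce the 5.5 million bits (671 KB) figure quoted above:

```python
def bits_in_flight(delay_s, bandwidth_bps):
    """Delay x bandwidth: the amount of data a sender can push into the
    pipe before any feedback from the receiver can arrive."""
    return delay_s * bandwidth_bps

bits = bits_in_flight(0.100, 55e6)   # assumed 100 ms RTT, 55 Mbps link
print(bits)               # 5500000.0 bits
print(bits / 8 / 1024)    # ~671 KB of data "in flight"
```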
High-Speed Networks:
The bandwidths available on today’s networks are increasing at a dramatic
rate, and there is eternal optimism that network bandwidth will continue to improve.
This causes network designers to start thinking about what happens in the limit, or
stated another way, what is the impact on network design of having infinite
bandwidth available.
Although high-speed networks bring a dramatic change in the bandwidth
available to applications, in many respects their impact on how we think about
networking comes in what does not change as bandwidth increases: the speed of
light. To quote Scotty from Star Trek, “You cannae change the laws of physics.” In
other words, “high speed” does not mean that latency improves at the same rate as
bandwidth; the transcontinental RTT of a 1-Gbps link is the same 100 ms as it is for
a 1-Mbps link.
Consider the situation in which the source sends a packet once every 33 ms, as
would be the case for a video application transmitting frames 30 times a second. If
the packets arrive at the destination spaced out exactly 33 ms apart, then we can
deduce that the delay experienced by each packet in the network was exactly the
same. If the spacing between when packets arrive at the destination (sometimes
called the interpacket gap) is variable, however, then the delay experienced by the
sequence of packets must have also been variable, and the network is said to have
introduced jitter into the packet stream, as shown in Figure 1.23. Such variation is
generally not introduced in a single physical link, but it can happen when packets
experience different queuing delays in a multihop packet-switched network. This
queuing delay corresponds to the Queue component of latency defined earlier in this
section, which varies with time.
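The observation can be made concrete: with packets sent every 33 ms, any variation in the interpacket gap at the receiver implies variable delay. A sketch with hypothetical arrival times:

```python
def interpacket_gaps(arrival_times_ms):
    """Spacing between consecutive arrivals; a constant gap equal to the
    sending interval means the network introduced no jitter."""
    return [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]

# 30 frames per second -> one packet every 33 ms. The arrival times
# below are hypothetical, perturbed by variable queuing delay.
arrivals = [0, 33, 68, 99, 135]
print(interpacket_gaps(arrivals))   # [33, 35, 31, 36] -> jitter present
```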
To understand the relevance of jitter, suppose that the packets being
transmitted over the network contain video frames, and in order to display these
frames on the screen the receiver needs to receive a new one every 33 ms. If a frame
arrives early, then it can simply be saved by the receiver until it is time to display it.
Unfortunately, if a frame arrives late, then the receiver will not have the frame it
needs in time to update the screen, and the video quality will suffer; it will not be
smooth.
■ A virtual circuit identifier (VCI) that uniquely identifies the connection at this
switch, and which will be carried inside the header of the packets that belong to this
connection;
■ An incoming interface on which packets for this VC arrive at the switch;
■ An outgoing interface on which packets for this VC leave the switch;
■ A potentially different VCI that will be used for outgoing packets.
Once the VC tables have been set up, the data transfer phase can proceed, as
illustrated in Figure 3.6. For any packet that it wants to send to host B, A puts the
VCI value of 5 in the header of the packet and sends it to switch 1. Switch 1 receives
any such packet on interface 2, and it uses the combination of the interface and the
VCI in the packet header to find the appropriate VC table entry. As shown in Table
3.2, the table entry in this case tells switch 1 to forward the packet out of interface 1
and to put the VCI value 11 in the header when the packet is sent. Thus, the packet
will arrive at switch 2 on interface 3 bearing VCI 11. Switch 2 looks up interface 3
and VCI 11 in its VC table (as shown in Table 3.3) and sends the packet on to switch
3 after updating the VCI value in the packet header appropriately, as shown in Figure
3.7. This process continues until it arrives at host B with the VCI value of 4 in the
packet. To host B, this identifies the packet as having come from host A.
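The data-transfer phase described above can be sketched as table lookups. Switch 1's entry matches the text; the outgoing VCI chosen by switch 2 and the interface numbers at switch 3 are not given in the text, so the values below are assumptions:

```python
# Each switch maps (incoming interface, incoming VCI)
# to (outgoing interface, outgoing VCI).
vc_tables = {
    "switch1": {(2, 5): (1, 11)},
    "switch2": {(3, 11): (2, 7)},   # outgoing VCI 7 is an assumption
    "switch3": {(0, 7): (1, 4)},    # interface numbers are assumptions
}

def forward(switch, in_interface, vci):
    """Look up the VC table entry and return the rewritten (interface, VCI)."""
    return vc_tables[switch][(in_interface, vci)]

# Host A sends with VCI 5; each hop rewrites the VCI in the header:
print(forward("switch1", 2, 5))    # (1, 11)
print(forward("switch2", 3, 11))   # (2, 7)
print(forward("switch3", 0, 7))    # (1, 4) -> host B sees VCI 4
```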
There are several things to note about this approach. First, it assumes that
host A knows enough about the topology of the network to form a header that has
all the right directions in it for every switch in the path. This is somewhat analogous
to the problem of building the forwarding tables in a datagram network or figuring
out where to send a setup packet in a virtual circuit network. Second, observe that
we cannot predict how big the header needs to be, since it must be able to hold one
word of information for every switch on the path. This implies that headers are
probably of variable length with no upper bound, unless we can predict with
absolute certainty the maximum number of switches through which a packet will
ever need to pass. Third, there are some variations on this approach. For example,
rather than rotate the header, each switch could just strip the first element as it uses
it. Rotation has an advantage over stripping, however: Host B gets a copy of the
complete header, which may help it figure out how to get back to host A. Yet another
alternative is to have the header carry a pointer to the current “next port” entry, so
that each switch just updates the pointer rather than rotating the header;
this may be more efficient to implement. We show these three approaches in Figure
3.10. In each case, the entry that this switch needs to read is A, and the entry that the
next switch needs to read is B.
Source routing can be used in both datagram networks and virtual circuit
networks. For example, the Internet Protocol, which is a datagram protocol, includes
a source route option that allows selected packets to be source routed, while the
majority are switched as conventional datagrams. Source routing is also used in
some virtual circuit networks as the means to get the initial setup request along the
path from source to destination.
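The three header-handling variants above (rotation, stripping, and a pointer) can be sketched abstractly; this is not tied to any real protocol, and the port numbers in the example header are hypothetical:

```python
def rotate(header):
    """Use the first entry, then move it to the back, so the destination
    host ends up with a copy of the complete header."""
    port = header[0]
    return port, header[1:] + [port]

def strip(header):
    """Use and discard the first entry; the header shrinks at each hop."""
    return header[0], header[1:]

def advance(header, ptr):
    """Use the entry under the pointer, then advance the pointer;
    the header itself is never rewritten."""
    return header[ptr], ptr + 1

hdr = [3, 0, 1]                 # output port to use at each of three switches
port, hdr = rotate(hdr)
print(port, hdr)                # 3 [0, 1, 3]
print(strip([3, 0, 1]))         # (3, [0, 1])
print(advance([3, 0, 1], 0))    # (3, 1)
```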
Layering provides two useful features. First, it decomposes the problem of
building a network into more manageable components: several layers, each of
which solves one part of the problem. Second, it provides a more
modular design. If you decide that you want to add some new service, you may only
need to modify the functionality at one layer, reusing the functions provided at all
the other layers.
Thinking of a system as a linear sequence of layers is an oversimplification,
however. Many times there are multiple abstractions provided at any given level of
the system, each providing a different service to the higher layers but building on the
same low-level abstractions. To see this, consider the two types of channels
discussed in Section 1.2.3: One provides a request/reply service and one supports a
message stream service. These two channels might be alternative offerings at some
level of a multilevel networking system, as illustrated in Figure 1.9.
OSI Architecture:
The ISO was one of the first organizations to formally define a common way
to connect computers. Their architecture, called the Open Systems Interconnection
(OSI) architecture, is illustrated in Figure 1.13.
Starting at the bottom and working up, the physical layer handles the transmission of
raw bits over a communications link. The data link layer then collects a stream of
bits into a larger aggregate called a frame. Network adaptors, along with device
drivers running in the node’s OS, typically implement the data link level. This means
that frames, not raw bits, are actually delivered to hosts. The network layer handles
routing among nodes within a packet-switched network. At this layer, the unit of
data exchanged among nodes is typically called a packet rather than a frame,
although they are fundamentally the same thing. The lower three layers are
implemented on all network nodes, including switches within the network and hosts
connected along the exterior of the network. The transport layer then implements
what we have up to this point been calling a process-to-process channel. Here, the
unit of data exchanged is commonly called a message rather than a packet or a
frame. The transport layer and higher layers typically run only on the end hosts and
not on the intermediate switches or routers.
DEPT. OF ECE/UNIT-I Page 19
SENGUNTHAR COLLEGE OF ENGINEERING, TIRUCHENGIDE
Internet Architecture:
The Internet architecture, which is also sometimes called the TCP/IP
architecture after its two main protocols, is depicted in Figure 1.14. An alternative
representation is given in Figure 1.15. The Internet architecture evolved out of
experiences with an earlier packet-switched network called the ARPANET. Both the
Internet and the ARPANET were funded by the Advanced Research Projects Agency
(ARPA), one of the R&D funding agencies of the U.S. Department of Defense. The
Internet and ARPANET were around before the OSI architecture, and the experience
gained from building them was a major influence on the OSI reference model.
While the seven-layer OSI model can, with some imagination, be applied to
the Internet, a four-layer model is often used instead. At the lowest level are a wide
variety of network protocols, denoted NET1, NET2, and so on. In practice, these
protocols are implemented by a combination of hardware (e.g., a network adaptor)
and software (e.g., a network device driver). For example, you might find Ethernet
or Fiber Distributed Data Interface (FDDI) protocols at this layer.
Physical Properties :
An Ethernet segment is implemented on a coaxial cable of up to 500 m. This cable is
similar to the type used for cable TV, except that it typically has an impedance of 50
ohms instead of cable TV’s 75 ohms. Hosts connect to an Ethernet segment by
tapping into it; taps must be at least 2.5 m apart. A transceiver—a small device
directly attached to the tap—detects when the line is idle and drives the signal when
the host is transmitting. It also receives incoming signals. The transceiver is, in turn,
connected to an Ethernet adaptor, which is plugged into the host. All the logic that
implements the Ethernet protocol is in the adaptor.
Because the cable is so thin, you do not tap into a 10Base2 or 10BaseT cable
in the same way as you would with 10Base5 cable. With 10Base2, a T-joint is
spliced into the cable. In effect, 10Base2 is used to daisy-chain a set of hosts
together. With 10BaseT, the common configuration is to have several point-to-point
links connecting the hosts to a multiway repeater, or hub.
Access Protocol:
The algorithm that controls access to the shared Ethernet link. This algorithm
is commonly called the Ethernet’s media access control (MAC). It is typically
implemented in hardware on the network adaptor.
Frame Format:
Each Ethernet frame is defined by the format given in Figure 2.30. The 64-bit
preamble allows the receiver to synchronize with the signal; it is a sequence of
alternating 0s and 1s. Both the source and destination hosts are identified with a 48-
bit address. The packet type field serves as the demultiplexing key, that is, it
identifies to which of possibly many higher-level protocols this frame should be
delivered. Each frame contains up to 1,500 bytes of data. Minimally, a frame must
contain at least 46 bytes of data, even if this means the host has to pad the frame
before transmitting it. The reason for this minimum frame size is that the frame must
be long enough to detect a collision; we discuss this more below. Finally, each frame
includes a 32-bit CRC. Like the HDLC protocol described in Section 2.3.2, the
Ethernet is a bit-oriented framing protocol. Note that from the host’s perspective, an
Ethernet frame has a 14-byte header: two 6-byte addresses and a 2-byte type field.
The sending adaptor attaches the preamble, CRC, and postamble before transmitting,
and the receiving adaptor removes them.
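The length rules above can be made concrete. A sketch of the host-visible framing (14-byte header, payload padded up to the 46-byte minimum, 1500-byte maximum); the preamble and CRC that the adaptor attaches are omitted, and the addresses are made-up values:

```python
import struct

def build_frame(dst, src, ethertype, payload):
    """Host-visible Ethernet frame: 6-byte destination, 6-byte source,
    2-byte type, then 46-1500 bytes of data (short payloads padded)."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("addresses must be 6 bytes")
    if len(payload) > 1500:
        raise ValueError("payload exceeds the 1500-byte maximum")
    payload = payload.ljust(46, b"\x00")   # pad up to the 46-byte minimum
    return dst + src + struct.pack("!H", ethertype) + payload

frame = build_frame(b"\xff" * 6, b"\x02" * 6, 0x0800, b"hi")
print(len(frame))   # 60: 14-byte header + 46-byte padded payload
```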
Effectiveness of CRC:
Figure 4.9. CRC process: the most significant bit enters first.
In the shift-register implementation of CRC (Figure 4.9), message bits enter from
the left, and a 1-bit shift register shifts the bits to the right every time a new bit is
entered. The rightmost bit in the register feeds back around at select points. At
these points, the value of this feedback bit is exclusive-ORed with the bits shifting
through the register: before a bit shifts right, if there is an exclusive-OR to shift
through, the rightmost bit currently stored in the shift register wraps around and is
exclusive-ORed with the moving bit.
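The shift-register process above is long division over binary coefficients. A bit-level sketch (the generator is passed without its implicit leading 1 bit; the message/generator pair in the example is a standard textbook case):

```python
def crc_remainder(bits, generator, width):
    """CRC as long division in GF(2): shift message bits through a
    width-bit register; when the bit falling off the top disagrees with
    the incoming bit, XOR in the generator (the feedback taps)."""
    mask = (1 << width) - 1
    reg = 0
    for b in bits:
        top = (reg >> (width - 1)) & 1
        reg = (reg << 1) & mask
        if top ^ b:
            reg ^= generator & mask
    return reg

# Textbook example: message 1101011011, generator 10011 -> remainder 1110
msg = [int(c) for c in "1101011011"]
print(format(crc_remainder(msg, 0b0011, 4), "04b"))   # 1110
```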
Transparent Bridges:
A transparent bridge is a bridge in which the stations are completely unaware
of the bridge's existence. If a bridge is added or deleted from the system,
reconfiguration of the stations is unnecessary. According to the IEEE 802.1d
specification, a system equipped with transparent bridges must meet three criteria:
To make a table dynamic, we need a bridge that gradually learns from the
frame movements. To do this, the bridge inspects both the destination and the source
addresses. The destination address is used for the forwarding decision (table
lookup); the source address is used for adding entries to the table and for updating
purposes. Let us elaborate on this process by using Figure 15.6.
1. When station A sends a frame to station D, the bridge does not have an entry for
either D or A. The frame goes out from all three ports; the frame floods the network.
However, by looking at the source address, the bridge learns that station A must be
located on the LAN connected to port 1. This means that frames destined for A, in
the future, must be sent out through port 1. The bridge adds this entry to its table.
The table has its first entry now.
2. When station E sends a frame to station A, the bridge has an entry for A, so it
forwards the frame only to port 1. There is no flooding. In addition, it uses the
source address of the frame, E, to add a second entry to the table.
3. When station B sends a frame to C, the bridge has no entry for C, so once again it
floods the network and adds one more entry to the table.
4. The process of learning continues as the bridge forwards frames.
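Steps 1 through 4 above can be simulated with a small table keyed by source address. Two details here are assumptions: frames are flooded on every port except the one they arrived on (the usual refinement), and station E is placed on port 3:

```python
class LearningBridge:
    """Backward-learning bridge: the table maps a station's address to
    the port it was last seen on; unknown destinations are flooded."""
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                        # station address -> port

    def handle(self, src, dst, in_port):
        self.table[src] = in_port              # learn from the source address
        if dst in self.table:
            return [self.table[dst]]           # forward out a single port
        return [p for p in self.ports if p != in_port]   # flood

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle("A", "D", 1))   # [2, 3]  flood; learns A -> port 1
print(bridge.handle("E", "A", 3))   # [1]     no flooding; learns E -> port 3
print(bridge.handle("B", "C", 1))   # [2, 3]  flood; learns B -> port 1
```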
In source routing, a sending station defines the bridges that the frame must
visit. The addresses of these bridges are included in the frame. In other words, the
frame contains not only the source and destination addresses, but also the addresses
of all bridges to be visited.
The source gets these bridge addresses through the exchange of special frames
with the destination prior to sending the data frame. Source routing bridges were
designed by IEEE to be used with Token Ring LANs.
These LANs are not very common today.
Bridges Connecting Different LANs:
Frame format. Each LAN type has its own frame format (compare an Ethernet
frame with a wireless LAN frame).
Maximum data size. If an incoming frame's size is too large for the destination
LAN, the data must be fragmented into several frames. The data then need to be
reassembled at the destination. However, no protocol at the data link layer allows
the fragmentation and reassembly of frames. We will see in Chapter 19 that this is
allowed in the network layer. The bridge must therefore discard any frames too
large for its system.
Data rate. Each LAN type has its own data rate. (Compare the 10-Mbps data rate
of an Ethernet with the 1-Mbps data rate of a wireless LAN.) The bridge must buffer
the frame to compensate for this difference.
Bit order. Each LAN type has its own strategy in the sending of bits. Some send
the most significant bit in a byte first; others send the least significant bit first.
Security. Some LANs, such as wireless LANs, implement security measures in the
data link layer. Other LANs, such as Ethernet, do not. Security often involves
encryption (see Chapter 30). When a bridge receives a frame from a wireless LAN,
it needs to decrypt the message before forwarding it to an Ethernet LAN.
Multimedia support. Some LANs support multimedia and the quality of service
needed for this type of communication; others do not.
Multipoint:
A multipoint (also called multidrop) connection is one in which more
than two specific devices share a single link (see Figure 1.3b).
In a multipoint environment, the capacity of the channel is shared, either
spatially or temporally. If several devices can use the link simultaneously, it is a
spatially shared connection. If users must take turns, it is a timeshared connection.
To send data to a user in the wired LAN, a user in the wireless LAN
first sends the data packet to the access point. The access point
recognizes the wireless user through a unique ID called the
service-set identification (SSID). The SSID acts like a
password-protection system that enables a wireless client to join
the wireless LAN. Once the wireless user is authenticated, the
access point forwards data packets to the desired wired user
through the switch or hub.
Figure 6.4 shows multiple access points being used to extend the
connectivity range of the wireless network. The area of coverage of
each access point can be overlapped to adjacent ones to provide
seamless user mobility without interruption. Radio signal levels in a
wireless LAN must be maintained at an optimum value. Normally, a
site survey must be conducted for these requirements. Site surveys
can include both indoor and outdoor sites. The surveys are
normally needed for power requirements, placement of access
points, RF coverage range, and available bandwidth.
IEEE 802.11b operates in the 2.4 GHz band and supports data rates
of 5.5 Mb/s to 11 Mb/s. IEEE 802.11g also operates at 2.4 GHz and
supports even higher data rates.
The IEEE 802.11 physical layer is of four types.
1. Direct-sequence spread spectrum (DSSS) uses seven channels,
each supporting data rates of 1 Mb/s to Mb/s. The operating
frequency range is the 2.4 GHz ISM band. DSSS uses three
nonoverlapping channels in the 2.4 GHz ISM band. The 2.4 GHz
frequency band used by 802.11 results in interference by certain
home appliances, such as microwave ovens and cordless
telephones, which operate in the same band.
Distributed-access protocols are used in ad hoc networks with highly bursty
traffic. In centralized-access protocols, the media-access issues are resolved by a
central authority. Centralized-access protocols are used in some wireless LANs that
have a base-station backbone structure and in applications that involve sensitive
data. The IEEE 802.11 MAC algorithm provides both distributed-access and
centralized-access features. Centralized access is built on top of distributed access
and is optional.
The MAC layer consists of two sublayers: the distributed-coordination
function (DCF) algorithm and the point-coordination function (PCF) algorithm.
MAC Frame:
The frame format for the 802.11 MAC is shown in Figure 6.5 and is described as
follows.
The frame control (FC) field provides information on the type of frame:
control frame, data frame, or management frame.
The three frame types in IEEE 802.11 are control frames, data-carrying frames, and
management frames. Control frames ensure reliable data delivery. The types of
control frames are
Power-save poll (PS-Poll). A sender sends this request frame to the access
point, requesting a frame that the access point had buffered while the
sender was in power-saving mode.
Request to send (RTS). The sender sends an RTS frame to the destination
before the data is sent. This is the first frame sent in the four-way handshake
implemented in IEEE 802.11 for reliable data delivery.
Clear to send (CTS). The destination sends a CTS frame to indicate that it is
ready to accept data frames.
ACK frame. The destination uses this frame to indicate to the sender a
successful frame receipt.
Contention-free end (CFE). The PCF uses this frame to signal the end of the
contention-free period.
CFE-End + CFE-ACK. The PCF uses this frame to acknowledge the CFE end
frame.
Data. This is the regular data frame and can be used in both the contention and
contention-free periods.
Data/CFE-ACK. This frame carries data in the contention-free period and is also
used to acknowledge received data.
Data/CFE-Poll. The PCF uses this frame to deliver data to destinations and to
request data frames from users.
Data/CFE-ACK/CFE-Poll. This frame combines the functionalities of the previous
three frames into one frame.
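The frame types above can be sketched as a demultiplexing table. The 2-bit Type and 4-bit Subtype values used here are the standard IEEE 802.11 frame-control encodings (management = 0, control = 1, data = 2); the function itself is an illustrative sketch, not an implementation of the MAC:

```python
# Standard 802.11 Subtype codes for control frames (Type = 1)
CONTROL_SUBTYPES = {
    0b1010: "PS-Poll",
    0b1011: "RTS",
    0b1100: "CTS",
    0b1101: "ACK",
    0b1110: "CF-End",
    0b1111: "CF-End + CF-Ack",
}

def classify(frame_type, subtype):
    """Map the frame-control Type/Subtype bits to a frame name."""
    if frame_type == 1:
        return CONTROL_SUBTYPES.get(subtype, "reserved")
    return {0: "management", 2: "data"}.get(frame_type, "reserved")

print(classify(1, 0b1011))   # RTS
print(classify(1, 0b1101))   # ACK
print(classify(2, 0b0000))   # data
```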