
Q1. Draw the block diagram of a general communication model and explain the function of each block.
Ans. The fundamental purpose of a communication system is the exchange of data
between two points. The key elements of the basic communication model in the
block diagram can be described as follows:

source -> transmitter -> transmission system -> receiver -> destination

Fig. a) Block diagram of a general communication system.


1) Source- The device that generates the data to be transmitted. Example: a telephone.
2) Transmitter- The data generated by the source system are not transmitted
directly in the form in which they are generated. A transmitter transforms
and encodes the information in such a manner as to produce
electromagnetic waves or signals. These EM signals can be transmitted
across some sort of transmission system. For example, a modem takes a digital bit
stream from an attached device such as a personal computer and transforms that bit
stream into an analog signal which can be handled by a telephone network.
3) Transmission system- It can be a single transmission line or a complex
network connecting source and destination.
4) Receiver- It accepts the signal from the transmission system and converts it into
a form which can be handled by the destination device. Example: a modem accepts an
analog signal coming from a network or a transmission line and converts it into a
digital bit stream.
5) Destination- The destination takes the incoming data from the receiver.

Q2. What are protocols? Why do we need layered protocols? Give at least two
reasons.
Ans. In telecommunications, a communication protocol is a system of rules that
allows two or more entities of a communications system to transmit information via
any kind of variation of a physical quantity. These are the rules or standards that
define the syntax, semantics and synchronization of communication and possible
error recovery methods. Protocols may be implemented by hardware, software, or
a combination of both.
Communicating systems use well-defined formats (protocol) for exchanging
various messages. Each message has an exact meaning intended to elicit a response
from a range of possible responses pre-determined for that particular situation.
The specified behavior is typically independent of how it is to be implemented.
Communications protocols have to be agreed upon by the parties involved. To
reach agreement, a protocol may be developed into a technical standard. A
programming language describes the same for computations, so there is a close
analogy between protocols and programming languages: protocols are to
communications what programming languages are to computations.
Multiple protocols often describe different aspects of a single communication. A
group of protocols designed to work together are known as a protocol suite; when
implemented in software they are a protocol stack.
Most recent protocols are assigned by the IETF for Internet communications, and
by the IEEE or ISO for other types. The ITU-T handles
telecommunications protocols and formats for the PSTN. As the PSTN and Internet
converge, the two sets of standards are also being driven towards convergence.
Two reasons for using layered protocols
1. Protocol layering is a common technique to simplify networking designs by
dividing them into functional layers, and assigning protocols to perform each layer's
task.

For example, it is common to separate the functions of data delivery and
connection management into separate layers, and therefore separate protocols.

Thus, one protocol is designed to perform data delivery, and another protocol,
layered above the first, performs connection management. The data delivery
protocol is fairly simple and knows nothing of connection management. The
connection management protocol is also fairly simple, since it doesn't need to
concern itself with data delivery.

2. Protocol layering produces simple protocols, each with a few well-defined tasks.
These protocols can then be assembled into a useful whole. Individual protocols
can also be removed or replaced as needed for particular applications.
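As a toy illustration of this separation (the layer names and header formats below are hypothetical, for illustration only), each layer can wrap the layer above's output on the way down and strip only its own header on the way back up:

```python
# Hypothetical two-layer stack: a "connection management" layer and a
# "data delivery" layer, each adding and removing only its own header.

def wrap(header: str, payload: str) -> str:
    # Lower layers treat the payload as opaque data.
    return f"{header}|{payload}"

def unwrap(frame: str) -> tuple[str, str]:
    header, _, payload = frame.partition("|")
    return header, payload

# Sending side: connection management wraps first, data delivery wraps second.
message = "hello"
segment = wrap("CONN:session=7", message)
frame = wrap("DATA:seq=1", segment)

# Receiving side: each layer strips its own header, knowing nothing
# about the other's format.
hdr1, rest = unwrap(frame)   # data delivery layer
hdr2, data = unwrap(rest)    # connection management layer
print(data)                  # hello
```

Because each layer only touches its own header, either protocol can be replaced without changing the other, which is exactly the benefit described above.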

Q3. What is the difference between broadband and baseband coaxial cable transmission medium?
Ans. Baseband transmissions typically use digital signaling over a single wire; the
transmissions themselves take the form of either electrical pulses or light. The
digital signal used in baseband transmission occupies the entire bandwidth of the
network media to transmit a single data signal. Baseband communication is
bidirectional, allowing computers to both send and receive data using a single
cable. However, the sending and receiving cannot occur on the same wire at the
same time.
Note: Ethernet and baseband
Ethernet networks use baseband transmissions; notice the word "base" in, for
example, 10BaseT or 10BaseFL.
Using baseband transmissions, it is possible to transmit multiple signals on a single
cable by using a process known as multiplexing. Baseband uses Time-Division
Multiplexing (TDM), which divides a single channel into time slots. The key thing
about TDM is that it doesn't change how baseband transmission works, only the
way data is placed on the cable.
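A minimal sketch of how TDM interleaves data onto the shared cable (the stream contents and round-robin slot order are illustrative):

```python
# Time-division multiplexing: the shared channel is divided into repeating
# time slots, and each sender transmits only in its assigned slot.

def tdm_multiplex(streams, rounds):
    """Interleave one unit from each stream per round onto the channel."""
    channel = []
    for r in range(rounds):
        for s in streams:        # fixed slot order: one slot per sender
            channel.append(s[r])
    return channel

a = ["A0", "A1"]
b = ["B0", "B1"]
c = ["C0", "C1"]
print(tdm_multiplex([a, b, c], 2))  # ['A0', 'B0', 'C0', 'A1', 'B1', 'C1']
```

Note that the channel still carries one baseband signal at a time; TDM only changes the order in which each sender's data is placed on the cable.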
Broadband
Whereas baseband uses digital signaling, broadband uses analog signals in the form
of optical or electromagnetic waves over multiple transmission frequencies. For
signals to be both sent and received, the transmission media must be split into two
channels. Alternatively, two cables can be used: one to send and one to receive
transmissions.
Multiple channels are created in a broadband system by using a multiplexing
technique known as Frequency-Division Multiplexing (FDM). FDM allows
broadband media to accommodate traffic going in different directions on a single
media at the same time.

Baseband:
- Digital signals are used
- Frequency division multiplexing is not possible
- Baseband is bi-directional transmission
- Short distance signal travelling
- Entire bandwidth of the cable is consumed by a single signal in a baseband
transmission.
Broadband:
- Analog signals are used
- Transmission of data is unidirectional
- Signal travelling distance is long
- Frequency division multiplexing is possible
- The signals are sent on multiple frequencies, allowing multiple signals to be
sent simultaneously in broadband transmission.

Q.4 What is the OSI model? Explain it in detail. Also compare the TCP/IP & OSI
reference models.
Ans. OSI Model: The Open Systems Interconnection model (OSI model) is a conceptual model that
characterizes and standardizes the communication functions of a
telecommunication or computing system without regard to their underlying
internal structure and technology. Its goal is the interoperability of diverse
communication systems with standard protocols. The model partitions a
communication system into abstraction layers. The original version of the model
defined seven layers.
Layer 1: Physical Layer
Layer 2: Data Link Layer
Layer 3: Network Layer
Layer 4: Transport Layer
Layer 5: Session Layer
Layer 6: Presentation Layer
Layer 7: Application Layer

1) PHYSICAL LAYER: - The physical layer defines the electrical and physical
specifications of the data connection. It defines the relationship between a
device and a physical transmission medium (e.g. an optical fiber cable).
2) DATA LINK LAYER: - The data link layer provides node-to-node data
transfer: a link between two directly connected nodes. It detects and
possibly corrects errors that may occur in the physical layer.
3) NETWORK LAYER: - The network layer provides the functional and
procedural means of transferring variable-length data sequences (called
datagrams) from one node to another connected to the same network. It
translates logical network addresses into physical machine addresses.
4) TRANSPORT LAYER: - The transport layer provides the functional and
procedural means of transferring variable-length data sequences from a
source to a destination host via one or more networks, while maintaining the
quality of service functions. An example of a transport-layer protocol in the
standard Internet stack is Transmission Control Protocol (TCP), usually built
on top of the Internet Protocol (IP).
5) SESSION LAYER: - The session layer controls the dialogues (connections)
between computers. It establishes, manages and terminates the connections
between the local and remote application. The OSI model made this layer
responsible for graceful close of sessions, which is a property of the
Transmission Control Protocol, and also for session checkpointing and
recovery, which is not usually used in the Internet Protocol Suite.
6) PRESENTATION LAYER: - The presentation layer establishes context between
application-layer entities, in which the application-layer entities may use
different syntax and semantics if the presentation service provides a
mapping between them.
7) APPLICATION LAYER: - The application layer is the OSI layer closest to the end
user, which means both the OSI application layer and the user interact
directly with the software application. Application-layer functions typically
include identifying communication partners, determining resource
availability, and synchronizing communication.

Comparison of OSI Reference Model and TCP/IP Reference Model


Following are some major differences between OSI Reference Model and TCP/IP
Reference Model, with diagrammatic comparison below.
OSI (Open System Interconnection) vs. TCP/IP (Transmission Control Protocol / Internet Protocol)

1. OSI is a generic, protocol-independent standard, acting as a communication
   gateway between the network and the end user.
   TCP/IP is based on the standard protocols around which the Internet has
   developed. It is a communication protocol which allows connection of hosts
   over a network.

2. In the OSI model the transport layer guarantees the delivery of packets.
   In the TCP/IP model the transport layer does not guarantee delivery of
   packets. Still, the TCP/IP model is more reliable.

3. OSI follows a vertical approach.
   TCP/IP follows a horizontal approach.

4. The OSI model has a separate Presentation layer and Session layer.
   TCP/IP does not have a separate Presentation layer or Session layer.

5. OSI is a reference model around which networks are built. Generally it is
   used as a guidance tool.
   The TCP/IP model is, in a way, an implementation of the OSI model.

6. The network layer of the OSI model provides both connection-oriented and
   connectionless service.
   The network layer in the TCP/IP model provides only connectionless service.

7. The OSI model has a problem of fitting protocols into the model.
   The TCP/IP model does not fit any protocol.

8. In the OSI model protocols are hidden and are easily replaced as the
   technology changes.
   In TCP/IP, replacing a protocol is not easy.

9. The OSI model defines services, interfaces and protocols very clearly and
   makes a clear distinction between them. It is protocol independent.
   In TCP/IP, services, interfaces and protocols are not clearly separated.
   It is also protocol dependent.

10. The OSI model has 7 layers.
    The TCP/IP model has 4 layers.

Q5. What is the sliding window protocol? Explain its operation in detail.

Ans. A sliding window protocol is a feature of packet-based data
transmission protocols. Sliding window protocols are used where reliable in-order
delivery of packets is required, such as in the Data Link Layer (OSI model) as well as
in the Transmission Control Protocol (TCP).
Conceptually, each portion of the transmission (packets in most data link layers,
but bytes in TCP) is assigned a unique consecutive sequence number, and the
receiver uses the numbers to place received packets in the correct order, discarding
duplicate packets and identifying missing ones. The problem with this is that there
is no limit on the size of the sequence number that can be required.
By placing limits on the number of packets that can be transmitted or received at
any given time, a sliding window protocol allows an unlimited number of packets
to be communicated using fixed-size sequence numbers. The term "window" on
the transmitter side represents the logical boundary of the total number of packets
yet to be acknowledged by the receiver. The receiver informs the transmitter in
each acknowledgment packet the current maximum receiver buffer size (window
boundary). The TCP header uses a 16-bit field to report the receive window size to
the sender. Therefore, the largest window that can be used is 2^16 = 64 kilobytes. In
slow-start mode, the transmitter starts with low packet count and increases the
number of packets in each transmission after receiving acknowledgment packets
from receiver. For every ack packet received, the window slides by one packet
(logically) to transmit one new packet. When the window threshold is reached, the
transmitter sends one packet for one ack packet received. If the window limit is 10
packets then in slow start mode the transmitter may start transmitting one packet
followed by two packets (before transmitting two packets, one packet ack has to
be received), followed by three packets and so on until 10 packets. But after
reaching 10 packets, further transmissions are restricted to one packet transmitted
for one ack packet received. In a simulation this appears as if the window is moving
by one packet distance for every ack packet received. On the receiver side also the
window moves one packet for every packet received. The sliding window method
ensures that traffic congestion on the network is avoided. The application layer will
still be offering data for transmission to TCP without worrying about the network
traffic congestion issues as the TCP on sender and receiver side implement sliding
windows of packet buffer. The window size may vary dynamically depending on
network traffic.
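The growth pattern described above (the window opens by one packet per round of acknowledgments until the 10-packet limit, after which one new packet is sent per ACK) can be sketched as a toy simulation; this follows the passage's simplified description, not a faithful TCP implementation:

```python
# Window size per round under the simplified slow-start-style growth
# described in the text: grow by one packet per round up to a threshold,
# then hold steady (one new packet per ACK received).

def window_per_round(threshold, rounds):
    sizes = []
    w = 1
    for _ in range(rounds):
        sizes.append(w)
        if w < threshold:
            w += 1      # growth phase: window slides open by one packet
        # at the threshold, one new packet per ACK keeps the window fixed
    return sizes

print(window_per_round(10, 12))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10]
```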

Principle of operation
The transmitter and receiver each have a current sequence number nt and nr,
respectively. They each also have a window size wt and wr. The window sizes may
vary, but in simpler implementations they are fixed. The window size must be
greater than zero for any progress to be made.
As typically implemented, nt is the next packet to be transmitted, i.e. the sequence
number of the first packet not yet transmitted. Likewise, nr is the first packet not
yet received. Both numbers are monotonically increasing with time; they only ever
increase.
The receiver may also keep track of the highest sequence number yet received; the
variable ns is one more than the highest sequence number
received. For simple receivers that only accept packets in order (wr = 1), this is the
same as nr, but can be greater if wr > 1. Note the distinction: all packets
below nr have been received, no packets above ns have been received, and
between nr and ns, some packets have been received.
When the receiver receives a packet, it updates its variables appropriately and
transmits an acknowledgment with the new nr. The transmitter keeps track of the
highest acknowledgment it has received, na. The transmitter knows that all packets
up to, but not including, na have been received, but is uncertain about packets
between na and ns; i.e. na ≤ nr ≤ ns.
The sequence numbers always obey the rule that na ≤ nr ≤ ns ≤ nt ≤ na + wt. That is:

- na ≤ nr: The highest acknowledgement received by the transmitter cannot be
  higher than the highest nr acknowledged by the receiver.
- nr ≤ ns: The span of fully received packets cannot extend beyond the end of the
  partially received packets.
- ns ≤ nt: The highest packet received cannot be higher than the highest packet
  sent.
- nt ≤ na + wt: The highest packet sent is limited by the highest acknowledgement
  received and the transmit window size.
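A toy exchange over a lossless channel can illustrate these invariants (the window size and loop count are illustrative; the receiver here accepts everything in order, i.e. wr = 1):

```python
# Check the invariant na <= nr <= ns <= nt <= na + wt at every step of a
# simple lossless send/receive/acknowledge cycle.

wt = 4                  # transmit window size
na = nr = ns = nt = 0   # highest ack'd, next expected, highest+1 received, next to send

def check():
    assert na <= nr <= ns <= nt <= na + wt

for _ in range(10):
    # transmit while the window allows
    while nt < na + wt:
        nt += 1
        check()
    # receiver accepts everything outstanding, in order (wr = 1)
    nr = ns = nt
    check()
    # the acknowledgment reaches the transmitter
    na = nr
    check()

print(na, nr, ns, nt)   # all four counters are equal after the final ack
```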

Q6. Explain character stuffing in detail. What are the drawbacks of character
stuffing?
Ans. Byte stuffing or character stuffing is a process that transforms a sequence of
data bytes that may contain 'illegal' or 'reserved' values (such as packet delimiter)
into a potentially longer sequence that contains no occurrences of those values.
The extra length of the transformed sequence is typically referred to as the
overhead of the algorithm. The COBS algorithm tightly bounds the worst-case
overhead, limiting it to no more than one byte in 254. The algorithm is
computationally inexpensive and its average overhead is low compared to other
unambiguous framing algorithms.
When packetized data is sent over any serial medium, a protocol is needed by
which to demarcate packet boundaries. This is done by using a framing marker,
which is a special bit-sequence or character value that indicates where the
boundaries between packets fall. Data stuffing is the process that transforms the
packet data before transmission to eliminate all occurrences of the framing marker,
so that when the receiver detects a marker, it can be certain that the marker
indicates a boundary between packets.

DRAWBACKS
In byte stuffing a special byte, known as the escape character (ESC), is added
to the data part. The escape character has a predefined pattern.
The receiver removes the escape character and keeps the data part. This causes
another problem: the text may itself contain escape characters as part of the data.
To deal with this, an escape character occurring in the data is prefixed with
another escape character.
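A minimal sketch of this scheme (the FLAG/ESC byte values below are illustrative; real protocols such as PPP additionally transform the escaped byte rather than sending it unchanged):

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative reserved byte values

def stuff(data: bytes) -> bytes:
    """Frame the data, escaping any reserved byte occurring inside it."""
    out = bytearray([FLAG])
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)          # prefix reserved values with ESC
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Strip the boundary flags and remove the escape prefixes."""
    out, escaped = bytearray(), False
    for b in frame[1:-1]:
        if not escaped and b == ESC:
            escaped = True           # next byte is literal data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

payload = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert unstuff(stuff(payload)) == payload
```

The overhead drawback is visible here: every reserved byte in the payload costs one extra ESC byte on the wire.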

Q.7 Explain in detail the design issues of data link layer. Explain the
services provided by data link layer to network layer.
Ans. If we don't follow the OSI reference model as gospel, we can imagine
providing several alternative service semantics:
Reliable Delivery:
Frames are delivered to the receiver reliably and in the same order as
generated by the sender.
Connection state keeps track of sending order and which frames require
retransmission. For example, receiver state includes which frames have been
received, which ones have not, etc.
Best Effort:
The receiver does not return acknowledgments to the sender, so the sender
has no way of knowing if a frame has been successfully delivered.
When would such a service be appropriate?
1. When higher layers can recover from errors with little loss in
performance. That is, when errors are so infrequent that there is little
to be gained by the data link layer performing the recovery. It is just
as easy to have higher layers deal with occasional lost packet.
2. For real-time applications requiring "better never than late" semantics.
Old data may be worse than no data. For example, should an airplane
bother calculating the proper wing flap angle using old altitude and
wind speed data when newer data is already available?
Acknowledged Delivery:
The receiver returns an acknowledgment frame to the sender indicating that
a data frame was properly received. This sits somewhere between the other
two in that the sender keeps connection state, but may not necessarily
retransmit unacknowledged frames. Likewise, the receiver may hand
received packets to higher layers in the order in which they arrive, regardless
of the original sending order.
Typically, each frame is assigned a unique sequence number, which the
receiver returns in an acknowledgment frame to indicate which frame the
ACK refers to. The sender must retransmit unacknowledged (e.g., lost or
damaged) frames.

Services Provided To Network Layer


The network layer is layer 3 of the OSI model and lies above the data link layer.
The data link layer provides several services to the network layer.
One of the major services provided is transferring data from the network
layer on the source machine to the network layer on the destination machine.
On the source machine the data link layer receives data from the network layer, and on
the destination machine it passes this data on to the network layer, as shown in the figure.
The path shown in fig (a) is the virtual path. But the actual path is Network layer
-> Data link layer -> Physical layer on the source machine, then through the physical
medium, and thereafter Physical layer -> Data link layer -> Network layer on the
destination machine.

The three major types of services offered by data link layer are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection oriented service.
1. Unacknowledged Connectionless Service

(a) In this type of service the source machine sends frames to the destination machine,
but the destination machine does not send back any acknowledgement of these frames
to the source. Hence it is called an unacknowledged service.
(b) There is no connection establishment between source and destination machine
before data transfer or release after data transfer. Therefore it is known as
connectionless service.
(c) There is no error control i.e. if any frame is lost due to noise on the line, no
attempt is made to recover it.
(d) This type of service is used when error rate is low.
(e) It is suitable for real time traffic such as speech.
2. Acknowledged Connectionless Service
(a) In this service, neither the connection is established before the data transfer nor
is it released after the data transfer between source and destination.
(b) When the sender sends the data frames to destination, destination machine
sends back the acknowledgement of these frames.
(c) This type of service provides additional reliability because the source machine
retransmits a frame if it does not receive the acknowledgement of that frame
within the specified time.
(d) This service is useful over unreliable channels, such as wireless systems.
3. Acknowledged Connection - Oriented Service
(a) This service is the most sophisticated service provided by data link layer to
network layer.
(b) It is connection-oriented. It means that a connection is established between
source & destination before any data is transferred.
(c) In this service, data transfer has three distinct phases:
(i) Connection establishment
(ii) Actual data transfer
(iii) Connection release
(d) Here, each frame being transmitted from source to destination is given a specific
number and is acknowledged by the destination machine.

(e) All the frames are received by the destination in the same order in which they are
sent by the source.

Q.8. Which are the four generations of Ethernet? Also compare pure ALOHA
and slotted ALOHA.
Ans. The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center
(PARC). Since then, it has gone through four generations: Standard Ethernet
(10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit
Ethernet (10 Gbps).

Standard Ethernet
All standard implementations use digital signaling (baseband) at 10 Mbps. It
comes in four types:

10 Base 5 thick Ethernet


The first implementation is called 10 Base 5 Ethernet or thick Ethernet or thicknet.
The nickname derives from the size of the cable, which is about the size of a
garden hose and too stiff to bend. It is arranged in a bus topology.

10 Base 2 thin Ethernet


The second implementation is called 10 Base 2 or thin Ethernet or cheapernet. 10
Base 2 Ethernet also uses a bus topology, but the cable is much thinner and more
flexible. It can be bent and routed to nearby stations easily.

10 Base T twisted-pair Ethernet


The third implementation is called 10 Base T or twisted-pair Ethernet. It uses a star
topology. The stations are connected to a hub via two pairs of twisted cable.

10 Base F fiber Ethernet


Although there are several types of optical fiber 10 Mbps Ethernet, the most
common is 10 Base F. It uses a star topology to connect stations to a hub. Each
station is connected to the hub using two fiber optic cables.

Fast Ethernet
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fibre
Channel. IEEE created Fast Ethernet under the name 802.3u. Fast Ethernet is
backward-compatible with Standard Ethernet, but it can transmit data 10 times
faster, at a rate of 100 Mbps.
It is characterized in three variants:
- 100 Base TX
- 100 Base FX
- 100 Base T4

Gigabit Ethernet protocol


The need for an even higher data rate resulted in the design of the Gigabit Ethernet
protocol (1000 Mbps). The IEEE committee calls the standard 802.3z.
Its four-wire version uses Category 5 twisted-pair cable. In other words, it
has four implementations, as shown.

ALOHA
ALOHAnet, also known as the ALOHA System, or simply ALOHA, was a pioneering
computer networking system developed at the University of Hawaii. ALOHAnet
became operational in June 1971, providing the first public demonstration of a
wireless packet data network. ALOHA originally stood for Additive Links On-line
Hawaii Area.
Pure ALOHA
The first version of the protocol (now called "Pure ALOHA", and the one
implemented in ALOHAnet) was quite simple:

- If you have data to send, send the data.
- If, while you are transmitting data, you receive any data from another
  station, there has been a message collision. All transmitting stations will need
  to try resending "later".

Note that the first step implies that Pure ALOHA does not check whether the
channel is busy before transmitting. Since collisions can occur and data may have
to be sent again, ALOHA cannot use 100% of the capacity of the communications
channel.

FIG. Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which have
collided.

Slotted ALOHA
An improvement to the original ALOHA protocol was "Slotted ALOHA", which
introduced discrete timeslots and increased the maximum throughput. A station
can send only at the beginning of a timeslot, and thus collisions are reduced. In this
case, only transmission attempts within 1 frame-time, and not 2 consecutive
frame-times, need to be considered, since collisions can only occur during each timeslot.

FIG. Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are
in the same slots.
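The efficiency difference between the two schemes is usually stated through their classic throughput formulas, S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load in frames per frame-time. A quick check of the well-known maxima:

```python
import math

def pure_aloha(G):
    # A frame survives only if no other frame starts within 2 frame-times.
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    # Slotting halves the vulnerable period to 1 frame-time.
    return G * math.exp(-G)

# Maximum throughput occurs at G = 0.5 (pure) and G = 1 (slotted).
print(round(pure_aloha(0.5), 3))    # 0.184, i.e. about 18.4% channel utilization
print(round(slotted_aloha(1.0), 3)) # 0.368, i.e. about 36.8% channel utilization
```

Doubling the peak utilization from roughly 18% to 37% is the quantitative payoff of introducing timeslots.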

Q9. Write a short note on 1) piggybacking 2) stop and wait protocol 3) HDLC 4)
point to point protocol.
Ans. 1) Piggybacking
In two way communication, whenever a data frame is received, the receiver waits
and does not send the control frame (acknowledgement or ACK) back to the sender
immediately.
The receiver waits until its network layer passes it the next data packet. The
delayed acknowledgement is then attached to this outgoing data frame.
This technique of temporarily delaying the acknowledgement so that it can be
hooked with next outgoing data frame is known as piggybacking.
Working Principle
Piggybacking data is a bit different from Sliding Window Protocol used in the OSI
model. In the data frame itself, we incorporate one additional field for
acknowledgment (called ACK).
Whenever party A wants to send data to party B, it will send the data along with
this ACK field. Considering the sliding window here of size 8 bits, if A has received
frames up to 5 correctly (from B), and wants to send frames starting from frame 6,
it will send ACK6 with the data.
Three rules govern the piggybacking data transfer.
If station A wants to send both data and an acknowledgment, it keeps both
fields there.
If station A wants to send just the acknowledgment, then a separate ACK
frame is sent.
If station A wants to send just the data, then the last acknowledgment field
is sent along with the data. Station B simply ignores this duplicate ACK frame
upon receiving.
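The rules above can be sketched with a hypothetical frame layout carrying both a sequence number and a piggybacked ACK field (names and layout are illustrative, not any particular protocol's format):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    seq: int    # sequence number of this data frame
    ack: int    # piggybacked: next frame number expected from the peer
    data: str

class Station:
    def __init__(self):
        self.next_seq = 0
        self.expected = 0   # next frame number expected from the peer

    def send(self, data):
        # The ACK rides inside the outgoing data frame: no separate ACK frame.
        f = Frame(self.next_seq, self.expected, data)
        self.next_seq += 1
        return f

    def receive(self, frame):
        if frame.seq == self.expected:
            self.expected += 1   # accepted; the ACK goes out on the next send

a, b = Station(), Station()
f1 = a.send("hello")    # A -> B, carries ack=0 (nothing received from B yet)
b.receive(f1)
f2 = b.send("world")    # B -> A, piggybacks ack=1 acknowledging f1
print(f2.ack)           # 1
```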

2) Stop and wait protocol


Stop-and-wait ARQ, also referred to as the alternating bit protocol, is a method used
in telecommunications to send information between two connected devices. It
ensures that information is not lost due to dropped packets and that packets are
received in the correct order. It is the simplest kind of automatic repeat-request
(ARQ) method. A stop-and-wait ARQ sender sends one frame at a time; it is a special
case of the general sliding window protocol with both transmit and receive window
sizes equal to 1. After sending each frame, the
sender doesn't send any further frames until it receives an acknowledgement (ACK)
signal. After receiving a good frame, the receiver sends an ACK. If the ACK does not
reach the sender before a certain time, known as the timeout, the sender sends
the same frame again. A timer is set after each frame transmission. The above
behavior is the simplest Stop-and-Wait implementation. However, in a real life
implementation there are problems to be addressed.
Typically the transmitter adds a redundancy check number to the end of each
frame. The receiver uses the redundancy check number to check for possible
damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver
sees that the frame is damaged, the receiver discards it and does not send an ACK
pretending that the frame was completely lost, not merely damaged.
One problem is when the ACK sent by the receiver is damaged or lost. In this case,
the sender doesn't receive the ACK, times out, and sends the frame again. Now the
receiver has two copies of the same frame, and doesn't know if the second one is
a duplicate frame or the next frame of the sequence carrying identical data.
Another problem is when the transmission medium has such a long latency that the
sender's timeout runs out before the frame reaches the receiver. In this case the
sender resends the same packet. Eventually the receiver gets two copies of the
same frame, and sends an ACK for each one. The sender, waiting for a single ACK,
receives two ACKs, which may cause problems if it assumes that the second ACK is
for the next frame in the sequence.
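The duplicate-frame problem described above is exactly what the alternating (1-bit) sequence number solves; a minimal lossless-channel sketch of the receiver side:

```python
# Stop-and-wait receiver with a 1-bit alternating sequence number: the bit
# lets it discard duplicates caused by a lost or late ACK.

def receiver():
    expected = 0
    delivered = []
    def on_frame(seq, data):
        nonlocal expected
        if seq == expected:
            delivered.append(data)   # new frame: deliver and flip the bit
            expected ^= 1
        # duplicates are silently dropped; the ACK echoes the received bit
        return seq
    return on_frame, delivered

on_frame, delivered = receiver()
on_frame(0, "a")    # accepted
on_frame(0, "a")    # retransmission (ACK was lost): discarded, ACK resent
on_frame(1, "b")    # accepted
print(delivered)    # ['a', 'b']
```

The sender-side timeout and retransmission logic are omitted here; the point is that one bit of sequence number suffices when only one frame is outstanding.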

3) High-Level Data Link Control (HDLC)


High-Level Data Link Control (HDLC) is a bit-oriented, code-transparent synchronous data link layer protocol developed by the International
Organization for Standardization (ISO). The original ISO standards for HDLC are:

- ISO 3309: Frame Structure
- ISO 4335: Elements of Procedure
- ISO 6159: Unbalanced Classes of Procedure
- ISO 6256: Balanced Classes of Procedure

The current standard for HDLC is ISO 13239, which replaces all of those standards.
HDLC provides both connection-oriented and connectionless service.
HDLC can be used for point-to-multipoint connections, but is now used almost
exclusively to connect one device to another, using what is known as Asynchronous
Balanced Mode (ABM). The original master-slave modes, Normal Response
Mode (NRM) and Asynchronous Response Mode (ARM), are rarely used.
The contents of an HDLC frame are shown in the following table:
Flag   | Address        | Control      | Information                 | FCS           | Flag
8 bits | 8 or more bits | 8 or 16 bits | Variable length, n * 8 bits | 16 or 32 bits | 8 bits

Note that the end flag of one frame may be (but does not have to be) the beginning
(start) flag of the next frame.
Data is usually sent in multiples of 8 bits, but only some variants require this; others
theoretically permit data alignments on other than 8-bit boundaries.
The frame check sequence (FCS) is a 16-bit CRC-CCITT or a 32-bit CRC-32 computed
over the Address, Control, and Information fields. It provides a means by which the
receiver can detect errors that may have been induced during the transmission of
the frame, such as lost bits, flipped bits, and extraneous bits. However, given that
the algorithms used to calculate the FCS are such that the probability of certain
types of transmission errors going undetected increases with the length of the data
being checked for errors, the FCS can implicitly limit the practical size of the frame.

If the receiver's calculation of the FCS does not match that of the sender's,
indicating that the frame contains errors, the receiver can either send a
negative acknowledge packet to the sender, or send nothing. After either receiving
a negative acknowledge packet or timing out waiting for a positive acknowledge
packet, the sender can retransmit the failed frame.
The FCS was implemented because many early communication links had a relatively
high bit error rate, and the FCS could readily be computed by simple, fast circuitry
or software. More effective forward error correction schemes are now widely used
by other protocols.
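For illustration, the 16-bit CRC-CCITT (polynomial 0x1021) that underlies the HDLC FCS can be computed bit by bit. This sketch uses the common 0xFFFF initial value; HDLC's exact on-the-wire FCS additionally involves bit-ordering and a final one's-complement, which are omitted here:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """16-bit CRC over the data, polynomial x^16 + x^12 + x^5 + 1 (0x1021)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):   # process one bit at a time, MSB first
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF    # keep the register 16 bits wide
    return crc

print(hex(crc16_ccitt(b"123456789")))   # 0x29b1, the standard check value
```

Flipping any single bit of the input changes the CRC, which is how the receiver detects the transmission errors described above.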
4) Point to point protocol
In computer networking, Point-to-Point Protocol (PPP) is a data link (layer 2)
protocol used to establish a direct connection between two nodes. It can provide
connection authentication, transmission encryption (using ECP, RFC 1968), and
compression.
PPP is used over many types of physical networks including serial cable, phone line,
trunk line, cellular telephone, specialized radio links, and fiber optic links such as
SONET. PPP is also used over Internet access connections. Internet service
providers (ISPs) have used PPP for customer dial-up access to the Internet, since IP
packets cannot be transmitted over a modem line on their own, without some data
link protocol. Two derivatives of PPP, Point-to-Point Protocol over Ethernet
(PPPoE) and Point-to-Point Protocol over ATM (PPPoA), are used most commonly
by Internet Service Providers (ISPs) to establish a Digital Subscriber Line (DSL)
Internet service connection with customers.
PPP is commonly used as a data link layer protocol for connection over synchronous
and asynchronous circuits, where it has largely superseded the older Serial Line
Internet Protocol (SLIP) and telephone company mandated standards (such as Link
Access Protocol, Balanced (LAPB) in the X.25 protocol suite). The only requirement
for PPP is that the circuit provided be duplex. PPP was designed to work with
numerous network layer protocols, including Internet Protocol (IP), TRILL, Novell's
Internetwork Packet Exchange (IPX), NBF, DECnet and AppleTalk. Like SLIP, this is a
full Internet connection over telephone lines via modem. It is more reliable than
SLIP because it checks that Internet packets arrive intact, and it
resends any damaged packets.

Q10. Explain CSMA/CD and its uses. Why do we prefer CSMA over ALOHA?
Explain in detail.
Ans. Carrier sense multiple access with collision detection (CSMA/CD) is a media
access control method used most notably in early Ethernet technology for local
area networking. It uses a carrier sensing scheme in which a transmitting station
detects collisions by sensing transmissions from other stations while transmitting a
frame. When this collision condition is detected, the station stops transmitting that
frame, transmits a jam signal, and then waits for a random time interval before
trying to resend the frame.
CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD
is used to improve CSMA performance by terminating transmission as soon as a
collision is detected, thus shortening the time required before a retry can be
attempted.
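The "random time interval" before a retry is typically chosen by truncated binary exponential backoff. A minimal sketch (the cap of 10 doublings follows classic Ethernet; the function name is ours):

```python
import random

def backoff_slots(collisions: int, max_doublings: int = 10) -> int:
    """After the n-th successive collision, wait a random number of
    slot times drawn uniformly from [0, 2**min(n, 10) - 1].
    Classic Ethernet truncates the doubling at 10 collisions."""
    k = min(collisions, max_doublings)
    return random.randrange(2 ** k)
```

Each collision doubles the range of possible waits, spreading the retries of competing stations further apart.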
USES
CSMA/CD was used in now-obsolete shared media Ethernet variants (10BASE5,
10BASE2) and in the early versions of twisted-pair Ethernet which used repeater
hubs. Modern Ethernet networks, built with switches and full-duplex connections,
no longer need to utilize CSMA/CD because each Ethernet segment, or collision
domain, is now isolated. CSMA/CD is still supported for backwards compatibility
and for half-duplex connections. IEEE Std 802.3, which defines all Ethernet variants,
for historical reasons still bears the title "Carrier sense multiple access with collision
detection (CSMA/CD) access method and physical layer specifications".
ALOHA and CSMA
Main difference between Aloha and CSMA is that Aloha protocol does not try to
detect whether the channel is free before transmitting but the CSMA protocol
verifies that the channel is free before transmitting data. Thus CSMA protocol
avoids clashes before they happen while Aloha protocol detects that a channel is
busy only after a clash happens. Due to this, CSMA is more suitable for networks
such as Ethernet where multiple sources and destinations use the same channel.
Aloha is a simple communication scheme originally developed by the University of
Hawaii to be used for satellite communication. In the Aloha method, each source
in a communication network transmits data every time there is a frame to be
transmitted. If the frame successfully reaches the destination, the next frame is
transmitted. If the frame is not received at the destination, it will be transmitted
again. CSMA (Carrier Sense Multiple Access) is a Media Access Control (MAC)
protocol, where a node transmits data on a shared transmission media only after
verifying the absence of other traffic.
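The classic throughput formulas make the comparison quantitative: pure ALOHA achieves at most S = G·e^(-2G) (about 18.4% of the channel at G = 0.5), while slotted ALOHA achieves S = G·e^(-G) (about 36.8% at G = 1), where G is the offered load in frames per frame time. Carrier sensing in CSMA raises utilization further. A quick check in Python:

```python
import math

def pure_aloha(G: float) -> float:
    """Throughput of pure ALOHA at offered load G (frames per frame time)."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput of slotted ALOHA at offered load G."""
    return G * math.exp(-G)
```

Evaluating at the respective optimal loads shows slotted ALOHA doubles pure ALOHA's peak throughput.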

Q11. Explain different aspects of medium access in IEEE 802.11. Do they differ in
802.11b and 802.11g? Why does the 802.11 frame format have four address fields?
Ans. IEEE 802.11 is a set of media access control (MAC) and physical layer (PHY)
specifications for implementing wireless local area network (WLAN) computer
communication in the 900 MHz and 2.4, 3.6, 5, and 60 GHz frequency bands. They
are created and maintained by the Institute of Electrical and Electronics Engineers
(IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard
was released in 1997, and has had subsequent amendments. The standard and
amendments provide the basis for wireless network products using the Wi-Fi
brand. While each amendment is officially revoked when it is incorporated in the
latest version of the standard, the corporate world tends to market to the revisions
because they concisely denote capabilities of their products. As a result, in the
marketplace, each revision tends to become its own standard.
802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States
under Part 15 of the U.S. Federal Communications Commission Rules and
Regulations. Because of this choice of frequency band, 802.11b and g equipment
may occasionally suffer interference from microwave ovens, cordless telephones,
and Bluetooth devices. 802.11b and 802.11g control their interference and
susceptibility to interference by using direct-sequence spread spectrum (DSSS) and
orthogonal frequency-division multiplexing (OFDM) signaling methods,
respectively. 802.11a uses the 5 GHz U-NII band, which, for much of the world,
offers at least 23 non-overlapping channels rather than the 2.4 GHz ISM frequency
band, which offers only three non-overlapping channels (other adjacent channels
overlap; see the list of WLAN channels). Better or worse performance with higher or
lower frequencies (channels) may be realized, depending on the environment.
802.11n can use either the 2.4 GHz or the 5 GHz band; 802.11ac uses only the 5
GHz band.
The segment of the radio frequency spectrum used by 802.11 varies between
countries. In the US, 802.11a and 802.11g devices may be operated without a
license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used
by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur
radio band. Licensed amateur radio operators may operate 802.11b/g devices

under Part 97 of the FCC Rules and Regulations, allowing increased power output
but not commercial content or encryption.
An 802.11 frame can have up to four address fields, each carrying a MAC
address. Address 1 is the receiver, Address 2 is the transmitter, and Address 3 is
used for filtering purposes by the receiver. Four fields are needed because, in a
wireless distribution system (WDS), a frame relayed from one access point to
another must carry the original source and destination addresses in addition to
the addresses of the transmitting and receiving access points; Address 4 holds
the original source in that case.
The remaining fields of the header are:

The Sequence Control field is a two-byte section used for identifying message
order as well as eliminating duplicate frames. The first 4 bits are used for the
fragmentation number, and the last 12 bits are the sequence number.
An optional two-byte Quality of Service control field that was added
with 802.11e.

The payload or frame body field is variable in size, from 0 to 2304 bytes plus any
overhead from security encapsulation, and contains information from higher
layers.
The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11
frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for
integrity check of retrieved frames. As frames are about to be sent, the FCS is
calculated and appended. When a station receives a frame, it can calculate the
FCS of the frame and compare it to the one received. If they match, it is assumed
that the frame was not distorted during transmission.
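As a small illustration of the Sequence Control layout described above, this sketch splits the 16-bit field into its fragment number (the low-order 4 bits) and sequence number (the high-order 12 bits); the function name is ours:

```python
def parse_sequence_control(field: int) -> tuple:
    """Split the 16-bit 802.11 Sequence Control field: the low 4 bits
    carry the fragment number, the high 12 bits the sequence number."""
    fragment_number = field & 0x000F
    sequence_number = (field >> 4) & 0x0FFF
    return sequence_number, fragment_number
```

A receiver uses the sequence number to drop duplicate frames and the fragment number to reassemble fragmented ones.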

Q12. Write a short note on: 1) Adaptive tree walk protocol 2) Limited contention
protocol.
Ans. LIMITED CONTENTION PROTOCOLS- Collision based protocols (ALOHA,
CSMA/CD) are good when the network load is low. Collision free protocols (bit
map, binary Countdown) are good when load is high. Limited contention
protocols behave like the ALOHA scheme under light load & behave like the
bitmap scheme under heavy load.

For small numbers of stations, the chances of success are good, but as soon as the
number of stations reaches even five, the probability of success drops sharply.
From the figure, the probability that some station will acquire the channel can be
increased only by decreasing the amount of competition.
The limited contention protocols do just that by:
1. Dividing the stations into (not necessarily disjoint) groups.
2. Only the members of group 0 are permitted to compete for slot 0.
3. If one of them succeeds, it acquires the channel and transmits its frame.
4. If there is a collision the members of the group 1 contend for slot 1. Etc.
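The effect of competition can be quantified: if k stations each contend in a slot with probability p, the chance that exactly one succeeds is k·p·(1-p)^(k-1), which is maximized at p = 1/k and falls toward 1/e as k grows. This is why splitting stations into smaller groups helps. A quick sketch:

```python
def success_prob(k: int, p: float) -> float:
    """Probability that exactly one of k contending stations transmits
    in a slot, when each transmits independently with probability p."""
    return k * p * (1 - p) ** (k - 1)
```

For example, with the optimal p = 1/k, two stations succeed half the time, while five stations at p = 0.2 succeed only about 41% of the time.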
ADAPTIVE TREE WALK PROTOCOL
There are different methods of singulation, but the most common is "tree walking",
which involves asking all tags with a serial number that starts with either a 1 or 0
to respond. If more than one responds, the reader might ask for all tags with a serial
number that starts with 01 to respond, and then 010. It keeps doing this until it
finds the tag it is looking for. Note that if the reader has some idea of which tags it
wishes to interrogate, it can considerably optimize the search order. For example
with some designs of tags, if a reader already suspects certain tags to be present
then those tags can be instructed to remain silent, then tree walked without
interference from them, and then finally they can be queried individually.
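A toy model of the tree walking just described, with fixed-length, distinct binary tag IDs (a "query" returns the tags matching a prefix; more than one match means a collision, so the search recurses one bit deeper; names and data are illustrative):

```python
def tree_walk(tags, prefix=""):
    """Singulate all tags whose (distinct, fixed-length) binary IDs
    start with `prefix`, by recursively splitting on the next ID bit."""
    matching = [t for t in tags if t.startswith(prefix)]
    if not matching:
        return []                 # no tag answered this prefix
    if len(matching) == 1:
        return matching           # exactly one tag responded: singulated
    # collision: ask for prefix+'0', then prefix+'1'
    return tree_walk(tags, prefix + "0") + tree_walk(tags, prefix + "1")
```

Each recursion step corresponds to one round of reader queries, so the eavesdropping problem below follows directly: the queried prefixes reveal almost the entire ID.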

This simple protocol leaks considerable information, because anyone able to


eavesdrop on the tag reader alone can determine all but the last bit of a tag's serial
number. Thus a tag can be (largely) identified so long as the reader's signal is
receivable, which is usually possible at much greater distance than simply reading
a tag directly. Because of privacy and security concerns related to this, the Auto-ID
Labs have developed two more advanced singulation protocols, called Class 0 UHF
and Class 1 UHF, which are intended to be resistant to these sorts of attacks. These
protocols, which are based on tree-walking but include other elements, have a
performance of up to 1000 tags per second.
The tree walking protocol may be blocked or partially blocked by RSA Security's
blocker tags.

Q13. What do you mean by congestion? Explain the difference between flow
control and congestion control.
Ans. An important issue in a packet-switching network is congestion. Congestion
may occur if the load on the network (the number of packets sent to the network)
is greater than the capacity of the network (the number of packets it can handle).
Congestion happens in any system that involves waiting. For example, congestion
happens on a freeway because any abnormality in flow, such as an accident during
rush hour, creates blockage.
Congestion in a network or internetwork occurs because routers and switches have
queues (buffers) that hold packets before and after processing. A router, for
example, has an input queue for each interface. When a packet arrives at an
incoming interface, it undergoes three steps before departing: it is placed in the
input queue, the processing module consults the routing table to choose a route,
and the packet is placed in the appropriate output queue until it can be sent.
FLOW CONTROL & CONGESTION CONTROL
Flow control is a mechanism used in computer networks to control the flow of data
between a sender and a receiver, such that a slow receiver is not outrun by a
fast sender. Flow control provides methods for the receiver to control the speed of
transmission such that the receiver could handle the data transmitted by the
sender. Congestion control is a mechanism that controls data flow when
congestion actually occurs. It controls data entering in to a network such that the
network can handle the traffic within the network.
Although, Flow control and congestion control are two network traffic control
mechanisms used in computer networks, they have their key differences. Flow
control is an end to end mechanism that controls the traffic between a sender and
a receiver, when a fast sender is transmitting data to a slow receiver. On the other
hand, congestion control is a mechanism that is used by a network to control
congestion in the network. Congestion control prevents loss of packets and delay
caused due to congestion in the network. Congestion control can be seen as a
mechanism that makes sure that an entire network can handle the traffic that is
coming to the network. But, flow control refers to mechanisms used to handle the
transmission between a particular sender and a receiver.
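The distinction can be sketched in code: in an idealized flow-control loop, the receiver advertises a window and the sender never has more than that many unacknowledged bytes in flight (numbers and names here are illustrative, not any real protocol's API):

```python
def flow_controlled_send(data: bytes, window: int):
    """Idealized receiver-driven flow control: the sender may have at
    most `window` unacknowledged bytes outstanding at any time.
    Returns the size of each burst sent."""
    sent, acked = 0, 0
    bursts = []
    while acked < len(data):
        # send as much as the advertised window permits
        burst = min(window - (sent - acked), len(data) - sent)
        sent += burst
        bursts.append(burst)
        # the (idealized) receiver consumes everything and acknowledges
        acked = sent
    return bursts
```

Congestion control would additionally shrink the effective window when the *network*, rather than the receiver, shows signs of overload (packet loss, growing delay).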

Q.14.Difference between the static and dynamic routing with their pros and
cons. Give the example of some routing protocols which are used in both types
of routing.
Ans. Static routing is a form of routing that occurs when a router uses a manually
configured routing entry, rather than information from a dynamic routing protocol. In
many cases, static routes are manually configured by a network administrator by
adding entries into a routing table, though this may not always be the case.
Unlike dynamic routing, static routes are fixed and do not change if the network is
changed or reconfigured. Static routing and dynamic routing are not mutually
exclusive. Both dynamic routing and static routing are usually used on a router to
maximize routing efficiency and to provide backups in the event that dynamic
routing information fails to be exchanged. Static routing can also be used in stub
networks, or to provide a gateway of last resort.
Dynamic routing, also called adaptive routing, describes the capability of a system,
through which routes are characterized by their destination, to alter the path that
the route takes through the system in response to a change in conditions. The
adaptation is intended to allow as many routes as possible to remain valid (that is,
have destinations that can be reached) in response to the change.
People using a transport system can display dynamic routing. For example, if a local
railway station is closed, people can alight from a train at a different station and
use another method, such as a bus, to reach their destination. Another example of
dynamic routing can be seen within financial markets. For example, ASOR or
Adaptive Smart Order Router (developed by Quod Financial), takes routing
decisions dynamically and based on real-time market events.
The term is commonly used in data networking to describe the capability of a
network to 'route around' damage, such as loss of a node or a connection between
nodes, so long as other path choices are available. There are several protocols used
to achieve this:

RIP
OSPF
IS-IS
IGRP/EIGRP

Systems that do not implement dynamic routing are described as using static
routing, where routes through a network are described by fixed paths (statically).
A change, such as the loss of a node, or loss of a connection between nodes, is not
compensated for. This means that anything that wishes to take an affected path
will either have to wait for the failure to be repaired before restarting its journey,
or will have to fail to reach its destination and give up the journey.
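To make the contrast concrete, here is a sketch of a static routing table: the entries are fixed at configuration time, a longest-prefix match selects the outgoing interface, and a default route serves as the gateway of last resort (addresses and interface names are made up):

```python
import ipaddress

# A tiny static routing table: manually configured, never changes at runtime.
STATIC_ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("192.168.1.0/24"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "ppp0",   # gateway of last resort
}

def lookup(dest: str) -> str:
    """Longest-prefix match over the static table."""
    addr = ipaddress.ip_address(dest)
    best = max((net for net in STATIC_ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return STATIC_ROUTES[best]
```

A dynamic routing protocol such as RIP or OSPF would instead rebuild this table automatically as links fail or recover; with purely static entries, a broken path stays broken until an administrator edits the table.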

Q15. What do you understand by fragmentation? Explain transparent
and non-transparent fragmentation.
Ans. The IPv4 internetworking layer has the capability to automatically fragment
the original datagram into smaller units for transmission. In this case, IP provides
re-ordering of fragments delivered out of order. Fragmentation is an Internet
Protocol (IP) process that breaks datagrams into smaller pieces (fragments), so that
packets may be formed that can pass through a link with a smaller maximum
transmission unit (MTU) than the original datagram size. The fragments are
reassembled by the receiving host.
The data packets can be fragmented in two ways namely:
1. Transparent and
2. Non transparent
Which of these ways is followed is decided on a network-by-network basis; no
end-to-end agreement exists by which the process to be used could be fixed for
the whole path.
Transparent Fragmentation: This type of fragmentation is followed when a packet
is split in to smaller fragments by a router. These fragments are sent to the next
router which does just the opposite i.e., it reassembles the fragments and combine
them to form original packet. Here, the next network does not come to know
whether any fragmentation has taken place. Transparency is maintained between
the small packet networks when compared to the other subsequent networks. For
example, transparent fragmentation is used by the ATM networks by means of
some special hardware. There are some issues with this type of fragmentation. It
puts some burden on the performance of the network since all the fragments have
to be transmitted through the same gateway. Also, sometimes the repeated
fragmentation and reassembling has to be done for small packet network in
series. Whenever an over-sized packet reaches a router, it is broken up in to small
fragments. These fragments are transported to the next exit router. The fragments
are assembled by this exit router which then forwards them to the next router.
Awareness regarding this fragmentation is not maintained for the subsequent
networks. For a single packet, fragmentation may be done many times before the
destination is finally reached. This of course consumes a lot of time, because the
repeated fragmentation and reassembly have to be carried out. Sometimes it also
becomes a cause of corruption of the packet's integrity.
Non-Transparent Fragmentation: In this type, the packet is split into fragments by
one router, but these fragments are not reassembled until they reach their
destination; they remain split until then. Since the fragments are assembled only at
the destination host, they can be routed independently of each other. This type of
fragmentation also has some costs: a header has to be carried by each fragment
until it reaches its destination, and all fragments have to be numbered so that the
data stream can be reconstructed without problems.
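A simplified model of non-transparent fragmentation and end-host reassembly (real IP carries the offset and a more-fragments flag in the header and measures offsets in 8-byte units; here a fragment is just a tuple):

```python
def fragment(payload: bytes, mtu: int):
    """Split a payload into (offset, more_fragments, data) tuples so
    that each piece fits the link MTU. Reassembly happens only at the
    destination host (non-transparent fragmentation)."""
    frags = []
    for off in range(0, len(payload), mtu):
        data = payload[off:off + mtu]
        more = off + mtu < len(payload)   # True on all but the last fragment
        frags.append((off, more, data))
    return frags

def reassemble(frags):
    """Destination host puts fragments back in order by offset."""
    return b"".join(data for _, _, data in sorted(frags))
```

Because every fragment carries its own offset, the fragments may take different routes and arrive in any order, and the destination can still rebuild the original datagram.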

Q16. Define socket. List various types of sockets. What are the steps used for
socket programming? Explain in detail- 1) stream socket 2) raw socket 3)
datagram socket.
Ans. A network socket is an endpoint of a connection in a computer network. In
Internet Protocol (IP) networks, these are often called Internet sockets. It is a
handle (abstract reference) that a program can pass to the networking application
programming interface (API) to use the connection for receiving and sending data.
Sockets are often represented internally as integers.
A socket API is an application programming interface, usually provided by the
operating system, which allows application programs to control and use network
sockets. Internet socket APIs are usually based on the Berkeley sockets standard.
In the Berkeley sockets standard, sockets are a form of file descriptor (a file handle),
due to the UNIX philosophy that "everything is a file", and the analogies between
sockets and files. Both have functions to read, write, open, and close. In practice
the differences mean the analogy is strained, and one instead uses different
interfaces (send and receive) on a socket. In inter-process communication, each
end generally has its own socket, but these may use different APIs: they are
abstracted by the network protocol.
Socket programming
In IETF Request for Comments, Internet Standards, in many textbooks, as well as in
this article, the term socket refers to an entity that is uniquely identified by the
socket number. In other textbooks,[1] the term socket refers to a local socket
address, i.e. a "combination of an IP address and a port number". In the original
definition of socket given in RFC 147, as it was related to the ARPA network in 1971,
"the socket is specified as a 32 bit number with even sockets identifying receiving
sockets and odd sockets identifying sending sockets." Today, however, socket
communications are bidirectional.
On Unix-like operating systems and Microsoft Windows, the command line tools
netstat and ss are used to list established sockets and related information.

This example, modeled according to the Berkeley socket interface, sends the string
"Hello, world!" via TCP to port 80 of the host with address 1.2.3.4. It illustrates the
creation of a socket (getSocket), connecting it to the remote host, sending the
string, and finally closing the socket:
Socket socket = getSocket(type = "TCP")
connect(socket, address = "1.2.3.4", port = "80")
send(socket, "Hello, world!")
close(socket)
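The same sequence can be run for real with Python's standard socket module. Since the host 1.2.3.4 in the sketch is only illustrative, this version uses the loopback interface and an OS-chosen port, with a small server thread on the receiving end; it also shows the usual server-side steps (socket, bind, listen, accept) and client-side steps (socket, connect, send, close):

```python
import socket
import threading

# Server side: socket() -> bind() -> listen() -> accept() -> recv() -> close()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
received = []

def serve():
    conn, _ = server.accept()
    chunks = []
    while True:                        # read until the client closes
        chunk = conn.recv(1024)
        if not chunk:
            break
        chunks.append(chunk)
    conn.close()
    received.append(b"".join(chunks))

t = threading.Thread(target=serve)
t.start()

# Client side: socket() -> connect() -> send() -> close()
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"Hello, world!")
client.close()

t.join()
server.close()
```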
Several types of Internet socket are available:

Datagram sockets
Stream sockets
Raw sockets
Berkeley sockets
XML Socket

Other socket types are implemented over other transport protocols, such as
Systems Network Architecture (SNA).
Stream socket- In computer operating systems, a stream socket is a type of
inter-process communications socket or network socket which provides a
connection-oriented, sequenced, and unique flow of data without record boundaries, with
well-defined mechanisms for creating and destroying connections and for
detecting errors. A stream socket transmits data reliably, in order, and with
out-of-band capabilities. On the Internet, stream sockets are typically implemented on top
of TCP so that applications can run across any networks using the TCP/IP protocol. SCTP
may also be used for stream sockets.
Raw socket- In computer networking, a raw socket is an internet socket that allows
direct sending and receiving of Internet Protocol packets without any
protocol-specific transport layer formatting. Raw sockets are used in security related
applications like nmap. One possible use case for raw sockets is the implementation
of new transport-layer protocols in user space.[1] Raw sockets are typically available
in network equipment, and used for routing protocols such as the Internet Group
Management Protocol (IGMPv4) and Open Shortest Path First (OSPF), and in the
Internet Control Message Protocol (ICMP, best known for the ping sub-operation)
for example, sends ICMP echo requests and receives ICMP echo replies.

Datagram socket- In computer operating systems, a datagram socket is a type of
inter-process communications socket or network socket which provides a connectionless
point for sending or receiving data packets.[1] Each packet sent or received on a
datagram socket is individually addressed and routed. Order and reliability are not
guaranteed with datagram sockets, so multiple packets sent from one machine or
process to another may arrive in any order or might not arrive at all. The sending
of UDP broadcasts on a network is always enabled on a datagram socket. In order
to receive broadcast packets, a datagram socket should be bound to the wildcard
address. Broadcast packets may also be received when a datagram socket is bound
to a more specific address.
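A minimal runnable illustration of a datagram socket, sending a single UDP packet over the loopback interface (port choice and payload are arbitrary):

```python
import socket

# Receiver: bind a datagram socket; port 0 lets the OS pick a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: no connection setup; each datagram is individually addressed.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)   # each datagram arrives whole
send_sock.close()
recv_sock.close()
```

Unlike the stream example, there is no connect/accept step, and nothing guarantees delivery or ordering if several datagrams are sent.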

Q17. Explain the following issues of transport layer


1) Establishing a connection
2) Releasing a connection
Ans. Connection establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are
connected, they are able to send segments to each other simultaneously. This
implies that each party must initialize communication and get approval from
the other party before any data are transferred.
Three-way handshaking is the technique of establishing a connection. In our
example, an application program, called the client, wants to make a connection
with another application program, called the server, using TCP as transport layer
protocol.
This procedure normally is initiated by one TCP and responded to by another TCP.
The procedure also works if two TCPs simultaneously initiate it. When a
simultaneous attempt occurs, each TCP receives a "SYN" segment which carries no
acknowledgment after it has sent a "SYN". Of course, the arrival of an old duplicate
"SYN" segment can potentially make it appear, to the recipient, that a
simultaneous connection initiation is in progress. Proper use of "reset" segments
can disambiguate these cases.
The three-way handshake reduces the possibility of false connections. It is the
implementation of a trade-off between memory and messages to provide
information for this checking.
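The three segments can be sketched as data (the initial sequence numbers here are arbitrary; a real TCP chooses them pseudo-randomly, and each ACK acknowledges the peer's sequence number plus one):

```python
def three_way_handshake():
    """Sketch of the segments exchanged during connection establishment
    (not a real TCP implementation)."""
    client_isn, server_isn = 100, 300   # arbitrary initial sequence numbers
    return [
        # (direction, flags, sequence number, acknowledgment number)
        ("client->server", "SYN",     client_isn,     None),
        ("server->client", "SYN+ACK", server_isn,     client_isn + 1),
        ("client->server", "ACK",     client_isn + 1, server_isn + 1),
    ]
```

After the third segment, both sides have had their chosen sequence number acknowledged, so data transfer can begin in either direction.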

Connection release
The connection termination or connection release phase uses a four-way
handshake, with each side of the connection terminating independently. When an
endpoint wishes to stop its half of the connection, it transmits a FIN packet, which
the other end acknowledges with an ACK. Therefore, a typical tear-down requires
a pair of FIN and ACK segments from each TCP endpoint. After the side that sent
the first FIN has responded with the final ACK, it waits for a timeout before finally
closing the connection, during which time the local port is unavailable for new
connections; this prevents confusion due to delayed packets being delivered during
subsequent connections.
A connection can be "half-open", in which case one side has terminated its end, but
the other has not. The side that has terminated can no longer send any data into
the connection, but the other side can. The terminating side should continue
reading the data until the other side terminates as well.
It is also possible to terminate the connection by a 3-way handshake, when host A
sends a FIN and host B replies with a FIN & ACK (merely combines 2 steps into one)
and host A replies with an ACK.
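The normal four-way teardown described above can likewise be sketched as the list of segments exchanged (a sketch only; real TCP also tracks sequence numbers and the TIME-WAIT timer):

```python
def four_way_release():
    """Sketch of the normal TCP teardown: each side sends FIN and has
    it acknowledged, independently of the other side."""
    return [
        ("A->B", "FIN"),
        ("B->A", "ACK"),   # connection is now half-open: B may still send data
        ("B->A", "FIN"),
        ("A->B", "ACK"),   # A then waits out a timeout before fully closing
    ]
```

The 3-way variant merely merges the middle two segments into a single FIN+ACK from B.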
