FIG. Components of a communication system: transmitter, transmission system, receiver, destination.
Q2. What are protocols? Why do we need layered protocols? Give at least two
reasons.
Ans. In telecommunications, a communication protocol is a system of rules that
allows two or more entities of a communications system to transmit information via
any kind of variation of a physical quantity. These rules or standards define the
syntax, semantics, and synchronization of communication and possible
error-recovery methods. Protocols may be implemented by hardware, software, or
a combination of both.
Communicating systems use well-defined formats (protocol) for exchanging
various messages. Each message has an exact meaning intended to elicit a response
from a range of possible responses pre-determined for that particular situation.
The specified behavior is typically independent of how it is to be implemented.
Communications protocols have to be agreed upon by the parties involved. To
reach agreement, a protocol may be developed into a technical standard. A
programming language describes the same for computations, so there is a close
analogy between protocols and programming languages: protocols are to
communications what programming languages are to computations.
Multiple protocols often describe different aspects of a single communication. A
group of protocols designed to work together are known as a protocol suite; when
implemented in software they are a protocol stack.
Most recent protocols are designed by the IETF for Internet communications, and
by the IEEE or ISO for other types. The ITU-T handles
telecommunications protocols and formats for the PSTN. As the PSTN and Internet
converge, the two sets of standards are also being driven towards convergence.
Two reasons for using layered protocols
1. Protocol layering is a common technique to simplify networking designs by
dividing them into functional layers, and assigning protocols to perform each layer's
task.
Thus, one protocol is designed to perform data delivery, and another protocol,
layered above the first, performs connection management. The data delivery
protocol is fairly simple and knows nothing of connection management. The
connection management protocol is also fairly simple, since it doesn't need to
concern itself with data delivery.
2. Protocol layering produces simple protocols, each with a few well-defined tasks.
These protocols can then be assembled into a useful whole. Individual protocols
can also be removed or replaced as needed for particular applications.
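The division of labor described above can be illustrated with a toy sketch (the class names and the 2-byte header are invented for illustration, not from any standard): a delivery layer that only moves bytes, and a connection-management layer stacked on top that only numbers frames.

```python
class DeliveryLayer:
    """Knows nothing about connections; it just 'sends' bytes."""
    def __init__(self):
        self.wire = []                      # stand-in for the physical medium

    def send(self, payload: bytes):
        self.wire.append(payload)           # deliver the payload as-is


class ConnectionLayer:
    """Knows nothing about delivery; it only numbers messages in order."""
    def __init__(self, lower: DeliveryLayer):
        self.lower = lower
        self.seq = 0

    def send(self, data: bytes):
        header = self.seq.to_bytes(2, "big")    # prepend a sequence number
        self.lower.send(header + data)          # hand the result downward
        self.seq += 1


link = DeliveryLayer()
conn = ConnectionLayer(link)
conn.send(b"hello")
conn.send(b"world")
print(link.wire)    # each frame = 2-byte sequence header + payload
```

Swapping either class for a different implementation leaves the other untouched, which is exactly the benefit of layering described above.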
Baseband:
- Digital signals are used
- Frequency-division multiplexing is not possible
- Transmission is bi-directional
- Signals travel only short distances
- The entire bandwidth of the cable is consumed by a single signal in a baseband
transmission.
Broadband:
- Analog signals are used
- Transmission of data is unidirectional
- Signals travel long distances
- Frequency-division multiplexing is possible
- Signals are sent on multiple frequencies, allowing multiple signals to be
transmitted simultaneously in broadband transmission.
Q.4 What is the OSI model? Explain it in detail. Also compare the TCP/IP & OSI
reference models.
Ans. OSI Model: The Open Systems Interconnection model (OSI model) is a conceptual model that
characterizes and standardizes the communication functions of a
telecommunication or computing system without regard to their underlying
internal structure and technology. Its goal is the interoperability of diverse
communication systems with standard protocols. The model partitions a
communication system into abstraction layers. The original version of the model
defined seven layers.
Layer 1: Physical Layer
Layer 2: Data Link Layer
Layer 3: Network Layer
Layer 4: Transport Layer
Layer 5: Session Layer
Layer 6: Presentation Layer
Layer 7: Application Layer
1) PHYSICAL LAYER: - The physical layer defines the electrical and physical
specifications of the data connection. It defines the relationship between a
device and a physical transmission medium (e.g. fiber optical cable).
2) DATA LINK LAYER: - The data link layer provides node-to-node data
transfer, a link between two directly connected nodes. It detects and
possibly corrects errors that may occur in the physical layer.
3) NETWORK LAYER: - The network layer provides the functional and
procedural means of transferring variable length data sequences (called
datagrams) from one node to another connected to the same network. It
translates logical network addresses into physical machine addresses.
4) TRANSPORT LAYER: - The transport layer provides the functional and
procedural means of transferring variable-length data sequences from a
source to a destination host via one or more networks, while maintaining the
quality of service functions. An example of a transport-layer protocol in the
standard Internet stack is Transmission Control Protocol (TCP), usually built
on top of the Internet Protocol (IP).
5) SESSION LAYER: - The session layer controls the dialogues (connections)
between computers. It establishes, manages and terminates the connections
between the local and remote application. The OSI model made this layer
responsible for graceful close of sessions, which is a property of the
Transmission Control Protocol, and also for session checkpointing and
recovery, which is not usually used in the Internet Protocol Suite.
6) PRESENTATION LAYER: - The presentation layer establishes context between
application-layer entities, in which the application-layer entities may use
different syntax and semantics if the presentation service provides a
mapping between them.
7) APPLICATION LAYER: - The application layer is the OSI layer closest to the end
user, which means both the OSI application layer and the user interact
directly with the software application. Application-layer functions typically
include identifying communication partners, determining resource
availability, and synchronizing communication.
Principle of operation
The transmitter and receiver each have a current sequence number nt and nr,
respectively. They each also have a window size wt and wr. The window sizes may
vary, but in simpler implementations they are fixed. The window size must be
greater than zero for any progress to be made.
As typically implemented, nt is the next packet to be transmitted, i.e. the sequence
number of the first packet not yet transmitted. Likewise, nr is the first packet not
yet received. Both numbers are monotonically increasing with time; they only ever
increase.
The receiver may also keep track of the highest sequence number yet received; the
variable ns is one more than the highest sequence number received. For simple
receivers that only accept packets in order (wr = 1), this is the same as nr, but it
can be greater if wr > 1. Note the distinction: all packets below nr have been
received, no packets at or above ns have been received, and between nr and ns,
some packets have been received.
When the receiver receives a packet, it updates its variables appropriately and
transmits an acknowledgment with the new nr. The transmitter keeps track of the
highest acknowledgment it has received, na. The transmitter knows that all packets
up to, but not including, na have been received, but is uncertain about packets
between na and ns; i.e. na ≤ nr ≤ ns.
The sequence numbers always obey the rule na ≤ nr ≤ ns ≤ nt ≤ na + wt.
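As a quick illustration, the counter relationships can be checked with a single chained comparison (variable names follow the text; this is a sketch, not part of any protocol implementation):

```python
# Sliding-window invariant: na <= nr <= ns <= nt <= na + wt, where na is the
# highest acknowledgment received, nr the receiver's next expected packet,
# ns one more than the highest received, nt the next to transmit, wt the
# transmit window size.

def window_invariant(na, nr, ns, nt, wt):
    """True iff the transmitter/receiver counters are mutually consistent."""
    return na <= nr <= ns <= nt <= na + wt

# A consistent snapshot: 3 acked, receiver expects 4, highest seen is 5,
# next to transmit is 7, window size 8.
print(window_invariant(na=3, nr=4, ns=6, nt=7, wt=8))   # True
# Inconsistent: the transmitter claims to have sent beyond its window.
print(window_invariant(na=0, nr=2, ns=3, nt=10, wt=4))  # False
```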
Q6. Explain character stuffing in detail. What are the drawbacks of character
stuffing?
Ans. Byte stuffing or character stuffing is a process that transforms a sequence of
data bytes that may contain 'illegal' or 'reserved' values (such as packet delimiter)
into a potentially longer sequence that contains no occurrences of those values.
The extra length of the transformed sequence is typically referred to as the
overhead of the algorithm. The COBS algorithm tightly bounds the worst-case
overhead, limiting it to no more than one byte in 254. The algorithm is
computationally inexpensive and its average overhead is low compared to other
unambiguous framing algorithms.
When packetized data is sent over any serial medium, a protocol is needed by
which to demarcate packet boundaries. This is done by using a framing marker,
which is a special bit-sequence or character value that indicates where the
boundaries between packets fall. Data stuffing is the process that transforms the
packet data before transmission to eliminate all occurrences of the framing marker,
so that when the receiver detects a marker, it can be certain that the marker
indicates a boundary between packets.
DRAWBACKS
In byte stuffing, a special byte is added to the data part; this is known as the
escape character (ESC). The escape character has a predefined pattern.
The receiver removes the escape character and keeps the data part. This causes
another problem: the text may contain the escape character as part of the data.
To deal with this, an escape character occurring in the data is prefixed with
another escape character.
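A minimal sketch of PPP-style byte stuffing (assumed values FLAG 0x7E and ESC 0x7D; this variant additionally XORs the escaped byte with 0x20, so neither reserved value ever appears raw in the framed output):

```python
FLAG = 0x7E   # frame delimiter (assumed, HDLC/PPP-style)
ESC = 0x7D    # escape character (assumed, HDLC/PPP-style)

def stuff(data: bytes) -> bytes:
    """Escape every FLAG or ESC byte: emit ESC, then the byte XOR 0x20."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # reserved byte becomes two bytes
        else:
            out.append(b)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    """Reverse the transformation: ESC means 'un-XOR the next byte'."""
    out = bytearray()
    it = iter(data)
    for b in it:
        if b == ESC:
            b = next(it) ^ 0x20             # recover the original byte
        out.append(b)
    return bytes(out)

payload = bytes([0x01, FLAG, 0x02, ESC, 0x03])
framed = stuff(payload)
print(framed.hex())                 # 017d5e027d5d03: both reserved bytes escaped
assert unstuff(framed) == payload   # round trip recovers the original data
```

The round-trip property is what lets the receiver treat any raw FLAG it sees as a true frame boundary; the extra bytes emitted by stuff() are the overhead discussed above.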
Q.7 Explain in detail the design issues of data link layer. Explain the
services provided by data link layer to network layer.
Ans. If we don't follow the OSI reference model as gospel, we can imagine
providing several alternative service semantics:
Reliable Delivery:
Frames are delivered to the receiver reliably and in the same order as
generated by the sender.
Connection state keeps track of sending order and which frames require
retransmission. For example, receiver state includes which frames have been
received, which ones have not, etc.
Best Effort:
The receiver does not return acknowledgments to the sender, so the sender
has no way of knowing if a frame has been successfully delivered.
When would such a service be appropriate?
1. When higher layers can recover from errors with little loss in
performance. That is, when errors are so infrequent that there is little
to be gained by the data link layer performing the recovery. It is just
as easy to have higher layers deal with an occasional lost packet.
2. For real-time applications requiring "better never than late"
semantics. Old data may be worse than no data. For example, should an
airplane bother calculating the proper wing flap angle using old altitude
and wind speed data when newer data is already available?
Acknowledged Delivery:
The receiver returns an acknowledgment frame to the sender indicating that
a data frame was properly received. This sits somewhere between the other
two in that the sender keeps connection state, but may not necessarily
retransmit unacknowledged frames. Likewise, the receiver may hand
received packets to higher layers in the order in which they arrive, regardless
of the original sending order.
Typically, each frame is assigned a unique sequence number, which the
receiver returns in an acknowledgment frame to indicate which frame the
ACK refers to. The sender must retransmit unacknowledged (e.g., lost or
damaged) frames.
The three major types of services offered by data link layer are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection oriented service.
1. Unacknowledged Connectionless Service
(a) In this type of service, the source machine sends frames to the destination
machine but the destination machine does not send back any acknowledgement of
these frames. Hence it is called unacknowledged service.
(b) There is no connection establishment between source and destination machine
before data transfer or release after data transfer. Therefore it is known as
connectionless service.
(c) There is no error control i.e. if any frame is lost due to noise on the line, no
attempt is made to recover it.
(d) This type of service is used when error rate is low.
(e) It is suitable for real time traffic such as speech.
2. Acknowledged Connectionless Service
(a) In this service, neither the connection is established before the data transfer nor
is it released after the data transfer between source and destination.
(b) When the sender sends the data frames to destination, destination machine
sends back the acknowledgement of these frames.
(c) This type of service provides additional reliability because the source machine
retransmits the frames if it does not receive the acknowledgement of these frames
within the specified time.
(d) This service is useful over unreliable channels, such as wireless systems.
3. Acknowledged Connection - Oriented Service
(a) This service is the most sophisticated service provided by data link layer to
network layer.
(b) It is connection-oriented. This means that a connection is established between
source & destination before any data is transferred.
(c) In this service, data transfer has three distinct phases:
(i) Connection establishment
(ii) Actual data transfer
(iii) Connection release
(d) Here, each frame being transmitted from source to destination is given a specific
number and is acknowledged by the destination machine.
(e) All the frames are received by the destination in the same order in which they
are sent by the source.
Q.8. Which are the four generations of Ethernet? Also compare pure ALOHA
and slotted ALOHA.
Ans. The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center
(PARC). Since then, it has gone through four generations: Standard Ethernet
(10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit
Ethernet (10 Gbps).
Standard Ethernet
All standard implementations use digital signaling (baseband) at 10 Mbps. It comes
in four types: 10Base5 (thick coax), 10Base2 (thin coax), 10Base-T (twisted pair),
and 10Base-F (fiber).
Fast Ethernet
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fiber
Channel. IEEE created Fast Ethernet under the name 802.3u. Fast Ethernet is
backward-compatible with Standard Ethernet, but it can transmit data 10 times
faster at a rate of 100 Mbps.
It is characterized in three parts:
- 100Base-TX
- 100Base-FX
- 100Base-T4
ALOHA
ALOHA net, also known as the ALOHA System, or simply ALOHA, was a pioneering
computer networking system developed at the University of Hawaii. ALOHAnet
became operational in June, 1971, providing the first public demonstration of a
wireless packet data network. ALOHA originally stood for Additive Links On-line
Hawaii Area.
Pure ALOHA
The first version of the protocol (now called "Pure ALOHA", and the one
implemented in ALOHAnet) was quite simple:
1. If you have data to send, send it.
2. If, while transmitting, the message collides with another transmission,
try resending later.
Note that the first step implies that Pure ALOHA does not check whether the
channel is busy before transmitting. Since collisions can occur and data may have
to be sent again, ALOHA cannot use 100% of the capacity of the communications
channel.
FIG. Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which have
collided.
Slotted ALOHA
An improvement to the original ALOHA protocol was "Slotted ALOHA", which
introduced discrete timeslots and increased the maximum throughput. A station
can send only at the beginning of a timeslot, and thus collisions are reduced. In this
case, only transmission attempts within one frame-time, and not two consecutive
frame-times, need to be considered, since collisions can only occur within a timeslot.
FIG. Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are
in the same slots.
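The throughput difference between the two schemes follows from the standard formulas S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load in frames per frame-time:

```python
import math

def pure_aloha(G: float) -> float:
    """Throughput of pure ALOHA: a frame must avoid two frame-times of traffic."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput of slotted ALOHA: the vulnerable period is only one slot."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5 (about 18.4% utilization),
# slotted ALOHA at G = 1.0 (about 36.8%), i.e. double the maximum throughput.
print(round(pure_aloha(0.5), 4))     # 0.1839
print(round(slotted_aloha(1.0), 4))  # 0.3679
```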
Q9. Write a short note on 1) piggybacking 2) stop-and-wait protocol 3) HDLC 4)
point-to-point protocol.
Ans. 1) Piggybacking
In two way communication, whenever a data frame is received, the receiver waits
and does not send the control frame (acknowledgement or ACK) back to the sender
immediately.
The receiver waits until its network layer passes it the next data packet. The
delayed acknowledgement is then attached to this outgoing data frame.
This technique of temporarily delaying the acknowledgement so that it can be
hooked with next outgoing data frame is known as piggybacking.
Working Principle
Piggybacking data is a bit different from Sliding Window Protocol used in the OSI
model. In the data frame itself, we incorporate one additional field for
acknowledgment (called ACK).
Whenever party A wants to send data to party B, it will send the data along with
this ACK field. Considering the sliding window here of size 8 bits, if A has received
frames up to 5 correctly (from B), and wants to send frames starting from frame 6,
it will send ACK6 with the data.
Three rules govern the piggybacking data transfer.
If station A wants to send both data and an acknowledgment, it keeps both
fields there.
If station A wants to send just the acknowledgment, then a separate ACK
frame is sent.
If station A wants to send just the data, then the last acknowledgment field
is sent along with the data. Station B simply ignores this duplicate ACK frame
upon receiving.
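The extra ACK field can be pictured with a tiny frame sketch (the field names are invented for illustration and do not come from any real frame format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    seq: int                 # sequence number of this data frame
    ack: Optional[int]       # piggybacked acknowledgement, or None
    payload: bytes

# A has received B's frames up to 5 and now sends its own frame 6,
# carrying ACK 6 on the back of the data instead of in a separate frame.
outgoing = Frame(seq=6, ack=6, payload=b"data from A")
print(outgoing.ack)   # 6
```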
3) HDLC
The current standard for HDLC is ISO 13239, which replaces the earlier HDLC standards.
HDLC provides both connection-oriented and connectionless service.
HDLC can be used for point to multipoint connections, but is now used almost
exclusively to connect one device to another, using what is known as Asynchronous
Balanced Mode (ABM). The original master-slave modes Normal Response
Mode (NRM) and Asynchronous Response Mode (ARM) are rarely used.
The contents of an HDLC frame are shown in the following table:

Flag    Address         Control       Information                  FCS            Flag
8 bits  8 or more bits  8 or 16 bits  variable length, n * 8 bits  16 or 32 bits  8 bits
Note that the end flag of one frame may be (but does not have to be) the beginning
(start) flag of the next frame.
Data is usually sent in multiples of 8 bits, but only some variants require this; others
theoretically permit data alignments on other than 8-bit boundaries.
The frame check sequence (FCS) is a 16-bit CRC-CCITT or a 32-bit CRC-32 computed
over the Address, Control, and Information fields. It provides a means by which the
receiver can detect errors that may have been induced during the transmission of
the frame, such as lost bits, flipped bits, and extraneous bits. However, given that
the algorithms used to calculate the FCS are such that the probability of certain
types of transmission errors going undetected increases with the length of the data
being checked for errors, the FCS can implicitly limit the practical size of the frame.
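As a sketch of this check, here is a bit-by-bit CRC-16 using the CCITT polynomial 0x1021 with initial value 0xFFFF (one common CRC-CCITT variant; the exact HDLC FCS additionally reflects bits and complements the result, which is omitted here for clarity):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-by-bit CRC-16 over polynomial x^16 + x^12 + x^5 + 1 (0x1021)."""
    for byte in data:
        crc ^= byte << 8                      # fold the next byte into the register
        for _ in range(8):
            if crc & 0x8000:                  # top bit set: shift and apply the poly
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"123456789"
fcs = crc16_ccitt(frame)              # sender computes this and appends it
print(hex(fcs))                       # 0x29b1, the known check value for this variant
assert crc16_ccitt(frame) == fcs      # receiver recomputes over the same bytes
```

If the receiver's recomputed value differs from the appended FCS, the frame was corrupted in transit, which is the trigger for the negative-acknowledge or timeout behavior described next.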
If the receiver's calculation of the FCS does not match that of the sender's,
indicating that the frame contains errors, the receiver can either send a
negative acknowledge packet to the sender, or send nothing. After either receiving
a negative acknowledge packet or timing out waiting for a positive acknowledge
packet, the sender can retransmit the failed frame.
The FCS was implemented because many early communication links had a relatively
high bit error rate, and the FCS could readily be computed by simple, fast circuitry
or software. More effective forward error correction schemes are now widely used
by other protocols.
4) point to point protocol
In computer networking, Point-to-Point Protocol (PPP) is a data link (layer 2)
protocol used to establish a direct connection between two nodes. It can provide
connection authentication, transmission encryption (using ECP, RFC 1968), and
compression.
PPP is used over many types of physical networks including serial cable, phone line,
trunk line, cellular telephone, specialized radio links, and fiber optic links such as
SONET. PPP is also used over Internet access connections. Internet service
providers (ISPs) have used PPP for customer dial-up access to the Internet, since IP
packets cannot be transmitted over a modem line on their own, without some data
link protocol. Two derivatives of PPP, Point-to-Point Protocol over Ethernet
(PPPoE) and Point-to-Point Protocol over ATM (PPPoA), are used most commonly
by Internet Service Providers (ISPs) to establish a Digital Subscriber Line (DSL)
Internet service connection with customers.
PPP is commonly used as a data link layer protocol for connection over synchronous
and asynchronous circuits, where it has largely superseded the older Serial Line
Internet Protocol (SLIP) and telephone company mandated standards (such as Link
Access Protocol, Balanced (LAPB) in the X.25 protocol suite). The only requirement
for PPP is that the circuit provided be duplex. PPP was designed to work with
numerous network layer protocols, including Internet Protocol (IP), TRILL, Novell's
Internetwork Packet Exchange (IPX), NBF, DECnet and AppleTalk. Like SLIP, this is a
full Internet connection over telephone lines via modem. It is more reliable than
SLIP because it double-checks to make sure that Internet packets arrive intact, and
it resends any damaged packets.
Q10. Explain CSMA/CD and its uses. Why do we prefer CSMA over ALOHA?
Explain in detail.
Ans. Carrier sense multiple access with collision detection (CSMA/CD) is a media
access control method used most notably in early Ethernet technology for local
area networking. It uses a carrier sensing scheme in which a transmitting station
detects collisions by sensing transmissions from other stations while transmitting a
frame. When this collision condition is detected, the station stops transmitting that
frame, transmits a jam signal, and then waits for a random time interval before
trying to resend the frame.
CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD
is used to improve CSMA performance by terminating transmission as soon as a
collision is detected, thus shortening the time required before a retry can be
attempted.
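After a collision, stations typically wait according to truncated binary exponential backoff; this sketch assumes the classic 10 Mbps Ethernet parameters (512-bit slot time, exponent capped at 10):

```python
import random

SLOT_BITS = 512   # slot time for classic 10 Mbps Ethernet, in bit times

def backoff_slots(collisions: int) -> int:
    """After the c-th collision, wait a uniform number of slots in
    [0, 2^min(c, 10) - 1]: the range doubles with each collision."""
    k = min(collisions, 10)              # the exponent is capped at 10
    return random.randint(0, 2 ** k - 1)

random.seed(1)                           # fixed seed just to make the demo repeatable
for c in (1, 2, 3, 16):
    slots = backoff_slots(c)
    print(f"collision {c}: wait {slots} slots ({slots * SLOT_BITS} bit times)")
```

Doubling the range on each collision is what adapts the retry rate to the (unknown) number of contending stations.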
USES
CSMA/CD was used in now-obsolete shared media Ethernet variants (10BASE5,
10BASE2) and in the early versions of twisted-pair Ethernet which used repeater
hubs. Modern Ethernet networks, built with switches and full-duplex connections,
no longer need to utilize CSMA/CD because each Ethernet segment, or collision
domain, is now isolated. CSMA/CD is still supported for backwards compatibility
and for half-duplex connections. IEEE Std 802.3, which defines all Ethernet variants,
for historical reasons still bears the title "Carrier sense multiple access with collision
detection (CSMA/CD) access method and physical layer specifications".
ALOHA and CSMA
Main difference between Aloha and CSMA is that Aloha protocol does not try to
detect whether the channel is free before transmitting but the CSMA protocol
verifies that the channel is free before transmitting data. Thus CSMA protocol
avoids clashes before they happen while Aloha protocol detects that a channel is
busy only after a clash happens. Due to this, CSMA is more suitable for networks
such as Ethernet where multiple sources and destinations use the same channel.
Aloha is a simple communication scheme originally developed by the University of
Hawaii to be used for satellite communication. In the Aloha method, each source
in a communication network transmits data every time there is a frame to be
transmitted. If the frame successfully reaches the destination, the next frame is
transmitted. If the frame is not received at the destination, it will be transmitted
again. CSMA (Carrier Sense Multiple Access) is a Media Access Control (MAC)
protocol, where a node transmits data on a shared transmission media only after
verifying the absence of other traffic.
Q11. Explain different aspects of medium access in IEEE 802.11. Do they differ in
802.11b and 802.11g? Why does the 802.11 frame format have four address fields?
Ans IEEE 802.11 is a set of media access control (MAC) and physical layer (PHY)
specifications for implementing wireless local area network (WLAN) computer
communication in the 900 MHz and 2.4, 3.6, 5, and 60 GHz frequency bands. They
are created and maintained by the Institute of Electrical and Electronics Engineers
(IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard
was released in 1997, and has had subsequent amendments. The standard and
amendments provide the basis for wireless network products using the Wi-Fi
brand. While each amendment is officially revoked when it is incorporated in the
latest version of the standard, the corporate world tends to market to the revisions
because they concisely denote capabilities of their products. As a result, in the
marketplace, each revision tends to become its own standard.
802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States
under Part 15 of the U.S. Federal Communications Commission Rules and
Regulations. Because of this choice of frequency band, 802.11b and g equipment
may occasionally suffer interference from microwave ovens, cordless telephones,
and Bluetooth devices. 802.11b and 802.11g control their interference and
susceptibility to interference by using direct-sequence spread spectrum (DSSS) and
orthogonal frequency-division multiplexing (OFDM) signaling methods,
respectively. 802.11a uses the 5 GHz U-NII band, which, for much of the world,
offers at least 23 non-overlapping channels, rather than the 2.4 GHz ISM frequency
band, which offers only three non-overlapping channels (other adjacent channels
overlap; see the list of WLAN channels). Better or worse performance with higher or
lower frequencies (channels) may be realized, depending on the environment.
802.11n can use either the 2.4 GHz or the 5 GHz band; 802.11ac uses only the 5
GHz band.
The segment of the radio frequency spectrum used by 802.11 varies between
countries. In the US, 802.11a and 802.11g devices may be operated without a
license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used
by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur
radio band. Licensed amateur radio operators may operate 802.11b/g devices
under Part 97 of the FCC Rules and Regulations, allowing increased power output
but not commercial content or encryption.
An 802.11 frame can have up to four address fields. Each field can carry a MAC
address. Address 1 is the receiver, Address 2 is the transmitter, and Address 3 is
used for filtering purposes by the receiver. A fourth address field is needed
because, in a wireless distribution system, a frame relayed between two access
points must carry the addresses of both wireless hops as well as the original
source and destination.
The remaining fields of the header are:
The Sequence Control field is a two-byte section used for identifying message
order as well as eliminating duplicate frames. The first 4 bits are used for the
fragmentation number, and the last 12 bits are the sequence number.
An optional two-byte Quality of Service control field that was added
with 802.11e.
The payload or frame body field is variable in size, from 0 to 2304 bytes plus any
overhead from security encapsulation, and contains information from higher
layers.
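The Sequence Control split described above can be sketched as bit masking; the snippet assumes the 802.11 layout in which the fragment number occupies the 4 low-order bits and the sequence number the 12 high-order bits:

```python
def parse_sequence_control(value: int):
    """Split a 16-bit Sequence Control value into its two subfields."""
    fragment = value & 0x000F            # low 4 bits: fragment number
    sequence = (value >> 4) & 0x0FFF     # high 12 bits: sequence number
    return fragment, sequence

# Sequence number 5, fragment 2, packed the same way the parser expects.
frag, seq = parse_sequence_control((5 << 4) | 2)
print(frag, seq)   # 2 5
```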
The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11
frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for
integrity check of retrieved frames. As frames are about to be sent, the FCS is
calculated and appended. When a station receives a frame, it can calculate the
FCS of the frame and compare it to the one received. If they match, it is assumed
that the frame was not distorted during transmission.
Q12. Write a short note on: 1) Adaptive tree walk protocol 2) Limited contention
protocol.
Ans. LIMITED CONTENTION PROTOCOLS - Collision-based protocols (ALOHA,
CSMA/CD) are good when the network load is low. Collision-free protocols (bit
map, binary countdown) are good when load is high. Limited contention
protocols behave like the ALOHA scheme under light load & like the
bitmap scheme under heavy load.
For small numbers of stations, the chances of success are good, but as soon as the
number of stations reaches even five, the probability of success has dropped sharply.
From the figure, the probability that some station will acquire the channel can be
increased only by decreasing the amount of competition.
The limited contention protocols do just that by:
1. Dividing the stations into (not necessarily disjoint) groups.
2. Only the members of group 0 are permitted to compete for slot 0.
3. If one of them succeeds, it acquires the channel and transmits its frame.
4. If there is a collision, the members of group 1 contend for slot 1, and so on.
ADAPTIVE TREE WALK PROTOCOL
There are different methods of singulation, but the most common is "tree walking",
which involves asking all tags with a serial number that starts with either a 1 or 0
to respond. If more than one responds, the reader might ask for all tags with a serial
number that starts with 01 to respond, and then 010. It keeps doing this until it
finds the tag it is looking for. Note that if the reader has some idea of which tags it
wishes to interrogate, it can considerably optimize the search order. For example,
with some designs of tags, if a reader already suspects certain tags to be present,
then those tags can be instructed to remain silent, the tree can be walked without
interference from them, and finally they can be queried individually.
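The walk described above can be sketched recursively over tag IDs written as bit strings (a toy model: a real reader issues queries over the air and listens for collisions rather than filtering a Python list):

```python
def tree_walk(tags, prefix=""):
    """Return tag IDs in the order the walk singulates them."""
    matches = [t for t in tags if t.startswith(prefix)]
    if not matches:
        return []                    # silence: prune this branch of the tree
    if len(matches) == 1:
        return matches               # exactly one reply: tag singulated
    # Collision: extend the prefix with 0 and 1 and walk each subtree.
    return tree_walk(tags, prefix + "0") + tree_walk(tags, prefix + "1")

tags = ["0101", "0110", "1100"]
print(tree_walk(tags))   # ['0101', '0110', '1100']
```

Each collision splits the remaining population in half, so the number of queries grows with the tree depth rather than with the number of possible IDs.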
Q13. What do you mean by congestion? Explain the difference between flow
control and congestion control.
Ans. An important issue in a packet-switching network is congestion. Congestion
in the network may occur if the load on the network (the number of packets sent
to the network) is greater than the capacity of the network. Congestion happens in
any system that involves waiting. For example, congestion happens on a freeway
because any abnormality in flow, such as an accident during rush hour, creates
blockage.
Congestion in a network or internetwork occurs because routers and switches have
queues (buffers) that hold the packets before and after processing. A router, for
example, has an input queue for each interface. When a packet arrives at the
incoming interface, it undergoes three steps before departing.
FLOW CONTROL & CONGESTION CONTROL
Flow control is a mechanism used in computer networks to control the flow of data
between a sender and a receiver, such that a slow receiver will not be overrun by a
fast sender. Flow control provides methods for the receiver to control the speed of
transmission such that the receiver can handle the data transmitted by the sender.
Congestion control is a mechanism that controls data flow when congestion
actually occurs. It controls data entering into a network such that the network can
handle the traffic within the network.
Although flow control and congestion control are both network traffic control
mechanisms used in computer networks, they have key differences. Flow
control is an end to end mechanism that controls the traffic between a sender and
a receiver, when a fast sender is transmitting data to a slow receiver. On the other
hand, congestion control is a mechanism that is used by a network to control
congestion in the network. Congestion control prevents loss of packets and delay
caused due to congestion in the network. Congestion control can be seen as a
mechanism that makes sure that an entire network can handle the traffic that is
coming to the network. But, flow control refers to mechanisms used to handle the
transmission between a particular sender and a receiver.
Q.14.Difference between the static and dynamic routing with their pros and
cons. Give the example of some routing protocols which are used in both types
of routing.
Ans. Static routing is a form of routing that occurs when a router uses a manually
configured routing entry, rather than information from a dynamic routing protocol.
In many cases, static routes are manually configured by a network administrator by
adding entries into a routing table, though this may not always be the case.
Unlike dynamic routing, static routes are fixed and do not change if the network is
changed or reconfigured. Static routing and dynamic routing are not mutually
exclusive. Both dynamic routing and static routing are usually used on a router to
maximize routing efficiency and to provide backups in the event that dynamic
routing information fails to be exchanged. Static routing can also be used in stub
networks, or to provide a gateway of last resort.
Dynamic routing, also called adaptive routing, describes the capability of a system,
through which routes are characterized by their destination, to alter the path that
the route takes through the system in response to a change in conditions. The
adaptation is intended to allow as many routes as possible to remain valid (that is,
have destinations that can be reached) in response to the change.
People using a transport system can display dynamic routing. For example, if a local
railway station is closed, people can alight from a train at a different station and
use another method, such as a bus, to reach their destination. Another example of
dynamic routing can be seen within financial markets. For example, ASOR or
Adaptive Smart Order Router (developed by Quod Financial), takes routing
decisions dynamically and based on real-time market events.
The term is commonly used in data networking to describe the capability of a
network to 'route around' damage, such as loss of a node or a connection between
nodes, so long as other path choices are available. There are several protocols used
to achieve this:
RIP
OSPF
IS-IS
IGRP/EIGRP
Systems that do not implement dynamic routing are described as using static
routing, where routes through a network are described by fixed paths (statically).
A change, such as the loss of a node or of a connection between nodes, is not
compensated for. This means that anything that wishes to take an affected path
must either wait for the failure to be repaired before restarting its journey,
or fail to reach its destination and give up the journey.
Q16. Define socket. List various types of sockets. What are the steps used for
socket programming? Explain in detail- 1) stream socket 2) raw socket 3)
datagram socket.
Ans. A network socket is an endpoint of a connection in a computer network. In
Internet Protocol (IP) networks, these are often called Internet sockets. It is a
handle (abstract reference) that a program can pass to the networking application
programming interface (API) to use the connection for receiving and sending data.
Sockets are often represented internally as integers.
A socket API is an application programming interface, usually provided by the
operating system, which allows application programs to control and use network
sockets. Internet socket APIs are usually based on the Berkeley sockets standard.
In the Berkeley sockets standard, sockets are a form of file descriptor (a file handle),
due to the UNIX philosophy that "everything is a file", and the analogies between
sockets and files. Both have functions to read, write, open, and close. In practice
the differences mean the analogy is strained, and one instead uses different
interfaces (send and receive) on a socket. In inter-process communication, each
end generally has its own socket, but these may use different APIs: they are
abstracted by the network protocol.
Socket programming
In IETF Request for Comments, Internet Standards, in many textbooks, as well as in
this article, the term socket refers to an entity that is uniquely identified by the
socket number. In other textbooks, the term socket refers to a local socket
address, i.e. a "combination of an IP address and a port number". In the original
definition of socket given in RFC 147, as it was related to the ARPA network in 1971,
"the socket is specified as a 32 bit number with even sockets identifying receiving
sockets and odd sockets identifying sending sockets." Today, however, socket
communications are bidirectional.
On Unix-like operating systems and Microsoft Windows, the command line tools
netstat and ss are used to list established sockets and related information.
This example, modeled according to the Berkeley socket interface, sends the string
"Hello, world!" via TCP to port 80 of the host with address 1.2.3.4. It illustrates the
creation of a socket (getSocket), connecting it to the remote host, sending the
string, and finally closing the socket:
Socket socket = getSocket(type = "TCP")
connect(socket, address = "1.2.3.4", port = "80")
send(socket, "Hello, world!")
close(socket)
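A runnable equivalent of the pseudocode above, using Python's standard socket module, is sketched below. A loopback echo server stands in for the remote host, since the address 1.2.3.4 in the text is only illustrative; the sketch also walks through the usual socket-programming steps: socket(), bind(), listen(), accept() on the server side, and socket(), connect(), send(), recv(), close() on the client side.

```python
import socket
import threading

# Server side: socket() -> bind() -> listen() -> accept() -> recv/send -> close()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # stream (TCP) socket
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _addr = server.accept()
    conn.sendall(conn.recv(1024))      # echo the payload back
    conn.close()

threading.Thread(target=serve_once).start()

# Client side: socket() -> connect() -> send() -> recv() -> close(),
# mirroring getSocket/connect/send/close in the pseudocode.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"Hello, world!")
reply = client.recv(1024)
client.close()
server.close()

print(reply)    # b'Hello, world!'
```

The same sequence of calls underlies the Berkeley sockets API in C; Python merely wraps those calls in methods on a socket object.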
Several types of Internet socket are available:
Datagram sockets
Stream sockets
Raw sockets
Berkeley sockets
XML Socket
Other socket types are implemented over other transport protocols, such as
Systems Network Architecture (SNA).
Stream socket - In computer operating systems, a stream socket is a type of
inter-process communications socket or network socket which provides a
connection-oriented, sequenced, and unique flow of data without record
boundaries, with well-defined mechanisms for creating and destroying
connections and for detecting errors. A stream socket transmits data reliably,
in order, and with out-of-band capabilities. On the Internet, stream sockets
are typically implemented on top of TCP so that applications can run across any
networks using the TCP/IP protocol. SCTP may also be used for stream sockets.
Raw socket - In computer networking, a raw socket is an Internet socket that
allows direct sending and receiving of Internet Protocol packets without any
protocol-specific transport-layer formatting. Raw sockets are used in
security-related applications such as nmap. One possible use case for raw
sockets is the implementation of new transport-layer protocols in user space.
Raw sockets are typically available in network equipment and are used by
routing protocols such as the Internet Group Management Protocol (IGMPv4) and
Open Shortest Path First (OSPF), and by the Internet Control Message Protocol
(ICMP); the ping utility, for example, sends ICMP echo requests and receives
ICMP echo replies over a raw socket.
Datagram socket - In computer operating systems, a datagram socket is a type of
inter-process communications socket or network socket which provides a
connectionless point for sending or receiving data packets. Each packet sent or
received on a datagram socket is individually addressed and routed. Order and
reliability are not guaranteed with datagram sockets, so multiple packets sent
from one machine or process to another may arrive in any order or might not
arrive at all. The sending of UDP broadcasts on a network is always enabled on
a datagram socket. In order to receive broadcast packets, a datagram socket
should be bound to the wildcard address. Broadcast packets may also be received
when a datagram socket is bound to a more specific address.
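The connectionless nature of datagram sockets can be seen in a minimal loopback sketch with Python's socket module: no connect() or accept() is needed, and each sendto()/recvfrom() carries exactly one individually addressed packet. (Over loopback, delivery happens to be reliable and in order; across a real network neither is guaranteed, as noted above.)

```python
import socket

# Datagram (UDP) sockets: no connection is established; each packet is
# individually addressed via sendto() and received whole via recvfrom().
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS choose a port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"packet 1", addr)       # no handshake before sending
sender.sendto(b"packet 2", addr)

# Each recvfrom() returns exactly one datagram plus the sender's address.
data1, _ = receiver.recvfrom(1024)
data2, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()

print(data1, data2)
```

Contrast this with the stream-socket example earlier: there is no accept() on the receiving side and no connection to tear down afterwards.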
Connection release
The connection termination or connection release phase uses a four-way
handshake, with each side of the connection terminating independently. When an
endpoint wishes to stop its half of the connection, it transmits a FIN packet, which
the other end acknowledges with an ACK. Therefore, a typical tear-down requires
a pair of FIN and ACK segments from each TCP endpoint. After the side that sent
the first FIN has responded with the final ACK, it waits for a timeout before finally
closing the connection, during which time the local port is unavailable for new
connections; this prevents confusion due to delayed packets being delivered during
subsequent connections.
A connection can be "half-open", in which case one side has terminated its end, but
the other has not. The side that has terminated can no longer send any data into
the connection, but the other side can. The terminating side should continue
reading the data until the other side terminates as well.
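The half-open state described above can be observed directly: after shutdown(SHUT_WR), one side has sent its FIN and can no longer transmit, yet it can still receive data the peer continues to send. A minimal loopback sketch with Python's socket module (the payloads are made up for illustration):

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def peer():
    conn, _ = server.accept()
    # Read until the other side's FIN (recv returns b"" at end-of-stream)...
    while conn.recv(1024):
        pass
    # ...then keep sending: our half of the connection is still open.
    conn.sendall(b"still talking")
    conn.close()

threading.Thread(target=peer).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"done sending")
client.shutdown(socket.SHUT_WR)        # send FIN: the client half terminates
late_data = client.recv(1024)          # but the client can still read
client.close()
server.close()

print(late_data)    # b'still talking'
```

This is exactly the advice in the text: the terminating side should keep reading until the other side terminates as well.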
It is also possible to terminate the connection by a 3-way handshake, when host A
sends a FIN and host B replies with a FIN & ACK (merely combines 2 steps into one)
and host A replies with an ACK.