
DATA COMMUNICATION & COMPUTER NETWORKS Introduction

Data communication is all about transmitting information from one device to another in the network. All the controls and procedures for communicating information are handled by communication protocols. At the most basic level, information is converted into signals that can be transmitted across a guided (copper or fiber-optic cable) or unguided (radio transmission) medium. At the highest level, users interact with applications. In between is software that defines and controls how applications take advantage of the underlying network. The data communication system depends on three fundamental properties: 1. Delivery 2. Accuracy 3. Timeliness. Delivery means the system must deliver the message to the destination, where it is received by the proper device or user. Accuracy specifies that the system must deliver the message accurately; the message should not be altered during transmission. Finally, timeliness specifies that the system must deliver the message within a specified period of time. 1. Data Communication system components: A data communication system consists of five components, as shown in Fig 1.1.

Fig 1.1 Components of system The message is the data to be communicated; it consists of text, numbers, pictures, sound, video and so on. The sender is a device that sends the message; it can be a computer, workstation and so on. The receiver receives the message; it can be a computer, workstation or any other device connected to the network. The medium is the actual path over which an electrical signal travels as it moves from one

component to another. A protocol is a set of rules that governs data communication. It represents an agreement between the communicating devices. 2. Network: A network is a set of devices connected by media links. A node can be a computer, printer, or any other device capable of sending and receiving data generated by other nodes on the network. The links connecting the devices are often called communication channels. (Or) A system consisting of connected nodes made to share data, hardware and software. It is important to understand the relationship between the communicating devices and how data is transmitted from one system to another. The following general concepts provide the basis for this relationship. Line configuration specifies the way in which two or more communication devices are attached to a link. The link is the physical communication pathway that transfers the message (data) from one system to another; it can be classified as point-to-point or multipoint. A point-to-point link, as shown in Fig 1.2, provides a dedicated link between two devices in the network. The entire capacity of the channel (medium) is reserved for transmission between those two devices. A real point-to-point connection is a wire that directly connects two systems.

Fig 1.2 Point-to-point link A multipoint line configuration is one in which more than two devices share a single link and the capacity of the channel is shared, as shown in Fig 1.3.

Fig 1.3 Multipoint link

3. Topology Topology refers to the way a network is laid out, either physically or logically. It is a geometric representation of the relationship of all the links and linking devices (usually called nodes) to each other. Common topologies include bus, star, ring, tree and mesh. BUS Topology In a bus configuration, all stations are linked along a single transmission medium. A bus consists of a number of computers in a row connected by a single cable called a trunk, also known as a backbone or segment, as shown in Fig 1.4. At a time only one computer can transmit a packet.

Fig 1.4 Bus topology Computers in a bus topology listen to all the traffic on the network but only accept the packets that are addressed to them. If the communication link or channel fails, the entire network goes down. Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection running between the device and the main cable. A tap is a connector that splices into the main cable to create a contact with the metallic core. ADVANTAGES 1. Reliable in very small networks as well as easy to use. 2. This configuration requires the least amount of cable and is hence inexpensive. 3. It is easy to extend; two cables can easily be joined with a connector, making a longer cable so that more computers can join the network. 4. A repeater can be used to extend a bus configuration. DISADVANTAGES 1. Heavy traffic makes the bus slow. 2. Only one communication channel exists to service all the devices on the network. 3. Electrical signals are weakened at the connections between two cables. 4. Even a small malfunction can stop the entire network from functioning.

STAR Topology In a star topology, all the computers are linked to a main hub, which establishes the connection between stations. It offers centralized resources and management, and the throughput of a star network depends on the capacity of the central hub, as shown in Fig 1.5.

Fig 1.5 Star Topology The number of nodes that can be added is determined by the capacity of the central hub. Failure of the hub causes the entire network to fail. A single cable failure affects only a single station. ADVANTAGES 1. Failure or replacement of individual computers does not affect the network. 2. It is easy to troubleshoot or diagnose network problems. 3. Many types of cables can be used in this configuration. 4. It is easy to add new computers and modify existing ones. DISADVANTAGES 1. Since the hub is responsible for all functions, failure of the hub will disrupt the network. 2. Cabling costs are higher than for a bus or ring configuration. RING Topology In this configuration, all nodes are connected in the form of a ring, as shown in Fig 1.6. Like a circle that has no start and no end, a ring topology needs no terminators. Signals travel in one direction on a ring as they are passed from one computer to the next. Every computer checks the packet for its destination and passes it on as a repeater would. The entire ring network goes down even when one of the nodes fails. Each additional node requires system disruption and reduces performance.

Fig 1.6 Ring Topology ADVANTAGES 1. All nodes are given equal access to the network, as a token is passed around the ring indicating authorization to transmit. 2. In this configuration the network degrades gracefully. DISADVANTAGES 1. Faults in the network cannot be easily detected. 2. Message delay increases as more nodes are attached to the ring. 3. Failure or replacement of computers will disrupt the whole network. TREE Topology A tree topology is a variation of the star, as shown in Fig 1.7. As in a star, nodes in a tree are linked to a central hub that controls the traffic to the network. However, not every device plugs directly into the central hub. The majority of devices connect to a secondary hub that is in turn connected to the central hub.

Fig 1.7 Tree Topology

The central hub in the tree is an active hub; it contains a repeater, which is a hardware device that regenerates the received bit patterns before sending them out. The secondary hubs may be active or passive; a passive hub provides a simple physical connection between the attached devices. The advantages and disadvantages of the tree topology are generally the same as those of the star. The addition of secondary hubs brings two further advantages. First, it allows more devices to be attached to a single central hub and can therefore increase the distance a signal can travel between devices. Second, it allows the network to isolate and prioritize communications from different computers. MESH Topology In this topology, as shown in Fig 1.8, every node is connected to every other node, and the network contains multiple redundant pathways through interconnected networks. If one path fails or is congested, a packet can use a different path to its destination.

Fig 1.8 Mesh Topology ADVANTAGES 1. Minor faults will not disrupt the network. 2. Redundant paths make the communication channel capacity reliable. 3. Faults can be easily identified and diagnosed. DISADVANTAGES 1. Adding or replacing a machine will disrupt the network. 2. Installation cost is high and maintenance cost is very high. 4. Transmission Mode The transmission mode defines the direction of signal flow between two linked devices. There are three types of transmission modes: simplex, half duplex and full duplex. In simplex mode, communication is one-way. Examples of simplex mode are keyboards and traditional monitors. The keyboard

can only introduce input and the monitor can only accept output. In half-duplex mode, each station can both transmit and receive, but not at the same time: when one device is sending, the other can only receive, and vice versa. A walkie-talkie is an example of half-duplex mode. In full-duplex mode, also called duplex, both stations in the network can transmit and receive simultaneously. One common example of duplex mode is the telephone network: when two people are communicating over a telephone line, both can talk and listen at the same time. Computer networks are classified according to their size, complexity and geographical spread. They can be classified into Local Area Network (LAN), Wide Area Network (WAN) and Metropolitan Area Network (MAN). A Local Area Network (LAN) is a collection of networked computers within a distance of approximately one kilometer. The major parameters considered in Local Area Networks are the topology, the transmission media and the speed of transmission. Common topologies such as star, bus and ring are used. The media include twisted-pair cable, fiber-optic cable and coaxial cable. Local Area Networks typically have data transmission speeds of 4 to 16 Mbps. LANs offer computer users many advantages, including shared access to devices and applications, file exchange between connected users, and communication between users via electronic mail and other applications. Local Area Networks can be characterized by the following points:

1. The network spreads over a small area, e.g. a single building or a cluster of
buildings

2. The network runs at high speed, from 10 Mbit/s up to gigabit rates. 3. It is a peer-to-peer network, that is, any device within the network can exchange
data with any other device

4. It is owned by a single organization, which is responsible for its operation


Local Area Networks can be distinguished in the following major areas:

1. The topology of the network: bus or ring

2. The transmission medium used: twisted pair, coaxial cable (baseband, broadband), optical fiber

3. The medium access control technique used: CSMA/CD or token-passing


As the Local Area Network of a company grows and expands to include computers and users in other locations, it becomes a Wide Area Network (WAN).

A Wide Area Network (WAN), as shown in Fig 1.9, is a data communications network that covers a relatively broad geographic area and that often uses transmission facilities provided by common carriers, such as telephone companies.

Fig.1.9 Wide Area Network

A Metropolitan Area Network (MAN) is a backbone network that spans a metropolitan area and may be regulated by local or state authorities. The telephone company, cable services, and other suppliers provide MAN services to companies that need to build networks that span public rights-of-way in metropolitan areas. 5. Protocol & Standards: A protocol is an agreement between the communicating parties on how communication is to proceed. A protocol defines what is communicated, how it is communicated, and when it is communicated. The key elements of a protocol are syntax, semantics and timing. Syntax refers to the structure and format of the data, meaning the order in which they are presented. Semantics specifies the meaning of each section of bits. Finally, timing specifies when data should be sent and how fast they can be sent. Standards play an important role in networking. Without standards, manufacturers of networking products have no common ground on which to build their systems. Interconnecting products from various vendors would be difficult, if not impossible. Standardization can make or break networking products. Vendors want to know there will be some measure of interoperability for their hardware and software. Standards are further classified into de facto and de jure standards.
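As an illustration of syntax and semantics, the following minimal Python sketch packs and parses a hypothetical message layout. The field names and sizes (1-byte sender and receiver addresses, a 2-byte length) are invented for this example, not taken from any real standard.

```python
# A minimal sketch of a protocol's "syntax": a hypothetical message layout
# (1-byte sender address, 1-byte receiver address, 2-byte payload length,
# then the payload). Field names and sizes are illustrative only.
import struct

HEADER_FORMAT = "!BBH"          # network byte order: addr, addr, length
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)

def build_message(sender: int, receiver: int, payload: bytes) -> bytes:
    """Syntax: pack the fields in the agreed order and format."""
    return struct.pack(HEADER_FORMAT, sender, receiver, len(payload)) + payload

def parse_message(raw: bytes):
    """Semantics: interpret each section of bits according to the agreement."""
    sender, receiver, length = struct.unpack(HEADER_FORMAT, raw[:HEADER_SIZE])
    return sender, receiver, raw[HEADER_SIZE:HEADER_SIZE + length]

msg = build_message(0x01, 0x02, b"hello")
print(parse_message(msg))       # (1, 2, b'hello')
```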

Open System Interconnection (OSI) Model


The Open System Interconnection (OSI) Model is a layered framework for the design of network systems that allows for communication across all types of computer systems, as shown in Fig 1.10. 1. An open system is a model that allows any two different computers or systems to communicate regardless of their underlying architecture. 2. The OSI model is not a protocol. 3. It is a model for understanding and designing a network architecture that is flexible, robust and interoperable. 4. The OSI model acts as a baseline for creating and comparing networking protocols. 5. The goal of this model is to break down the task of data communication into simple steps; these steps are called layers. The OSI model consists of seven distinct layers and each layer has certain responsibilities. The benefits of the OSI model are as follows: 1. Any hardware or software that meets the OSI standard will be able to communicate with any other hardware or software that also meets the standard. 2. Hardware or software from any manufacturer will work together. 3. The protocols for OSI are defined at each stage. 4. Any errors that occur are handled in each layer. 5. The different layers can operate automatically. Data >> Segments >> Packets >> Frames >> Bits (a small sketch of this encapsulation chain is given after the numbered steps below).

1. Upper layers convert and format the information into data and send it to the
Transport Layer.

2. The Transport layer turns the data into segments and adds headers then sends
them to the Network layer.

3. The Network layer receives the segments and converts them into packets and adds
header information (logical addressing) and sends them to the Data Link Layer.

4. The Data Link layer receives the packets and converts them into frames and adds
header information (physical source and destination addresses) and sends the frames to the Physical Layer.

5. The Physical layer receives the frames and converts them into bits to be put on the
network medium.
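The following minimal Python sketch illustrates the encapsulation chain from the steps above; the header strings are hypothetical placeholders, not real protocol headers.

```python
# A minimal sketch of the encapsulation chain
# Data >> Segments >> Packets >> Frames >> Bits.
# The "headers" are placeholder strings, not real protocol fields.

def transport_layer(data: bytes) -> bytes:
    return b"SEG-HDR|" + data          # segment: add a transport header

def network_layer(segment: bytes) -> bytes:
    return b"NET-HDR|" + segment       # packet: add logical addressing

def data_link_layer(packet: bytes) -> bytes:
    return b"DL-HDR|" + packet         # frame: add physical addressing

def physical_layer(frame: bytes) -> str:
    # bits on the wire: represent each byte as 8 binary digits
    return "".join(f"{byte:08b}" for byte in frame)

bits = physical_layer(data_link_layer(network_layer(transport_layer(b"user data"))))
print(bits[:32], "...")
```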

Fig 1.10 OSI Model & layers Between each pair of adjacent layers is an interface. The interface defines which primitive operations and services the lower layer makes available to the upper one. The OSI model consists of seven layers, namely the Physical layer, Data Link layer, Network layer, Transport layer, Session layer, Presentation layer and Application layer. PHYSICAL LAYER This layer determines how bits of data are sent and received along the network's wire. It coordinates the functions required to transmit a bit stream over a physical medium. It deals with the mechanical and electrical specifications of the interface and transmission medium. It also defines the procedures and functions that physical devices and interfaces have to perform for transmission to occur. In summary 1. Transmits the unstructured raw bit stream over a physical medium. 2. Relates the electrical, optical, mechanical and functional interfaces to the cable. 3. Defines how the cable is attached to the network adapter card. 4. Defines data encoding and bit synchronization.

DATALINK LAYER The Data Link layer transforms the physical layer, a raw transmission facility, into a reliable link and is responsible for node-to-node delivery. It makes the physical layer appear error-free to the upper layer (Network layer). The specific responsibilities of the data link layer are framing, physical addressing, flow control, error control and access control. It can be sub-divided into two sub-layers: Medium Access Control (MAC) and Logical Link Control (LLC). In summary 1. Sends data frames from the Network layer to the Physical layer. 2. Packages raw bits into frames for the Network layer at the receiving end. 3. Responsible for providing error-free transmission of frames through the Physical layer. NETWORK LAYER The Network layer is responsible for the source-to-destination delivery of a packet, possibly across multiple networks (links), whereas the data link layer oversees the delivery of the packet between two systems on the same network (link). The Network layer ensures that each packet gets from its point of origin to its final destination. (Or) It is responsible for routing the packet based on its logical address. In summary 1. Responsible for addressing messages and translating logical addresses and names into physical addresses. 2. Determines the route from the source to the destination computer. 3. Manages traffic such as packet switching, routing and controlling the congestion of data. TRANSPORT LAYER The Transport layer is responsible for source-to-destination (end-to-end) delivery of the entire message, whereas the Network layer oversees end-to-end delivery of individual packets and does not recognize any relationship between those packets. On the other hand, the Transport layer ensures that the whole message arrives intact and in order, and it monitors both error control and flow control at the source-to-destination level. The major responsibilities of this layer include service point addressing, segmentation and reassembly, and connection control. In summary 1. Responsible for packet creation. 2. Provides an additional connection level beneath the Session layer. 3. Ensures that packets are delivered error free, in sequence, with no losses or duplications.

4. Unpacks, reassembles and sends receipt of messages at the receiving end. 5. Provides flow control, error handling, and solves transmission problems. SESSION LAYER This layer is the network dialog controller; it establishes, maintains and synchronizes the interaction between communicating systems. The Session layer protocols set up connections and cover how to establish the connection, how to use the connection and how to break down the connection when a session is completed; they also check for transmission errors. In summary 1. Allows two applications running on different computers to establish, use and end a connection called a session. 2. Performs name recognition and security. 3. Provides synchronization by placing checkpoints in the data stream. 4. Implements dialog control between communicating processes. PRESENTATION LAYER The primary job of this layer is to ensure that the message gets transmitted in a language or syntax that the receiving computer can understand. The major responsibilities of this layer include translation, encryption, decryption and compression. In summary 1. Determines the format used to exchange data among the networked computers. 2. Translates data from the format used by the Application layer into an intermediate format. 3. Responsible for protocol conversion, data translation, data encryption, data compression, character conversion, and graphics expansion. 4. The redirector operates at this level. APPLICATION LAYER The purpose of this layer is to manage communications between applications. It provides user interfaces and support for services such as electronic mail, remote file access and transfer, shared database management and other types of distributed information services. In summary 1. Serves as a window for applications to access network services.

2. Handles general network access, flow control and error recovery.

UNIT III Error Control & Flow Control: Error control ensures the proper sequencing and safe delivery of frames at the
destination; for this, an acknowledgement is sent by the destination. The receiver sends back special control frames bearing a positive or negative acknowledgement about the incoming frames. A positive acknowledgement means the frame arrived safely. A negative acknowledgement means that an error has occurred in the incoming frame and the frame must be retransmitted. A timer is introduced at the sender's and receiver's end. Sequence numbers are assigned to the outgoing frames so that the receiver can distinguish retransmissions from originals. This is one of the most important parts of the data link layer's duties. When the sender is running on a fast or lightly loaded machine and the receiver is on a slow or heavily loaded machine, the transmitter will transmit frames faster than the receiver can accept them.

Even if the transmission is error free, at a certain point the receiver will simply not be
able to handle the frames as they arrive and will start to lose some. To prevent this, a flow control mechanism is incorporated, which includes a feedback mechanism through which the receiver can regulate the transmitter and request retransmission of an incorrect message block.

The most common retransmission technique is known as Automatic Repeat Request


(ARQ). ARQ is an error control method for data transmission in which the receiver detects transmission errors in a message and automatically requests a retransmission from the transmitter. Usually, when the transmitter receives such a request, it retransmits the message until it is either correctly received or the error persists beyond a predetermined number of retransmissions.

Error control in the Data Link Layer (DLL) is based on Automatic Repeat Request (ARQ),
that is, retransmission of data in three cases: 1. Damaged frames 2. Lost frames 3. Lost acknowledgements.

An Automatic Repeat Request (ARQ) protocol is characterized by four functional steps: 1. Transmission of frames. 2. Error checking at the receiver end. 3. Acknowledgement. 4. Retransmission if the acknowledgement is negative (NAK) or if no acknowledgement is received within a stipulated time. ARQ techniques can be categorized into two types, as shown below.

ARQ Techniques: Stop & Wait ARQ, and Sliding Window ARQ (Go-Back-N and Selective Repeat).

Stop and Wait Protocol

The simplest retransmission protocol is the stop-and-wait ARQ protocol. Station A
sends a frame over the communication line or channel and then waits for a positive or negative acknowledgement from station B. If no errors occurred in the incoming message intended for station B, the receiving station sends a positive acknowledgement to station A. The transmitter can now start to send the next frame. If the frame is received at station B with errors, then a negative acknowledgment is sent to station A; in this case station A must retransmit the old packet or message. There is also the possibility that information or acknowledgements can be lost. To avoid this conflict, the sender is equipped with a timer. If no recognizable acknowledgment is received when the timer expires at the end of the timeout interval, the same frame is sent again.
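A minimal sketch of this stop-and-wait behaviour in Python; the "channel" here is a scripted list of replies standing in for a real link, where 'ACK' means the frame arrived safely, 'NAK' means an error was detected, and None means the reply was lost (a timeout).

```python
# A minimal stop-and-wait ARQ sketch: send one frame, wait for a reply,
# retransmit on NAK or timeout, then move on to the next frame.

def send_stop_and_wait(frames, replies, max_retries=5):
    replies = iter(replies)
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries):
            print(f"send frame {seq % 2}: {frame!r}")   # 1-bit sequence number
            reply = next(replies, None)                 # wait for ACK/NAK or time out
            if reply == "ACK":
                break                                   # next frame
            # on NAK or timeout, fall through and retransmit the same frame
        else:
            raise RuntimeError(f"frame {seq} not delivered after {max_retries} tries")

# frame 1 is NAKed once and one ACK for frame 2 is lost, so both are resent
send_stop_and_wait([b"f0", b"f1", b"f2"],
                   replies=["ACK", "NAK", "ACK", None, "ACK"])
```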

Sliding Window ARQ

The purpose of the sliding-window flow control is to efficiently use the network
bandwidth that is wasted in the stop-and-wait technique when the sender waits for an acknowledgment.

The sliding-window technique basically lets the sender transmit multiple frames at
a time to utilize the transmission channel as much as possible. At the same time,

the flow-control technique provides a way for the receiver to indicate to the sender that its buffers are getting full.

The essence of all sliding window protocols is that at any instant of time, the sender
maintains a set of sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver also maintains a receiving window, as shown in the figure below, corresponding to the set of frames it is permitted to accept. The sender's window and the receiver's window need not have the same lower and upper limits or even have the same size. In some protocols they are fixed in size, but in others they can grow or shrink over the course of time as frames are sent and received.
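The following minimal Python sketch shows a sender's sliding window with cumulative acknowledgements; the window size, the scripted ACK stream and the omission of timers and sequence-number wrap-around are all simplifying assumptions.

```python
# A minimal sliding-window sender sketch: at most WINDOW frames may be
# outstanding (sent but not yet acknowledged) at any time.
from collections import deque

WINDOW = 4

def sliding_window_send(frames, ack_stream):
    outstanding = deque()            # sequence numbers sent but not yet ACKed
    next_seq = 0
    acks = iter(ack_stream)
    while next_seq < len(frames) or outstanding:
        # fill the window with new frames
        while next_seq < len(frames) and len(outstanding) < WINDOW:
            print(f"send frame {next_seq}")
            outstanding.append(next_seq)
            next_seq += 1
        # consume one cumulative acknowledgement, sliding the window forward
        acked = next(acks)
        while outstanding and outstanding[0] <= acked:
            outstanding.popleft()

# each ACK acknowledges everything up to and including the given number
sliding_window_send([b"f"] * 8, ack_stream=[1, 3, 5, 7])
```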

Protocol using Go back N ARQ When the receiving station detects an error in a frame, as shown in the figure below, it sends a negative acknowledgment (NAK) to the sender station for that frame. All further incoming frames at the receiver are then discarded until the frame in error is correctly received. Thus, when the sender station receives the negative acknowledgment (NAK), it must retransmit that frame plus all succeeding frames. Hence the name go-back-N: the last N previously transmitted frames must be retransmitted when an error occurs.
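A minimal sketch of the receiver side of go-back-N; a damaged frame is modelled by a simple flag, and in-order delivery is assumed, so any frame after the error is discarded.

```python
# A minimal go-back-N receiver sketch: frames must arrive in order, so
# anything after a missing or damaged frame is discarded and the sender
# must go back and resend from the error onwards.

def go_back_n_receiver(arriving_frames):
    expected = 0
    delivered = []
    for seq, ok in arriving_frames:          # ok=False models a damaged frame
        if seq == expected and ok:
            delivered.append(seq)
            expected += 1                    # a cumulative ACK would be sent here
        else:
            print(f"discard frame {seq}, NAK {expected}")  # ask sender to go back
    return delivered

# frame 2 arrives damaged, so frames 3 and 4 are discarded even though they are fine
print(go_back_n_receiver([(0, True), (1, True), (2, False), (3, True), (4, True)]))
# -> [0, 1]; the sender must retransmit frames 2, 3 and 4
```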

Protocol using Selective Repeat

In contrast to the go-back-N method, the selective repeat protocol only needs to
retransmit the frame that was negatively acknowledged or for which the timer has expired. Hence the receiver requires storage buffers to contain the out-of-order frames until the frame in error is correctly received.

In addition to that the receiver must also have the appropriate logic circuitry needed
for reinserting the frames in the correct order. A selective repeat ARQ system differs from go-back-N ARQ system in the following ways:

The receiving system must contain sorting logic to enable it to reorder frames received out of sequence. It must also be able to store frames received after a NAK has been sent until the damaged frame has been replaced.

The sending system must contain a searching mechanism that allows it to find and select only the requested frame for retransmission.

A buffer in the receiver must keep all previously received frames on hold until all retransmissions have been sorted and any duplicate frames have been identified and discarded.

To aid selectivity, ACK numbers, like NAK numbers, must refer to the frame received or lost instead of to the next frame expected. A small sketch of the receiver-side behaviour follows.
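This is a minimal Python sketch under the same assumptions as the go-back-N sketch above: out-of-order frames are buffered and reinserted in the correct order once the missing frame is retransmitted.

```python
# A minimal selective-repeat receiver sketch: out-of-order frames are
# buffered instead of discarded, and only the damaged frame is requested again.

def selective_repeat_receiver(arriving_frames):
    expected = 0
    buffer = {}                      # out-of-order frames held until the gap is filled
    delivered = []
    for seq, ok in arriving_frames:
        if not ok:
            print(f"NAK {seq}")      # request retransmission of this frame only
            continue
        buffer[seq] = True
        while expected in buffer:    # reinsert frames in the correct order
            delivered.append(expected)
            del buffer[expected]
            expected += 1
    return delivered

# frame 2 is damaged; frames 3 and 4 are buffered, then 2 is retransmitted
# and all frames are delivered in order
print(selective_repeat_receiver([(0, True), (1, True), (2, False),
                                 (3, True), (4, True), (2, True)]))
# -> [0, 1, 2, 3, 4]
```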

High Level Data Link Control (HDLC):


High-Level Data Link Control (HDLC) is a bit-oriented, switched and non-switched protocol developed by the International Organization for Standardization (ISO). It is a data link control protocol and falls within layer 2, the Data Link Layer, of the Open Systems Interconnection (OSI) model. Being bit-oriented means that the data is monitored bit by bit. Transmissions consist of binary data without any special control codes.

It has been so widely implemented because it supports both half duplex and

full duplex communication lines, point to point (peer to peer) and multi-point networks, and switched or non-switched channels. The technical overview of High Level Data Link Control (HDLC) is concerned with the following aspects: 1. Stations 2. Configurations 3. Operational Modes 4. Non-Operational Modes 5. Frame Structure

HDLC specifies the following three types of stations for data link control. They are
1. Primary Station 2. Secondary Station 3. Combined Station Within a network using HDLC as its data link protocol, if a configuration is used in which there is a primary station, it is used as the controlling station on the link. It has the responsibility of controlling all other stations on the link (usually secondary stations). In addition to this controlling role, the primary station is also responsible for the organization of data flow on the link. It also takes care of error recovery at the data link level (layer 2 of the OSI model). If the data link protocol being used is HDLC, and a primary station is present, a secondary station must also be present on the data link. The secondary station is under the control of the primary station. It has no ability, or direct responsibility, for controlling the

link. It is only activated when requested by the primary station. It only responds to the primary station. The secondary station's frames are called responses. It can only send response frames when requested by the primary station. A combined station is a combination of a primary and secondary station. On the link, all combined stations are able to send and receive commands and responses without any permission from any other stations on the link. Each combined station is in full control of itself and does not rely on any other stations on the link. No other stations can control any combined station. HDLC also defines three types of configurations for the three types of stations: 1. Unbalanced Configuration 2. Balanced Configuration 3. Symmetrical Configuration The unbalanced configuration in an HDLC link consists of a primary station and one or more secondary stations. The configuration is called unbalanced because one station controls the other stations. An example of an unbalanced configuration can be found in Fig.2.12 below.

Fig.2.12 Unbalanced Configuration The balanced configuration in an HDLC link consists of two or more combined stations, as shown in Fig.2.13. Each of the stations has equal and complementary responsibility compared to the others.

Fig.2.13 Balanced Configuration

The Symmetrical Configuration is not widely in use today. It consists of two independent point-to-point, unbalanced station configurations. In these configurations, each station has a primary and secondary status. Each station is logically considered as two stations. HDLC offers three different modes of operation. These three modes of operation are: 1. Normal Response Mode (NRM) 2. Asynchronous Response Mode (ARM) 3. Asynchronous Balanced Mode (ABM) In Normal Response Mode (NRM), the primary station initiates transfers to the secondary station. The secondary station can only transmit a response when, and only when, it is instructed to do so by the primary station. In other words, the secondary station must receive explicit permission from the primary station to transfer a response. After receiving permission from the primary station, the secondary station initiates its transmission. In Asynchronous Response Mode (ARM), the primary station doesn't initiate transfers to the secondary station. In fact, the secondary station does not have to wait to receive explicit permission from the primary station to transfer any frames. Because this mode is asynchronous, the secondary station must wait until it detects an idle channel before it can transfer any frames. This is the case when the ARM link is operating at half-duplex. If the ARM link is operating at full-duplex, the secondary station can transmit at any time. In this mode, the primary station still retains responsibility for error recovery, link setup, and link disconnection. The Asynchronous Balanced Mode (ABM) uses combined stations. There is no need for permission on the part of any station in this mode. This is because combined stations do not require any sort of instructions to perform any task on the link. Normal Response Mode is used most frequently on multi-point lines, where the primary station controls the link. Asynchronous Response Mode is better for point-to-point links, as it reduces overhead. Asynchronous Balanced Mode is not used widely today. The "asynchronous" in both ARM and ABM does not refer to the format of the data on the link. It refers to the fact that any given station can transfer frames without explicit permission or instruction from any other station. Frame format Flag Field: Every frame on the link must begin and end with a flag sequence field (F) as shown in the figure below. Stations attached to the data link must continually listen for a flag sequence. The flag sequence is an octet looking like 01111110.

Address Field: The address field (A) identifies the primary or secondary station's involvement in the frame transmission or reception. Each station on the link has a unique address. In an unbalanced configuration, the A field in both commands and responses refers to the secondary station. In a balanced configuration, the command frame contains the destination station address and the response frame has the sending station's address. Control Field: The control field (C) determines how to control the communications process. This field contains the commands, responses and sequence numbers used to maintain the data flow accountability of the link, defines the functions of the frame and initiates the logic to control the movement of traffic between sending and receiving stations. There are three control field formats: 1. Information 2. Supervisory 3. Unnumbered The Information frame is used to transmit end-user data between two devices. The Supervisory frame performs control functions such as acknowledgment of frames, requests for re-transmission, and requests for temporary suspension of frames being transmitted. Its use depends on the operational mode being used. The Unnumbered frame is also used for control purposes. It is used to perform link initialization, link disconnection and other link control functions.

Fig.2.14 (a) Information Frame (b) Supervisory Frame (c) Unnumbered Frame.
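The three control field formats can be told apart from the low-order bits of the control byte. A minimal Python sketch, assuming the standard HDLC convention that Information frames end in 0, Supervisory frames in 01 and Unnumbered frames in 11:

```python
# A minimal sketch of distinguishing the three control-field formats from
# the low-order bits of the control byte (standard HDLC convention).

def control_field_type(c: int) -> str:
    if c & 0x01 == 0:
        return "Information"     # carries end-user data plus send/receive counts
    if c & 0x03 == 0x01:
        return "Supervisory"     # acknowledgments and retransmission requests
    return "Unnumbered"          # link setup, disconnect and other control

for byte in (0b00000000, 0b00000001, 0b00000011):
    print(f"{byte:08b} -> {control_field_type(byte)}")
```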

The P/F bit stands for Poll/Final. It is used when a computer is polling a group of terminals. When used as P, the computer is inviting the terminal to send data. All the frames sent by the terminal, except the final one, have the P/F bit set to P. The final one is set to F. Data (or) Information: The Data field may contain any information. It may be arbitrarily long, although the efficiency of the checksum falls off with increasing frame length due to the greater probability of multiple burst errors. Checksum: This field contains a 16-bit or 32-bit cyclic redundancy check. It is used for error detection.
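A minimal sketch of assembling an HDLC-style frame from the fields described above. zlib.crc32 stands in for the frame check sequence (real HDLC commonly uses a 16-bit CRC-CCITT), and bit stuffing of the body to avoid accidental flag patterns is omitted for brevity.

```python
# A minimal sketch of an HDLC-style frame: flag, address, control, data,
# checksum, flag. The 32-bit CRC here is an illustrative stand-in.
import zlib

FLAG = 0b01111110

def build_frame(address: int, control: int, data: bytes) -> bytes:
    body = bytes([address, control]) + data
    fcs = zlib.crc32(body).to_bytes(4, "big")      # 32-bit checksum variant
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_frame(address=0x03, control=0x00, data=b"payload")
print(frame.hex())
```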

Multiplexing:
Multiplexing combines multiple channels of information over a single circuit or transmission path. It is a set of techniques that allows the simultaneous transmission of multiple signals across a single data link. It is a process in which several independent signals are combined into a form that can be transmitted across a communication link and then separated into its original components.

Multiplexing techniques can be broadly classified into the following categories.

Frequency Division Multiplexing: Frequency Division Multiplexing (FDM) works by transmitting all of the signals along the same high speed link simultaneously with each signal set at a different frequency. It

means that the total bandwidth available to the system is divided into a series of non-overlapping frequency sub-bands that are then assigned to each communicating source and user pair. The following figure shows how this division is accomplished for a case of three sources at one end of a system that are communicating with three separate users at the other end. Note that each transmitter modulates its source's information into a signal that lies in a different frequency sub-band. The signals are then transmitted across a common channel. At the receiving end of the system, band-pass filters are used to pass the desired signal (the signal lying in the appropriate frequency sub-band) to the appropriate user and to block all the unwanted signals.
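A minimal sketch of dividing a link's bandwidth into non-overlapping sub-bands, one per source; the frequency figures and the optional guard bands are illustrative assumptions, not values from the text.

```python
# A minimal FDM sketch: split a total frequency range into equal,
# non-overlapping sub-bands, optionally separated by guard bands.

def fdm_subbands(total_low_hz, total_high_hz, n_sources, guard_hz=0):
    usable = (total_high_hz - total_low_hz) - guard_hz * (n_sources - 1)
    width = usable / n_sources
    bands = []
    low = total_low_hz
    for _ in range(n_sources):
        bands.append((low, low + width))
        low += width + guard_hz
    return bands

# three sources sharing a hypothetical 12 kHz band with 500 Hz guard bands
for i, (lo, hi) in enumerate(fdm_subbands(60_000, 72_000, 3, guard_hz=500)):
    print(f"source {i}: {lo:.0f}-{hi:.0f} Hz")
```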

Time Division Multiplexing: In this type, digitized information from several sources is multiplexed in time and transmitted over a single communication channel. Instead of sharing a portion of bandwidth as in FDM, time is shared. Each connection occupies a portion of time on the link. This allows several connections to share the high bandwidth of a link. Time slots are pre-assigned to sources and fixed; time slots are allocated even if a source has no data to transmit.

The multiplexer takes incoming traffic and places it into its assigned time slot. This is commonly done one character or one bit at a time. The receiving demultiplexer must carefully clock incoming data to separate it into slots corresponding to the originals, as shown in the figure above. The effective transmission rate made available to each device through its sub-channel is the total speed of the link divided by the number of sub-channels and by the number of devices that could share the link. When the allocation of slots to create sub-channels is a fixed assignment, the sharing scheme is called synchronous TDM. Synchronous time-division multiplexing is possible when the achievable data rate of the medium exceeds the data rate of the digital signals to be transmitted. Multiple digital signals can be carried on a single transmission path by interleaving portions of each signal in time. The data are organized into frames. Each frame contains a cycle of time slots. In each frame, one or more slots are dedicated to each data source. The sequence of slots dedicated to one source, from frame to frame, is called a channel. Statistical Time Division Multiplexing uses dynamic reassignment of idle sub-channels to accommodate devices with data waiting to be sent. Statistical TDM is an improvement only in situations where the connected devices can take advantage of the extra link capacity available when some devices are idle. A single terminal operating at 300 bps cannot benefit from assignment of all the slots on a 1200 bps link because it can only provide data to the link at the lower rate. However, more than four 300 bps terminals might share a single 1200 bps link if their patterns of usage were such that, on average, four or fewer have traffic to transmit simultaneously. This ability to serve more devices than fixed assignment of the transmission capacity would allow is called concentration.
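A minimal sketch of synchronous TDM with one character per source per frame; idle slots are padded (here with '-') so the demultiplexer can recover each sub-channel by position alone. The source strings are purely illustrative.

```python
# A minimal synchronous-TDM sketch: interleave one character from each
# source into every frame, sending a slot even when a source is idle.
from itertools import zip_longest

def tdm_multiplex(sources):
    frames = []
    for slots in zip_longest(*sources, fillvalue="-"):   # one slot per source per frame
        frames.append("".join(slots))
    return frames

def tdm_demultiplex(frames, n_sources):
    return ["".join(frame[i] for frame in frames) for i in range(n_sources)]

sources = ["HELLO", "BYE", "DATA!"]
frames = tdm_multiplex(sources)
print(frames)                         # ['HBD', 'EYA', 'LET', 'L-A', 'O-!']
print(tdm_demultiplex(frames, 3))     # ['HELLO', 'BYE--', 'DATA!']
```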
