
DATA NETWORK COMMUNICATION (3429)

Q.No.1 Define network topology. Explain different types of topologies in detail with examples.

Network Topologies
In computer networking, topology refers to the layout of connected devices. This answer introduces the standard topologies of networking. Topologies remain an important part of network design theory. You can probably build a home or small business computer network without understanding the difference between a bus design and a star design, but becoming familiar with the standard topologies gives you a better understanding of important networking concepts like hubs, broadcasts, and routes.

Topology in Network Design
Think of a topology as a network's virtual shape or structure. This shape does not necessarily correspond to the actual physical layout of the devices on the network. For example, the computers on a home LAN may be arranged in a circle in a family room, but it would be highly unlikely to find a ring topology there.

TYPES
Network topologies are categorized into the following basic types:
- bus
- ring
- star
- tree
- mesh

More complex networks can be built as hybrids of two or more of the above basic topologies.

Bus Topology
Bus networks (not to be confused with the system bus of a computer) use a common backbone to connect all devices. A single cable, the backbone, functions as a shared communication medium that devices attach or tap into with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message. Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") were both popular Ethernet cabling options many years ago for bus topologies. However, bus networks work best with a limited number of devices. If more than a few dozen computers are added to a network bus, performance problems will likely result. In addition, if the backbone cable fails, the entire network effectively becomes unusable.
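The broadcast behaviour described above can be sketched in a few lines of Python. The `Bus` and `Device` classes here are illustrative toy names, not part of any real networking API:

```python
# Minimal model of a bus topology: every attached device sees each
# frame on the shared backbone, but only the addressed device keeps it.
class Bus:
    def __init__(self):
        self.devices = []

    def attach(self, device):
        self.devices.append(device)
        device.bus = self

    def broadcast(self, frame):
        # The shared cable delivers the frame to every tap.
        for device in self.devices:
            device.receive(frame)

class Device:
    def __init__(self, address):
        self.address = address
        self.inbox = []

    def send(self, dest, payload):
        self.bus.broadcast({"dest": dest, "payload": payload})

    def receive(self, frame):
        # Only the intended recipient accepts and processes the frame.
        if frame["dest"] == self.address:
            self.inbox.append(frame["payload"])

bus = Bus()
a, b, c = Device("A"), Device("B"), Device("C")
for d in (a, b, c):
    bus.attach(d)

a.send("C", "hello")
print(b.inbox)  # [] - B saw the frame but discarded it
print(c.inbox)  # ['hello']
```

Note how every device's receive() runs for every frame: this is one reason bus segments with many stations degrade, since all traffic reaches all taps.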

M. KHALID SHEIKH

ROLL NO. AH-525009

Ring Topology
In a ring network, every device has exactly two neighbors for communication purposes. All messages travel through a ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any cable or device breaks the loop and can take down the entire network. To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Ring topologies are found in some office buildings or school campuses.

Star Topology
Many home networks use the star topology. A star network features a central connection point, commonly called the "hub," which may be an actual hub, a switch, or a router. Devices typically connect to it with Unshielded Twisted Pair (UTP) Ethernet cable. Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. (If the hub fails, however, the entire network also fails.)

Tree Topology
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection points) alone.

Mesh Topology
Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. (Recall that even in a ring, although two cable paths exist, messages can only travel in one direction.) Some WANs, most notably the Internet, employ mesh routing. A mesh network in which every device connects to every other is called a full mesh. Partial mesh networks also exist, in which some devices connect only indirectly to others.
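A quick way to see why full meshes are rarely built at scale is to count the links. This small helper (name hypothetical) applies the standard pair-counting formula:

```python
# Number of point-to-point links needed for a full mesh of n devices:
# each pair of devices gets its own dedicated link, so n*(n-1)/2 total.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "devices ->", full_mesh_links(n), "links")
# 4 devices need only 6 links, but 50 devices already need 1225,
# which is why large networks use partial meshes instead.
```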

Q.No.2 Differentiate asynchronous and synchronous transmission in detail. Also explain different types of multiplexing.
Asynchronous Communication
The asynchronous communication technique is the transmission technique most widely used by personal computers to provide connectivity to printers, modems, fax machines, etc. It allows a series of bytes (or ASCII characters) to be sent along a single wire (in practice a ground wire is also required to complete the circuit). The data is sent as a series of bits. A shift register (in either hardware or software) is used to serialise each information byte into the series of bits, which are then sent on the wire using an I/O port and a bus driver to connect to the cable. At the receiver, the remote system reassembles the series of bits to form a byte and forwards the frame for processing by the link layer. A clock (timing signal) is needed to identify the boundaries between the bits (in practice it is preferable to identify the centre of the bit, since this usually indicates the point of maximum signal power). There are two systems used to provide timing:

1. Asynchronous communication (independent transmit and receive clocks)
- Simple interface (limited data rate, typically < 64 kbps)
- Used for connecting printers, terminals, modems, and home connections to the Internet
- No clock sent (Tx and Rx have their own clocks)
- Requires start and stop bits, which provide byte timing and increase overhead
- Parity often used to validate correct reception

2. Synchronous communication (synchronised transmit and receive clocks)
- More complex interface (high data rates supported, up to 10 Gbps)
- Used for connections between computers and telephony networks
- Clock sent with the data (more configuration options)

The principal difference between the synchronous and asynchronous modes of transmission is that in the synchronous case, the receiver uses a clock which is synchronised to the transmitter clock. Synchronous transmission has the advantage that the timing information is accurately aligned to the received data, allowing operation at much higher data rates. It also has the advantage that the receiver tracks any clock drift which may arise (for instance due to temperature variation). The penalty is a more complex interface design, and potentially a more difficult interface to configure (since there are many more interface options). Most computers support asynchronous communication, but not all support synchronous serial communication. The most significant aspect of asynchronous communication is that the transmitter and receiver clocks are independent and are not synchronised. In fact, there need be no timing relationship between successive characters (or bytes of data); individual characters may be separated by any arbitrary idle period.

Asynchronous transmission of a series of characters

An asynchronous link communicates data as a series of characters of fixed size and format. Each character (usually represented by an ASCII code) is preceded by a start bit and followed by 1-2 stop bits. Parity is often added to provide some limited protection against errors occurring on the link. The use of independent transmit and receive clocks constrains transmission to relatively short characters (typically 8 bits or fewer) and moderate data rates (< 64 kbps, but typically lower). The asynchronous transmitter delimits each character by a start sequence and a stop sequence. The start bit (0), data (usually 8 bits plus parity) and stop bit(s) (1) are transmitted using a shift register clocked at the nominal data rate.

Asynchronous transmission - each character is framed by a start and one or more stop bits

At the receiver, a clock of the same nominal frequency is constructed and used to clock-in the data to the receive shift register. Only data that are bounded by the correct start and stop bits are accepted. This operation is normally performed using a UART (Universal Asynchronous Receiver Transmitter). UART chips are available as Integrated Circuits (ICs) or may form a part of a more complex component. Some CPUs include UARTs as a standard feature. The receiver is started by detecting the edge of the first start bit, as shown below:
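The framing overhead described above is easy to quantify. The helper below (names illustrative) counts useful data bits against total bits on the wire:

```python
# Overhead of asynchronous framing: each 8-bit character is wrapped in
# a start bit, an (optional) parity bit and one or more stop bits.
def async_efficiency(data_bits=8, parity=True, stop_bits=1):
    frame_bits = 1 + data_bits + (1 if parity else 0) + stop_bits
    return data_bits / frame_bits

# 8N1 framing (no parity, one stop bit): 8 useful bits per 10 on the wire.
print(round(async_efficiency(parity=False), 2))   # 0.8
# 8E1 framing (even parity, one stop bit): 8 useful bits per 11.
print(round(async_efficiency(parity=True), 3))    # 0.727

# At a 9600 bps line rate with 8N1 framing, the usable data rate is:
print(int(9600 * async_efficiency(parity=False)), "bps")  # 7680 bps
```

This is why the start and stop bits are described as increasing overhead: at least 20% of the line rate is consumed by framing.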

The transition from the idle state triggers the UART at the receiver to start reception

The reconstructed receive (rx) clock is normally generated using a local stable high-rate clock, frequently operating at 16 or 32 times the intended data rate. Such a clock signal (a square wave) may be created using a crystal oscillator circuit. In most cases, the computer will already have a square wave clock generator (e.g. connected to the clock input of the CPU), and rather than using a separate clock, this clock signal may be routed to the UART. In general, whatever method is used, the clock will be of too high a frequency for the UART; the clock frequency may however be easily reduced using a succession of frequency dividers (each a flip-flop wired to divide the input clock by 2). Reception proceeds by detecting the edge of the start bit and counting sufficient clock cycles from the high-frequency clock (16 times the transmission clock in the example here) to identify the mid position of the start bit. The number of cycles to be counted corresponds to one half the original bit period (8 high-frequency clock cycles in this example). From there, the centres of the successive bits are located by counting cycles corresponding to the original data rate (16 in this example).
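The counting procedure just described can be sketched as follows; `sample_points` and `OVERSAMPLE` are illustrative names, assuming the 16x receive clock and mid-bit sampling rule from the text:

```python
# Locating bit centres with a 16x oversampling clock. After the
# start-bit edge, wait 8 fast-clock cycles (half a bit period) to reach
# the middle of the start bit, then 16 cycles per further bit.
OVERSAMPLE = 16

def sample_points(n_bits: int):
    """Fast-clock cycle numbers (counted from the start-bit edge)
    at which each successive bit should be sampled."""
    centre_of_start = OVERSAMPLE // 2          # cycle 8: mid start bit
    return [centre_of_start + OVERSAMPLE * i   # 8, 24, 40, ...
            for i in range(n_bits + 1)]        # start bit + data bits

print(sample_points(8))
# [8, 24, 40, 56, 72, 88, 104, 120, 136]
```

Sampling at the centre of each bit, rather than its edge, gives the receiver the best tolerance to small differences between the transmit and receive clocks.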

Reconstruction of the clock (red), by matching of phase to the transmitted data (blue) using the local stable high-rate clock (black)

Asynchronous serial links may be used to connect computers via modems to an Internet Service Provider (ISP). When asynchronous transmission is used to support packet data links (e.g. the Internet), special characters have to be used ("framing") to indicate the start and end of each frame transmitted. One character (known as an escape character) is reserved to mark any occurrence of the special characters within the frame. In this way the receiver is able to identify which characters are part of the frame and which are part of the "framing". Packet communication over asynchronous links is used by most home users to get access to a network using a modem. The set of rules governing what sequence of characters is sent is known as the Point-to-Point Protocol (PPP).
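The escape-character framing described above can be illustrated with a simplified byte-stuffing routine. The FLAG/ESC values match the ones PPP happens to use, but this sketch omits PPP's additional XOR-with-0x20 step and is not a PPP implementation:

```python
# Simplified byte stuffing in the spirit of PPP framing: a FLAG byte
# marks frame boundaries, and an ESC byte protects any occurrence of
# FLAG or ESC inside the payload, so the receiver can tell data from
# framing. (Real PPP uses 0x7E/0x7D like this, but also XORs the
# escaped byte with 0x20.)
FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)   # mark the next byte as literal data
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def unframe(data: bytes) -> bytes:
    payload, escaped = bytearray(), False
    for b in data[1:-1]:          # strip the two FLAG delimiters
        if escaped:
            payload.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            payload.append(b)
    return bytes(payload)

msg = bytes([0x01, 0x7E, 0x02, 0x7D, 0x03])   # contains FLAG and ESC
assert unframe(frame(msg)) == msg             # round-trips cleanly
```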

Q.No.3. Explain different flow control techniques. Also define error detection and error control techniques.
Flow control
In data communications, flow control is the process of managing the pacing of data transmission between two nodes to prevent a fast sender from outrunning a slow receiver. It provides a mechanism for the receiver to control the transmission speed, so that the receiving node is not overwhelmed with data from the transmitting node. Flow control should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node. Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computer has a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer.

TYPES OF FLOW CONTROL
Network congestion control - a prevention mechanism that provides control over the quantity of data transmission that enters a device.
Windowing - a flow control mechanism used with TCP.
Data buffer - a prevention control mechanism that provides storage to contain data bursts from other network devices, compensating for the variation of data transmission speeds.

Transmit flow control
Transmit flow control may occur between data terminal equipment (DTE) and a switching center, via data circuit-terminating equipment (DCE), or between two devices of the same type (two DTEs, or two DCEs) interconnected by a crossover cable. The transmission rate may be controlled because of network or DTE requirements.
Transmit flow control can occur independently in the two directions of data transfer, thus permitting the transfer rates in one direction to be different from the transfer rates in the other direction. Transmit flow control can be either stop-and-go or based on a sliding window. Flow control can be performed either by control signal lines in a data communication interface (see serial port and RS-232), or by reserving in-band control characters to signal flow start and stop (such as the ASCII codes XON/XOFF).
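The in-band XON/XOFF scheme just mentioned can be sketched with a toy receiver (the `Receiver` class and buffer size are illustrative); XON and XOFF are the real ASCII DC1/DC3 codes:

```python
# Sketch of in-band XON/XOFF software flow control: the receiver sends
# XOFF when its buffer fills and XON once it has drained.
XON, XOFF = 0x11, 0x13   # ASCII DC1 / DC3

class Receiver:
    def __init__(self, capacity=4):
        self.buffer, self.capacity = [], capacity
        self.paused = False

    def accept(self, byte):
        self.buffer.append(byte)
        if len(self.buffer) >= self.capacity and not self.paused:
            self.paused = True
            return XOFF          # tell the sender to stop
        return None

    def drain(self):
        self.buffer.clear()
        if self.paused:
            self.paused = False
            return XON           # tell the sender to resume
        return None

rx = Receiver(capacity=2)
sent = []
for b in b"abcd":
    if rx.paused:
        break                    # sender honours XOFF and stops
    sent.append(b)
    rx.accept(b)
print(bytes(sent))               # b'ab' - transmission paused after XOFF
print(rx.drain() == XON)         # True - receiver signals XON to resume
```

Because the control characters travel in the data stream itself, no extra signal lines are needed, which is exactly the trade-off against hardware (RTS/CTS) flow control.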

Hardware flow control
In common RS-232 there are pairs of control lines: RTS (Request To Send)/CTS (Clear To Send) and DTR (Data Terminal Ready)/DSR (Data Set Ready), which are usually referred to as hardware flow control. Hardware flow control is typically handled by the DTE, or "master" end, as it is the first to raise or assert its line to command the other side. In the case of RTS/CTS flow control, the DTE sets its RTS, which signals the opposite end (the slave end, such as a DCE) to begin monitoring its data input line. When ready for data, the slave end raises its complementary line, CTS in this example, which signals the master to start sending data, and the master begins monitoring the slave's data output line. If either end needs to stop the data, it lowers its respective "data readiness" line. For PC-to-modem and similar links (the DTR/DSR case), DTR/DSR are raised for the entire modem session (say, a dial-up internet call), while RTS/CTS are raised for each block of data.

Software flow control
By contrast, XON/XOFF is usually referred to as software flow control.

Open-loop flow control
The open-loop flow control mechanism is characterized by having no feedback between the receiver and the transmitter. This simple means of control is widely used. The allocation of resources must be of a prior-reservation or hop-to-hop type. Open-loop flow control has inherent problems with maximizing the utilization of network resources. Resource allocation is made at connection setup using a CAC (Connection Admission Control), and this allocation is made using information that is already outdated during the lifetime of the connection. Often there is an over-allocation of resources, and reserved but unused capacity is wasted.
Open-loop flow control is used by ATM in its CBR, VBR and UBR services (see traffic contract and congestion control).

Closed-loop flow control
The closed-loop flow control mechanism is characterized by the ability of the network to report pending network congestion back to the transmitter. This information is then used by the transmitter in various ways to adapt its activity to existing network conditions. Closed-loop flow control is used by ABR (see traffic contract and congestion control). The transmit flow control described above is a form of closed-loop flow control.

ERROR DETECTION AND CORRECTION
In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction (or error control) are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data. The general definitions of the terms are as follows:

Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver.
Error correction is the detection of errors and reconstruction of the original, error-free data.

The general idea for achieving error detection and correction is to add some redundancy (i.e., some extra data) to a message, which receivers can use to check consistency of the delivered message and to recover data determined to be erroneous. Error-detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some deterministic algorithm. If only error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. In a system that uses a non-systematic code, the original message is transformed into an encoded message that has at least as many bits as the original message. Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models, where errors occur randomly and with a certain probability, and dynamic models, where errors occur primarily in bursts. Consequently, error-detecting and correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors. If the channel capacity cannot be determined, or is highly varying, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet.
An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.

Implementation
Error correction may generally be realized in two different ways:
Automatic repeat request (ARQ), sometimes also referred to as backward error correction: an error control technique whereby an error detection scheme is combined with requests for retransmission of erroneous data. Every block of data received is checked using the error detection code in use, and if the check fails, retransmission of the data is requested; this may be done repeatedly, until the data can be verified.
Forward error correction (FEC): the sender encodes the data using an error-correcting code (ECC) prior to transmission. The additional information (redundancy) added by the code is used by the receiver to recover the original data. In general, the reconstructed data is what is deemed the "most likely" original data.
ARQ and FEC may be combined, such that minor errors are corrected without retransmission, and major errors are corrected via a request for retransmission: this is called hybrid automatic repeat request (HARQ).

Error detection schemes
Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). Random-error-correcting codes based on minimum distance coding can provide a suitable alternative to hash functions when a strict guarantee on the minimum number of errors to be detected is desired. Repetition codes, described below, are special cases of error-correcting codes: although rather inefficient, they find applications for both error correction and detection due to their simplicity.

Repetition codes
A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data is divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern "1011", the four-bit block can be repeated three times, thus producing "1011 1011 1011". However, if this twelve-bit pattern was received as "1010 1011 1011", where the first block is unlike the other two, it can be determined that an error has occurred. Repetition codes are very inefficient, and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., "1010 1010 1010" in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions of numbers stations.

Parity bits
A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect a single error or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Extensions and variations on the parity bit mechanism are horizontal redundancy checks, vertical redundancy checks, and "double," "dual," or "diagonal" parity (used in RAID-DP).

Checksums
A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a one's complement prior to transmission to detect errors resulting in all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Luhn algorithm and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.

Cyclic redundancy checks (CRCs)
A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic code and non-secure hash function designed to detect accidental changes to digital data in computer networks. It is characterized by specification of a so-called generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, and where the remainder becomes the result.
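The polynomial long division that defines a CRC can be written out directly over bit lists. `crc_remainder` is an illustrative name, and the generators below are small textbook examples rather than standardized CRC polynomials:

```python
# CRC as polynomial long division over GF(2): append r zero bits
# (r = degree of the generator), divide by the generator using XOR,
# and keep the remainder as the check value.
def crc_remainder(data_bits, generator_bits):
    r = len(generator_bits) - 1
    work = list(data_bits) + [0] * r          # dividend with r zeros appended
    for i in range(len(data_bits)):
        if work[i] == 1:                      # current leading bit is set:
            for j, g in enumerate(generator_bits):
                work[i + j] ^= g              # subtract (XOR) the generator
    return work[-r:]                          # remainder = last r bits

# Divisor x + 1 (bits 11) yields a single-bit CRC equal to even parity.
data = [1, 0, 1, 1]
print(crc_remainder(data, [1, 1]))        # [1] - same as the parity bit
# A 3-bit CRC with generator x^3 + x + 1 (bits 1011):
print(crc_remainder(data, [1, 0, 1, 1]))  # [0, 0, 0] - data equals the
                                          # generator, so it divides exactly
```

With the divisor x + 1 the remainder reduces to a single even-parity bit.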

Cyclic codes have favorable properties in that they are well suited for detecting burst errors. CRCs are particularly easy to implement in hardware, and are therefore commonly used in digital networks and storage devices such as hard disk drives. Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x + 1.

Cryptographic hash functions
A cryptographic hash function can provide strong assurances about data integrity, provided that changes of the data are only accidental (i.e., due to transmission errors). Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is infeasible to find some input data (other than the one given) that will yield the same hash value. Message authentication codes, also called keyed cryptographic hash functions, provide additional protection against intentional modification by an attacker.

Error-correcting codes
Any error-correcting code can be used for error detection. A code with minimum Hamming distance d can detect up to d-1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d=2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a single-error-detecting code. The Berger code is an early example of a unidirectional error(-correcting) code that can detect any number of errors on an asymmetric channel, provided that only transitions of cleared bits to set bits, or of set bits to cleared bits, can occur.
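The behaviour of a single parity bit, including its blind spot for an even number of flipped bits, can be demonstrated directly (function names illustrative):

```python
# Even parity: the parity bit makes the total number of 1-bits even,
# so any single (or odd) number of bit flips is detected, while an
# even number of flips goes unnoticed.
def add_even_parity(bits):
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    return sum(bits_with_parity) % 2 == 0

word = add_even_parity([1, 0, 1, 1])
print(word)             # [1, 0, 1, 1, 1] - parity bit chosen to make sum even
print(parity_ok(word))  # True

word[2] ^= 1            # flip one bit: detected
print(parity_ok(word))  # False

word[0] ^= 1            # flip a second bit: the error now slips through
print(parity_ok(word))  # True - the known weakness of a single parity bit
```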

Error correction
Automatic repeat request
Automatic Repeat reQuest (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame, until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.

Error-correcting code
An error-correcting code (ECC) or forward error correction (FEC) code is a system of adding redundant data, or parity data, to a message, such that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) were introduced, either during the process of transmission or in storage. Since the receiver does not have to ask the sender for retransmission of the data, a back channel is not required in forward error correction, and it is therefore suitable for simplex communication such as broadcasting. Error-correcting codes are frequently used in lower-layer communication, as well as for reliable storage in media such as CDs, DVDs, hard disks, and RAM. Error-correcting codes are usually divided into convolutional codes and block codes. Convolutional codes are processed on a bit-by-bit basis; they are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding. Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed-Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check (LDPC) codes are relatively new constructions that can provide almost optimal efficiency.

Hybrid schemes
Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches. In the first, messages are always transmitted with FEC parity data (and error-detection redundancy); a receiver decodes a message using the parity information, and requests retransmission using ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check). In the second, messages are transmitted without parity data (only with error-detection information); if a receiver detects an error, it requests FEC information from the transmitter using ARQ, and uses it to reconstruct the original message. The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.
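Of the ARQ variants listed above, stop-and-wait is the simplest. The sketch below simulates it over a toy lossy channel; all names and the loss model are illustrative, and the timeout is collapsed into a simple retry loop:

```python
# Stop-and-wait ARQ sketch: the sender transmits one frame, waits for
# an acknowledgment, and retransmits on a (simulated) timeout.
import random

def send_with_arq(frames, loss_rate=0.5, max_retries=10, seed=42):
    rng = random.Random(seed)        # seeded so the run is repeatable
    delivered, retransmissions = [], 0
    for frame in frames:
        for attempt in range(max_retries):
            if rng.random() > loss_rate:     # frame (and ACK) got through
                delivered.append(frame)
                break
            retransmissions += 1             # timeout: send the frame again
        else:
            raise RuntimeError("retry limit exceeded")
    return delivered, retransmissions

data = ["f0", "f1", "f2", "f3"]
delivered, retries = send_with_arq(data)
assert delivered == data        # every frame eventually arrives, in order
print("retransmissions:", retries)
```

The single outstanding frame is what makes stop-and-wait simple but slow; Go-Back-N and Selective Repeat keep several frames in flight to fill the channel.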

The Internet
In a typical TCP/IP stack, error control is performed at multiple levels:
Each Ethernet frame carries a CRC-32 checksum. Frames received with incorrect checksums are discarded by the receiver hardware.
The IPv4 header contains a checksum protecting the contents of the header. Packets with mismatching checksums are dropped within the network or at the receiver. The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing, and because current link layer technology is assumed to provide sufficient error detection (see also RFC 3819).
UDP has an optional checksum covering the payload and addressing information from the UDP and IP headers. Packets with incorrect checksums are discarded by the operating system network stack. The checksum is optional under IPv4 only, because the IP-layer checksum may already provide the desired level of error protection.
TCP provides a checksum for protecting the payload and addressing information from the TCP and IP headers. Packets with incorrect checksums are discarded within the network stack, and eventually get retransmitted using ARQ, either explicitly (such as through triple-ack) or implicitly due to a timeout.

Deep-space telecommunications
Development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting from 1968 digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed-Muller codes. The Reed-Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented on the Mariner spacecraft for missions between 1969 and 1977. The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn. This resulted in increased coding requirements, and thus the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 probe additionally supported an implementation of a Reed-Solomon code: the concatenated Reed-Solomon-Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. The CCSDS currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes.

Satellite broadcasting (DVB)
The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and High-Definition TV) and IP data.
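The IPv4, UDP and TCP checksums mentioned earlier are all the same 16-bit one's-complement sum. A sketch follows; the sample header bytes are illustrative:

```python
# The Internet checksum used by IPv4, UDP and TCP: one's-complement
# sum of 16-bit words, with carries wrapped back in (end-around carry).
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # one's complement of sum

# A 20-byte IPv4 header with its checksum field (bytes 10-11) zeroed:
header = bytes.fromhex("45000034b612400040060000c0a80001c0a800c7")
csum = internet_checksum(header)
# Re-running the sum over the header with the checksum filled in must
# give 0 at the receiver - that is how validity is checked.
check = internet_checksum(header[:10] + csum.to_bytes(2, "big") + header[12:])
print(hex(csum), check == 0)
```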
Transponder availability and bandwidth constraints have limited this growth, because transponder capacity is determined by the selected modulation scheme and the forward error correction (FEC) rate.

Data storage
Error detection and correction codes are often used to improve the reliability of data storage media. A "parity track" was present on the first magnetic tape data storage in 1951. The "Optimal Rectangular Code" used in group code recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC-32) to detect corruption and truncation, and can employ redundancy and/or parity files to recover portions of corrupted data. Reed-Solomon codes are used in compact discs to correct errors caused by scratches. Modern hard drives use CRC codes to detect, and Reed-Solomon codes to correct, minor errors in sector reads, and to recover data from sectors that have "gone bad", storing that data in spare sectors. RAID systems use a variety of error correction techniques to correct errors when a hard drive completely fails.

M. KHALID SHEIKH

ROLL NO. AH-525009

13


Q.No.4 Define LAN architecture. Differentiate between CSMA/CD and Gigabit LANs.
Ethernet Definition
Ethernet is by far the most commonly used local area network (LAN) architecture. A LAN is a network that connects computers and other devices in a relatively small area, typically a single building or a group of buildings. Ethernet features high speeds, robustness (i.e., high reliability), low cost and adaptability to new technologies. These features have helped it maintain its popularity despite being one of the oldest of the LAN technologies.

A key feature of Ethernet is the breaking of data into packets, also referred to as frames, which are then transmitted using the CSMA/CD (carrier sense multiple access/collision detection) protocol until they arrive at the destination without colliding with any other packets. The most commonly used form of Ethernet is currently 100Base-T, also referred to as Fast Ethernet, which can accommodate data transfer speeds of up to about 100 Mbps (million bits per second). The newer Gigabit Ethernet supports data rates of one gigabit (1,000 megabits) per second.

Fiber Ethernet uses optical fiber cables to carry data. Optical fiber allows transmission over very long distances (over 2,000 meters), has a very large capacity and is completely immune to electrical interference; however, it is relatively expensive. Wireless Ethernet transmits data via low-power microwave radios built into computers and other devices, and allows communication within a radius of approximately 100 meters.

Construction of an Ethernet network is relatively simple. For example, in the case of a basic, wired Ethernet, the hardware requirements include a network interface card (NIC) for each computer, a hub or switch, and some Cat 5 cables to connect the computers to the hub or switch. Expansion is also easy, and can be accomplished by adding additional hubs and/or switches. The software is generally built into the operating system, so all that is necessary is a few configuration steps, including assigning IP addresses to the network hosts.
Each node (i.e., computer, printer, hub, switch or other device) on the network has a hardware address, also called the MAC address. The address of any computer running a Unix-like operating system on an Ethernet can be found by using the ifconfig command with its -a option as follows:
/sbin/ifconfig -a
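The same 48-bit hardware address can also be read programmatically. A minimal Python sketch using the standard library's uuid.getnode() (note this is an illustrative alternative, not part of the ifconfig approach above, and getnode may return a random stand-in value on hosts where the MAC cannot be determined):

```python
import uuid

def local_mac() -> str:
    """Format this host's 48-bit hardware address as aa:bb:cc:dd:ee:ff."""
    node = uuid.getnode()  # 48-bit integer; may be a random fallback value
    # Extract the six octets from most significant to least significant.
    return ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

print(local_mac())
```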

Ethernet was originally invented at the Xerox Palo Alto Research Center (PARC) in the early 1970s in order to interconnect that organization's innovative Alto desktop computers, and it was then further refined in cooperation with Digital Equipment Corporation (DEC) and Intel Corporation. It was named by Robert Metcalfe, one of its developers, after the passive substance called ether that was once thought to pervade the universe. The word Ethernet is a trademark owned by Xerox Corporation. Consequently, it is typically written with an initial capital letter. Other LAN architectures include token ring and FDDI (fiber distributed data interface).


10 Gigabit Ethernet
The 10 gigabit Ethernet (10GE, 10GbE or 10 GigE) computer networking standard was first published in 2002. It defines a version of Ethernet with a nominal data rate of 10 Gbit/s (billion bits per second), ten times as fast as gigabit Ethernet. 10 gigabit Ethernet defines only full-duplex point-to-point links, which are generally connected by network switches. Half-duplex operation, hubs and CSMA/CD (carrier sense multiple access with collision detection) do not exist in 10GbE.

The 10 gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards. A networking device may support multiple PHY types by using pluggable PHY modules. Over time, market forces will determine the most popular 10GE PHY types. At the time the 10 gigabit Ethernet standard was developed, interest in 10GbE as a wide area network (WAN) transport led to the introduction of a WAN PHY for 10GbE. This operates at a slightly slower data rate than the local area network (LAN) PHY and adds some extra encapsulation. Both share the same physical medium dependent sublayers, so they can use the same optics. From 2007, when one million ports were shipped, 10 gigabit Ethernet deployments rose to slightly more than two million ports in 2009.
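The practical effect of the ten-fold rate increase can be shown with a quick back-of-the-envelope calculation. This Python sketch assumes an ideal link and ignores protocol overhead and congestion, so real transfers will be slower:

```python
def transfer_seconds(size_bytes: float, rate_bits_per_s: float) -> float:
    """Ideal serialization time: payload bits divided by link rate."""
    return size_bytes * 8 / rate_bits_per_s

FILE = 10 * 10**9  # a 10 GB file (decimal gigabytes)

for name, rate in [("Fast Ethernet", 100e6), ("Gigabit", 1e9), ("10GbE", 10e9)]:
    print(f"{name}: {transfer_seconds(FILE, rate):.0f} s")
# Fast Ethernet: 800 s, Gigabit: 80 s, 10GbE: 8 s
```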

Standards
Over the years the Institute of Electrical and Electronics Engineers (IEEE) 802.3 working group has published several standards relating to 10GbE. These include: 802.3ae-2002 (fiber -SR, -LR, -ER and -LX4 PMDs), 802.3ak-2004 (-CX4 copper twin-ax InfiniBand-type cable), 802.3an-2006 (10GBASE-T copper twisted pair), 802.3ap-2007 (copper backplane -KR and -KX4 PMDs) and 802.3aq-2006 (fiber -LRM PMD with enhanced equalization). The 802.3ae-2002 and 802.3ak-2004 amendments were consolidated into the IEEE 802.3-2005 standard. IEEE 802.3-2005 and the other amendments were then consolidated into IEEE Std 802.3-2008.

Physical layer modules
To deal with the many different physical layer standards, many interfaces take a modular approach. Physical layer modules are not specified by an official standards body but by multi-source agreements (MSAs) that can be negotiated more quickly. Relevant MSAs for 10GbE include XENPAK (and the related X2 and XPAK), XFP and SFP+. When choosing a PHY module, a designer considers cost, reach, media type, power consumption and size (form factor).

XENPAK was the first MSA for 10GE and had the largest form factor. X2 and XPAK were later competing standards with smaller form factors; they have not been as successful in the market as XENPAK. XFP came after X2 and XPAK and is smaller still. The newest module standard is the enhanced small form-factor pluggable transceiver, generally called SFP+. Based on the small form-factor pluggable transceiver (SFP) and developed by the ANSI T11 Fibre Channel group, it is smaller still and lower power than XFP. SFP+ has become the most popular socket on 10GE systems. SFP+ modules do only optical-to-electrical conversion, with no clock and data recovery, putting a higher burden on the host's channel equalization. SFP+ modules share a common physical form factor with legacy SFP modules, allowing higher port density than XFP and the re-use of existing designs for 24 or 48 ports in a 19" rack-width blade.

Optical modules are connected to a host by either a XAUI, XFI or SFI interface. XENPAK, X2 and XPAK modules use XAUI to connect to their hosts. XAUI (XGXS) uses a four-lane data channel and is specified in IEEE 802.3 Clause 48. XFP modules use an XFI interface and SFP+ modules use an SFI interface. XFI and SFI use a single-lane data channel and the encoding specified in IEEE 802.3 Clause 49. SFP+ modules can further be grouped into two types of host interfaces: linear or limiting. Limiting modules are preferred except when using old fiber infrastructure, which requires the use of the linear interface provided by 10GBASE-LRM modules.

Optical fiber
There are two classifications of optical fiber: single-mode (SMF) and multi-mode (MMF). In SMF light follows a single path through the fiber, while in MMF it takes multiple paths, resulting in differential mode delay (DMD). SMF is used for long-distance communication, and MMF is used for distances of less than 300 m. SMF has a narrower core (8.3 µm), which requires a more precise termination and connection method; MMF has a wider core (50 or 62.5 µm). The advantage of MMF is that it can be driven by low-cost VCSEL lasers for short distances, and multimode connectors are cheaper and easier to terminate reliably in the field. Its disadvantage is that, due to DMD, it works only over short distances. To distinguish SMF from MMF cables, SMF cables are usually yellow, while MMF cables are orange (OM1 and OM2) or aqua (OM3 and OM4); note, however, that in fibre optics there is no agreed colour for any specific optical speed or technology, the exception being the angled physical connector (APC), whose agreed colour is green. New structured cabling installations use OM3 or OM4 50 µm MMF. OM3 cable can carry 10GbE over 300 m (OM4 can manage 400 m) using low-cost 10GBASE-SR optics. See ISO 11801 and multi-mode fibre.


Older installations use FDDI-grade 62.5 µm MMF, which is harder for 10GbE optical modules to drive. For -LX4, a mode-conditioning patch cord is needed, which adds extra cost to an installation. There are also active optical cables. These have the optical electronics already connected, eliminating the connectors between the cable and the optical module. They plug into standard optical module sockets and are lower cost than other optical solutions because the manufacturer can match the electronics to the required length and type of cable.

10GBASE-SR
10GBASE-SR ("short range") uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 850 nm lasers. It delivers serialized data over multi-mode fiber at a line rate of 10.3125 Gbit/s. Over obsolete 62.5 micron multi-mode fiber cabling (OM1), it has a maximum range of 26 to 82 metres (85 to 269 ft), depending on cable type. Over standard 50 µm 2000 MHz·km OM3 multi-mode fiber (MMF), it has a maximum range of 300 metres (980 ft). The transmitter can be implemented with a vertical-cavity surface-emitting laser (VCSEL), which is low cost and low power. MMF has the advantage of lower-cost connectors than SMF because of its wider core. OM3 or OM4 is the preferred choice for structured optical cabling within buildings. 10GBASE-SR delivers the lowest-cost, lowest-power and smallest form factor optical modules. For 2011, 10GBASE-SR is projected to make up a quarter of the total 10GbE adapter ports shipped.
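The 10.3125 Gbit/s line rate quoted above follows directly from the 64B/66B coding overhead: every 64 data bits are carried as 66 bits on the wire. A quick check in Python, with gigabit Ethernet's less efficient 8b/10b coding shown for comparison:

```python
def line_rate(payload_bps: float, coded_bits: int, data_bits: int) -> float:
    """Serial line rate after physical-coding-sublayer overhead."""
    return payload_bps * coded_bits / data_bits

# 64B/66B (Clause 49): 10 Gbit/s of data needs 10.3125 Gbaud on the wire.
assert line_rate(10e9, 66, 64) == 10.3125e9

# 8b/10b (gigabit Ethernet): 1 Gbit/s of data needs 1.25 Gbaud, 25% overhead.
assert line_rate(1e9, 10, 8) == 1.25e9
```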

10GBASE-LR
10GBASE-LR ("long reach") uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 1310 nm lasers. It delivers serialized data over single-mode fiber at a line rate of 10.3125 Gbit/s. 10GBASE-LR has a specified reach of 10 kilometres (6.2 mi), but 10GBASE-LR optical modules can often manage distances of up to 25 kilometres (16 mi) with no data loss. Fabry-Pérot lasers are commonly used in 10GBASE-LR optical modules. Fabry-Pérot lasers are more expensive than VCSELs, but their high power and longer wavelength allow efficient coupling into the small core of single-mode fiber over greater distances.

10GBASE-LRM
10GBASE-LRM (Long Reach Multimode), also known as 802.3aq, uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 1310 nm lasers. It delivers serialized data over multi-mode fiber at a line rate of 10.3125 Gbit/s. 10GBASE-LRM supports distances up to 220 metres (720 ft) on FDDI-grade 62.5 µm multi-mode fibre originally installed in the early 1990s for FDDI and 100BASE-FX networks, and 260 metres (850 ft) on OM3. 10GBASE-LRM reach is not quite as far as the older 10GBASE-LX4 standard. 10GBASE-LRM uses electronic dispersion control (EDC) for receive equalization. 10GBASE-LRM has been a failure in the market.

10GBASE-ER
10GBASE-ER ("extended reach") uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 1550 nm lasers. It delivers serialized data over single-mode fiber at a line rate of 10.3125 Gbit/s.


10GBASE-ER has a reach of 40 kilometres (25 mi).

10GBASE-ZR
Several manufacturers have introduced 80 km (50 mi) range ER pluggable interfaces under the name 10GBASE-ZR. This 80 km PHY is not specified within the IEEE 802.3ae standard and manufacturers have created their own specifications based upon the 80 km PHY described in the OC-192/STM-64 SDH/SONET specifications. The 802.3 standard will not be amended to cover the ZR PHY.

10GBASE-LX4
10GBASE-LX4 uses the IEEE 802.3 Clause 48 Physical Coding Sublayer (PCS) and coarse WDM. It supports ranges of between 240 metres (790 ft) and 300 metres (980 ft) over legacy multi-mode cabling. This is achieved through the use of four separate laser sources operating at 3.125 Gbit/s in the range of 1300 nm, each on a unique wavelength. 10GBASE-LX4 also supports 10 kilometres (6.2 mi) over SMF. Until 2005, 10GBASE-LX4 optical modules were cheaper than 10GBASE-LR optical modules, and 10GBASE-LX4 was used by those who wanted to support both MMF and SMF with a single optical module. 10GBASE-LX4 is now an obsolete technology with no significant market presence.

Copper
10G Ethernet can also run over twin-ax cabling, twisted pair cabling, and backplanes.

10GBASE-CX4
10GBASE-CX4 was the first 10G copper standard published by 802.3 (as 802.3ak-2004). It uses the XAUI 4-lane PCS (Clause 48) and copper cabling similar to that used by InfiniBand technology. It is specified to work up to a distance of 15 m (49 ft). Each lane carries 3.125 Gbaud of signaling bandwidth. 10GBASE-CX4 offers the advantages of low power, low cost and low latency, but has a bigger form factor and bulkier cables than the newer single-lane SFP+ standard, and a much shorter reach than fibre or 10GBASE-T. Shipments of 10GBASE-CX4 today are very low.

SFP+ Direct Attach


Also known as 10GSFP+Cu, 10GBASE-CR or 10GBASE-CX1, SFP+ Direct Attach uses a passive twin-ax cable assembly that connects directly into an SFP+ housing. It has a range of 7 m and, like 10GBASE-CX4, is low power, low cost and low latency, with the added advantages of using less bulky cables and having the small form factor of SFP+. SFP+ Direct Attach is tremendously popular today, with more ports installed than 10GBASE-SR.

Backplane
Backplane Ethernet, also known by its task force name 802.3ap, is used in backplane applications such as blade servers and routers/switches with upgradable line cards. 802.3ap implementations are required to operate in an environment comprising up to 1 metre (39 in) of copper printed circuit board with two connectors. The standard defines two port types for 10 Gbit/s (10GBASE-KX4 and 10GBASE-KR) and a 1 Gbit/s port type (1000BASE-KX). It also defines an optional FEC layer, a backplane autonegotiation protocol and link training for 10GBASE-KR, where the receiver can set a three-tap transmit equalizer. The autonegotiation protocol selects between 1000BASE-KX, 10GBASE-KX4, 10GBASE-KR or 40GBASE-KR4 operation. 40GBASE-KR4 is defined in 802.3ba. New backplane designs use 10GBASE-KR rather than 10GBASE-KX4.

10GBASE-KX4
This operates over four backplane lanes and uses the same physical layer coding (defined in IEEE 802.3 Clause 48) as 10GBASE-CX4.

10GBASE-KR
This operates over a single backplane lane and uses the same physical layer coding (defined in IEEE 802.3 Clause 49) as 10GBASE-LR/ER/SR.

10GBASE-T
10GBASE-T, or IEEE 802.3an-2006, is a standard released in 2006 to provide 10 Gbit/s connections over unshielded or shielded twisted-pair cables, over distances up to 100 metres (330 ft). 10GBASE-T cable infrastructure can also be used for 1000BASE-T, allowing a gradual upgrade from 1000BASE-T using autonegotiation to select which speed to use. 10GBASE-T has higher latency (about 1 microsecond) and consumes more power than other 10 gigabit Ethernet physical layers. As of 2010, 10GBASE-T silicon is available from several manufacturers, with claimed power dissipation of 3-4 W at structure widths of 40 nm; with 28 nm parts in development, power consumption will continue to decline.

Connectors
10GBASE-T uses the IEC 60603-7 8P8C (commonly known as RJ45) connectors already widely used with Ethernet. Transmission characteristics are now specified to 500 MHz.

Cables
Category 6A or better balanced twisted-pair cables specified in ISO 11801 amendment 2 or ANSI/TIA-568-C.2 are needed to carry 10GBASE-T up to distances of 100 m. Category 6 cables can carry 10GBASE-T over shorter distances when qualified according to the guidelines in ISO TR 24750 or TIA-155-A.


Q.No.5 Write short notes on the following with respect to their functionality: a. Switch b. Router c. Bridge d. Proxy Server
SWITCH
A network switch or switching hub is a computer networking device that connects network segments. The term commonly refers to a multi-port network bridge that processes and routes data at the data link layer (layer 2) of the OSI model. Switches that additionally process data at the network layer (Layer 3) and above are often referred to as Layer 3 switches or multilayer switches. The first Ethernet switch was introduced by Kalpana in 1990.

Function
The network switch plays an integral part in most modern Ethernet local area networks (LANs). Mid-to-large sized LANs contain a number of linked managed switches. Small office/home office (SOHO) applications typically use a single switch, or an all-purpose converged device such as a gateway providing access to small office/home broadband services such as DSL or cable internet. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology. User devices may also include a telephone interface for VoIP.

An Ethernet switch operates at the data link layer of the OSI model to create a separate collision domain for each switch port. With four computers (e.g., A, B, C and D) on four switch ports, A and B can transfer data back and forth while C and D do the same simultaneously, and the two conversations will not interfere with one another. In the case of a hub, they would all share the bandwidth and run in half duplex, resulting in collisions, which would then necessitate retransmissions. Using a switch in this way is called microsegmentation. It allows computers to have dedicated bandwidth on point-to-point connections to the network and therefore to run in full duplex without collisions.

Role of switches in networks
Switches may operate at one or more layers of the OSI model, including data link, network and transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is known as a multilayer switch. In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While Layer 2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and token ring is easier at Layer 3.
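The microsegmentation gain described above can be illustrated with a deliberately simplified capacity model; real throughput depends on traffic patterns and protocol overhead, so treat this as a sketch:

```python
def aggregate_capacity(ports: int, link_mbps: int, switched: bool) -> int:
    """Usable capacity in Mbit/s under a simple model: a hub is one shared
    half-duplex collision domain, while a switch gives every port a
    dedicated full-duplex link."""
    if switched:
        return ports * link_mbps * 2  # both directions usable at once
    return link_mbps                  # all stations share one channel

# Four stations on 100 Mbit/s ports:
print(aggregate_capacity(4, 100, switched=False))  # hub: 100 Mbit/s total
print(aggregate_capacity(4, 100, switched=True))   # switch: 800 Mbit/s total
```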


Interconnection of different Layer 3 networks is done by routers. If any features characterize "Layer 3 switches" as opposed to general-purpose routers, it tends to be that they are optimized, in larger switches, for high-density Ethernet connectivity.

In some service provider and other environments where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall, network intrusion detection and performance analysis modules that can plug into switch ports. Some of these functions may be on combined modules. In other cases, the switch is used to create a mirror image of data that can go to an external device. Since most switch port mirroring provides only one mirrored stream, network hubs can be useful for fanning out data to several read-only analyzers, such as intrusion detection systems and packet sniffers.

Layer-specific functionality
While switches may learn about topologies at many layers, and forward at one or more layers, they do tend to have common features. Other than for high-performance applications, modern commercial switches use primarily Ethernet interfaces, which can have different input and output bandwidths of 10, 100, 1000 or 10,000 megabits per second. At any layer, a modern switch may implement power over Ethernet (PoE), which avoids the need for attached devices, such as a VoIP phone or wireless access point, to have a separate power supply. Since switches can have redundant power circuits connected to uninterruptible power supplies, the connected device can continue operating even when regular office power fails.

Layer 1 hubs versus higher-layer switches


A network hub, or repeater, is a simple network device. Hubs do not manage any of the traffic that comes through them. Any packet entering a port is broadcast out, or "repeated", on every other port except the port of entry. Since every packet is repeated on every other port, packet collisions affect the entire network, limiting its capacity. There are specialized applications where a hub can be useful, such as copying traffic to multiple network sensors; high-end switches have a feature called port mirroring that does the same thing. By the early 2000s, there was little price difference between a hub and a low-end switch.

Layer 2
A network bridge, operating at the data link layer, may interconnect a small number of devices in a home or office. This is a trivial case of bridging, in which the bridge learns the MAC address of each connected device. Single bridges can also provide extremely high performance in specialized applications such as storage area networks.

Classic bridges may also interconnect using a spanning tree protocol that disables links so that the resulting local area network is a tree without loops. In contrast to routers, spanning tree bridges must have topologies with only one active path between two points. The older IEEE 802.1D spanning tree protocol could be quite slow, with forwarding stopping for 30 seconds while the spanning tree reconverged. A Rapid Spanning Tree Protocol was introduced as IEEE 802.1w, but the newest edition of IEEE 802.1D adopts the 802.1w extensions as the base standard. The IETF is specifying the TRILL protocol, which is the application of link-state routing technology to the layer-2 bridging problem. Devices which implement TRILL, called RBridges, combine the best features of both routers and bridges.

While "layer 2 switch" remains more of a marketing term than a technical term, the products that were introduced as "switches" tended to use micro-segmentation and full duplex to prevent collisions among devices connected to Ethernet. By using an internal forwarding plane much faster than any interface, they give the impression of simultaneous paths among multiple devices.

Once a bridge learns the topology through a spanning tree protocol, it forwards data link layer frames using a layer 2 forwarding method. There are four forwarding methods a bridge can use; the second through fourth were performance-increasing methods when used on "switch" products with the same input and output port bandwidths:
1. Store and forward: the switch buffers and verifies each frame before forwarding it.
2. Cut through: the switch reads only up to the frame's hardware address before starting to forward it. Cut-through switches have to fall back to store and forward if the outgoing port is busy when the frame arrives. There is no error checking with this method.
3. Fragment free: a method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where addressing information is stored. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frames that are in error because of a collision will not be forwarded. Error checking of the actual data in the frame is left to the end device.
4. Adaptive switching: a method of automatically selecting between the other three modes.

While there are specialized applications, such as storage area networks, where the input and output interfaces are the same bandwidth, this is rarely the case in general LAN applications. In LANs, a switch used for end-user access typically concentrates lower bandwidth (e.g., 10/100 Mbit/s) into a higher bandwidth (at least 1 Gbit/s). Alternatively, a switch that provides access to server ports usually connects to them at a much higher bandwidth than is used by end-user devices.
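The trade-off among the forwarding methods above is how long the switch must wait before it can begin transmitting. A rough Python sketch for a maximum-size frame on a 100 Mbit/s port, ignoring propagation and processing delays:

```python
def forwarding_delay(bytes_read: int, rate_bps: float) -> float:
    """Seconds the switch waits before it can start transmitting."""
    return bytes_read * 8 / rate_bps

FRAME = 1518   # maximum-size untagged Ethernet frame in bytes
RATE = 100e6   # 100 Mbit/s port

delays = {
    "store-and-forward": forwarding_delay(FRAME, RATE),  # whole frame buffered
    "fragment-free":     forwarding_delay(64, RATE),     # first 64 bytes checked
    "cut-through":       forwarding_delay(6, RATE),      # destination MAC only
}
for method, d in delays.items():
    print(f"{method}: {d * 1e6:.2f} microseconds")
```

The ordering, not the exact numbers, is the point: cut-through starts soonest but forwards collision fragments, while store-and-forward waits for the whole frame and can verify it.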

Layer 3
Within the confines of the Ethernet physical layer, a layer 3 switch can perform some or all of the functions normally performed by a router. The most common layer-3 capability is awareness of IP multicast through IGMP snooping. With this awareness, a layer-3 switch can increase efficiency by delivering the traffic of a multicast group only to ports where the attached device has signaled that it wants to listen to that group.

Layer 4
While the exact meaning of the term Layer-4 switch is vendor-dependent, it almost always starts with a capability for network address translation, but then adds some type of load distribution based on TCP sessions.


The device may include a stateful firewall or a VPN concentrator, or be an IPsec security gateway.
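The TCP-session-based load distribution mentioned above is commonly implemented by hashing the connection 4-tuple so that every segment of one session reaches the same target. A minimal Python sketch; the addresses and server pool are made up for the example, and real devices use hardware hashing rather than SHA-256:

```python
import hashlib

def pick_server(src_ip: str, src_port: int, dst_ip: str, dst_port: int, servers):
    """Hash the TCP 4-tuple so every segment of one session
    lands on the same back-end server."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return servers[digest % len(servers)]

pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# The same session always maps to the same server:
a = pick_server("192.168.1.5", 49152, "203.0.113.9", 80, pool)
b = pick_server("192.168.1.5", 49152, "203.0.113.9", 80, pool)
assert a == b
```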

Layer 7
Layer 7 switches may distribute loads based on URL or by some installation-specific technique to recognize application-level transactions. A Layer-7 switch may include a web cache and participate in a content delivery network.

Rack-mounted 24-port 3Com switch

TYPES OF SWITCHES
Form factor
- Desktop: not mounted in an enclosure, typically intended for use in a home or office environment outside of a wiring closet
- Rack-mounted
- Chassis: with swappable "switch module" cards, e.g. Alcatel's OmniSwitch 9000; Cisco Catalyst 4500 and 6500; 3Com 7700, 7900E and 8800
- DIN-rail mounted: normally seen in industrial environments or panels

Configuration options
- Unmanaged switches: these have no configuration interface or options. They are plug and play. They are typically the least expensive switches, found in home, SOHO or small business settings. They can be desktop or rack-mounted.
- Managed switches: these have one or more methods to modify the operation of the switch. Common management methods include a command-line interface (CLI) accessed via serial console, telnet or Secure Shell; an embedded Simple Network Management Protocol (SNMP) agent allowing management from a remote console or management station; or a web interface for management from a web browser. Examples of configuration changes one can make from a managed switch include enabling features such as Spanning Tree Protocol, setting port bandwidth, and creating or modifying virtual LANs (VLANs). Two sub-classes of managed switches are marketed today:
  o Smart (or intelligent) switches: managed switches with a limited set of management features. Likewise, "web-managed" switches fall in a market niche between unmanaged and managed. For a price much lower than a fully managed switch, they provide a web interface (and usually no CLI access) and allow configuration of basic settings such as VLANs, port bandwidth and duplex.
  o Enterprise managed (or fully managed) switches: these have a full set of management features, including CLI, SNMP agent and web interface. They may have additional features to manipulate configurations, such as the ability to display, modify, back up and restore configurations. Compared with smart switches, enterprise switches have more features that can be customized or optimized, and are generally more expensive. Enterprise switches are typically found in networks with a larger number of switches and connections, where centralized management is a significant saving in administrative time and effort. A stackable switch is a version of enterprise-managed switch.

Traffic monitoring on a switched network


Unless port mirroring or other methods such as RMON or SMON are implemented in a switch, it is difficult to monitor traffic that is bridged using a switch, because only the sending and receiving ports can see the traffic. These monitoring features are rarely present on consumer-grade switches. Two popular methods that are specifically designed to allow a network analyst to monitor traffic are:
- Port mirroring: the switch sends a copy of network packets to a monitoring network connection.
- SMON ("Switch Monitoring"): described by RFC 2613, a protocol for controlling facilities such as port mirroring.
Another method is to connect a Layer-1 hub between the monitored device and its switch port. This induces a minor delay, but provides multiple interfaces that can be used to monitor the individual switch port.

Typical switch management features

Linksys 48-port switch


HP ProCurve rack-mounted switches in a standard 19-inch telco rack, with network cables

- Turning a particular port range on or off
- Link bandwidth and duplex settings
- Priority settings for ports
- IP management by IP clustering
- MAC filtering and other types of "port security" features which prevent MAC flooding
- Use of Spanning Tree Protocol
- SNMP monitoring of device and link health
- Port mirroring (also known as port monitoring, spanning port, SPAN port, roving analysis port or link mode port)
- Link aggregation (also known as bonding, trunking or teaming)
- VLAN settings
- 802.1X network access control
- IGMP snooping

Link aggregation allows the use of multiple ports for the same connection, achieving higher data transfer rates. Creating VLANs can serve security and performance goals by reducing the size of the broadcast domain.


ROUTER
A router is a hardware device designed to receive incoming packets, analyze them and direct them to the appropriate location: moving the packets to another network, converting them to be moved across a different network interface, dropping them, or performing any number of other actions. A typical example of a home router is the Linksys BEFSR11. A router has many more capabilities than network devices such as a hub or a switch, which are only able to perform basic network functions. For example, a hub is often used to transfer data between computers or network devices, but it does not analyze or do anything with the data it is transferring. Routers, however, can analyze the data being sent over a network, change how it is packaged, and send it to another network or over a different network. For example, routers are commonly used in home networks to share a single Internet connection with multiple computers.

A home network may contain two examples of a router: a wired router and a wireless router. In either case, the router is what allows all the computers and other network devices to access the Internet. Below are some additional examples of different types of routers used in larger networks.

Brouter: short for bridge router, a "brouter" is a networking device that serves as both a bridge and a router.
Core router: a router in a computer network that routes data within a network but not between networks.
Edge router: a router in a computer network that routes data between one or more other networks.
Virtual router: a backup router used in a VRRP setup.

BRIDGE
Bridging is a forwarding technique used in packet-switched computer networks. Unlike routing, bridging makes no assumptions about where in a network a particular address is located. Instead, it depends on flooding and examination of source addresses in received packet headers to locate unknown devices. Once a device has been located, its location is recorded in a table of MAC addresses so as to preclude the need for further broadcasting. Because of its dependence on flooding, the utility of bridging is limited, and it is thus used only in local area networks. Bridging generally refers to transparent bridging or learning-bridge operation, which predominates in Ethernet. Another form of bridging, source-route bridging, was developed for token ring networks.

A network bridge connects multiple network segments at the data link layer (Layer 2) of the OSI model. In Ethernet networks, the term bridge formally means a device that behaves according to the IEEE 802.1D standard. A bridge and a switch are very much alike, a switch being a bridge with numerous ports; switch or Layer 2 switch is often used interchangeably with bridge. Bridges are similar to repeaters or network hubs, devices that connect network segments at the physical layer (Layer 1) of the OSI model; however, with bridging, traffic from one network is managed rather than simply rebroadcast to adjacent network segments. Bridges are more complex than hubs or repeaters: they can analyze incoming data packets to determine whether the bridge is able to send the given packet to another segment of the network.

Transparent bridging operation
A bridge uses a forwarding database to send frames across network segments. The forwarding database is initially empty, and entries are built as the bridge receives frames. If an address entry is not found in the forwarding database, the frame is flooded to all other ports of the bridge, forwarding it to all segments except the one from which it was received.
By means of these flooded frames, the destination host will eventually respond, and a forwarding database entry will be created for it. As an example, consider three hosts A, B and C and a bridge with three ports. A is connected to bridge port 1, B is connected to bridge port 2, and C is connected to bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the source address of the frame and creates an entry for A (address and port number) in its forwarding table. The bridge examines the destination address of the frame, does not find it in its forwarding table, and so floods the frame to all other ports: 2 and 3. The frame is received by hosts B and C. Host C examines the destination address and ignores the frame. Host B recognizes


a destination address match and generates a response to A. On the return path, the bridge adds an entry for B (address and port number) to its forwarding table. The bridge already has A's address in its forwarding table, so it forwards the response only to port 1. Host C, and any other hosts on port 3, are not burdened with the response. Two-way communication is now possible between A and B without any further flooding. Note that both source and destination addresses are used in this algorithm: source addresses are recorded as entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. The technology was originally developed by Digital Equipment Corporation (DEC) in the 1980s.
Filtering database
To decide between the two segments, a bridge reads a frame's destination MAC address and chooses to either forward or filter. If the bridge determines that the destination node is on another segment of the network, it forwards (retransmits) the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters (discards) the frame. As nodes transmit data through the bridge, the bridge builds a filtering database (also known as a forwarding table) of known MAC addresses and their locations on the network, and uses this database to determine whether a frame should be forwarded or filtered.
Destination lookup failure
A Layer 2 (L2) Ethernet switch examines the destination MAC address of each Ethernet frame in order to switch it to the appropriate port or ports. If the MAC address exists in the switch's L2 table, the switch transmits the frame only to the port tied to that entry. If the MAC address does not exist in the L2 table, the frame is considered a destination lookup failure (DLF) and is transmitted to all forwarding ports of that VLAN.
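The learning, filtering and flooding behaviour walked through above can be sketched in a few lines. This is a simplified model of a transparent bridge; the port numbers and MAC strings are chosen purely for illustration, not taken from any real device.

```python
# Minimal sketch of a transparent (learning) bridge. Frames are reduced to
# (source MAC, destination MAC, arrival port); values are illustrative.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports                # e.g. [1, 2, 3]
        self.forwarding_table = {}        # MAC address -> port number

    def receive(self, frame_src, frame_dst, in_port):
        # Learn: record the source address against the arrival port.
        self.forwarding_table[frame_src] = in_port
        out_port = self.forwarding_table.get(frame_dst)
        # Filter: destination is on the segment the frame came from.
        if out_port == in_port:
            return []
        # Forward: destination port is known from an earlier frame.
        if out_port is not None:
            return [out_port]
        # Destination lookup failure (DLF): flood to all other ports.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("MAC-A", "MAC-B", in_port=1))  # DLF: floods to [2, 3]
print(bridge.receive("MAC-B", "MAC-A", in_port=2))  # A already learned: [1]
print(bridge.receive("MAC-A", "MAC-B", in_port=1))  # B now learned: [2]
```

Note how the second frame (B's reply to A) is delivered without flooding, exactly as in the three-host example: A's port was learned from the first frame's source address.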
(Broadcasts, such as ARP request messages, are likewise transmitted to all forwarding ports of the VLAN.)
Advantages of network bridges
Simple bridges are inexpensive
Isolate collision domains through micro-segmentation
Provide access control and network management capabilities
Bandwidth scales as the network grows
Disadvantages of network bridges
Do not limit the scope of broadcasts
Do not scale to extremely large networks
Buffering and processing introduce delays
A complex network topology can pose a problem for transparent bridges. For example, multiple paths between transparent bridges and LANs can result in bridge loops. The Spanning Tree Protocol helps to reduce problems with complex topologies.
Bridging versus routing
Bridging and routing are both ways of directing data, but they work through different methods. Bridging takes place at OSI Layer 2 (the data link layer), while routing takes place at OSI Layer 3 (the network layer). This difference means that


a bridge directs frames according to hardware-assigned MAC addresses, while a router makes its decisions according to logically assigned IP addresses. As a result, bridges are not concerned with, and are unable to distinguish between, networks, while routers can. When designing a network, one can choose to put multiple segments into one bridged network or to divide it into different networks interconnected by routers. If a host is physically moved from one network area to another in a routed network, it has to get a new IP address; if the same host is moved within a bridged network, nothing has to be reconfigured.
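The difference in lookup style can be illustrated side by side. The MAC table, IP prefixes, and interface names below are assumed values for the sketch: a bridge does an exact match on a flat table of hardware addresses, while a router performs a longest-prefix match against network addresses.

```python
# Illustrative contrast (simplified, assumed tables): bridge lookup is an
# exact match on a flat MAC table; route lookup is a longest-prefix match.
import ipaddress

mac_table = {"aa:bb:cc:dd:ee:01": 1, "aa:bb:cc:dd:ee:02": 2}   # MAC -> port
route_table = [
    (ipaddress.ip_network("10.0.1.0/24"), "eth0"),             # specific net
    (ipaddress.ip_network("0.0.0.0/0"), "eth1"),               # default route
]

def bridge_lookup(dst_mac):
    # Exact match; None means the frame would be flooded (DLF).
    return mac_table.get(dst_mac)

def route_lookup(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    # Longest-prefix match: the most specific matching network wins.
    matches = [(net, iface) for net, iface in route_table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(bridge_lookup("aa:bb:cc:dd:ee:02"))  # 2
print(route_lookup("10.0.1.7"))            # eth0 (beats the default route)
print(route_lookup("8.8.8.8"))             # eth1 (only the default matches)
```

The flat MAC table is also why a bridged host keeps working after a move, while a routed host needs a new IP address: only the routing lookup encodes topology in the address itself.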

PROXY SERVER
In computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server. The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server at all. In this case, it 'caches' responses from the remote server and returns subsequent requests for the same content directly. The proxy concept was invented in the early days of distributed systems as a way to simplify and control their complexity. Today, most proxies are web proxies, allowing access to content on the World Wide Web. A proxy server has a large variety of potential purposes, including:
To keep machines behind it anonymous, mainly for security
To speed up access to resources (using caching); web proxies are commonly used to cache web pages from a web server
To apply access policy to network services or content, e.g. to block undesired sites
To log / audit usage, i.e. to provide company employee Internet usage reporting
To bypass security / parental controls
To scan transmitted content for malware before delivery
To scan outbound content, e.g. for data leak protection
To circumvent regional restrictions
To allow a web site to make requests to externally hosted resources (e.g. images, music files) when cross-domain restrictions prohibit the site from linking directly to the outside domains
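The filter-then-cache-then-fetch flow described above can be sketched as a single decision function. The blocklist, the in-memory cache, and the fetch_from_origin placeholder are all assumptions for illustration; a real proxy would perform an actual HTTP exchange with the origin server.

```python
# Sketch of a caching, filtering proxy's core decision loop.
# fetch_from_origin() is a stand-in for a real upstream HTTP request.

BLOCKED_HOSTS = {"blocked.example.com"}   # illustrative filtering rule
cache = {}                                # URL -> cached response body

def fetch_from_origin(url):
    # Placeholder: a real proxy would open a connection to the origin
    # server and request the resource on the client's behalf.
    return f"<response for {url}>"

def proxy_request(client_ip, host, url):
    # 1. Apply the filtering rules (here: a simple host blocklist).
    if host in BLOCKED_HOSTS:
        return "403 Forbidden"
    # 2. Serve from cache without contacting the specified server.
    if url in cache:
        return cache[url]
    # 3. Otherwise fetch on the client's behalf and cache the response.
    response = fetch_from_origin(url)
    cache[url] = response
    return response

print(proxy_request("203.0.113.5", "blocked.example.com",
                    "http://blocked.example.com/"))   # 403 Forbidden
print(proxy_request("203.0.113.5", "example.com",
                    "http://example.com/page"))       # fetched, then cached
```

The second request for the same URL would be answered from the cache, which is the "speed up access to resources" purpose in the list above.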

A proxy server that passes requests and responses unmodified is usually called a gateway or sometimes a tunneling proxy. A proxy server can be placed on the user's local computer or at various points between the user and the destination servers on the Internet.

A reverse proxy is (usually) an Internet-facing proxy used as a front end to control and protect access to a server on a private network, commonly also performing tasks such as load balancing, authentication, decryption, or caching.

TYPES OF PROXY
Forward proxies

A forward proxy takes requests from an internal network and forwards them to the Internet. Forward proxies are proxies in which the client names the target server to connect to. They are able to retrieve from a wide range of sources (in most cases, anywhere on the Internet). The terms "forward proxy" and "forwarding proxy" are a general description of behaviour (forwarding traffic) and thus ambiguous. Except for the reverse proxy, the types of proxies described in this article are more specialized sub-types of the general forward proxy concept.
Open proxies

An open proxy forwards requests from and to anywhere on the Internet. It is a forwarding proxy server that is accessible to any Internet user. Gordon Lyon estimates there are "hundreds of thousands" of open proxies on the Internet. An anonymous open proxy allows users to conceal their IP address while browsing the Web or using other Internet services.


Reverse proxies

A reverse proxy takes requests from the Internet and forwards them to servers in an internal network. Those making requests connect to the proxy and may not be aware of the internal network. A reverse proxy is a proxy server that appears to clients to be an ordinary server. Requests are forwarded to one or more origin servers which handle them, and the response is returned as if it came directly from the proxy server. Reverse proxies are installed in the vicinity of one or more web servers; all traffic coming from the Internet with a destination of one of those web servers goes through the proxy. The use of "reverse" originates in its counterpart "forward proxy": the reverse proxy sits closer to the web server and serves only a restricted set of websites. There are several reasons for installing reverse proxy servers:
Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself but by a reverse proxy equipped with SSL acceleration hardware (see Secure Sockets Layer). Furthermore, a host can provide a single "SSL proxy" for an arbitrary number of hosts, removing the need for a separate SSL server certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections. This problem can partly be overcome by using the subjectAltName feature of X.509 certificates.
Load balancing: the reverse proxy can distribute the load across several web servers, each serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translating externally known URLs to the internal locations).
Serve/cache static content: a reverse proxy can offload the web servers by caching static content such as pictures and other static graphical content.
Compression: the proxy server can optimize and compress the content to speed up the load time.
Spoon feeding: reduces resource usage caused by slow clients by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
Security: the proxy server is an additional layer of defense and can protect against some OS- and web-server-specific attacks. However, it does not provide protection against attacks on the web application or service itself, which is generally considered the larger threat.


Extranet publishing: a reverse proxy server facing the Internet can be used to communicate with a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewall. If used in this way, security measures should be considered to protect the rest of the infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.
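The load-balancing role from the list above can be sketched with simple round-robin selection. The internal backend names are hypothetical, and a real reverse proxy would relay the full HTTP exchange rather than just rewriting the URL.

```python
# Round-robin backend selection, as a reverse proxy doing load balancing
# might perform it. Backend names are assumptions for illustration.
import itertools

# Internal origin servers hidden behind the reverse proxy.
backends = itertools.cycle(["app1.internal:8080", "app2.internal:8080"])

def reverse_proxy(path):
    # Pick the next backend in rotation and rewrite the externally
    # known URL to the internal location that will handle the request.
    backend = next(backends)
    return f"http://{backend}{path}"

print(reverse_proxy("/index.html"))  # http://app1.internal:8080/index.html
print(reverse_proxy("/index.html"))  # http://app2.internal:8080/index.html
```

From the client's point of view both responses appear to come from the same ordinary server; the rotation between origin servers is invisible, which is exactly the property the reverse proxy section describes.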

M. KHALID SHEIKH

ROLL NO. AH-525009

