
Asynchronous Transfer Mode: An Overview

Scott A. Valcourt ATM Consortium Manager sav@unh.edu June 24, 1997

Introduction
Asynchronous Transfer Mode, or ATM, is a networking technology that has rapidly taken a significant place in the networking industry. ATM has been accepted universally as the transfer mode of choice for Broadband Integrated Services Digital Networks (BISDN). It is designed to provide fast packet (cell) switching over various types and speeds of media, at rates from 64 kbps to 2 Gbps and beyond, and it can handle any kind of information (voice, data, image, text and video) in an integrated manner. ATM provides good bandwidth flexibility and can be used efficiently from desktop computers to local area and wide area networks. ATM is a connection-oriented packet switching technique in which all packets are of fixed length (53 bytes: 5 bytes of header and 48 bytes of information). This connection-oriented technology allows multiple types of service traffic to be passed over the same media, making ATM the grand network technology covering all service types available today. This paper provides a brief summary of ATM, its beginnings, concepts and operation, and serves as a tool to bring the reader up to speed on what ATM is. Several opinions and future musings are drafted. Feel free to disagree. Finally, the paper will examine a glimpse into the future of ATM in the networking industry.

History
The beginnings of ATM were developed by researchers at AT&T Bell Laboratories and France Telecom's research center in the early to mid-1980s. These researchers were interested in packetizing voice information so that one switching fabric could be used for both data and voice. Researchers believed that a device capable of packetizing and switching voice information would have to move at least one million packets per second with millisecond-range queuing delays. Because no existing packet-switching technology was capable of these speeds, the researchers were forced to consider new paradigms. Until very large scale integration (VLSI) technology became prevalent, the content of a message was transparent to the intelligence of a switch. Switch intelligence was applied only to call setup and teardown signaling. Early ATM researchers realized that incorporating VLSI technology into a switching fabric enabled switches to examine and process the contents of a packet header and provide memory for buffering. In particular, the switch could quickly direct information through the network after analyzing simple addresses contained in packet headers. The researchers concluded that networks of these switches could conceivably achieve very high performance with minimal delay. Another concept that became important during this period was a new switching technique called fast packet switching. Fast packet switches differ from X.25-like packet-switching systems in that they minimize storing, processing, and forwarding activity at each link. For example, error control and flow control are performed on an end-to-end basis rather than on a link-by-link basis. By reducing the activities at each link, additional throughput is possible. As initially designed, fast packet switching systems handled variable-length packets.


The final technical contribution to ATM's development was the modification of fast packet switching to handle small, fixed-length packets called cells. Short cells reduce queuing delay and increase the ability of system elements to operate in parallel. Fixed-size cells limit delay variance and ease buffer allocation. Limited delay variance is particularly important for supporting real-time traffic, such as voice or video conversations. Although it is based on established technologies, ATM is only now becoming a practical consideration for network architects. Large end users with low-delay, high-bandwidth applications are seeking new network technologies that can quickly move a variety of source material (for example, data, video, voice and image) between remote locations. ATM has many of the characteristics these users desire. For wide area networking, ATM is currently being standardized for use in Broadband Integrated Services Digital Networks (BISDNs) by the Consultative Committee for International Telegraph and Telephone (CCITT) and the American National Standards Institute (ANSI). Officially, the ATM layer of the BISDN model is defined by CCITT I.361, one of the many BISDN specifications. Although ATM was designed with wide area networks (WANs) in mind, many experts now believe that another widespread use of ATM will be in campus network applications. Local area network (LAN) managers anticipate that ATM's high throughput and improved scalability will address problems created by network-intensive distributed applications. To promote ATM interoperability based on standards, the ATM Forum was founded in the fall of 1991. This organization, which has hundreds of members, helps ensure ATM implementation compatibility through various activities, including publication of the ATM User-Network Interface Specification. Vendors can use these specifications as guides for product implementation.

Motivation for ATM


In order to understand what ATM is all about, a brief introduction to STM is in order. ATM is the complement of STM, which stands for "Synchronous Transfer Mode". STM is used by telecommunication backbone networks to transfer packetized voice and data across long distances. It is a circuit-switched networking mechanism, in which a connection is established between two end points before data transfer commences and torn down when the two end points are done. The end points thus allocate and reserve the connection bandwidth for the entire duration, even when they may not actually be transmitting data. Data is transported across an STM network by dividing the bandwidth of the STM links (familiar to most people as T1 and T3 links) into a fundamental unit of transmission called time-slots or buckets. These buckets are organized into a train containing a fixed number of buckets, labeled from 1 to N. The train repeats periodically every T time period, with the buckets in the train always in the same position with the same label. There can be up to M different trains, labeled from 1 to M, all repeating with the time period T and all arriving within the time period T. The parameters N, T, and M are determined by standards committees, and are different for Europe and America. For the trivia enthusiasts, the time period T is a historic legacy of the classic Nyquist sampling criterion for information recovery. It is derived from sampling the traditional 4-kHz bandwidth of analog voice signals over phone lines at twice that frequency, or 8 kHz, which translates to a time period of 125 microseconds. This is the most fundamental unit in almost all of telecommunications today, and is likely to remain with us for a long time.


On a given STM link, a connection between two end points is assigned a fixed bucket number between 1 and N, on a fixed train between 1 and M, and data from that connection is always carried in that bucket number on the assigned train. If there are intermediate nodes, it is possible that a different bucket number on a different train is assigned on each STM link in the route for that connection. However, there is always one known bucket reserved a priori on each link throughout the route. In other words, once a time-slot is assigned to a connection, it generally remains allocated for that connection's sole use throughout the lifetime of that connection. To better understand this, imagine the same train arriving at a station every T time period. If a connection has any data to transmit, it drops its data into its assigned bucket (time-slot) and the train departs. If the connection does not have any data to transmit, that bucket in that train goes empty; no passengers waiting in line can get on that empty bucket. If there are a large number of trains, and a large number of the total buckets are going empty most of the time (although during rush hours the trains may get quite full), this is a significant wastage of bandwidth, and it limits the number of connections that can be supported simultaneously. Furthermore, the number of connections can never exceed the total number of buckets on all the different trains (N*M). And this is the raison d'etre for ATM.
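As a concrete illustration of the time-slot arithmetic, the sketch below uses the familiar North American T1 framing (24 slots of 8 bits repeating every 125 microseconds, plus one framing bit); these particular figures are an assumed example rather than part of this paper's standards discussion.

```python
# Illustrative STM arithmetic using T1 framing as an assumed example.
FRAME_PERIOD_S = 125e-6          # T: one "train" every 125 microseconds (8000 frames/s)
SLOTS_PER_FRAME = 24             # N: buckets per train on a T1
BITS_PER_SLOT = 8                # one voice sample per bucket
FRAMING_BITS = 1                 # T1 adds one framing bit per frame

frames_per_second = 1 / FRAME_PERIOD_S                          # 8000
per_connection_bps = BITS_PER_SLOT * frames_per_second          # one time-slot's rate
link_bps = (SLOTS_PER_FRAME * BITS_PER_SLOT + FRAMING_BITS) * frames_per_second

print(f"{per_connection_bps:.0f} bps per connection")   # 64000 bps (64 kbps)
print(f"{link_bps:.0f} bps total link rate")             # 1544000 bps (1.544 Mbps T1)
```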

ATM Technology Overview


ATM is a cell-switching and multiplexing technology designed to combine the benefits of circuit switching (constant transmission delay, guaranteed capacity) with those of packet switching (flexibility, efficiency for intermittent traffic). Like X.25 and Frame Relay, ATM defines the interface between the user equipment and the network (referred to as the User-Network Interface, or UNI). This definition supports the use of ATM switches (and ATM switching techniques) within both public and private networks. Because it is an asynchronous mechanism, ATM differs from synchronous transfer mode (STM) methods, where time-division multiplexing techniques are employed to preassign users to time slots. ATM time slots are made available on demand, with labels identifying the source of the transmission contained in each ATM cell. TDM is inefficient relative to ATM because, if a station has nothing to transmit when its time slot comes up, that time slot is wasted. The converse situation, where one station has a great deal of information to transmit, is also less efficient: that station can only transmit when its turn comes up, even though all the other time slots may be empty. With ATM, a station can send labeled cells whenever necessary. Figure 1 contrasts time-division multiplexing (TDM) and ATM multiplexing techniques.

Statistical Multiplexing
Fast packet switching attempts to solve the unused-bucket problem of STM by statistically multiplexing several connections on the same link based on their traffic characteristics. In other words, if a large number of connections are very bursty (i.e., their peak/average ratio is 10:1 or higher), then all of them may be assigned to the same link in the hope that statistically they will not all burst at the same time. And if some of them do burst simultaneously, there is sufficient elasticity that the burst can be buffered up and put in subsequently available free buckets. This is called statistical multiplexing, and it allows the sum of the peak bandwidth requirements of all connections on a link to exceed the aggregate available bandwidth of the link under certain conditions of discipline. This was impossible on an STM network, and it is the main distinction of an ATM network.
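The gain from statistical multiplexing can be illustrated with a small simulation. The sketch below is not from the paper; the burstiness figures (ten connections, each with a 10:1 peak/average ratio) and the link size are assumptions chosen only to show that the summed peak rates can exceed the link rate while the offered load usually fits.

```python
import random

# Toy statistical-multiplexing sketch (assumed numbers, for illustration only).
LINK_CELLS_PER_TICK = 40          # link capacity per time tick
PEAK_CELLS_PER_TICK = 10          # each source's peak rate
BURST_PROBABILITY = 0.1           # 10:1 peak/average ratio -> bursting 10% of the time
NUM_SOURCES = 10                  # summed peak = 100 cells/tick, far above the link's 40

random.seed(1)
overflow_ticks = 0
for tick in range(10_000):
    offered = sum(PEAK_CELLS_PER_TICK if random.random() < BURST_PROBABILITY else 0
                  for _ in range(NUM_SOURCES))
    if offered > LINK_CELLS_PER_TICK:   # burst exceeds capacity; cells must be buffered
        overflow_ticks += 1

print(f"Ticks needing buffering: {overflow_ticks / 10_000:.2%}")  # a small fraction
```

In this toy model the average offered load is only 10 cells per tick against a 40-cell link, so buffering is needed only in the rare ticks when five or more sources burst together.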


Figure 1. TDM versus ATM multiplexing. With TDM, stations are preassigned numbered time slots (1, 2, 3, 4, ...); when a station has nothing to send, its slot is transmitted empty. With ATM, labeled cells are placed on the link only as stations have data to send, so no capacity is consumed by stations with nothing to send.

Another critical ATM design characteristic is its star topology. The ATM switch acts as a hub in the ATM network, with all devices attached directly. This provides all the traditional benefits of star-topology networks, including easier troubleshooting and support for network changes and additions. Furthermore, ATM's switching fabric provides additive bandwidth: as long as the switch can handle the aggregate cell transfer rate, additional connections to the switch can be made, and the total bandwidth of the system increases accordingly. If a switch can pass cells among all its interfaces at the full rate of all interfaces, it is described as non-blocking. For example, an ATM switch with 16 ports, each at 155 megabits per second (Mbps), would require about 2.5 gigabits per second (Gbps) of aggregate throughput to be non-blocking. Finally, ATM is flexible, in that it can carry various source material and run on various physical-layer implementations. The ATM model defines an engine that moves small, fixed-size cells through a network, but leaves issues of application and physical implementation open.
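The non-blocking arithmetic above is simple enough to restate directly; the snippet below just reproduces the 16-port example, with the port count and rate taken from the text.

```python
ports, port_rate_mbps = 16, 155
aggregate_gbps = ports * port_rate_mbps / 1000   # capacity the fabric must switch
print(f"{aggregate_gbps:.2f} Gbps needed to be non-blocking")  # 2.48 Gbps, roughly 2.5
```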

ATM and the BISDN Model


BISDN is a standard for wide-area services that permits rapid movement of various types of source material over public networks. Where narrowband ISDN supports data rates up to 2 Mbps, BISDN supports rates from 34 Mbps to multiple Gbps. At these high bit rates, ATM excels as a switching technique. BISDN is expected to support such applications as image processing, video, and distributed network operations between high-performance workstations. Analyzing ATM's place in the BISDN model fosters greater understanding of ATM's technical characteristics. The BISDN reference model is shown in Figure 2.


Figure 2. The BISDN protocol reference model: the physical layer, the ATM layer, the ATM adaptation layer, and the higher layers of the control plane and user plane, together with the management plane (layer management and plane management).

ATM and associated technologies perform functions corresponding roughly to Layer 1 and parts of Layer 2 (such as error control and data framing) of the Open System Interconnection (OSI) reference model. ATM can utilize any physical medium capable of carrying ATM cells. BISDN was originally standardized to run on the Synchronous Optical Network (SONET)/Synchronous Digital Hierarchy (SDH) and to offer services supporting data rates from 155 Mbps to over 2 Gbps. Over time, as ATM began to emerge from the large collection of BISDN specifications as an especially useful technology, the range of physical layers was extended to include existing transmission facilities such as Digital Signal Level 3 (DS3)/E3. This type of extension allows ATM implementation on current networks and opens the door for use outside the BISDN sphere. The ATM physical layer is divided into two parts: the physical medium sublayer and the transmission convergence sublayer. The physical medium sublayer is responsible for sending and receiving a continuous flow of bits with associated timing information to synchronize transmission and reception. Because it includes only physical medium-dependent functions, its specification depends upon the physical medium used. Often, the physical medium sublayer is simply an existing standard such as SONET/SDH, DS3/E3 or the FDDI physical layer. The transmission convergence sublayer is responsible for the following:

Cell delineation. Maintains ATM cell boundaries.

Header error control sequence generation and verification. Generates and checks a header error control code to ensure valid data.

Cell rate decoupling. Inserts or suppresses idle (unassigned) ATM cells to adapt the rate of valid ATM cells to the payload capacity of the transmission system.

Transmission frame adaptation. Packages ATM cells into frames acceptable to the particular physical-layer implementation.

Transmission frame generation and recovery. Generates and maintains the appropriate physical-layer frame structure.


ATM allows several physical-layer implementations, including SONET/SDH, DS-3/E3, 25 Mbps TAXI (copper unshielded twisted pair), 155 Mbps copper unshielded twisted pair, 100-Mbps local fiber (FDDI physical layer), and 155-Mbps local fiber (Fiber Channel physical layer). Above the physical layer is the ATM layer, which is described in detail later in this section. Immediately above the ATM layer is the ATM adaptation layer, which translates between the larger service data units (SDUs) of upper-layer processes (for example, video streams and data packets) and ATM cells. Several ATM adaptation layers are currently specified, and they are analyzed later in this document. Finally, above the ATM adaptation layer are the higher-layer protocols representing traditional transports and applications. The protocol reference model also describes three separate reference planes:

User plane. Transfers user application information and provides appropriate controls (for example, error and flow control).

Control plane. Provides signaling for the call connection control functions necessary for providing switched services.

Management plane. Includes layer management, which performs layer-specific management functions, and plane management, which performs management functions related to the entire system.

There is no defined (or standardized) relationship between OSI layers and the BISDN ATM protocol model layers. Nevertheless, the following relations can be drawn: the physical layer of ATM is roughly equivalent to Layer 1 of the OSI model and performs bit-level functions; the ATM layer can be considered equivalent to the lower edge of Layer 2 of the OSI model; and the ATM adaptation layer performs the adaptation of OSI higher-layer protocols.

Physical Layer Functions

The physical layer is divided into two sublayers: the Physical Medium (PM) sublayer and the Transmission Convergence (TC) sublayer. The PM sublayer contains only the physical-medium-dependent functions. It provides bit transmission capability, including bit alignment, and performs line coding and, if necessary, electrical/optical conversion. Optical fiber is the usual physical medium, although coaxial and twisted-pair cables are also used in some cases. The PM sublayer includes bit-timing functions, such as the generation and reception of waveforms suitable for the medium and the insertion and extraction of bit-timing information. The TC sublayer performs five main functions. The lowest is generation and recovery of the transmission frame. The next function, transmission frame adaptation, takes care of all actions needed to adapt the cell flow to the payload structure of the transmission system in the sending direction, and extracts the cell flow from the transmission frame in the receiving direction. The cell delineation function enables the receiver to recover the cell boundaries; to protect the cell delineation mechanism, the information field of a cell is scrambled before transmission and descrambled on reception. The HEC sequence is generated in the transmit direction; its value is recalculated and compared with the received value, and is thus used to correct header errors. If the header errors cannot be corrected, the cell is discarded. Cell rate decoupling inserts idle cells in the transmitting direction in order to adapt the rate of the ATM cells to the payload capacity of the transmission system, and suppresses all idle cells in the receiving direction. Only assigned and unassigned cells are passed to the ATM layer.
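The header error control byte mentioned above is a CRC-8 computed over the first four header octets with generator polynomial x^8 + x^2 + x + 1, with the fixed pattern 0x55 then added to the result. The following is a minimal sketch of the transmit-side computation, written for this overview rather than taken from any particular implementation.

```python
def atm_hec(header4: bytes) -> int:
    """Compute the HEC octet for the first four ATM header octets.

    CRC-8 with generator polynomial x^8 + x^2 + x + 1 (0x07), then XORed with
    the coset pattern 0x55, as used for ATM header error control.
    """
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# Example: the header pattern 00 00 00 01 used by idle cells yields HEC 0x52.
print(hex(atm_hec(bytes([0x00, 0x00, 0x00, 0x01]))))  # -> 0x52
```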

ATM layer functions


The ATM layer is the layer above the physical layer and performs four functions, explained as follows:

Cell header generation/extraction. This function adds the appropriate ATM cell header (except for the HEC value) to the cell information field received from the AAL in the transmit direction; the VPI/VCI values are obtained by translation from the SAP identifier. In the receive direction it removes the cell header, and only the cell information field is passed to the AAL.

Cell multiplexing and demultiplexing. This function multiplexes cells from individual VPs and VCs into one resulting cell stream in the transmit direction. In the receive direction it divides the arriving cell stream into individual cell flows according to VC or VP.

VPI and VCI translation. This function is performed at ATM switching and/or cross-connect nodes. At a VP switch, the value of the VPI field of each incoming cell is translated into a new VPI value for the outgoing cell. At a VC switch, the values of both VPI and VCI are translated into new values.

Generic Flow Control (GFC). This function supports control of ATM traffic flow in a customer network. It is defined at the B-ISDN User-to-Network Interface (UNI) and is generally not used.

ATM Adaptation Layer Functions (AAL)

The AAL is divided into two sublayers: the Segmentation and Reassembly (SAR) sublayer and the Convergence Sublayer (CS).

SAR sublayer: This layer performs segmentation of the higher layer information into a size suitable for the payload of the ATM cells of a virtual connection and at the receive side, it reassembles the contents of the cells of a virtual connection into data units to be delivered to the higher layers. CS sublayer: This layer performs functions like message identification and time/clock recovery. This layer is further divided into Common part convergence sublayer (CPCS) and a Service specific convergence sublayer (SSCS) to support data transport over ATM. AAL service data units are transported from one AAL service access point (SAP) to one or more others through the ATM network. The AAL users can select a given AAL-SAP associated with the QOS required to transport the AAL-SDU. There are 5 AALs that have been defined, one for each class of service.

Cell Relay
ATM's most obvious technical characteristic is its use of small, fixed-size cells. Each ATM cell consists of 48 octets of payload and five octets of header information. The five header octets contain information identifying the path through the network, as well as congestion indicators, the payload type, and other parameters. The payload portion carries upper-layer information. An ATM cell is shown in Figure 3. The header begins with 4 bits of generic flow control (GFC) information, which for the most part goes unused today. This field can be used to provide standardized local functions, such as media access control when multiple stations share a single switch port. Values within this field are not carried end to end and are overwritten by the ATM network.


Figure 3. ATM cell format (53 bytes): a 5-byte header followed by a 48-byte payload. At the UNI the header carries the GFC, VPI, VCI, payload type, cell loss priority (CLP), and HEC fields; at the NNI there is no GFC field and the VPI field is correspondingly larger.

Following the GFC field, the ATM specification calls for 8 bits of virtual path identifier (VPI) information and 16 bits of virtual channel identifier (VCI) information. These two fields are commonly referred to together as the VPI/VCI field or value. Together, the VPI and VCI provide ATM with two connection concepts. A virtual channel is simply the classical notion of the virtual circuit, providing a logical connection between two users. A virtual path defines a bundle of virtual channels along some segment of their route through the network. Many virtual channels can use the same virtual path. Using a two-part connection identifier helps speed operations at each link. Once a virtual path is established, adding another virtual channel to that path involves very little overhead. In addition, a number of data transport functions, such as traffic management, can be performed at the virtual path level, simplifying the network architecture.

ATM virtual connections are formed by linking VPI/VCI values on different ports, as illustrated in Figure 4. When a cell with VPI/VCI 0/21 arrives at port 1 of the ATM switch, the switch looks in its table and learns that it must translate the VPI/VCI value to 0/45 and output the cell on port 2. Similarly, a cell with VPI/VCI 0/64 arriving at port 1 will be output on port 3 with VPI/VCI = 0/21. Transit in the opposite direction over the same circuits results in full-duplex operations. As this example illustrates, VPI/VCI values are unique only on a single interface, not throughout the ATM network. Following the VPI/VCI information is the 3-bit payload type field. The first bit indicates user data or control data; if the first bit indicates user data, the middle bit indicates congestion, and the last bit indicates the end of the frame. The next field is the 1-bit cell loss priority (CLP) field. This field allows the user or network to optionally indicate the explicit loss priority of the cell.
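The per-port translation just described amounts to a simple table lookup. The sketch below mirrors the Figure 4 example (the ports and the VPI/VCI values 0/21, 0/64 and 0/45 are taken from that figure); the table and function names are illustrative, not from any particular switch implementation.

```python
# Connection table keyed by (input port, VPI, VCI); values are (output port, VPI, VCI).
# The entries reproduce the Figure 4 example.
connection_table = {
    (1, 0, 21): (2, 0, 45),
    (1, 0, 64): (3, 0, 21),
}

def switch_cell(in_port: int, vpi: int, vci: int) -> tuple[int, int, int]:
    """Look up the outgoing port and rewrite the cell's VPI/VCI label."""
    out_port, new_vpi, new_vci = connection_table[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci

print(switch_cell(1, 0, 21))  # -> (2, 0, 45): cell leaves port 2 relabeled 0/45
print(switch_cell(1, 0, 64))  # -> (3, 0, 21): cell leaves port 3 relabeled 0/21
```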


A value of one means that the cell may be discarded first, if necessary; a value of zero means that the cell should not be discarded as readily. Discarding cells with both CLP values might be necessary when severe network congestion is present. Finally, the 8-bit header error control (HEC) field is used to detect multiple-bit errors and optionally correct single-bit errors in the header. It can also be used for cell delineation by searching the bit stream for a repeating pattern of valid HEC values.
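Putting the header fields together, a UNI cell header can be packed into its five octets as follows. This is a minimal sketch written for this overview, using the field widths described above (4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit payload type, 1-bit CLP, 8-bit HEC); the HEC is passed in as a placeholder value rather than computed here.

```python
def pack_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int, hec: int) -> bytes:
    """Pack the five UNI header octets: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp  # first four octets
    return word.to_bytes(4, "big") + bytes([hec])

def unpack_uni_header(hdr: bytes) -> dict:
    word = int.from_bytes(hdr[:4], "big")
    return {
        "gfc": word >> 28,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt": (word >> 1) & 0x7,
        "clp": word & 0x1,
        "hec": hdr[4],
    }

hdr = pack_uni_header(gfc=0, vpi=0, vci=21, pt=0, clp=0, hec=0x00)
print(hdr.hex())                 # '0000015000'
print(unpack_uni_header(hdr))    # round-trips to the original field values
```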

Figure 4. VPI/VCI translation through an ATM switch: a cell arriving on port 1 with VPI/VCI 0/21 is forwarded out port 2 with VPI/VCI 0/45, and a cell arriving on port 1 with VPI/VCI 0/64 is forwarded out port 3 with VPI/VCI 0/21.

ATM Adaptation Layers


ATM adaptation layers (AALs) segment upper-layer information into cells at the transmitter and reassemble the cells at the receiver. Several AAL specifications exist, including AAL1 and AAL2, which support voice, video and similar traffic, and AAL3/4 and AAL5, which support data communications. ATM AALs are used to support four service classes, shown in Figure 5. Class A will probably be used to transport the existing telephone digital hierarchy. Class B is similar to Class A except that in Class B the bit rate is variable, so it is more appropriate for compressed video, where the information rate changes with scene content. Classes C and D will be used to transport data in connection-oriented and connectionless environments, respectively. AAL3/4 and AAL5 can each carry both connection-oriented and connectionless data packets; they are essentially two different ways of doing the same thing. Work on AAL1 and AAL2 is in its early stages. With the finalization of UNI Version 4.0, these classes are all but done away with, as they have been replaced with the appropriate levels of Quality of Service (QOS) loosely outlined in the last column.

Figure 5. ATM service classes.

Class   Timing relation (source/destination)   Bit rate   Connection mode       AAL              Quality of Service
A       Required                               Constant   Connection-oriented   AAL1             CBR
B       Required                               Variable   Connection-oriented   AAL2             VBR
C       Not required                           Variable   Connection-oriented   AAL3/4 or AAL5   UBR
D       Not required                           Variable   Connectionless        AAL3/4 or AAL5   ABR


Figure: ATM users attach to a private ATM switch across the private UNI or directly to public ATM switches across the public UNI; public ATM switches within the public ATM network interconnect across the public NNI.


AAL1

AAL1 is used for the transmission of Constant Bit Rate (CBR) services, where information must be transferred between source and destination at a constant bit rate. Besides transferring data at a constant bit rate, AAL1 provides timing information between source and destination and an indication of lost or corrupted data. Interactions between the user plane and the control plane are still under investigation.

The SAR layer accepts a 47 octet block of data from the CS and then adds a one octet header to form the SAR PDU. The eight bits of the header are divided into 1 bit for the Convergence Sublayer Indication (CSI), three bits for the Sequence Number (SN) and four bits for the Sequence Number Protection (SNP). The SNP is used to determine and correct any errors in the CSI and SN fields of the SAR PDU header. Once prepared, the SAR PDU is inserted into the 48-octet payload of the ATM cell at the ATM layer.
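As a small illustration of the SAR header layout just described (one CSI bit, a 3-bit sequence number, and 4 bits of sequence number protection), the sketch below assembles the one-octet header and prepends it to a 47-octet block. The SNP computation shown (a 3-bit CRC with generator x^3 + x + 1 plus an even-parity bit) is one common reading of the protection scheme and should be treated as illustrative.

```python
def aal1_sar_header(csi: int, sn: int) -> int:
    """Build the one-octet AAL1 SAR header: CSI(1) SN(3) SNP(4).

    SNP here = 3-bit CRC (generator x^3 + x + 1) over the CSI and SN bits,
    followed by an even-parity bit; treat the exact coding as illustrative.
    """
    nibble = ((csi & 0x1) << 3) | (sn & 0x7)        # CSI + sequence number
    crc = nibble << 3                               # append three zero bits
    for shift in range(3, -1, -1):                  # long division by 0b1011
        if crc & (1 << (shift + 3)):
            crc ^= 0b1011 << shift
    seven = (nibble << 3) | (crc & 0x7)
    parity = bin(seven).count("1") & 1              # even parity over the first 7 bits
    return (seven << 1) | parity

payload = bytes(47)                                 # one 47-octet CS block (zeros here)
sar_pdu = bytes([aal1_sar_header(csi=0, sn=5)]) + payload
print(len(sar_pdu), hex(sar_pdu[0]))                # 48 octets, ready for a cell payload
```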

Figure: AAL1 processing. The AAL1 convergence sublayer builds a convergence sublayer PDU from the payload frame, the AAL1 segmentation and reassembly sublayer segments it into SAR PDUs, and each SAR PDU becomes the payload of one ATM cell at the ATM layer.

AAL2

AAL2 is used for the transmission of Variable Bit Rate (VBR) services, where information must be transferred between source and destination at a variable bit rate. Besides transferring data at a variable bit rate, AAL2 provides timing information between source and destination and an indication of lost or corrupted data. The CCITT is still developing this layer, so the discussion here is an example of what is expected to be used. The Sequence Number (SN) field contains the sequence number to allow the recovery of lost or misrouted cells. The Information Type (IT) field indicates the beginning of a message (BOM), the continuation of a message (COM), the end of a message (EOM), or that the cell contains timing or other information. The Length Indicator (LI) field indicates the number of useful bytes in partially filled cells. The CRC field allows the SAR to correct bit errors in the SAR SDU. The coding and length of each field are for further study. Once prepared, the SAR PDU is inserted into the 48-octet payload of the ATM cell at the ATM layer.

Figure: AAL2 processing. The AAL2 convergence sublayer builds a convergence sublayer PDU from the payload frame, the AAL2 segmentation and reassembly sublayer segments it into SAR PDUs, and each SAR PDU becomes the payload of one ATM cell at the ATM layer.


AAL3/4

AAL3/4 is divided into two sublayers: a Convergence Sublayer (CS) and a Segmentation and Reassembly (SAR) sublayer. The CS itself has two parts: a Common Part Convergence Sublayer (CPCS) and a Service Specific Convergence Sublayer (SSCS). The SSCS is not yet fully standardized. The CPCS is designed primarily for error control; it encapsulates information within a 4-octet header and a 4-octet trailer. Once encapsulated, the CS protocol data unit (PDU) is broken by the SAR sublayer into 44-octet units, each carried inside the cell payload with a 2-octet SAR header and a 2-octet SAR trailer.

In the SAR header are two bits that indicate whether the cell is the beginning of message (BOM), continuation of message (COM), or end of message (EOM). Single segment messages (SSMs) that begin and end in one cell are also possible. A message identifier (MID) in the SAR header allows multiple messages to interleave on a single virtual connection. Finally, the SAR PDU is inserted into the 48-octet payload of the ATM cell at the ATM layer. The distinguishing features of this technology are a 10-bit CRC per cell (rather than per frame) and the fact that it is possible to multiplex frames onto a single connection.
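A rough sketch of the segmentation step just described: a CPCS PDU is cut into 44-octet pieces and each piece is tagged as BOM, COM, EOM, or SSM. The helper names and the way the tags are represented below are illustrative assumptions, the exact header/trailer bit layout is not reproduced, and the 10-bit per-cell CRC is omitted for brevity.

```python
def aal34_segments(cpcs_pdu: bytes, mid: int):
    """Split a CPCS PDU into 44-octet SAR payloads tagged BOM/COM/EOM (or SSM)."""
    chunks = [cpcs_pdu[i:i + 44] for i in range(0, len(cpcs_pdu), 44)] or [b""]
    for index, chunk in enumerate(chunks):
        if len(chunks) == 1:
            segment_type = "SSM"            # message fits in a single segment
        elif index == 0:
            segment_type = "BOM"            # beginning of message
        elif index == len(chunks) - 1:
            segment_type = "EOM"            # end of message
        else:
            segment_type = "COM"            # continuation of message
        padded = chunk.ljust(44, b"\x00")   # partially filled segments are padded
        # In the real SAR PDU the segment type, a sequence number, the MID, a
        # length indicator, and a 10-bit CRC surround these 44 octets.
        yield segment_type, mid, len(chunk), padded

for seg_type, mid, length, payload in aal34_segments(bytes(100), mid=7):
    print(seg_type, mid, length)   # BOM 7 44 / COM 7 44 / EOM 7 12
```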

Figure: AAL3/4 processing. The AAL3/4 convergence sublayer builds a convergence sublayer PDU from the payload frame, the AAL3/4 segmentation and reassembly sublayer segments it into SAR PDUs, and each SAR PDU becomes the payload of one ATM cell at the ATM layer.


AAL5

Packet delimiting in AAL5 is performed within the ATM cell header rather than within the SAR PDU header, as in AAL3/4. Also, error detection is provided by a length field and a 32-bit per-frame CRC rather than by a 10-bit per-cell CRC, as in AAL3/4. Payload frame treatment by AAL5 is shown in Figure 8. The AAL5 convergence sublayer appends a pad and an 8-octet trailer to the payload frame. The pad ensures that the AAL5 PDU falls exactly on a 48-octet cell payload boundary. The AAL5 convergence sublayer trailer includes the length of the data frame and a 32-bit CRC computed across the entire payload. By checking that the length and CRC still match the data frame, a receiver can detect bit errors and lost or misordered cells.
After processing by the convergence sublayer, the AAL5 PDU is then segmented into 48-octet blocks, without the additional SAR header or trailer required by AAL3/4. Messages are not interleaved. Instead, empty cells are sent until the data frame is ready. Data cells are then sent with the end-of-message bit in the header set to zero, and the last data cell is sent with the EOM bit set to one. A simplified illustration of this process is shown in Figure 9. Finally, these blocks are placed directly into the ATM cell payload field. AAL5 is used primarily in LAN Emulation (LANE) as the ATM Adaptation Layer of choice. Connecting computers that use different AALs is somewhat analogous to connecting computers on different shared-media LANs (for example, Ethernet and Token Ring). A router/bridge or other internetworking device must convert between the two frame and cell types.
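To make the padding and trailer arithmetic concrete, here is a minimal sketch of AAL5 CPCS framing. The trailer layout assumed here is the common one (one octet of CPCS user-to-user indication, one octet of common part indicator, a 2-octet length, and a 4-octet CRC-32); Python's standard zlib CRC-32 is used as a stand-in for the AAL5 CRC, so treat the exact CRC value as illustrative.

```python
import zlib

def aal5_cpcs_pdu(frame: bytes, uu: int = 0, cpi: int = 0) -> bytes:
    """Pad a data frame and append the 8-octet AAL5 CPCS trailer.

    Trailer: CPCS-UU (1), CPI (1), Length (2), CRC-32 (4). The pad makes the
    whole PDU a multiple of 48 octets so it maps exactly onto cell payloads.
    zlib's CRC-32 is an illustrative stand-in for the AAL5 CRC computation.
    """
    pad_len = (-(len(frame) + 8)) % 48
    body = frame + bytes(pad_len) + bytes([uu, cpi]) + len(frame).to_bytes(2, "big")
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

pdu = aal5_cpcs_pdu(b"x" * 100)
cells = [pdu[i:i + 48] for i in range(0, len(pdu), 48)]   # one 48-octet payload per cell
print(len(pdu), len(cells))   # 144 octets -> 3 cells; the last cell carries the EOM bit
```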

Figures 8 and 9. AAL5 processing: the convergence sublayer appends the pad and trailer to the payload frame to form the CPCS PDU, the SAR sublayer cuts it into 48-octet blocks, and each block becomes an ATM cell payload; every cell of the frame is sent with the end-of-message indication set to 0 except the last, which carries 1.


ATM Implementations
Although it has been the subject of research for over ten years, ATM product development has only recently begun to gather momentum. Early WAN applications use ATM as a core (backbone) technology. ATM WAN core networks are attractive because of their reduced service costs (DS3/E3 performance at lower cost than comparable private lines) and their accommodation of multiple types of traffic (for example, data, voice, image and video). ATM is also used as a LAN core. In this application, ATM is like a superhighway LAN carrying traffic sent to it by a number of distribution LANs. ATM's switched access allows better scaling than shared-media networks such as Fiber Distributed Data Interface (FDDI), although FDDI switching is also technically possible. During the next few years, ATM's cost will decrease as it moves from its present desktop use by financial traders, aerospace engineers, motion-picture animators, and other high-powered LAN users to the basic desktop user, providing high bandwidth and videoconferencing capabilities without losing access to the core networking functions.

Traffic Control in ATM Networks

There are many functions involved in the traffic control of ATM networks, as follows. 1. Connection Admission Control: This can be defined as the set of actions taken by the network during the call set-up phase to establish whether a VC/VP connection can be made. A connection request for a given call can only be accepted if sufficient network resources are available to establish the end-to-end connection, maintaining its required quality of service without affecting the quality of service of the connections already existing in the network.

There are two classes of parameters to be considered for connection admission control: A. A set of parameters that characterize the source traffic, i.e., peak cell rate, average cell rate, burstiness and peak duration, etc. B. Another set of parameters that denote the required quality of service class, expressed in terms of cell transfer delay, delay jitter, cell loss ratio and burst cell loss, etc. 2. Usage Parameter Control (UPC) and Network Parameter Control (NPC): UPC and NPC perform similar functions at the User-to-Network Interface and the Network-to-Node Interface respectively. They denote the set of actions performed by the network to monitor and control the traffic on an ATM connection in terms of cell traffic volume and cell routing validity. This function is also known as the "police function". Its main purpose is to protect network resources from malicious connections and to enforce the compliance of every ATM connection with its negotiated traffic contract (a small sketch of such a policer follows the congestion control discussion below). An ideal UPC/NPC algorithm has the following features: A. The capability to identify any illegal traffic situation. B. Quick response time to parameter violations. C. Low complexity and simplicity of implementation. 3. Priority Control: The CLP (Cell Loss Priority) bit in the header of an ATM cell allows users to generate traffic flows of different priorities, and low-priority cells are discarded first to protect the network performance seen by high-priority cells. The two priority classes are treated separately by the network's Connection Admission Control and UPC/NPC functions to provide the two requested QOS classes.


4. Network Resource Management: Virtual paths can be employed as an important tool for traffic control and network resource management in ATM networks. They are used to simplify Connection Admission Control (CAC) and Usage/Network Parameter Control (UPC/NPC), which can then be applied to the aggregate traffic of an entire virtual path. Priority control can also be implemented by segregating traffic types requiring different quality of service (QOS) onto separate virtual paths. VPs can likewise be used to distribute messages efficiently for the operation of particular traffic control schemes, such as congestion notification. Virtual paths are also used with statistical multiplexing to separate traffic, preventing statistically multiplexed traffic from interfering with other types of traffic, for example guaranteed-bit-rate traffic. 5. Traffic Shaping: Traffic shaping changes the traffic characteristics of a stream of cells on a VPC or VCC by properly spacing the cells of individual ATM connections, decreasing the peak cell rate and reducing the cell delay variation. Traffic shaping must preserve the cell sequence integrity of an ATM connection. It is an optional function for both network operators and end users. It helps the network operator dimension the network more cost-effectively, and it is used in the customer premises network to ensure conformance to the negotiated traffic contract across the user-to-network interface.

Congestion Control in ATM

Congestion control plays an important role in the effective traffic management of ATM networks. Congestion is a state of network elements in which the network cannot assure the negotiated quality of service to already existing connections and to new connection requests. Congestion may happen because of unpredictable statistical fluctuations of traffic flows or a network failure. Congestion control is a network means of reducing congestion effects and preventing congestion from spreading. It can use CAC or UPC/NPC procedures to avoid overload situations; for example, congestion control can limit the peak bit rate available to a user and monitor it. Congestion control can also be performed using explicit forward congestion notification (EFCN), as is done in the Frame Relay protocol. A node in the network in a congested state may set an EFCN bit in the cell header. At the receiving end, the network element may use this indication bit to implement protocols that lower the cell rate of an ATM connection during congestion.
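The UPC/NPC "police function" described under item 2 above is commonly realized with a leaky-bucket style mechanism. The sketch below is the virtual-scheduling form of the Generic Cell Rate Algorithm, shown only as an illustration of how a policer can test each arriving cell against a negotiated peak rate (the increment T) and tolerance (tau); the parameter values in the example are made up, and a shaper could use the same bookkeeping to delay rather than tag non-conforming cells.

```python
class GcraPolicer:
    """Virtual-scheduling Generic Cell Rate Algorithm (peak-rate policing sketch)."""

    def __init__(self, increment: float, tolerance: float):
        self.T = increment        # expected inter-cell time at the negotiated peak rate
        self.tau = tolerance      # cell delay variation tolerance
        self.tat = 0.0            # theoretical arrival time of the next conforming cell

    def conforms(self, arrival_time: float) -> bool:
        if arrival_time < self.tat - self.tau:
            return False                              # too early: tag or discard this cell
        self.tat = max(arrival_time, self.tat) + self.T
        return True                                   # conforming: admit and reschedule

# Police a 10 cells/second peak rate (T = 0.1 s) with a small tolerance.
policer = GcraPolicer(increment=0.1, tolerance=0.02)
arrivals = [0.0, 0.1, 0.15, 0.2, 0.5]
print([policer.conforms(t) for t in arrivals])  # [True, True, False, True, True]
```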

ATM Architectures
As simplified in standards documents, the ATM architecture seems to consist of super terminals (very fast, intelligent terminals with large graphics displays and multimedia capability) connected directly to public ATM networks. Unfortunately, this model ignores the current heterogeneous installed base and the economic necessity of at least some private networking. A more practical ATM architecture calls for connecting today's wide variety of installed networks through internetworking devices (such as routers) to an ATM network. The ATM network could consist of both public and private portions. Network planners considering ATM for data can choose from among three ATM implementation techniques: ATM permanent-virtual-connection (PVC) service, ATM switched-virtual-connection (SVC) service, and ATM connectionless service. In the following discussion, the term router is used to refer to any ATM end point.


ATM Permanent Virtual Connection (PVC) Service

ATM PVC service operates much like Frame Relay: a virtual connection mesh, partial mesh, or star is administratively established through the ATM network between the routers. ATM PVC service was the original focus of the ATM Forum. Advantages of ATM PVC service include a direct ATM connection between routers and the simplicity of the specification and subsequent implementation. Disadvantages include static connectivity and the administrative overhead of provisioning virtual connections manually.

ATM Switched Virtual Connection (SVC) Service

ATM SVC service operates much like X.25 SVC service, although ATM allows much higher throughput. Virtual connections are created and released dynamically, providing user bandwidth on demand. This service requires a signaling protocol between the router and the network. The ATM Forum recently completed several signaling implementation agreements based on the CCITT standardization, such as UNI 3.0, UNI 3.1, UNI 4.0, IISP and PNNI 1.0. Advantages of ATM SVC service include good general connectivity and simple administration. Its disadvantages include potentially slow SVC setup and the lack of wide vendor implementation of all versions.

ATM Connectionless Service

ATM connectionless service operates much like the Switched Multimegabit Data Service (SMDS). In this scheme, a virtual connection is administratively established between the router and a connectionless service function (CLSF), an ancillary part of the ATM switch. The CLSF forwards data frames based on the destination address (which can be a multicast address) included in each frame (not each cell). No signaling is required because all connections to the CLSF are pre-established. Many telecommunications carriers are introducing metropolitan area network (MAN) trials and services and plan to migrate toward ATM connectionless service; for LAN-based ATM networks, however, this type of service is virtually non-existent. The advantages of ATM connectionless service include good connectivity and a strong match with many of today's popular connectionless protocols, such as the Internet community's Internet Protocol (IP) and Novell's Internet Packet Exchange (IPX). The primary disadvantage of this approach is that the CLSF can become a throughput bottleneck or a single point of failure.

ATM Interoperability Issues

Several ATM interoperability issues must be resolved before ATM will achieve general acceptance by network planners. ATM standards organizations and implementors are currently working on each of the following issues.

AAL Compatibility

The compatibility of the various AALs is an important interoperability issue. For example, AAL5 uses a larger per-cell payload. AAL3/4 is compatible with IEEE 802.6, while AAL5 is not. AAL3/4 uses a per-cell cyclic redundancy check (CRC) scheme, while AAL5 uses a per-frame CRC. Communication between end points supporting AAL3/4 and those supporting AAL5 will require internetworking by a device such as a router.

Protocol Multiplexing


In order to multiplex different protocols through ATM, a means to identify the particular network-layer protocol (or bridging medium of origin) must be provided. An Internet Request for Comments (RFC) document has been drafted for multiprotocol interconnection over AAL5, but none has yet been drafted for AAL3/4. Also, the Internet community has official jurisdiction only over IP; other protocols can use different methods, as determined by their controlling standards bodies or vendors.

Addressing

ATM addressing is still another interesting area of ATM technology. CCITT Recommendation E.164 uses telephone numbers for addresses and has support from much of the telecommunications industry; the ITU-T standard uses E.164 addressing. The Institute of Electrical and Electronics Engineers (IEEE) 802 standards specify 48-bit media access control (MAC) addresses and have support from many in the data communications industry. A likely compromise is OSI addresses, called network service access points (NSAPs), which can encode both E.164 and IEEE 802 addresses, as well as other addressing plans. The ATM Forum UNI specification defines the usage of NSAP addressing. Routers will have to map from network-layer addresses (for example, IP, IPX, and others) to ATM addresses. In today's LANs, this mapping to IEEE 802 addresses is usually accomplished by the Address Resolution Protocol (ARP) for IP and comparable mechanisms for other protocols. For ATM, several mechanisms are still under consideration:

ARP. Multicast packets could map between addresses. This is a simple extension of existing LAN methods, but it requires multicasting, which not all switches can support robustly. Also, ARP doesn't scale well across large ATM networks.

Directory lookup. Distributed address resolution servers could gather and exchange information about address mappings for each ATM end point. While this could scale well, it requires the most work to develop protocols and implement servers.

Algorithm mapping. An automatic conversion between address types might be possible. However, such techniques tend to become obstacles as networks evolve.

Administrative mapping. An operator could manually enter the network-layer/ATM address mapping. This simple technique will be used for many early ATM networks, but clearly an automatic method, such as the three options described previously, will become necessary to relieve the burden imposed by such manual data entry.

While these techniques are all useful for routed protocols, the operation of bridging over ATM is less clear. Since bridging relies on multicast to learn paths to each end-station, it is unlikely to scale well to very large ATM networks.

Performance Issues

There are five parameters that characterize the performance of ATM switching systems: 1. Throughput, 2. Connection Blocking Probability, 3. Cell Loss Probability, 4. Switching Delay, and 5. Jitter on the Delay.


Throughput: This can be defined as the rate at which cells depart the switch, measured in cell departures per unit time. It mainly depends on the technology and dimensioning of the ATM switch; by choosing a proper switch topology, the throughput can be increased. Connection Blocking Probability: Since ATM is connection-oriented, a logical connection is set up between a logical inlet and outlet during the connection set-up phase. The connection blocking probability is the probability that there are not enough resources between the inlet and outlet of the switch to assure the quality of all existing connections as well as the new one. Cell Loss Probability: When more cells than a queue in the switch can handle compete for that queue, cells will be lost. This cell loss probability has to be kept within limits to ensure high reliability of the switch. In internally non-blocking switches, cells can only be lost at their inlets/outlets. It is also possible for ATM cells to be internally misrouted and arrive erroneously on another logical channel; this is called the cell insertion probability. Switching Delay: This is the time to switch an ATM cell through the switch. Typical values of switching delay range between 10 and 1000 microseconds. This delay has two parts: 1. a fixed switching delay, caused by the internal cell transfer through the hardware, and 2. a queuing delay, caused by cells queued in the buffer of the switch to avoid cell loss. Jitter on the Delay: This is the probability that the delay of the switch will exceed a certain value, called a quantile. For example, a jitter of 100 microseconds at a 10^-9 quantile means that the probability that the delay in the switch is larger than 100 microseconds is smaller than 10^-9.

ATM Applications

There are several practical applications using ATM technology. ATM is going to be the backbone network for many broadband applications, including the Information Superhighway. Some of the key applications are:

Video conferencing
Desktop conferencing
Multimedia communications
ATM over satellite communications
Mobile computing over ATM for wireless networks

ATM Standards
In the US, ATM is being supported and investigated by the T1S1 subcommittee (ANSI-sponsored). In Europe, it is being supported and investigated by ETSI. There are minor differences between the two proposed standards, but they may converge into one common standard, unless telecommunications companies in Europe and America insist on having two standards so that they can have the pleasure of supporting both to interoperate. The differences, however, are minor and do not impact the concepts discussed here. The international standards organization CCITT has also dedicated a study group, XVIII, to Broadband ISDN, with the objective of merging differences and coming up with a single, global, worldwide standard for user interfaces to broadband networks. A few of the standards written for ATM include: RFC 1932, IP over ATM; RFC 1483, Multiprotocol Encapsulation over AAL5; RFC 1577, Classical IP and ARP over ATM; RFC 1755, ATM Signaling Support for IP over ATM; and RFC 1626, Default IP MTU for use over ATM AAL5.


Conclusion
The discipline conditions under which statistical multiplexing can work efficiently in an ATM network are an active area of research and experimentation in both academia and industry. They have also been a prodigious source of technical publications and considerable speculation. Telecommunications companies in the US, Europe, and Japan, as well as several research organizations and standards committees, are actively investigating how best to do statistical multiplexing in such a way that the link bandwidth in an ATM network is utilized efficiently while the quality of service requirements of delay and loss for different types of real-time and non-real-time, as well as bursty and continuous, traffic are also satisfied during periods of congestion. The reason this problem is so challenging is that if the peak bandwidth requirement of every connection is allocated to it, then ATM simply degenerates into STM and no statistical advantage is gained from the anticipated bursty nature of many future broadband integrated traffic profiles. Thus the past few years' publications in the IEEE Journal on Selected Areas in Communications and the IEEE Network and Communications magazines are filled with topics of resource allocation in broadband networks; policing, metering and shaping misbehaving traffic; congestion avoidance and control in ATM networks; and tons of mathematical models and classifications speculating what the broadband integrated traffic of the future might actually look like and how it might be managed effectively in a statistics-based, nondeterministic traffic transportation system such as an ATM network. The more adventurous readers desirous of learning more about ATM networks are encouraged to seek out these and the standards committees' publications. Fortunately, however, these are problems that the service providers and ATM vendors, like the telecommunications companies, have to solve, and not the users. The users basically get access to the ATM network through well-defined and well-controlled interfaces and pump data into the network based on certain agreed-upon requirements that they specify to the network at connection setup time. The network will then try to ensure that the connection stays within those requirements and that the quality of service parameters for that connection remain satisfied for the entire duration of the connection. ATM is a promising cell-transfer technology offering support for multimedia applications. Its ability to cost-effectively move data, voice, image, video and other source material over a variety of physical media at high speeds and with low delay makes ATM a compelling solution for both WANs and LANs.

References
Prycker, Martin De. Asynchronous Transfer Mode: Solution for Broadband ISDN, Second Edition. Ellis Horwood, Great Britain, 1993.

Black, Uyless. ATM: Foundation for Broadband Networks. Prentice-Hall, New Jersey, 1995.

Asynchronous Transfer Mode (ATM): Cisco Technology Briefing. Cisco Systems, California, 1993.

