
Chapter II Characteristics and Requirements of Broadband Multimedia Data

DIGITAL REPRESENTATION OF AUDIO
Three stages are involved in analog-to-digital conversion (ADC):

Sampling, Quantization, and Coding

REPRESENTATION OF IMAGES AND VIDEO
Capture and Reproduction of Images and Video
Images are captured using a camera in the following manner: the camera lens focuses an image of a scene onto the photosensitive surface of a sensor inside the camera. The photosensitive layer converts the brightness of each point into an electrical charge; the amount of charge is proportional to the brightness at that point. The photosensitive surface is then scanned from left to right and top to bottom with an electron beam to pick up the charges at the surface. In this way a scene (or an image) is converted into a continuous electrical signal. Scanning is carried out rapidly so that the complete image is captured before the scene moves too much.

Aspect Ratio
The ratio of an image's width to its height is called the aspect ratio. It has a major aesthetic effect on the appearance of the picture. The original motion picture aspect ratio was 4:3; that is, the picture width is 1.33 times the height. Since this ratio appeared attractive to the public, it was also used in television broadcasting. Note that display screen size normally refers to the diagonal measurement of the screen, from the top left corner to the bottom right corner. Thus a television set of size 25 inches with a 4:3 aspect ratio has a display 20 inches wide and 15 inches tall.
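The display dimensions follow from the diagonal and the aspect ratio via the Pythagorean theorem; a quick sketch reproducing the 25-inch example:

```python
import math

def screen_dimensions(diagonal, ratio_w, ratio_h):
    """Width and height of a display from its diagonal and aspect ratio."""
    # diagonal^2 = width^2 + height^2, with width/height = ratio_w/ratio_h
    unit = diagonal / math.hypot(ratio_w, ratio_h)
    return ratio_w * unit, ratio_h * unit

w, h = screen_dimensions(25, 4, 3)
print(f"{w:.0f} x {h:.0f} inches")   # 20 x 15 inches
```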

Resolution
The resolution of an image or a display is specified by horizontal resolution and vertical resolution. In practice, the horizontal resolution is measured by the maximum number of alternating white and black vertical lines that can be reproduced in a horizontal distance corresponding to the raster height. A system having a horizontal resolution of 480 lines can display 240 white and 240 black lines alternating across a horizontal distance corresponding to the height of the image. Vertical resolution specifies the number of horizontal scan lines in a frame. The more lines there are, the higher the vertical resolution. Broadcast television systems use either 525 lines (North America and Japan) or 625 lines (Europe, etc.) per frame.

Color Video/Television
To capture color images, a color camera splits the incoming light into red, green, and blue components using optical devices. These three components are focused on the red sensor, green sensor, and blue sensor respectively, which convert them into separate electrical signals.

STORAGE AND BANDWIDTH REQUIREMENTS
Storage requirement is measured in bytes or MBytes. In the digital domain, bandwidth is measured as bit rate in bits/s (bps) or Mbits/s (Mbps).

The base unit for storage is the byte; for bandwidth it is the bit. For images, we measure the storage requirement in bytes or MBytes per image. It can be calculated from the number of pixels on each line (H), the number of lines in the image (V), and the pixel depth P (bits per pixel) as follows: storage requirement = HVP/8. For example, if an image has 480 lines, 600 pixels on each line, and a pixel depth of 24 bits, we need 864,000 bytes to represent it (480 x 600 x 24 / 8 = 864,000). Single images obviously do not have a time dimension. If there is a time limit for transmitting an image, however, the bandwidth requirement can be calculated from the storage requirement. For example, if the above image (864,000 bytes) must be transmitted within 2 seconds, the required bandwidth is 3.456 Mbits/s (864,000 x 8 / 2 = 3,456,000 bits/s). Both audio and video are time-continuous, so we normally characterize them in bits/s or Mbits/s. For audio, this number is calculated from the sampling rate and the number of bits per sample. The bit rate for video can be calculated in the same way, but it is more commonly calculated from the amount of data in each image (called a frame) and the number of frames per second. The resulting number specifies the bit rate required of the transmission channel. If we want to store or retrieve digital audio and video, it also specifies the transfer rate required of the storage devices. If we know the duration of the audio or video, the amount of storage required can be calculated.
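A short sketch of the two calculations above, image storage (HVP/8) and the bandwidth needed to transmit it within a deadline:

```python
def image_storage_bytes(h_pixels, v_lines, bits_per_pixel):
    """Storage requirement = H * V * P / 8 bytes."""
    return h_pixels * v_lines * bits_per_pixel // 8

def bandwidth_bps(storage_bytes, seconds):
    """Bit rate needed to transmit the image within the time limit."""
    return storage_bytes * 8 / seconds

size = image_storage_bytes(600, 480, 24)   # 864,000 bytes
rate = bandwidth_bps(size, 2)              # 3,456,000 bits/s
print(size, rate / 1e6, "Mbps")            # 864000 3.456 Mbps
```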

These numbers show that digital audio, image, and video require huge amounts of data for representation and very high network bandwidth for transmission. Using these values, we can calculate that a 1-GB hard disk can store only about 1.5 hours of CD-audio or 36 seconds of television-quality video: hours = (1 GB x 8 bits/byte) / (data rate x 3,600 s/hr). All this shows why data compression is necessary for multimedia applications.
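That disk-capacity figure can be reproduced with the formula above; the CD-audio rate of 44.1 kHz x 16 bits x 2 channels (about 1.41 Mbps) is an assumption here, since the table it came from did not survive in this text:

```python
def playback_hours(disk_bytes, data_rate_bps):
    """Hours of media a disk can hold: (bytes * 8) / (rate * 3,600)."""
    return disk_bytes * 8 / (data_rate_bps * 3600)

cd_audio_rate = 44_100 * 16 * 2             # ~1.41 Mbps, assumed CD-audio rate
print(playback_hours(1e9, cd_audio_rate))   # ~1.57 hours on a 1-GB disk
```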

Digital audio and video are time-dependent continuous media. This means that to achieve reasonable playback quality, audio samples and video samples (images) must be received and played back at regular intervals. For example, if an audio piece is sampled at 8 kHz, it must be played back at 8,000 samples per second. End-to-end delay is the sum of all delays in all the components of a multimedia system, including disk access, ADC, encoding, host processing, network access, network transmission, buffering, decoding, and DAC. More recent studies show that the end-to-end delay should be kept below 300 ms for most conversational applications. For information-retrieval applications, the delay requirement is less stringent, as long as the user is not kept waiting too long; for most such applications, a response time of a few seconds is acceptable. Delay variation is commonly called delay jitter. For telephone-quality voice and television-quality video, delay jitter should be below 10 ms. Delay jitter for high-quality stereo audio must be kept particularly small (below 1 ms), because our perception of the stereo effect is based on minimal phase differences.

Error and Loss Tolerance in Multimedia Data
For voice, we can tolerate a bit error rate of 10^-2. For images and video, we can tolerate a bit error rate of 10^-4 to 10^-6.
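As a rough illustration of what these tolerances mean in practice, a hypothetical count of expected bit errors for the 864,000-byte example image from the previous section:

```python
def expected_bit_errors(payload_bytes, bit_error_rate):
    """Expected number of flipped bits for a given payload and BER."""
    return payload_bytes * 8 * bit_error_rate

# At BER 1e-4 the image suffers ~691 bit errors; at 1e-6, only ~7.
for ber in (1e-4, 1e-6):
    print(ber, expected_bit_errors(864_000, ber))
```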

QUALITY OF SERVICE
QOS is a contract negotiated and agreed between multimedia applications and the multimedia system (service provider). In a deterministic guarantee, the required QOS is satisfied fully. A statistical guarantee provides a guarantee with a certain probability. Under a best-effort policy, there is no guarantee at all; the application is executed for as long as it takes.

Chapter III Digital Audio, Image, and Video Compression

COMPRESSION PRINCIPLES
It is desirable to compress digital audio, image, and video so that their bit rates or storage requirements become manageable. But is data compression possible? The answer is yes. We achieve data compression by exploiting two major factors: the redundancy present in digital audio, image, and video data, and the properties of human perception.

Classifications of Compression Techniques
1. Lossless Versus Lossy Compression Techniques
If the original data can be reconstructed exactly after using a compression technique, we call the technique lossless. Otherwise, we call it lossy. Lossless compression techniques are normally used for compressing computer programs and legal and medical documents, where no error or loss is allowed. They exploit only data statistics (data redundancy), so the achievable compression ratio is normally low. Lossy compression techniques are normally used for compressing digital audio, image, and video in most multimedia applications, where some error or loss can be tolerated. They exploit both data statistics and human perception properties, and can therefore produce very high compression ratios.
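As a minimal illustration of lossless coding exploiting data redundancy, here is a run-length coding sketch (run-length coding appears among the techniques listed below); the (value, count) pair representation is an assumption for illustration:

```python
from itertools import groupby

def rle_encode(data):
    """Run-length encode a sequence into (value, run_length) pairs."""
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(pairs):
    """Reverse the encoding exactly -- no information is lost."""
    return [value for value, count in pairs for _ in range(count)]

pixels = [0, 0, 0, 0, 255, 255, 0, 0, 0]
encoded = rle_encode(pixels)          # [(0, 4), (255, 2), (0, 3)]
assert rle_decode(encoded) == pixels  # lossless: exact reconstruction
```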

2. Constant Bit Rate Versus Variable Bit Rate Coding
It is important to classify whether a compression technique produces a constant bit rate (CBR) or a variable bit rate (VBR), for two reasons. First, media contents vary with time. Second, VBR traffic is difficult to specify and model, and thus difficult for a multimedia communications system to support.

Lossless Compression Techniques
Entropy Coding; Run-Length Coding; Lempel-Ziv-Welch (LZW) Coding.

Digital Audio Compression Techniques
Non-linear Quantization; Predictive Coding; Compression Techniques Making Use of the Masking Property: MPEG-Audio.

Digital Image and Video Compression Techniques
Spatial and Temporal Subsampling Coding; Predictive Coding; Conditional Replenishment; Motion Estimation and Compensation; Transform Coding; Hybrid Coding; Vector Quantization; Fractal Image Coding; Model- and Knowledge-Based Coding; Subband Coding;

Contour-Texture-Oriented Techniques; Other Techniques.

Summary of Compression Techniques

Multimedia Compression Standards
For digital audio and image applications involving storage or transmission to become widespread in the marketplace, standards for audio and image compression are needed to enable interoperability of equipment from different manufacturers. In response to this requirement, several compression standards for different applications have been proposed in the past few years. The five most important audiovisual compression standards are: JPEG for still-image compression; CCITT (now ITU-TS) H.261 for videophone and teleconference applications at bit rates of multiples of 64 kbps; MPEG for motion image and associated audio compression; ITU-TS H.263 for videophone applications at bit rates below 64 kbps; and International Organization for Standardization (ISO) JBIG for compressing bilevel images.

The JPEG (Joint Photographic Experts Group) Still-Image Compression Standard
JPEG is the first international digital image compression standard for continuous-tone (multilevel) still images, both grayscale and color, and it is primarily a lossy compression standard. It has been implemented in both hardware and software. Although its initial intention was to compress still images, real-time JPEG encoding and decoding have been implemented to handle full-motion video; this application is called Motion JPEG or MJPEG.
JPEG specifies four modes of operation:

(1) Lossy sequential DCT-based encoding, in which each image component is encoded in a single left-to-right, top-to-bottom scan. This is called baseline mode and must be supported by every JPEG implementation.
(2) Expanded lossy DCT-based encoding, which provides enhancements to the baseline mode. One notable enhancement is progressive coding, in which the image is encoded in multiple scans to produce a quick, rough decoded image when the transmission bandwidth is low.
(3) Lossless encoding, in which the image is encoded to guarantee exact reproduction.
(4) Hierarchical encoding, in which the image is encoded at multiple resolutions.

JPEG uses a very general image model, allowing it to be used in almost all types of image compression applications. A source image consists of at least one and at most 255 components, or planes. These components can have different numbers of pixels horizontally and vertically; for example, one component can have 512 x 512 pixels and another 128 x 128 pixels. Pixels in different components must have the same pixel depth (the same number of bits per pixel). For baseline mode, the specified pixel depth is 8 bits per pixel. For expanded lossy mode and hierarchical mode, either 8 or 12 bits per pixel can be used. For lossless mode, a pixel depth between 2 and 12 bits can be used. The most common applications use three components, which can be RGB, YUV, or any other luminance and chrominance components. A grayscale image consists of one component. In the case of YUV, the U and V components are usually smaller than the Y component; for example, Y may have 512 x 512 pixels while U and V have 128 x 128 pixels each.
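The saving from subsampled chrominance can be checked with a short calculation; the comparison against full-resolution 8-bit RGB planes is an assumption for illustration:

```python
def component_bytes(width, height, bits_per_pixel=8):
    """Bytes needed for one image component (plane)."""
    return width * height * bits_per_pixel // 8

yuv = component_bytes(512, 512) + 2 * component_bytes(128, 128)
rgb = 3 * component_bytes(512, 512)   # full-resolution R, G, B planes
print(yuv, rgb, yuv / rgb)            # 294912 786432 0.375
```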

MPEG
In the early 1990s, the Motion Picture Experts Group (MPEG) started investigating coding techniques for the storage of video on media such as CD-ROMs. The aim was to develop a video codec capable of compressing highly active video such as movies, on hard discs, with a performance comparable to that of VHS home video cassette recorders (VCRs). In fact, the basic framework of the H.261 standard was used as a starting point in the design of the codec. The first generation of MPEG, called the MPEG-1 standard, was capable of accomplishing this task at 1.5 Mbit/s. Since encoding and decoding delays are not a major constraint for video storage, one can trade delay for compression efficiency. For example, in the temporal domain a DCT might be used rather than DPCM, or DPCM used but with much improved motion estimation, such that the motion compensation removes temporal correlation. This latter option was adopted within MPEG-1. The group had three original work items: coding of moving pictures and associated audio at up to 1.5, 10, and 40 Mbps. These three work items were nicknamed MPEG-1, MPEG-2, and MPEG-3. The intention of MPEG-1 was to code VHS-quality video (360 x 280 pixels at 30 pictures per second) at a bit rate around 1.5 Mbps; 1.5 Mbps was chosen because the throughput of CD-ROM drives at that time was about that rate. The original intention of MPEG-2 was to code CCIR 601 digital-television-quality video (720 x 480 pixels at 30

frames per second) at a bit rate between 2 and 10 Mbps. The original intention of MPEG-3 was to code HDTV-quality video at a bit rate around 40 Mbps. Later on, it was realized that the functionality supported by the MPEG-2 requirements covers the MPEG-3 requirements, so the MPEG-3 work item was dropped in July 1992. During the standardization process, it was also realized that there was a need for very-low-bit-rate coding of audiovisual data. Thus the MPEG-4 work item was proposed in May 1991 and approved in July 1993. MPEG-4 is an initiative within the MPEG process to develop a standard for very-low-bit-rate audiovisual coding. When complete, the MPEG-4 standard will enable a whole spectrum of new applications, including interactive mobile multimedia communications and videotelephony over plain old telephone service (POTS) or wireless networks.

Chapter IV Fiber Distributed Data Interface

Introduction
The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic cable. FDDI is frequently used as a high-speed backbone technology because of its support for high bandwidth and greater distances than copper. Recently, the Copper Distributed Data Interface (CDDI) has emerged to provide 100-Mbps service over copper; CDDI is the implementation of the FDDI protocols over twisted-pair copper wire.

FDDI uses a dual-ring architecture with traffic on each ring flowing in

opposite directions (called counter-rotating). The dual rings consist of a primary and a secondary ring. During normal operation, the primary ring is used for data transmission and the secondary ring remains idle. The primary purpose of the dual rings is to provide superior reliability and robustness.

Standards
FDDI was developed by the American National Standards Institute (ANSI) X3T9.5 standards committee in the mid-1980s. At the time, high-speed engineering workstations were beginning to tax the bandwidth of existing local-area networks (LANs) based on Ethernet and Token Ring. A new LAN medium was needed that could easily support these workstations and their new distributed applications. At the same time, network reliability had become an increasingly important issue as system managers migrated mission-critical applications from large computers to networks. FDDI was developed to fill these needs. ANSI submitted FDDI to the International Organization for Standardization (ISO), which created an international version of FDDI that is completely compatible with the ANSI standard version.

FDDI Transmission Media
FDDI uses optical fiber as the primary transmission medium, but it can also run over copper cabling, referred to as the Copper Distributed Data Interface (CDDI). Optical fiber has several advantages over copper media:
1. Fiber is immune to electrical interference from radio frequency interference (RFI) and electromagnetic interference (EMI).

2. Fiber historically has supported much higher bandwidth (throughput potential) than copper, although recent technological advances have made copper capable of transmitting at 100 Mbps.
3. Finally, FDDI allows 2 km between stations using multimode fiber, and even longer distances using single-mode fiber.

FDDI defines two types of optical fiber: multimode and single-mode. A mode is a ray of light that enters the fiber at a particular angle.
1. Multimode fiber uses an LED as the light-generating device.
2. Single-mode fiber generally uses a laser.

Multimode fiber
Multimode fiber allows multiple modes of light to propagate through the fiber. Because these modes of light enter the fiber at different angles, they arrive at the end of the fiber at different times. This characteristic is known as modal dispersion. Modal dispersion limits the bandwidth and distances that can be achieved using multimode fiber. For this reason, multimode fiber is generally used for connectivity within a building or a relatively geographically contained environment.

Single-mode fiber
Single-mode fiber allows only one mode of light to propagate through the fiber. Because only a single mode of light is used, modal dispersion is not present. Therefore, single-mode fiber is capable of delivering considerably higher-performance connectivity over much larger distances, which is why it is generally used for connectivity between buildings and within

environments that are more geographically dispersed.

FDDI Specifications
FDDI specifies the physical and media-access portions of the OSI reference model. FDDI is not actually a single specification but a collection of four separate specifications, each with a specific function. Combined, these specifications have the capability to provide high-speed connectivity between upper-layer protocols such as TCP/IP and IPX and media such as fiber-optic cabling. FDDI's four specifications are the Media Access Control (MAC), Physical Layer Protocol (PHY), Physical-Medium Dependent (PMD), and Station Management (SMT) specifications.

MAC specification: Defines how the medium is accessed, including frame format, token handling, addressing, algorithms for calculating the cyclic redundancy check (CRC) value, and error-recovery mechanisms.
PHY specification: Defines data encoding/decoding procedures, clocking requirements, and framing, among other functions.
PMD specification: Defines the characteristics of the transmission medium, including fiber-optic links, power levels, bit-error rates, optical components, and connectors.
SMT specification: Defines FDDI station configuration, ring configuration, and ring control features, including station insertion and removal, initialization, fault isolation and recovery, scheduling, and statistics collection.

FDDI Specifications Map to the OSI Hierarchical Model
FDDI is similar to IEEE 802.3 Ethernet and IEEE 802.5 Token Ring in its relationship with the OSI model. Its primary purpose is to provide connectivity between upper OSI

layers of common protocols and the media used to connect network devices. The figure illustrates the four FDDI specifications and their relationship to each other and to the IEEE-defined Logical Link Control (LLC) sublayer. The LLC sublayer is a component of Layer 2, the MAC layer, of the OSI reference model.

FDDI Station-Attachment Types
One of the unique characteristics of FDDI is that multiple ways exist to connect FDDI devices. FDDI defines four types of devices:
single-attachment station (SAS)
dual-attachment station (DAS)
single-attached concentrator (SAC)
dual-attached concentrator (DAC)

An SAS attaches to only one ring (the primary) through a concentrator. One of the primary advantages of connecting devices with SAS attachments is that the devices will not have any effect on the FDDI ring if they are disconnected or powered off. An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block of an FDDI network. It attaches directly to both the primary and secondary rings and ensures that the failure or power-down of any SAS does not bring down the ring. This is particularly useful when PCs, or similar devices that are frequently powered on and off, connect to the ring.

Each FDDI DAS has two ports, designated A and B. These ports connect the DAS to the dual FDDI ring. Therefore, each port provides a connection for both the primary and the secondary rings.

Devices using DAS connections will affect the rings if they are disconnected or powered off.

FDDI Fault Tolerance
FDDI provides a number of fault-tolerant features. In particular, FDDI's dual-ring environment, the implementation of the optical bypass switch, and dual-homing support make FDDI a resilient media technology.

Dual Ring
FDDI's primary fault-tolerant feature is the dual ring. If a station on the dual ring fails or is powered down, or if the cable is damaged, the dual ring is automatically wrapped (doubled back onto itself) into a single ring. When the ring is wrapped, the dual-ring topology becomes a single-ring topology, and data continues to be transmitted on the FDDI ring without performance impact during the wrap condition. It should be noted that FDDI truly provides fault tolerance against a single failure only. When two or more failures occur, the FDDI ring segments into two or more independent rings that are incapable of communicating with each other.

Optical Bypass Switch
An optical bypass switch provides continuous dual-ring operation if a device on the dual ring fails. It is used both to prevent ring segmentation and to eliminate failed stations from the ring. The optical bypass switch performs this function using optical mirrors that pass light from the ring directly to the DAS device during normal operation. If a failure of the DAS device occurs, such as a power-off, the optical bypass switch will pass the light through itself using internal

mirrors and thereby maintain the ring's integrity. The benefit of this capability is that the ring will not enter a wrapped condition in case of a device failure.
The Optical Bypass Switch Uses Internal Mirrors to Maintain a Network

Dual Homing
Critical devices, such as routers or mainframe hosts, can use a fault-tolerant technique called dual homing to provide additional redundancy and to help guarantee operation. In dual-homing situations, the critical device is attached to two concentrators. One pair of concentrator links is declared the active link; the other pair is declared passive. The passive link stays in backup mode until the primary link (or the concentrator to which it is attached) is determined to have failed. When this occurs, the passive link automatically activates.
A Dual-Homed Configuration Guarantees Operation

FDDI Frame Format
The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas in which FDDI borrows heavily from earlier LAN technologies such as Token Ring. FDDI frames can be as large as 4,500 bytes.
The FDDI Frame Is Similar to That of a Token Ring Frame

FDDI Frame Fields
Preamble: A unique sequence that prepares each station for an upcoming frame.
Start delimiter: Indicates the beginning of a frame, using a signaling pattern that differentiates it from the rest of the frame.
Frame control: Indicates the size of the address fields and whether the frame contains asynchronous or synchronous data, among other control information.

Destination address: Contains a unicast (singular), multicast (group), or broadcast (every station) address. As with Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.
Source address: Identifies the single station that sent the frame. As with Ethernet and Token Ring addresses, FDDI source addresses are 6 bytes long.
Data: Contains either information destined for an upper-layer protocol or control information.
Frame check sequence (FCS): Filled by the source station with a calculated cyclic redundancy check value dependent on the frame contents (as with Token Ring and Ethernet). The destination station recalculates the value to determine whether the frame was damaged in transit; if so, the frame is discarded.
End delimiter: Contains unique symbols, which cannot be data symbols, that indicate the end of the frame.
Frame status: Allows the source station to determine whether an error occurred and whether the frame was recognized and copied by a receiving station.
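A hypothetical sketch of slicing a decoded FDDI MAC frame into the fields above; the 6-byte addresses come from the text, while the 1-byte frame control and 4-byte FCS widths are assumptions for illustration:

```python
def parse_fddi_frame(frame: bytes):
    """Split a decoded FDDI MAC frame into its fields.

    Assumes the preamble, delimiters, and frame status have already been
    stripped by the lower layer (an assumption for this sketch).
    """
    frame_control = frame[0]   # 1 byte, assumed width
    dest_addr = frame[1:7]     # 6-byte destination address
    src_addr = frame[7:13]     # 6-byte source address
    data = frame[13:-4]        # payload for the upper-layer protocol
    fcs = frame[-4:]           # 4-byte CRC, assumed width
    return frame_control, dest_addr, src_addr, data, fcs
```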

Copper Distributed Data Interface
Copper Distributed Data Interface (CDDI) is the implementation of the FDDI protocols over twisted-pair copper wire. Like FDDI, CDDI provides data rates of 100 Mbps and uses a dual-ring architecture to provide redundancy. CDDI supports distances of about 100 meters from desktop to concentrator. CDDI is defined by the ANSI X3T9.5 Committee. The CDDI standard is officially named the Twisted-Pair Physical Medium-Dependent (TP-PMD) standard. It is also referred to as the Twisted-Pair Distributed Data Interface (TP-DDI), consistent with the term Fiber Distributed Data Interface (FDDI). CDDI is consistent with the physical and media-access control layers defined by the ANSI standard.

The ANSI standard recognizes only two types of cables for CDDI: shielded twisted pair (STP) and unshielded twisted pair (UTP). STP cabling has 150-ohm impedance and adheres to EIA/TIA 568 (IBM Type 1) specifications. UTP is data-grade cabling (Category 5) consisting of four unshielded pairs using tight pair twists and specially developed insulating polymers in plastic jackets, adhering to EIA/TIA 568B specifications.

Chapter V FDDI & DQDB

INTRODUCTION
The interconnection of LANs over a complete site, without any performance loss, is currently a major issue being addressed by network designers. The emerging IEEE DQDB and ANSI FDDI standards are the two candidate networks being considered to meet such requirements.

FDDI
The architecture of a fibre distributed data interface (FDDI) subnetwork consists of two independent optical fibre rings. These are referred to as the primary ring and the secondary ring, each carrying data in opposite directions at a rate of 100 Mbps. The proposed standard specifies a maximum fibre path length of 200 km with up to 500 physical connections.

TTRT, TRT, and THT

FDDI uses a timed token rotation protocol to control access to the medium. Each station measures the time that has elapsed since a token was last received. As part of the ring initialization process, all stations negotiate a target token rotation time (TTRT). The asynchronous service allows the use of a token only when the time since a token was last received has not exceeded the established TTRT. The time interval between two successive receptions of the token by a station is called the token rotation time (TRT). On receipt of the token, if a station has asynchronous (data) traffic to send, it computes the difference between the TTRT and the actual token rotation time, that is, TTRT - TRT. This difference is known as the token holding time (THT). If THT is positive, the station can transmit for this interval before releasing the token. As can be deduced from this, the TTRT establishes a guaranteed maximum response time for the ring since, in the worst case, the time between the arrival of two successive tokens will never exceed twice the TTRT value.
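A minimal sketch of the token-holding-time rule just described, with illustrative (assumed) timing values in milliseconds:

```python
def token_holding_time(ttrt, trt):
    """THT = TTRT - TRT; positive means asynchronous traffic may be sent."""
    return ttrt - trt

ttrt = 8.0   # negotiated target token rotation time, ms (illustrative)
trt = 5.5    # measured time since the token was last received, ms
tht = token_holding_time(ttrt, trt)
if tht > 0:
    print(f"station may send asynchronous data for {tht} ms")
else:
    print("token arrived late; pass it on without sending data")
```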

Distributed Queue Dual Bus (DQDB)
The Distributed Queue Dual Bus (DQDB) is the IEEE MAN standard (IEEE 802.6). The name refers to the topology and the access control technique employed. DQDB can operate at a variety of data rates that are multiples of 3.392 Mbps. The DQDB topology is an open dual bus using unidirectional taps, as shown in the figure. Transmission on the two buses is independent; thus the effective data rate of a DQDB network is twice the data rate of a single bus.

The transmission time on each bus is divided into a steady stream of fixed-size slots with a length of 53 bytes. Nodes transmit and receive data through slots. Head(A) is responsible for generating the slots on bus A, while head(B) is responsible for generating the slots on bus B. Multiple slots are generated on the buses every 125 microseconds (called a clock cycle); the exact number of slots generated per clock cycle depends on the physical data rate of the network.

Queued Arbitrated (QA) and Prearbitrated (PA) Slots
There are two types of slots: queued arbitrated (QA) and prearbitrated (PA).

Prearbitrated (PA)
PA slots carry isochronous data, and the PA function provides access control for connection-oriented transfer over a guaranteed-bandwidth channel. The PA function assumes the prior establishment of a connection. As a result of connection establishment, the PA function is informed of the virtual channel ID (VCI) associated with the connection. An isochronous connection may use all data bytes in a slot; alternatively, a slot may be shared by a number of isochronous connections.

Queued Arbitrated (QA)
QA slots carry asynchronous data. The payload of each slot is 48 bytes. The use of these slots is controlled by a distributed reservation scheme known as distributed queuing.
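The slot count per cycle follows directly from the bus rate, which also explains the 3.392-Mbps base rate: it carries exactly one 53-byte slot per 125-microsecond cycle. A quick check:

```python
SLOT_BYTES = 53         # DQDB slot size (same as an ATM cell)
CYCLE_SECONDS = 125e-6  # one clock cycle

def slots_per_cycle(bus_rate_bps):
    """Slots the head station can generate in one 125-microsecond cycle."""
    return bus_rate_bps * CYCLE_SECONDS / (SLOT_BYTES * 8)

# Each multiple of 3.392 Mbps contributes one slot per cycle.
print(slots_per_cycle(3.392e6))       # 1.0
print(slots_per_cycle(10 * 3.392e6))  # 10.0
```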

The basic idea of distributed queuing is as follows. When a station has data to transmit, it issues a request. It can transmit data in an empty slot once the requests other stations issued before its own have been served.

DQDB access control using busy and request bits.

Operation of the access control protocol
The operation of the access control protocol is based on two control bits contained in the access control field of the slot header: the BUSY bit and the REQUEST (REQ) bit. The BUSY bit indicates whether the slot is empty or occupied. The REQ bit is used to indicate that a segment has been queued for transmission on the opposite bus. Each station, by counting REQ bits from bus B and empty slots that pass on bus A, can determine the number of segments that are ahead of it in the distributed queue for bus A. The distributed queue uses two counters in each station: a request (RQ) counter and a countdown (CD) counter, as shown in the figure. The RQ counter maintains a count of the number of stations downstream on bus A that have requested access to bus A via REQ bits on bus B. A station that has a segment to transmit on bus A sets a REQ bit on bus B and transfers the current value of the RQ counter into the CD counter. The CD counter thus indicates the number of segments that were queued by downstream stations before the current request was made, and hence the number of empty slots the station must allow to pass on bus A. For each empty slot allowed to pass downstream on bus A, the CD counter is decremented. When the CD counter reaches zero, the station can send its segment in the next empty slot that passes on bus A. Each station can place only one segment at a time in the distributed queue. A separate queue is operated for each of the two buses, with separate counters at each station for each bus.
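A simplified, single-priority sketch of the RQ/CD counter logic for one station on bus A (bandwidth balancing and per-priority queues are omitted):

```python
class DqdbStation:
    """Distributed-queue access control for one bus, single priority."""

    def __init__(self):
        self.rq = 0       # requests counted from downstream stations
        self.cd = None    # countdown counter; None = no segment queued locally

    def on_request_bit(self):
        """A REQ bit arrives on the reverse bus (bus B) from downstream."""
        self.rq += 1

    def queue_segment(self):
        """Queue one local segment: snapshot RQ into CD and reset RQ.

        The station also sets a REQ bit on the reverse bus at this point.
        """
        assert self.cd is None, "only one segment may be queued at a time"
        self.cd, self.rq = self.rq, 0

    def on_empty_slot(self):
        """An empty slot passes on bus A; return True if we may use it."""
        if self.cd is None:                # idle: slot serves a downstream request
            self.rq = max(self.rq - 1, 0)
            return False
        if self.cd == 0:                   # our turn in the distributed queue
            self.cd = None
            return True
        self.cd -= 1                       # let the slot pass to an earlier request
        return False
```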

A comparison of DQDB and FDDI
FDDI provides fair access for all users for any network size and traffic load. A performance study of the DQDB protocol shows that the basic access protocol shares bandwidth unfairly among users under heavy traffic demands, especially at higher network speeds and larger station separations. The proposed IEEE 802.6 standard therefore specifies a bandwidth balancing (BWB) mechanism to ensure fair sharing of bandwidth between stations operating at a single priority.

Features of DQDB
The most important feature of DQDB for multimedia communications is its PA slots: transmission through these slots provides bandwidth and delay guarantees. Another important feature of DQDB is that its slot size is the same as the cell size of ATM, facilitating easy interconnection between DQDB and ATM networks.

Chapter VIII RTP & RTCP

REAL-TIME TRANSPORT PROTOCOL
INTRODUCTION
RTP is being developed by the Audio/Video Transport group within the IETF. Its specification is at the Internet Draft stage.

Its aim is to provide end-to-end, real-time communication over the Internet. It provides a data transport function and is called a transport protocol, but it is currently often used on top of UDP, which is a connectionless transport protocol. To provide QOS guarantees, RTP should operate on top of a resource reservation protocol.

Relationship among RTP and other protocols.

Connection in RTP
There is no notion of a connection in RTP. It can work on both connection-oriented and connectionless lower-layer protocols.

RTP's Parts
RTP consists of two parts: a data part and a control part. Application data are carried in RTP data packets. The RTP control protocol (RTCP) packets provide control and identification functions.

RTP Data Transmission
Applications using RTP send data in RTP protocol data units. RTP offers no reliability mechanisms, as they are likely to be inappropriate for real-time applications. Data packet order is not maintained; the receiver must reorder data packets based on the sequence number. If an application needs reliability, the receiver can detect packet loss from the sequence numbers and ask for retransmission if required. In addition, a data checksum can be included in the application-dependent header extension to check data integrity.
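A minimal sketch of receiver-side reordering and loss detection by sequence number; wraparound of the 16-bit sequence field is ignored here for brevity:

```python
def reorder_packets(packets):
    """Sort received RTP packets by sequence number before playback.

    Each packet is a (sequence_number, payload) pair; gaps in the
    sequence reveal lost packets the application may re-request.
    """
    ordered = sorted(packets, key=lambda p: p[0])
    lost = [n for prev, nxt in zip(ordered, ordered[1:])
            for n in range(prev[0] + 1, nxt[0])]
    return ordered, lost

packets = [(3, b"c"), (1, b"a"), (2, b"b"), (5, b"e")]
ordered, lost = reorder_packets(packets)
print(lost)   # [4] -- detected via the sequence-number gap
```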

Features of RTP suitable for multimedia applications
1. It is a lightweight protocol, so high throughput can be achieved. There is no error control or flow control mechanism, though the sender can adjust its transmission rate based on the feedback information provided by the control packets.
2. Timestamp and SSRC (synchronization source identifier) information can be used to realize intramedia and intermedia synchronization for multimedia applications.
3. Multicast is possible if the lower layers provide multicast routing.
4. RTP specifies connection QOS through a traffic profile. It assumes that there are a number of predefined traffic profiles that define QOS requirements for typical applications. For example, if the profile specified is telephone audio, the parameters stored in the profile might look like this: bit rate = 64 kbps, maximum delay = 100 ms, and maximum error rate = 0.01.

In short, although it is called the real-time transport protocol, RTP does nothing itself to provide timely delivery of data packets. It relies on the lower layers to manage resources and provide QOS guarantees.

RTCP: RTP Control Functionality
The RTP control protocol, called RTCP, supports the protocol's control functionality. It is usually carried on a separate lower-level transport association (e.g., a separate UDP port), although it is possible to combine control and data packets into one lower-layer PDU without additional encapsulation.

RTCP Messages
An RTCP message consists of a number of packets, each with its own type code and length indication. Their format is fairly similar to that of data packets.

RTCP packets are multicast periodically to the same multicast group as data packets.

RTCP Functions
RTCP provides the following functionality: QOS monitoring and congestion control, intermedia synchronization, source identification, and session size estimation and scaling. Applications that have recently sent data generate a sender report, which contains information useful for intermedia synchronization as well as cumulative counters for packets and bytes sent. These allow receivers to estimate the actual data rate.

RTCP's Operation
All session members periodically issue receiver reports to all senders from which they have recently received data. Receiver reports contain the highest sequence number received, the number of packets lost, a measure of the interarrival jitter, and the timestamps needed to compute an estimate of the round-trip delay between the sender and the receiver. Loss and jitter information can be used by the senders to adjust their transmission rate, if necessary and possible, to achieve graceful degradation.
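The interarrival jitter measure can be maintained as a running estimate; this sketch follows the smoothed estimator later standardized for RTP (RFC 3550), with all values in timestamp units:

```python
def update_jitter(jitter, send_ts, recv_ts, prev_send_ts, prev_recv_ts):
    """Update the smoothed interarrival jitter estimate.

    D is the difference in relative transit time between two packets;
    the estimate moves 1/16 of the way toward each new |D| sample.
    """
    d = (recv_ts - prev_recv_ts) - (send_ts - prev_send_ts)
    return jitter + (abs(d) - jitter) / 16
```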

The RTCP sender reports contain an indication of wall-clock time and a corresponding RTP timestamp. These two values allow intermedia synchronization if the clocks of the different transmitters are synchronized. RTP data packets do not identify their origins. RTCP messages contain an SDES (source description) packet. This packet can contain the e-mail address of the sending user. It can also include other information, such as the user's name, address, and telephone number. Session members can estimate the session size by listening to the RTCP packets sent periodically.
