
Ver 1.0

MPEG-2 DVB GUIDE

02/02/12

MPEG-2 DVB POCKET GUIDE

Revision History

Date            Revision    Description             Author
22nd Oct 2001   1.0         First Draft Version.    Athif

System Engineering Group


Preface
For the next few years, the opportunities for internetworking professionals will be immense. However, every internetworking professional will face the challenge of possessing the proper knowledge base and skill set. From an internetworking education perspective, an understanding of MPEG and DVB provides valuable insight into how compression technologies operate. A broader goal of this document is to demonstrate the power of MPEG-2 and DVB technologies. In sum, I hope this document assists in developing the reader's MPEG-2 and DVB analysis skills.

TABLE OF CONTENTS

MPEG-2
Summary of MPEG Compression Capability
Interaction
The MPEG-2 Standard
MPEG-2 Video Compression
Temporal redundancy
Spatial redundancy
Basic Operation of an MPEG-2 Encoder
MPEG-2 Profiles
MPEG-2 Audio Compression
MPEG-2 Data
A Quick History of MPEG-2
MPEG-2 Transmission
Building the MPEG Bit Stream
Elementary Stream (ES)
Packetized Elementary Stream (PES)
MPEG-2 Multiplexing
MPEG Program Stream
MPEG Transport Stream
MPEG Transport Streams
Transmission of the MPEG-TS
Single and Multiple Program Transport Streams
Streams supported by the MPTS
Signalling Tables
DVB Signalling Tables and Transport Layer PIDs
MPEG-2 Signalling Tables
Programme Service Information (SI) provided by MPEG-2 and used by DVB
DVB Signalling Tables
Service Information (SI) provided by DVB
Format of a Transport Stream Packet
Optional Transport Packet Adaptation Field
DVB Satellite
MPEG Transport Service Encoding Specified by DVB-S
Digital Storage Media Command and Control (DSM-CC)
DVB: Digital Video Broadcasting
Flexibility - a key design goal of DVB
Open Design
Summary of DVB Features
DVB Transmission
Typical Streams carried in a DVB Terrestrial Transport Multiplex
DVB Bearer Networks
Digital Satellite TV (DVB-S)
Digital Terrestrial TV Network (DVB-T)
DVB Receivers & Transmission
DVB Receivers
Multimedia Home Platform (MHP)
Wide-Screen Format
Programme Guides
Event Schedule Guide (ESG)
Electronic Programme Guide (EPG)
DSM-CC for Software Download
Data Transmission using MPEG-2 and DVB
Introduction
Forward Data Transmission
Typical configuration for providing Direct to Home (DTH) Internet delivery using DVB
Data Piping
Data Streaming
Multi-Protocol Encapsulation (MPE)
Data Carousels
Object Carousels
Digital Storage Media Command and Control
Digital Storage Media Command and Control (DSM-CC) Packet Download
DSM-CC Multi Protocol Encapsulation
Return Channel Systems
Return Data Transmission (Interaction Channel) via Satellite
Protocol architecture for (a) Outbound link to client and (b) return link from client
MPEG-2 Encoders and Decoders
MPEG-2 Decoders
Software MPEG-2 Decoders
PC-Based MPEG-2 Accelerators
Computer MPEG-2 Decoders
Network Computers / Thin Clients
The MPEG-2 Set-Top Box (STB)
MPEG-2 Consumer Equipment
Delivery of MPEG-2
Streaming applications
Buffered Applications
Issues
Programming APIs for MPEG-2 Computer Cards
Broadband Multimedia Satellite Systems
What is Broadband Multimedia?
Multimedia Multi-Platform delivery
Broadband Multimedia Content - Is Content King?
DVB Satellite Return Channel (DVB-RCS)
One arrangement of the User Terminal Protocol Stack
Terminal Operation
Simplified Transponder Usage by a Group of DVB-RCS User Terminals
Broadband Interactive Service (BBI)
TCP over Satellite
Specific Issues that Impact the Satellite Service
MPEG-2 and DVB Standards
MPEG-2 Standards
DVB Standards
DAVIC Specifications
Glossary
Appendix A: Digital Video
First There was Analog
Defining Digital Video
Four Factors of DV
Frame Rate
Color Resolution
Spatial Resolution
Image Quality
Need For Compression
Factors Affecting Compression
Real-Time Versus Non-Real-Time
Symmetrical versus Asymmetrical
Compression Ratios
Lossless versus Lossy
Interframe Versus Intraframe
Bit Rate Control
Selecting a Compression
MPEG Overview
The Status of MPEG
Reference Frames and Redundancy
Inside an MPEG Stream
Step 1, Finding the Macro Block (Motion Compensation)
Step 2, Tracking the Changes (Spatial Redundancy)
Applications for MPEG-1
Video Kiosk
Video On Demand
Video Dial Tone
Training
Corporate Presentations
Video Library
Internet and Intranet
Applications for MPEG-2
DVD-ROM
DVD Video
CATV
DBS
HDTV
MPEG Playback System Configuration
MPEG Displayed on Television Monitor
MPEG Video in a Window
SVGA Video CD with MPEG Video
Conclusion
Questions and Answers
What is the difference between MPEG-1 and MPEG-2?

Ver 1.0

MPEG-2 DVB GUIDE

02/02/12

MPEG-2
MPEG is an encoding and compression system for digital multimedia content defined by the Moving Picture Experts Group (MPEG). MPEG-2 extends the basic MPEG system to provide compression support for TV-quality transmission of digital video. To understand why video compression is so important, one has to consider the vast bandwidth required to transmit uncompressed digital TV pictures. Phase Alternating Line (PAL) is the analogue TV transmission standard used in India, and throughout many parts of the world. An uncompressed PAL TV picture requires a massive 216 Mbps, far beyond the capacity of most radio frequency links. The U.S. uses an analogue TV system called NTSC. This system provides less precise color information and uses a different frame rate. An uncompressed NTSC signal requires slightly less transmission capacity, at 168 Mbps. The situation becomes much more acute when one realizes that high definition TV is just around the corner. A High Definition TV (HDTV) picture requires a raw bandwidth exceeding 1 Gbps (1000 Mbps). MPEG-2 provides a way to compress this digital video signal to a manageable bit rate. The compression capability of MPEG-2 video compression is shown in the table below:
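The 216 Mbps figure can be checked directly from the CCIR 601 (ITU-R BT.601) sampling parameters for 4:2:2 digital video; the short sketch below assumes 8-bit samples:

```python
# Rough check of the uncompressed bit rate of a CCIR 601 PAL signal.
# 4:2:2 sampling: luma at 13.5 MHz, two chroma components at 6.75 MHz,
# 8 bits per sample for each component.
luma_rate = 13_500_000       # luma samples per second
chroma_rate = 6_750_000      # samples per second, per chroma component
bits_per_sample = 8

total_bps = (luma_rate + 2 * chroma_rate) * bits_per_sample
print(total_bps / 1_000_000)  # 216.0 Mbps
```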

Summary of MPEG Compression Capability

Because the MPEG-2 standard provides good compression using standard algorithms, it has become the standard for digital TV. It has the following features:
- Video compression which is backwards compatible with MPEG-1
- Full-screen interlaced and/or progressive video (for TV and computer displays)
- Enhanced audio coding (high quality, mono, stereo, and other audio features)
- Transport multiplexing (combining different MPEG streams in a single transmission stream)
- Other services (GUI, interaction, encryption, data transmission, etc.)

The list of systems which now (or will soon) use MPEG-2 is extensive and continuously growing: digital TV (cable, satellite and terrestrial broadcast), Video on Demand, Digital Versatile Disc (DVD), personal computing, card payment, test and measurement, etc.


Interaction
One of the potential new services which may be provided by MPEG-2 is the ability to use a return channel to allow the user to control the content or scheduling of the transmitted video / audio / data. This is known as interaction, and is seen by many as the key discriminator between traditional video and MPEG-2. MPEG-2 defines an interaction channel using DSM-CC (explained later). Interaction channels may be used for diverse services including:
- Display and control of small video clips to promote products / future programming
- Ability to select and pay for Video on Demand (VOD)
- Access to remote information servers
- Access to remote databases / systems providing home shopping, banking, etc.
- Internet access
At the moment, nobody is quite sure whether, when everything falls into place, the bulk of users will interact through their TVs or their personal computers - and exactly how much interaction users want, or will pay for.

The MPEG-2 Standard


MPEG-2 is a standard from the Moving Picture Experts Group (MPEG), which defines a series of standards for compression of moving picture information. The Moving Picture Coding Experts Group was established in January 1988 with the mandate to develop standards for coded representation of moving pictures, audio and their combination. The following MPEG standards exist:
- MPEG-1, a standard for storage and retrieval of moving pictures and audio on storage media.
- MPEG-2, a standard for digital television.
- MPEG-4, a standard for multimedia applications.
- MPEG-7, a content representation standard for information search, to allow fast and efficient searching for the material that is of interest to the user.
MPEG-1 and MPEG-2 have been standardized, whereas MPEG-4 and MPEG-7 are currently being developed. The MPEG-2 standard has been extended by various groups including:
- Digital Video Broadcasting (DVB, European)
- U.S. Advanced Television Systems Committee (ATSC)
- Digital Audio Visual Council (DAVIC)
- Digital Versatile Disk (DVD)

MPEG-2 Video Compression


The MPEG-2 video compression algorithm achieves very high rates of compression by exploiting the redundancy in video information. MPEG-2 removes both the temporal redundancy and the spatial redundancy present in motion video.

- Temporal redundancy arises when successive frames of video display images of the same scene. It is common for the content of the scene to remain fixed, or to change only slightly, between successive frames.


- Spatial redundancy occurs because parts of the picture (called pels) are often replicated (with minor changes) within a single frame of video.

Clearly, it is not always possible to compress every frame of a video clip to the same extent - some parts of a clip may have low spatial redundancy (e.g. complex picture content), while other parts may have low temporal redundancy (e.g. fast-moving sequences). The compressed video stream is therefore naturally of variable bit rate, whereas transmission links frequently require fixed transmission rates. The key to controlling the transmission rate is to order the compressed data in a buffer in order of decreasing detail. Compression may then be performed by selectively discarding some of the information. Throwing away the most detailed information, while preserving the less detailed picture content, limits the overall bit rate with minimal impairment of picture quality. The basic operation of the encoder is shown below:

Basic Operation of an MPEG-2 Encoder

MPEG-2 includes a wide range of compression mechanisms. An encoder must therefore decide which compression mechanisms are best suited to a particular scene or sequence of scenes. In general, the more sophisticated the encoder, the better it is at selecting the most appropriate compression mechanism, and therefore the higher the picture quality for a given transmission bit rate. MPEG-2 decoders also come in various types, with varying capabilities (including the ability to handle high-quality video and to cope with errors) and connection options.


MPEG-2 Profiles
A number of levels and profiles have been defined for MPEG-2 video compression. Each describes a useful subset of the total functionality offered by the MPEG-2 standards. An MPEG-2 system is usually developed for a certain set of profiles at a certain level. Basically:
- Profile = quality of the video
- Level = resolution of the video
The basic system is known as Main Profile at Main Level (MP@ML), which covers video compression at 1-15 Mbps. There are other levels (High Level, High Level-1440, and Low Level), just as there are other profiles (Simple, SNR, Spatial, 4:2:2 and High). Typical decoder specifications are:
- 720 x 576 x 25 fps (PAL CCIR 601)
- 352 x 576 x 25 fps (PAL Half-D1)
- 720 x 480 x 30 fps (NTSC CCIR 601)
- 352 x 480 x 30 fps (NTSC Half-D1)
Most decoders will also support MPEG-1:
- 352 x 288 x 25 fps (PAL SIF)
- 352 x 240 x 30 fps (NTSC SIF)
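As an illustration, the sketch below tabulates the decoder limits commonly quoted for the Main Profile at each level; the figures are indicative rather than normative, and the frame rates shown are for 25 fps systems:

```python
# Indicative Main Profile limits per MPEG-2 level (not a normative table):
# (max width, max height, fps shown for 25 fps systems, max bit rate in Mbps)
LEVELS = {
    "MP@LL":   (352, 288, 25, 4),    # Low Level
    "MP@ML":   (720, 576, 25, 15),   # Main Level, the 1-15 Mbps case above
    "MP@H-14": (1440, 1152, 25, 60), # High Level-1440
    "MP@HL":   (1920, 1152, 25, 80), # High Level
}
print(LEVELS["MP@ML"][3])  # 15 (Mbps ceiling quoted above)
```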

MPEG-2 Audio Compression


The MPEG-2 standard defines an audio encoding scheme. Digital audio may be encoded in a number of other encoding formats at various bit rates.

MPEG-2 Data
MPEG-2 also provides support for data transmission. MPEG-2 differentiates two types of data:
- Service Information - information about the video, audio, and data streams carried by the MPEG-2 transmission.
- Private Data - information for one or more particular users (or receiver equipment).

A Quick History of MPEG-2


- 1970s: Work on digital compression led to specification of Discrete Cosine Transform algorithms.
- 1988: Moving Picture Experts Group (MPEG) formed.
- 1992: MPEG-2 (TV) and MPEG-3 (HDTV) combined.
- 1993: MPEG-2 Main Profile defined (compatible with MPEG-1).
- 1993: ETSI DVB Project set up to extend MPEG-2 system details.
- 1994: ISO 13818-1 MPEG-2 Systems definition.
- 1996: Standardization of the 4:2:2 video format.
- 1996: Set of Digital Video Broadcast (DVB) standards published by ETSI.
- 1996: HDTV (1250/50) demonstrated in 16:9 (widescreen) format.
- 1996: 2 million MPEG-1 video disk players in China.
- 1996: U.S. adopts a Digital TV (DTV) standard based on MPEG-2.
- 1997: Extended CPU graphics instruction sets able to decode MPEG on a PC.
- 1997: First interactive Digital Video Broadcast service using MPEG-2.
- 1998: Digital Versatile Disk (DVD) using MPEG-2.
- 1998: ActiveMovie API allowing MPEG-2 to be played on a PC.
- 1998: Launch of DVB-T terrestrial TV service throughout the UK.
- 1998: Launch of DTV service in the U.S.A.
- 2000: Definition of the DVB Multimedia Home Platform (MHP).

MPEG-2 Transmission

The MPEG-2 standards define how to format the various component parts of a multimedia programme (which may consist of MPEG-2 compressed video, compressed audio, control data and/or user data). They also define how these components are combined into a single synchronous transmission bit stream. The process of combining the streams is known as multiplexing. The multiplexed stream may be transmitted over a variety of links; standards / products are (or will soon be) available for:
- Radio Frequency Links (UHF/VHF)
- Digital Broadcast Satellite Links
- Cable TV Networks
- Standard Terrestrial Communication Links (PDH, SDH)
- Microwave Line of Sight (LOS) Links (wireless)
- Digital Subscriber Links (ADSL family)
- Packet / Cell Links (ATM, IP, IPv6, Ethernet)
Many of these formats are being standardized by the DVB project.


Building the MPEG Bit Stream


To understand how the component parts of the bit stream are multiplexed, we need to first look at each component part. The most basic component is known as an Elementary Stream in MPEG. A programme (perhaps most easily thought of as a television programme, or a Digital Versatile Disk (DVD) track) contains a combination of elementary streams (typically one for video, one or more for audio, control data, subtitles, etc).

Elementary Stream (ES)


Each Elementary Stream (ES) output by an MPEG audio, video or (some) data encoder contains a single type of (usually compressed) signal. There are various forms of ES, including:
- Digital Control Data
- Digital Audio (sampled and compressed)
- Digital Video (sampled and compressed)
- Digital Data (synchronous or asynchronous)
For video and audio, the data is organized into access units, each representing a fundamental unit of encoding. For example, in video, an access unit will usually be a complete encoded video frame.

Packetized Elementary Stream (PES)


Each ES is input to an MPEG-2 processor (e.g. a video compressor or data formatter) which accumulates the data into a stream of Packetised Elementary Stream (PES) packets. A PES packet may be a fixed (or variable) sized block, with up to 65536 bytes per block, and includes a 6 byte protocol header. A PES is usually organized to contain an integral number of ES access units. The PES header starts with a 3 byte start code, followed by a one-byte stream ID and a 2-byte length field. The following well-known stream IDs are defined in the MPEG standard:
1. 110x xxxx - MPEG-2 audio stream number xxxxx.
2. 1110 yyyy - MPEG-2 video stream number yyyy.
3. 1111 0010 - MPEG-2 DSM-CC control packets.
The next field contains the PES indicators. These provide additional information about the stream to assist the decoder at the receiver. The following indicators are defined:
- PES_Scrambling_Control - defines whether scrambling is used, and the chosen scrambling method.
- PES_Priority - indicates the priority of the current PES packet.
- data_alignment_indicator - indicates if the payload starts with a video or audio start code.
- copyright information - indicates if the payload is copyright protected.
- original_or_copy - indicates if this is the original ES.
A one-byte flags field completes the PES header. This defines the following optional fields, which, if present, are inserted before the start of the PES payload:
- Presentation Time Stamp (PTS) and possibly a Decode Time Stamp (DTS) - for audio / video streams, these time stamps may be used to synchronize a set of elementary streams and control the rate at which they are replayed by the receiver.
- Elementary Stream Clock Reference (ESCR).
- Elementary Stream rate - the rate at which the ES was encoded.
- Trick Mode - indicates the video/audio is not the normal ES, e.g. after DSM-CC has signalled a replay.
- Copyright Information - set to 1 to indicate a copyrighted ES.
- CRC - may be used to monitor errors in the previous PES packet.
- PES Extension Information - may be used to support MPEG-1 streams.
The PES packet payload carries the ES data. The information in the PES header is, in general, independent of the transmission method used.
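As a sketch of how the fixed part of this header might be decoded (field layout as described above; the helper name is our own, and the optional fields that follow the flags byte are not parsed here):

```python
# Illustrative sketch, not a full demultiplexer: decode the fixed 6-byte part
# of a PES packet header (3-byte start code prefix, stream ID, 2-byte length).
def parse_pes_header(data: bytes):
    assert data[0:3] == b"\x00\x00\x01", "missing PES start code prefix"
    stream_id = data[3]
    pes_length = int.from_bytes(data[4:6], "big")
    # Classify the well-known stream ID patterns listed above.
    if stream_id & 0b1110_0000 == 0b1100_0000:
        kind = "audio %d" % (stream_id & 0x1F)
    elif stream_id & 0b1111_0000 == 0b1110_0000:
        kind = "video %d" % (stream_id & 0x0F)
    elif stream_id == 0b1111_0010:
        kind = "DSM-CC"
    else:
        kind = "other"
    return kind, pes_length

# Example: stream ID 0xE0 is video stream 0; length field says 100 bytes.
print(parse_pes_header(b"\x00\x00\x01\xe0\x00\x64"))  # ('video 0', 100)
```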


MPEG-2 Multiplexing
The MPEG-2 standard allows two forms of multiplexing:
- MPEG Program Stream: a group of tightly coupled PES packets referenced to the same time base. Such streams are suited to transmission in a relatively error-free environment and enable easy software processing of the received data. This form of multiplexing is used for video playback and for some network applications.
- MPEG Transport Stream: each PES packet is broken into fixed-sized transport packets, forming a general purpose way of combining one or more streams, possibly with independent time bases. This is suited to transmission in which there may be potential packet loss or corruption by noise, and/or where there is a need to send more than one programme at a time.

Combining Elementary Streams from encoders into a Transport Stream (red) or a Programme Stream (yellow). The Service Information (SI) component of the transport stream is not shown.

The Programme Stream is widely used in digital video storage devices, and also where the video is reliably transmitted over a network (e.g. video-clip download). Digital Video Broadcast (DVB) uses the MPEG-2 Transport Stream over a wide variety of underlying networks. Since both the Program Stream and the Transport Stream multiplex a set of PES inputs, interoperability between the two formats may be achieved at the PES level.

MPEG Transport Streams


A transport stream consists of a sequence of fixed-size transport packets of 188 B. Each packet comprises 184 B of payload and a 4 B header. One of the items in this 4 B header is the 13-bit Packet Identifier (PID), which plays a key role in the operation of the Transport Stream. The format of the transport stream is described using the figure below (a later section describes the detailed format of the TS packet header). This figure shows two elementary streams sent in the same MPEG-2 transport multiplex. Each packet is associated with a PES through the setting of the PID value in the packet header (the values of 64 and 51 in the figure). The audio packets have been assigned PID 64, and the video packets PID 51 (these are arbitrary, but different, values). As is usual, there are more video than audio packets, but you may also note that the two types of packets are not evenly spaced in time. The MPEG-TS is not a time division multiplex; packets with any PID may be inserted into the TS at any time by the TS multiplexer. If no packets are available at the multiplexer, it inserts null


packets (denoted by a PID value of 0x1FFF) to retain the specified TS bit rate. The multiplexer also does not synchronize the two PESs; indeed, the encoding and decoding delay for each PES may (and usually does) differ. A separate process is therefore required to synchronize the two streams (see below).

Single Program Transport Stream (Audio and Video PES).

Transmission of the MPEG-TS


Although the MPEG TS may be directly used over a wide variety of media (as in DVB), it may also be used over a communication network. It is designed to be robust, with short frames, each one protected by a strong error correction mechanism. It is constructed to match the characteristics of the generic radio or cable channel and expects an uncorrected Bit Error Rate (BER) of better than 10^-10. (The different variants of DVB each have their own outer coding and modulation methods designed for the particular environment.) The MPEG-2 Transport Stream is so called to signify that it is the input to the Transport Layer in the ISO Open System Interconnection (OSI) seven-layer network reference model. It is not in itself a transport layer protocol, and no mechanism is provided to ensure the reliable delivery of the transported data; MPEG-2 relies on underlying layers for such services. When the MPEG-TS is used over a lower-layer network protocol, the lower layer must identify the start of each transport packet, and indicate in the transport packet header when a transport packet has been erroneously received. Two MPEG TS packets also fit exactly into eight Asynchronous Transfer Mode (ATM) cells, assuming 8 B of overhead associated with the ATM Adaptation Layer (AAL): 2 x 188 B + 8 B = 384 B = 8 x 48 B cell payloads.

Single and Multiple Program Transport Streams


A TS may correspond to a single TV programme or multimedia stream (e.g. with a video PES and an audio PES). This type of TS is normally called a Single Programme Transport Stream (SPTS). An SPTS contains all the information required to reproduce the encoded TV channel or multimedia stream. It may contain only audio and video PESs, but in practice there will be other types of PES as well. Each PES shares a common timebase. Although some equipment outputs an SPTS, this is not the normal form transmitted over a DVB link. In most cases one or more SPTS streams are combined to form a Multiple Programme Transport Stream (MPTS). This larger aggregate also contains all the control information (Program Specific Information (PSI)) required to coordinate the DVB system, and any other data which is to be sent.


Streams supported by the MPTS

Most transport streams consist of a number of related elementary streams (e.g. the video and audio of a TV programme). The decoding of the elementary streams may need to be co-ordinated (synchronized) to ensure that the audio playback is in synchronism with the corresponding video frames. Each stream may be tightly synchronized (usually necessary for digital TV or digital radio programmes) or not synchronized (for example, programmes offering downloading of software or games). To help synchronization, time stamps may (optionally) be sent in the transport stream. There are two types of time stamp:
- The first type is usually called a reference time stamp. This time stamp is an indication of the current time. Reference time stamps are found in the PES syntax (ESCR), in the program syntax (SCR), and in the transport packet adaptation field as the Program Clock Reference (PCR).
- The second type of time stamp is called the Decoding Time Stamp (DTS) or Presentation Time Stamp (PTS). These time stamps are inserted close to the material to which they refer (normally in the PES packet header). They indicate the exact moment at which a video frame or an audio frame has to be decoded or presented to the user, respectively. They rely on the reference time stamps for their operation.
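The two families of time stamps use related clocks: PTS/DTS values are coded in units of a 90 kHz clock, while the PCR adds a 9-bit extension running at 27 MHz on top of a 33-bit 90 kHz base. A minimal sketch of converting them to seconds:

```python
# Sketch: converting MPEG-2 time stamps to seconds.
# PTS/DTS run on a 90 kHz clock; the PCR carries a 33-bit 90 kHz base
# plus a 9-bit extension counting 27 MHz ticks (0..299).
def pts_to_seconds(pts: int) -> float:
    return pts / 90_000.0

def pcr_to_seconds(base: int, ext: int) -> float:
    return (base * 300 + ext) / 27_000_000.0

print(pts_to_seconds(180_000))     # 2.0 seconds
print(pcr_to_seconds(180_000, 0))  # 2.0 seconds
```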

Signalling Tables
For a user to receive a particular transport stream, the user must first determine the PID being used, and then filter packets that have a matching PID value. To help the user identify which PID corresponds to which programme, a special set of streams, known as Signalling Tables, is transmitted with a description of each programme carried within the MPEG-2 Transport Stream. Signalling tables are sent separately from the PES and are not synchronized with the elementary streams (i.e. they form an independent control channel).


DVB Signalling Tables and Transport Layer PIDs

The tables (called Program Specific Information (PSI) in MPEG-2) consist of a description of the elementary streams which need to be combined to build programmes, and a description of the programmes. Each PSI table is carried in a sequence of PSI sections, which may be of variable length (but are usually small compared with PES packets). Each section is protected by a CRC (checksum) to verify the integrity of the table being carried. The length of a section allows a decoder to identify the next section in a packet. A PSI section may also be used for downloading data to a remote site. Tables are sent periodically by including them in the transmitted transport multiplex.

MPEG-2 Signalling Tables


- PAT - Program Association Table (lists the PIDs of the tables describing each programme). The PAT is sent with the well-known PID value of 0x0000.
- CAT - Conditional Access Table (defines the type of scrambling used, and the PID values of the transport streams that contain the conditional access management and entitlement information (EMM)). The CAT is sent with the well-known PID value of 0x0001.
- PMT - Program Map Table (defines the set of PIDs associated with a programme, e.g. audio, video).
- NIT - Network Information Table (PID 0x0010; contains details of the bearer network used to transmit the MPEG multiplex, including the carrier frequency).
- DSM-CC - Digital Storage Media Command and Control (messages to the receivers).


Programme Service Information (SI) provided by MPEG-2 and used by DVB


To identify the PID required to demultiplex a particular PES, the user searches for a description in a particular table, the Program Association Table (PAT). This lists all the programmes in the multiplex. Each programme is associated with a Programme Map Table (PMT), carried as a separate PSI section, which lists the set of PIDs (one for each PES) making up the programme. There is one PMT per programme. DVB also adds a number of additional tables, including those shown below.
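The PAT lookup described above can be sketched as follows. This simplified parser (our own illustration, following the ISO 13818-1 section syntax) assumes a single complete PAT section with the pointer field already skipped, and does not verify the CRC:

```python
# Simplified sketch of reading programme_number -> PMT PID pairs from a
# PAT section. Pointer field already skipped; CRC not verified here.
def parse_pat(section: bytes):
    assert section[0] == 0x00                    # table_id for the PAT
    section_length = int.from_bytes(section[1:3], "big") & 0x0FFF
    programs = {}
    # The programme loop starts after the 8-byte fixed header; the final
    # 4 bytes of the section are the CRC.
    loop = section[8 : 3 + section_length - 4]
    for i in range(0, len(loop), 4):
        program_number = int.from_bytes(loop[i : i + 2], "big")
        pid = int.from_bytes(loop[i + 2 : i + 4], "big") & 0x1FFF
        programs[program_number] = pid           # number 0 would mean the NIT
    return programs

# Hand-built example section: programme 1 uses PMT PID 0x0100 (CRC zeroed).
section = bytes([0x00, 0xB0, 0x0D, 0x00, 0x01, 0xC1, 0x00, 0x00,
                 0x00, 0x01, 0xE1, 0x00, 0x00, 0x00, 0x00, 0x00])
print(parse_pat(section))  # {1: 256}
```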

DVB Signalling Tables


In addition to the PSI carried in each multiplex (MPTS), a service also carries information relating to the service as a whole, since a service may use a number of MPTSs to send all the required programmes. This information is provided in the SI tables defined by DVB. Each table refers to the MPTS in which it is carried, and to any other MPTSs that carry streams offered as part of the same service.
- BAT - Bouquet Association Table (groups services into logical groups).
- SDT - Service Description Table (describes the name and other details of services).
- TDT - Time and Date Table (PID 0x0014; provides the present time and date).
- RST - Running Status Table (PID 0x0013; provides the status of a programmed transmission, and allows for automatic event switching).
- EIT - Event Information Table (PID 0x0012; provides details of a programmed transmission).

Service Information (SI) provided by DVB


Most viewers have little knowledge of the operation of these tables and interact with the decoder through a graphical or textual programme guide.

Format of a Transport Stream Packet

Each MPEG-2 TS packet carries 184 B of payload data prefixed by a 4 B (32 bit) header.


The header has the following fields:
- The header starts with a well-known Synchronization Byte (8 bits). This has the value 0x47 (0100 0111).
- A set of three flag bits indicates how the payload should be processed:
  1. The first flag indicates a transport error.
  2. The second flag indicates the start of a payload (payload_unit_start_indicator).
  3. The third flag is the transport priority bit.
- The flags are followed by a 13-bit Packet Identifier (PID). This is used to uniquely identify the stream to which the packet belongs (e.g. the PES packets corresponding to an ES), as generated by the multiplexer. The PID allows the receiver to differentiate the stream to which each received packet belongs. Some PID values are predefined and are used to indicate various streams of control information. A packet with an unknown PID, or with a PID that is not required by the receiver, is silently discarded. The particular PID value of 0x1FFF is reserved to indicate that the packet is a null packet (to be ignored by the receiver).
- Two scrambling control bits are used by conditional access procedures to encrypt the payload of some TS packets.
- Two adaptation field control bits may take four values:
  1. 01 - no adaptation field, payload only
  2. 10 - adaptation field only, no payload
  3. 11 - adaptation field followed by payload
  4. 00 - reserved for future use
- Finally there is a half-byte Continuity Counter (4 bits).
Two options are possible for inserting PES data into the TS packet payload:
1. The simplest option, from both the encoder and receiver viewpoints, is to send only one PES (or a part of a single PES) in a TS packet. This allows the TS packet header to indicate the start of the PES but, since a PES packet may have an arbitrary length, it also requires the remainder of the TS packet to be padded, ensuring correct alignment of the next PES to the start of a TS packet. In MPEG-2 the padding value is the hexadecimal byte 0xFF.
2. In general, a given PES packet spans several TS packets, so that the majority of TS packets contain continuation data in their payloads. When a PES packet starts, however, the payload_unit_start_indicator bit is set to 1, which means the first byte of the TS payload contains the first byte of the PES packet header. Only one PES packet can start in any single TS packet. The TS header also contains the PID, so that the receiver can accept or reject PES packets at a high level without burdening the receiver with too much processing. This mapping is, however, less efficient for short PES packets.

MPEG PES mapping onto the MPEG-2 TS
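As an illustration of the header layout described above, here is a minimal sketch of a TS packet header decoder (the function name is our own):

```python
# Sketch: decoding the 4-byte MPEG-2 TS packet header described above.
def parse_ts_header(pkt: bytes):
    assert len(pkt) == 188 and pkt[0] == 0x47, "lost sync"
    return {
        "transport_error":    bool(pkt[1] & 0x80),
        "payload_unit_start": bool(pkt[1] & 0x40),
        "transport_priority": bool(pkt[1] & 0x20),
        "pid":                ((pkt[1] & 0x1F) << 8) | pkt[2],  # 13 bits
        "scrambling":         (pkt[3] >> 6) & 0x03,
        "adaptation_field":   (pkt[3] >> 4) & 0x03,  # 01/10/11 as listed above
        "continuity_counter": pkt[3] & 0x0F,
    }

# A null packet carries the reserved PID 0x1FFF.
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"] == 0x1FFF)  # True
```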


Optional Transport Packet Adaptation Field


The presence of an adaptation field is indicated by the adaptation field control bits in a transport stream packet. If present, the adaptation field directly follows the 4 B packet header, before any user payload data. It may contain a variety of data used for timing and control. One important item in most adaptation fields is the Program Clock Reference (PCR) field. Another important item is the splice_countdown field. This field is used to indicate the end of a series of ES access units. It allows the MPEG-2 TS multiplexer to determine appropriate places in a stream where the video may be spliced to another video source without introducing undesirable disruption to the video replayed by the receiver. Since MPEG-2 video uses inter-frame coding, a seamless switchover between sources can only occur on an I-frame boundary (indicated by a splice count of 0). This feature may, for instance, be used to insert a news flash into a scheduled TV transmission. One other bit of interest here is the transport_private_data_flag, which is set to 1 when the adaptation field contains private data bytes. Another is the transport_private_data_length field, which specifies how many private data bytes follow. Private data is not allowed to extend the adaptation field beyond the TS payload size of 184 bytes.
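A sketch of pulling the PCR out of an adaptation field, following the coding in ISO 13818-1 (a 33-bit base at 90 kHz, 6 reserved bits, then a 9-bit extension at 27 MHz):

```python
# Sketch: extract the PCR (in 27 MHz ticks) from a TS adaptation field.
# Layout: length byte, flags byte, then (if PCR_flag set) a 48-bit field
# holding the 33-bit base, 6 reserved bits, and 9-bit extension.
def read_pcr(adaptation: bytes):
    length = adaptation[0]
    flags = adaptation[1]
    if length == 0 or not (flags & 0x10):  # PCR_flag not set
        return None
    raw = int.from_bytes(adaptation[2:8], "big")
    base = raw >> 15                       # top 33 bits (90 kHz units)
    ext = raw & 0x1FF                      # bottom 9 bits (27 MHz units)
    return base * 300 + ext

# Hand-built field: base 90000 (1 second of 90 kHz ticks), extension 0.
field = bytes([7, 0x10]) + ((90_000 << 15)).to_bytes(6, "big")
print(read_pcr(field))  # 27000000 ticks, i.e. 1 second at 27 MHz
```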

DVB Satellite
DVB transmission via satellite (often known as DVB-S) defines a series of options for sending MPEG-TS packets over satellite links. The DVB-S standard requires the 188 B (scrambled) transport packets to be protected by 16 bytes of Reed-Solomon (RS) coding.

MPEG Transport Service Encoding Specified by DVB-S

The resultant bit stream is then interleaved and convolutional coding is applied. The code rate may be selected by the service provider (from 1/2 to 7/8, depending on the intended application and available bandwidth). The digital bit stream is then modulated using Quadrature Phase Shift Keying (QPSK). A typical satellite channel has a 36 MHz bandwidth, which may support transmission at up to 35-40 Mbps.
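These figures can be cross-checked with a little arithmetic. The sketch below computes the useful TS bit rate from an assumed symbol rate and inner code rate (the 27.5 Msym/s figure is a commonly used configuration, not taken from the text):

```python
def dvbs_net_bitrate(symbol_rate_msps: float, fec: float) -> float:
    """Useful TS bit rate in Mbps: QPSK carries 2 bits/symbol, reduced by the
    inner convolutional code rate and the RS(204,188) outer code overhead."""
    return symbol_rate_msps * 2 * fec * (188 / 204)

# A typical 36 MHz transponder configuration: 27.5 Msym/s at rate 3/4.
print(round(dvbs_net_bitrate(27.5, 3 / 4), 2))  # 38.01
```

The result (roughly 38 Mbps) matches the 35-40 Mbps range quoted above; choosing a stronger code rate (e.g. 1/2) trades capacity for robustness.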



Digital Storage Media Command and Control (DSM-CC)


DSM-CC is a toolkit for developing control channels associated with MPEG-1 and MPEG-2 streams. It uses a client/server model connected via an underlying network (carried via the MPEG-2 multiplex or independently if needed). DSM-CC may be used for controlling video reception, providing features normally found on Video Cassette Recorders (VCRs) (fast-forward, rewind, pause, etc). It may also be used for a wide variety of other purposes, including packet data transport.

DVB: Digital Video Broadcasting


Digital Video Broadcasting (DVB) is a transmission scheme based on the MPEG-2 video compression standard and utilizing the standard MPEG-2 transmission scheme. It is, however, much more than a simple replacement for existing analogue television transmission. First, DVB provides superior picture quality with the opportunity to view pictures in standard format or wide-screen (16:9) format, along with mono, stereo or surround sound. It also allows a range of new features and services including subtitling, multiple audio tracks, interactive content and multimedia content - where, for instance, programmes may be linked to World Wide Web material.

DVB is a European initiative. Equipment conforming to the DVB standard is now in use on six continents, and DVB is rapidly becoming the worldwide standard for digital TV. At the time DVB was being developed in Europe, a parallel programme of standards and equipment development was under way in the U.S.A. by the Advanced Television Systems Committee (ATSC). ATSC is slightly different to DVB (obviously the U.S. was not too worried about this - for years it has lived with an insipid TV standard called NTSC, which has led to a slightly different TV market). Among other things, ATSC adopts a different audio coding standard and Vestigial Side Band (VSB) modulation. The U.S. has adopted a system based on ATSC, called Digital TV (DTV). During standardization this evolved into a hot debate between the PC-based manufacturers (favouring a non-interlaced display) and the TV manufacturers (favouring an interlaced format). There is nevertheless much in common between the US and European standards, and inter-operation between some DVB and DTV equipment has been demonstrated.

Flexibility - a key design goal of DVB


The DVB system is based on a generic transport system that does not impose any restriction on the type of material being sent. Transmission techniques have been defined for video, audio, Electronic Programme Guides (EPG) (multimedia magazine-style), Event Schedule Guides (ESG) (teletext-style programme guides which are processed and displayed using the receiver's menus), pay-per-view TV, and data carousels (resembling traditional teletext). There is even provision for the transmission of Internet Protocol packets over the same system. Using DVB, a single 38 Mbps satellite DVB transponder (DVB-S) may be used to provide one of a variety of services:
- 4-8 Standard TV channels (depending on programme style and quality)
- 2 High Definition TV (HDTV) channels
- 150 Radio programmes
- 550 ISDN-style data channels at 64 kbps
- A variety of other high and low rate data services
Alternatively, any combination of services may be simultaneously carried up to the maximum transponder capacity.



Open Design
The DVB system is designed to be open - allowing receivers to implement as much or as little functionality as they need. All parts of the standard allow manufacturers to add sophistication and additional features to differentiate their products, while providing the basic functions required for interoperability.

Summary of DVB Features


1. MPEG-2 standard for video & audio
2. Fixed rate simplex transmission
3. Extends MPEG-2 transport facilities:
   i. Programme Guides (both teletext-style and magazine-style formats)
   ii. Specifications for Conditional Access (CA)
   iii. Optional return channel for interactive services
   iv. Various types of packet

DVB Standards and related documents are published by the European Telecommunications Standards Institute (ETSI). These include a large number of standards and technical notes to complement the MPEG-2 standards defined by the ISO.

DVB Transmission
DVB builds upon MPEG-2 and uses MPEG-2 transmission. It also defines additional private sections and provides a definition of the physical medium (modulation, coding, etc.) needed to carry the MPEG-2 Transport Streams.

Typical Streams carried in a DVB Terrestrial Transport Multiplex


Each MPEG-2 MCPC multiplex carries a number of streams which in combination deliver the required services. Here is a sample breakdown of the various MPEG-2 streams being used to provide a terrestrial 24 Mbps TV multiplex:

Sample per-multiplex overheads:
  Stream              bit rate (kbps)
  SI                    300
  PSI                   546
  Digital Teletext      754
  Total per Mux        1600

Bit rate per programme:
  Stream              bit rate (kbps)
  TV Video *           5000
  Stereo Audio          270
  SubTitles              50
  Conditional Access    600
  Total per Programme  5920

(* Video at 4-6 Mbps depending on content; Conditional Access may not be required)
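The budget above can be checked with simple arithmetic. The sketch below (figures taken from the table) sums the overheads and estimates how many such programmes fit into a 24 Mbps terrestrial multiplex:

```python
# Per-multiplex overhead streams and per-programme streams, in kbps.
overheads = {"SI": 300, "PSI": 546, "Digital Teletext": 754}
programme = {"TV Video": 5000, "Stereo Audio": 270,
             "SubTitles": 50, "Conditional Access": 600}

mux_overhead = sum(overheads.values())   # kbps consumed once per multiplex
per_programme = sum(programme.values())  # kbps consumed by each programme

# Programmes that fit in a 24 Mbps multiplex after the shared overhead.
n = (24000 - mux_overhead) // per_programme
print(mux_overhead, per_programme, n)  # 1600 5920 3
```

The result (three full-quality programmes with conditional access) agrees with the channel counts quoted in the following paragraph; dropping conditional access or lowering the video rate frees room for more programmes.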



This allows a standard 8 MHz channel to carry 5 TV channels, or 4 higher quality channels without conditional access, or 3 high quality channels with conditional access. The remaining bandwidth may be used for other services such as Electronic Programme Guides (EPGs), Audio Description for the visually impaired (~70 kbps), signing for the deaf (~400 kbps using a separate window), house pages, digital data, software downloads, etc.

DVB Bearer Networks


The DVB standards allow a DVB signal to be carried over a range of bearer networks. Various standards have evolved which define transmission over particular types of link:
- DVB-S (Satellite): ETS 300 421 (Digital Satellite Transmission Systems)
- DVB-T (Terrestrial): ETS 300 744 (Digital Terrestrial Transmission Systems)
- Interfaces to Plesiochronous Digital Hierarchy (PDH) networks (prETS 300 813)
- Interfaces to Synchronous Digital Hierarchy (SDH) networks (prETS 300 814)
- Interfaces to Asynchronous Transfer Mode (ATM) networks (prETS 300 815)
- Interfaces for CATV/SMATV Headends and similar Professional Equipment (EN 50083-9)

Digital Satellite TV (DVB-S)


Satellite transmission has led the way in delivering digital TV to viewers. A typical satellite channel has 36 MHz bandwidth, which may support transmission at up to 35-40 Mbps (assuming delivery to a 3.5 m receiving antenna) using Quadrature Phase Shift Keying (QPSK) modulation. The video, audio, control data and user data are all formed into fixed-size MPEG-2 transport packets. The MPEG TS packets are grouped into 8-packet frames (1504 B). The frames do not carry any additional control information; instead, to enable the receiver to find the start of each frame, the sync byte is inverted (0x47 becomes 0xB8) in the first TS packet of each frame. The packet contents are scrambled so that the data follows an approximately random pattern, assuring energy dispersal of the modulated signal; the scrambler is re-initialized at the start of each frame. 16 bytes of Reed-Solomon (RS) coding are added to each 188 byte transport packet to provide Forward Error Correction (FEC) using a RS (204,188,8) code. For the satellite transmission, the resultant bit stream is then passed through a convolutional ("organ-pipe") interleaver and convolutional coding is applied; the code rate ranges from 1/2 to 7/8 depending on the intended application and available bandwidth. The digital bit stream is then modulated using QPSK. The complete coding process may be summarized as:
- Inverting every 8th synchronization byte
- Scrambling the contents of each packet
- Reed-Solomon (RS) coding at 8% overhead
- Interleaved convolutional coding (code rate from 1/2 to 7/8 depending on the intended application)
- Modulation of the resulting bit stream using Quadrature Phase Shift Keying (QPSK)
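The framing step can be sketched as follows (a minimal illustration of the sync-byte inversion only; scrambling, RS coding and interleaving are omitted, and the function name is my own):

```python
def frame_packets(ts_packets: list) -> bytes:
    """Group 8 TS packets into one DVB-S frame, inverting the first sync byte
    (0x47 -> 0xB8) so a receiver can locate frame boundaries and know where
    the energy-dispersal scrambler is re-initialized."""
    assert len(ts_packets) == 8 and all(p[0] == 0x47 for p in ts_packets)
    out = bytearray()
    for i, p in enumerate(ts_packets):
        q = bytearray(p)
        if i == 0:
            q[0] = 0x47 ^ 0xFF  # bitwise inversion of the sync byte gives 0xB8
        out += q
    return bytes(out)

frame = frame_packets([bytes([0x47]) + bytes(187) for _ in range(8)])
print(len(frame), hex(frame[0]), hex(frame[188]))  # 1504 0xb8 0x47
```

A receiver hunts for the 0xB8 byte recurring every 1504 bytes to acquire frame lock before descrambling.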

Digital Terrestrial TV Network (DVB-T)


Legislation passed in 1996 opened the way for new companies to enter the terrestrial TV market, while protecting the interests of the main players. This led to the granting of licences to various companies during 1997, and the planned rollout of the first direct-to-home digital terrestrial services during November 1998. A standard TV channel is 8 MHz wide, accommodating the TV signal in Phase Alternate Line (PAL) format and the sound subcarrier. The same 8 MHz of bandwidth may be used to provide a 24 Mbps digital transmission path using Coded Orthogonal Frequency Division Multiplexing (COFDM) modulation. This may support up to 6 digital TV channels. The information is transmitted in the following manner:


- COFDM modulation, with Quadrature Phase Shift Keying (QPSK) carrier modulation (COFDM uses either 1705 carriers, usually known as '2k', or 6817 carriers, '8k')
- Reed-Solomon (RS) coding at 8% overhead
- Interleaved convolutional coding (the level of coding depends on the intended application)

Some countries have chosen to introduce a digital TV service based on more robust modulation, allowing mobile reception of the signal, or to pilot high definition TV - in both cases bandwidth (number of channels) has been traded for these benefits.

DVB Receivers & Transmission


DVB Receivers
DVB transmissions may be received via a variety of equipment, but will typically be received by one of the following:

- A Set-Top Box (STB), costing $200-$600, connected via the SCART or standard TV antenna connection to a TV set. More sophisticated STBs contain more than one DVB decoder, the secondary decoder(s) providing a bit stream to an internal hard disk where selected programme content can be stored in digital format. Such a STB can be pre-programmed to record programmes (as with a Video Cassette Recorder (VCR)), but may also implement pause and rewind functions (where users are given playback control of the content, allowing them to defer viewing when, for instance, the telephone rings).
- An in-built decoder forming part of a digital TV set or a digital Video Cassette Recorder (VCR).
- A receiver located centrally in a house which feeds digital TVs, VCRs, Camcorders, etc. with signals via a digital bus (e.g. the IEEE 1394 FireWire bus).
- A DVB-compliant PC receiver card which displays the various styles of content (TV, Audio, and Data) on the PC screen.
- A DVB Multimedia Home Platform (MHP).

The next generation of digital recorders will provide much greater functionality than existing VCRs. By examining information transmitted as MPEG meta-data prior to each programme, the digital recorder may itself determine whether the broadcast content should be recorded. This decision could, for instance, be made by matching the meta-data against a user-specified customer profile stored in the digital recorder. Such a profile also allows a future Electronic Programme Guide (EPG) to generate customized schedules.



Multimedia Home Platform (MHP)


The Multimedia Home Platform (MHP) is being defined to enable DVB receivers to be constructed in a common way and to provide common features and interfaces. DVB MHP uses the JAVA programming language (developed by SUN Microsystems) to access an Applications Program Interface (API) which gives access to DVB services. Using JAVA code, manufacturers can add additional capability to differentiate their products in the market place, as well as implementing industry-standard EPGs and user interfaces. Three application profiles have been defined:
- Enhanced Broadcast
- Interactive Television
- Internet Access
The first MHP standards were defined in ETSI TS 101 812 (2000).

Wide-Screen Format
It has been claimed that the human field of vision is naturally wider than it is high, and that by increasing the width of a TV picture one can gain a substantial improvement in the viewer experience. Certainly this format allows the viewer much more control of the scene being presented - with the opportunity to focus on specific interests rather than following only the intended story line. The UK has piloted a wide-screen service using "PALplus" in 16:9 format. After much market surveying, most people seem to acknowledge the advantages of wide-screen TV (as opposed to the traditional 4:3 screen format). The 16:9 format has been adopted as the standard for High Definition (HD) TV.

Attempting to view wide-screen TV on a standard TV presents some problems - one either misses the left and right parts of the picture, or one sees the entire picture through a "letter box" which places broad black stripes across the top and bottom of the picture. Viewer surveys have shown considerable resistance to adopting a 16:9 "letter box" format for 4:3 TVs. It seems that the best way forward for traditional TVs is to adopt a small letter box, 14:9, which loses some information at each side and introduces a small black band at the top and bottom of the picture. The plan is therefore to roll out TV in 16:9 format (much TV is already produced in this format), but to "guard" the TV picture, ensuring that all essential material is captured in the central 14:9 portion of the display.

Programme Guides
The purpose of a programme guide is to assist the viewer in selecting the programmes to be viewed. The guide presents a selection of information about the current TV programmes being shown, and the coming programmes. The programme guide is not bound to the channels being viewed, and it is feasible to see programme information for channels which are not currently being received. The programme guide carries all programme information, including regional variations which it may not be possible to receive outside a specific area, within a common MPEG-2 multiplex stream. Two types of guide have been suggested: one simpler, allowing more flexibility, and resembling the TV listings found in newspapers; the other more sophisticated, resembling a web-based TV magazine.

 Event Schedule Guide (ESG)
The simplest type of guide resembles that provided by the analogue teletext system. This type of guide may be constructed by the receiver based on the Service Information (SI) contained in the MPEG-2 multiplex stream. Transmitting the information in this way allows the receiver manufacturers and viewers to determine the presentation format - level of detail, preferred presentation style, channels of interest, type of programme desired, etc. A bit rate of about 50 kbps is required for a week's programme listings (assuming 5 channels of traditional TV).

 Electronic Programme Guide (EPG)
An EPG presents an additional service to the user in the form of a multimedia magazine-style channel guide. This type of guide requires support in the manufacturer's set top box for a consistent interface to allow the viewing of video stills, movie clips, buttons, layout, etc. It is expected that the look and feel for this


type of guide will be dictated by the service provider. EPGs have some considerable advantages when considering other services which may be provided: home shopping, remote banking, etc. There is also a bigger opportunity for advertising. The cost is in the standardization of the STB interface and the extra bandwidth consumed by the guide.

DSM-CC for Software Download


Most decoders will have software-based user interfaces implemented using a microprocessor. In many cases the microprocessor will also perform configuration and other housekeeping tasks associated with decoding the various MPEG-2 streams. It has been suggested that the DSM-CC service could be used to reload and update this software over the MPEG-2 broadcast link. This would use a DSM-CC object carousel to download each type of software (presently each STB has its own architecture, and therefore requires a specific software image). To ensure that the software is always available, it has been suggested that it should be encrypted with a key which is independent of the Conditional Access (CA) system; an RSA key has been suggested. The details of downloading - how to recover from faults, who controls the software in the viewer's STB, etc. - are still issues for debate.

Data Transmission using MPEG-2 and DVB


Introduction
The growing use of multimedia-capable personal computers to access the Internet, and in particular the use of the World Wide Web, has resulted in a growing demand for Internet bandwidth. The emphasis has moved from basic Internet access to the expectation that connectivity may be provided whatever the location. This presents fresh challenges to the networking community, particularly as users become familiar with the benefits of high-speed connectivity. Along with the increased use of the Internet, there has also been a revolution in television transmission, with the emergence of Digital Video Broadcasting (DVB). The same system may support high-speed Internet access, and this is being supported on a number of DVB satellite systems. A high speed (6-34 Mbps) simplex data transmission system may be built using a digital Low Noise Block (LNB) and standard TV antenna connected via an L-band coaxial cable to a satellite data receiver card in a PC (or LAN adapter box). In many cases, a return link may be established using the available terrestrial infrastructure (e.g. a standard dial-up modem), providing the full-duplex communication required for an Internet service. Low-cost satellite return channels are also available. The overall system may provide low cost, high bandwidth Internet access to any location within the downlink coverage of a DVB satellite service.



Forward Data Transmission

Typical configuration for providing Direct to Home (DTH) Internet delivery using DVB

Data is already being sent over DVB networks using the MPEG-2 transport stream, using a variety of proprietary encoding schemes. Data transmission may be simplex or full duplex (using an interaction channel for the return) and may be unicast (point-to-point), multicast (one to many) or broadcast (all receivers receiving the assigned PID). A number of manufacturers supply DVB-S (satellite) receiver cards with data capability, and data gateways to packetize the data to be sent. Some suppliers include:
- Media 4 Inc
- PACE/TFC (see also ASTRA-Net)
- SAGEM
- NDS
- Philips
- COMSTREAM (MediaCast)
- Adaptec
- Sony (Mediacaster)

In an effort to standardize these services, the DVB specification suggests data may be sent using one of five profiles:
1. Data Piping - where discrete pieces of data are delivered in containers to the destination. There is no timing relationship between the data and other PES packets.
2. Data Streaming - where the data takes the form of a continuous stream which may be asynchronous (i.e. without timing, as for Internet packet data), synchronous (i.e. tied to a fixed rate transmission clock, as for emulation of a synchronous communication link) or synchronized (i.e. tied via time stamps to the decoder clock and hence to other PES packets, as for the display of TV captions). The data is carried in a PES.
3. Multi-Protocol Encapsulation (MPE) - a technique based on DSM-CC, intended to provide LAN emulation for the exchange of packet data.
4. Data Carousels - a scheme for assembling data sets into a buffer which is played out in a cyclic manner (periodic transmission). The data sets may be of any format or type. One example use is to provide the data for Electronic Programme Guides (EPGs). The data are sent using fixed sized DSM-CC sections.
5. Object Carousels - object carousels resemble data carousels, but are primarily intended for data broadcast services. The data sets are defined by the DVB Network Independent Protocol specification. They may be used, for example, to download data to a set top box decoder.

For Internet data transmission, the recommended procedure is to use MPE. Backwards compatibility with proprietary data transmission schemes using piping/streaming is provided by assigning a registered Service Information (SI) code to each data format. Only the SI codes recognized by a receiver/decoder are processed, enabling continued processing of proprietary encodings by receivers with the appropriate hardware.

Digital Storage Media Command and Control


DSM-CC is a toolkit for developing control channels associated with MPEG-1 and MPEG-2 streams. It is defined in part 6 of the MPEG-2 standard (ISO/IEC 13818-6, Extensions for DSM-CC) and uses a client/server model connected via an underlying network (carried via the MPEG-2 multiplex or independently if needed). DSM-CC may be used for controlling video reception, providing features normally found on Video Cassette Recorders (VCRs) (fast-forward, rewind, pause, etc). It may also be used for a wide variety of other purposes, including packet data transport. DSM-CC may work in conjunction with next-generation packet networks, working alongside such Internet protocols as RSVP, RTSP, RTP and SCP.

Digital Storage Media Command and Control (DSM-CC) Packet Download


Compared to other download protocols, DSM-CC download is designed for lightweight and fast operation in order to meet the needs of devices with limited memory. DSM-CC download operates over heterogeneous connections and is applied to a number of network models, one of which is the broadcast model with no upstream channel. The mechanisms used in download are:
- A sliding window
- No ACK, for use in broadcasting
- Mapping to the MPEG-2 Transport Stream for hardware multiplexing

DSM-CC Multi Protocol Encapsulation


Using the multi-protocol encapsulation, each frame of data is encapsulated in an Ethernet-style section.
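As an illustration, the sketch below builds a simplified datagram section around an IP payload. The field layout is deliberately abbreviated (the real EN 301 192 layout scatters the MAC address bytes across the section header and defines several flag fields omitted here), but the CRC-32 is the genuine MPEG-2 section CRC (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no bit reflection):

```python
def mpeg_crc32(data: bytes) -> int:
    """CRC-32 as used in MPEG-2 PSI/DSM-CC sections (poly 0x04C11DB7)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF if crc & 0x80000000 \
                  else (crc << 1) & 0xFFFFFFFF
    return crc

def mpe_section(mac: bytes, ip_datagram: bytes) -> bytes:
    """Simplified MPE-style datagram section: header, MAC, payload, CRC-32.
    Illustrative layout only; see EN 301 192 for the exact bit positions."""
    assert len(mac) == 6
    length = 6 + 1 + len(ip_datagram) + 4      # MAC + flags + payload + CRC
    section = bytearray([0x3E])                # table_id of a datagram section
    section += bytes([0x80 | (length >> 8), length & 0xFF])
    section += mac + b"\x00" + ip_datagram
    section += mpeg_crc32(bytes(section)).to_bytes(4, "big")
    return bytes(section)

# A receiver checks integrity by re-computing the CRC over the whole section
# (including the trailer): the result is zero for an intact section.
s = mpe_section(b"\x02\x00\x00\x00\x00\x01", b"\x45" + bytes(19))
print(hex(mpeg_crc32(b"123456789")), mpeg_crc32(s))  # 0x376e6e7 0
```

The zero-residue check is the property DVB receivers rely on to discard corrupted sections before passing the datagram up to the IP stack.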



Return Channel Systems


Various types of return channel system have been defined by DVB, including:
- DVB-RC Return Channel System
- DVB-RCS Return Channel System for Satellite
- DVB-RCT Return Channel System for Terrestrial TV
- DVB-RCC Return Channel System for Cable TV

Return Data Transmission (Interaction Channel) via Satellite


The DVB-S standards provide transmission of data from a satellite hub station up-link to receivers (typically PCs or adapter boxes). The advantages of using DVB include the availability of low cost (mass produced) system components and the ability to integrate transmission with digital TV distribution (sharing the transmission costs and providing a future integration path for TV and Internet services). Using these components an Internet service may easily be offered. Simplex data transfer (using UDP) requires no further components at the receiver, but full duplex communication (as required for TCP) requires the addition of a return channel (sometimes known as an "interaction channel"). The complete system consists of a packet data processor at the hub site (which typically formats the IP packets using MPE) and a decoder card at the PC. A client sends requests for data transfer (and later acknowledgments as the session progresses) through the terrestrial network, while the server transfers the data to the client through the higher speed satellite link. The client network software is configured to redirect the packets destined for the hub site to a "virtual connection" formed by an IP tunnel, which sends packets from the client back to the hub site using the dial-up modem connection. Once the return packets reach the hub site, they may be either forwarded to a server, or routed to the Internet (typically using a high-speed fibre connection to a terrestrial Internet Service Provider).


Each user (i.e. DVB/MPEG-2 receiver) is allocated a MAC address (for example, from a subscriber database) to match the IP addresses of the equipment at the remote site. The MAC address is used to uniquely identify the user equipment.

Protocol architecture for (a) Outbound link to client and (b) return link from client

Each frame of data is encapsulated by adding a section header, a Medium Access Control (MAC) address (arranged to ease processing by receivers with limited capability) and an optional header with Logical Link Control (LLC) / Sub-Network Access Protocol (SNAP). The data is protected by a CRC-32 checksum. The entire block of data is known as a section. The section length is adjusted by adding padding bytes to ensure it may be segmented into an integral number of 188 B MPEG-2 transport packets. The transport packets are assigned a PID, based on the routing information at the hub site. A set of users may be allocated the same PID - forming a Virtual Private Network (VPN) - or alternatively one PID may be allocated to each user. The first packet (with the start of the section) has a flag bit set to indicate that it contains the start of the section. The packets in the DSM-CC section may be scrambled using conditional access control, which scrambles the MAC address (preventing other users from knowing the traffic to each MAC address) and/or the packet data. Encryption is controlled by flag bits in the DSM-CC encapsulation header. Normally packets are sent unicast (i.e. point-to-point), in which case only one receiver will forward the data; the other receivers in the network will receive, but discard, the data (since either the MAC address and/or PID will not correspond to their internal filters). Multicast transmission is possible, using a multicast address. No provision is made for group management (this is assumed to be provided by some other means, e.g. using a terrestrial return channel).
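The segmentation-and-padding step can be sketched as follows. This is a simplification (the real mapping also uses a pointer_field before the section start and maintains continuity counters, both omitted here; the function name is illustrative):

```python
def section_to_ts(section: bytes, pid: int) -> list:
    """Segment a section into 188 B TS packets on the given PID.
    The payload_unit_start_indicator flag is set only on the first packet,
    and the final packet is padded out with 0xFF stuffing bytes."""
    packets, pos, first = [], 0, True
    while pos < len(section):
        chunk = section[pos:pos + 184]
        chunk += b"\xFF" * (184 - len(chunk))           # padding bytes
        header = bytes([0x47,
                        (0x40 if first else 0x00) | ((pid >> 8) & 0x1F),
                        pid & 0xFF,
                        0x10])                           # payload only
        packets.append(header + chunk)
        pos, first = pos + 184, False
    return packets

pkts = section_to_ts(bytes(400), pid=0x123)
print(len(pkts), len(pkts[0]))  # 3 188
```

A 400-byte section thus occupies three transport packets, with the flag bit set on the first so the receiver knows where reassembly begins.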



Although a satellite-based system is capable of providing a high bandwidth (simplex) path from a satellite service provider to a receiver, the connectivity from a user to the service provider is usually provided using a low speed terrestrial link. This results in a network connection in which the capacity to a remote server differs from the capacity from the server. Such asymmetry in the network paths may be suited to user needs, since most Internet connections receive much more data than they send. However, a high degree of asymmetry in the forward and return paths introduces a potential bottleneck in performance. There are also other important considerations in providing a TCP Internet service via satellite, which are discussed below.
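A rough, illustrative calculation shows why the return path can become the bottleneck: with TCP, the rate at which acknowledgments can flow back limits the rate at which new data can be released forward. All figures below are assumptions for illustration, not values from the text:

```python
def ack_limited_mbps(return_kbps: float, mss_bytes: int = 1460,
                     ack_bytes: int = 40, segs_per_ack: int = 2) -> float:
    """Upper bound on forward TCP throughput (Mbps) when the return channel
    can only carry a limited number of ACKs per second (delayed ACKs assumed,
    one ACK per two full-size segments)."""
    acks_per_s = (return_kbps * 1000 / 8) / ack_bytes
    return acks_per_s * segs_per_ack * mss_bytes * 8 / 1e6

# A 33.6 kbps dial-up modem return path acknowledging every second segment:
print(round(ack_limited_mbps(33.6), 2))  # 2.45
```

Under these assumptions the forward path is capped at roughly 2.5 Mbps, regardless of how much satellite capacity is available, which is one reason ACK compression and other asymmetry mitigations matter for satellite Internet services.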

MPEG-2 Encoders and Decoders

The secret of MPEG-2 is the flexibility offered to the manufacturers of equipment, while adhering to the core MPEG-2 standard. Encoders and decoders come at a range of prices with a range of capabilities. Not all decoders are equal: manufacturers decide how much functionality to include. They may, for instance, support electronic programme guides, interpretation of optional MPEG data, and a range of audio formats. Even a stream of encoded MPEG-2 video may be handled at various levels of sophistication (advanced codecs, for instance, provide features which may mask the effect of loss of all or part of a frame of video). In many ways, the encoder is the key item. There are many options when encoding a video picture into an MPEG-2 bit stream. The ability to maintain picture quality with rapidly moving scenes and complex pictures will depend upon the algorithms and processing power of the encoder. For real-time transmission, such as encoding the output of a video camera, this may necessitate expensive equipment. For non-real-time use, such as converting stored analogue/digital clips into an MPEG-2 video clip library, less expensive (but slower) equipment may be used.

MPEG-2 Decoders
The various types of MPEG-2 decoder may be broadly classified as shown below:

Software MPEG-2 Decoders



Software-based MPEG-2 players usually require the support of specialized instruction sets to provide the bit manipulations required for video rendering. Such support is provided in most modern processors (e.g. MMX). These players are well suited to delivery from local high speed media (e.g. to replay video recorded on Digital Versatile Disk (DVD)), but are generally CPU intensive, leaving little spare processing capacity on current CPUs for other tasks. Display is to a window on the computer display.

PC-Based MPEG-2 Accelerators

PC-based MPEG-2 accelerators are available for personal computers and/or workstations. The accelerator provides a range of functions to ease the processing of the MPEG-2 video decompression, but still requires CPU intervention to decode the MPEG-2 stream. The primary application of accelerators is the replay of recorded MPEG-2 video from a local disk. Mediamatics have specified a "standard" interface to such decoders.

Computer MPEG-2 Decoders

MPEG-2 decoders relieve the computer CPU of nearly all processing. They may display on an external monitor, as a video overlay on the existing computer monitor, or by direct writing of the video bit map to the display memory. Decoders may receive data from a Network Interface Card (NIC) or a local disk. The host computer is free to do other tasks - although disk access speed may often be a limiting factor on overall system performance.

Network Computers / Thin Clients



An alternative design is to embed the MPEG-2 decoder in equipment designed to support MPEG-2 decoding together with other basic functions (e.g. web browsing, ftp, telnet, programme guides). Such equipment may be based around a virtual machine (e.g. a JAVA virtual machine) and be reprogrammable by the user and/or network applications. A high quality display would normally be connected via an S-video or SCART connector. By reducing the functionality of such a design, manufacturers may eliminate interface and configuration options - substantially reducing the overall cost of ownership (i.e. little additional software required, easy installation, simple user interface) compared to a common PC. The key emphasis will be on ease of use, with the intention of reaching the many people who do not need (or will not use) a common PC.

The MPEG-2 Set-Top Box (STB)

Most people use TV sets in a different way to computers: they locate them in different rooms and have different expectations, particularly regarding the lifetime of the equipment (10 years?), the initial cost, the running cost (zero for hardware, a small charge for programming), and the user interface (remote control). There is therefore a need for equipment that allows connection of standard (PAL/NTSC) TVs to a digital video network. Such equipment will take the form of a set top box which replaces, or sits alongside, existing satellite/cable receivers. It is likely that it will support DVB (and/or ATSC) and provide an interaction channel (which may optionally be used).

MPEG-2 Consumer Equipment


Future TVs and video equipment may include built-in decoders, reducing the cost of the set-top box and allowing further integration of functions. An alternate scenario is also possible, in which the set-top box and a network computer / thin client are brought together into a single piece of equipment with a range of interfaces to various types of broadcast network (DVB/ATSC) and packet/cell data networks. Such equipment may be viewed more as a utility interface (in the same way as an electricity/gas/water meter) and be hidden away out of sight. The household equipment (TV, VCR, camcorder, PCs) would then be interconnected using a common high-speed communications link (e.g. IEEE 1394 FireWire).

Delivery of MPEG-2
MPEG-2 data may be delivered to the decoder in a variety of ways. These may be categorized in one of two groups, depending upon whether the receiver buffers a substantial volume of data prior to playing.

Streaming applications

These applications have only a small buffer at the receiver and therefore require that the jitter introduced during transmission not exceed the maximum delay the buffer can accommodate. They usually have insufficient buffer space for retransmission and require data to be sent as a continuous stream (either to a group of synchronized receivers or to individual receivers):
- Replay of "live" network media from a streaming video server (e.g. Windows Media Server)
- Replay over a public/private broadcast network (e.g. DVB)
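The buffer constraint above can be sketched numerically. This is an illustrative calculation only (the function name and the 4 Mbps / 50 ms figures are assumptions, not values from any standard): the receiver's de-jitter buffer must hold at least the data that plays out while the most-delayed packet is still in flight.

```python
def dejitter_buffer_bytes(bitrate_bps: int, max_jitter_ms: float) -> int:
    """Minimum receiver buffer (bytes) needed to absorb a given arrival jitter.

    The buffer must cover the data consumed during the worst-case jitter:
    bitrate * jitter, converted from bits to bytes.
    """
    return int(bitrate_bps * (max_jitter_ms / 1000.0) / 8)

# A 4 Mbps MPEG-2 programme with up to 50 ms of network jitter:
print(dejitter_buffer_bytes(4_000_000, 50))   # 25000 bytes (~25 KB)
```

If the network cannot bound its jitter below the delay this buffer can absorb, a streaming application of this kind will underflow and the picture will break up.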

Buffered Applications

These applications have a buffer at the receiver and may buffer all or part of the video clip before playing. They are therefore less sensitive to jitter introduced during transmission, and may be tolerant of a small level of loss; the protocol used may permit some form of retransmission:
- Replay from local storage media (e.g. DVD)
- Replay of network-downloaded content from a file/web server (e.g. using HTTP)
- Replay using a purpose-designed video-clip transfer protocol (e.g. TimeLine)

Issues
There are two important issues to consider when thinking about MPEG-2 delivery. The first arises directly from the bit rate needed to support MPEG-2 applications, typically 3-6 Mbps per programme. If each user selects their own material, the bandwidth needed may be multiplied by the number of users. This places exacting demands on the network and the server, and also on the client hosts (which normally need to handle only one programme, but are usually of a lower specification than the server). The other issue is Quality of Service (QoS): different applications and delivery mechanisms have different needs in terms of maximum tolerable jitter and maximum tolerable loss rate.
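The bandwidth multiplication described above is simple arithmetic; the sketch below (hypothetical function and figures, not from any standard) contrasts per-user unicast delivery with a shared broadcast of the same programme.

```python
def aggregate_bandwidth_mbps(users: int, per_programme_mbps: float,
                             shared: bool = False) -> float:
    """Network capacity needed to serve a population of viewers.

    If the programme is broadcast (shared), one copy serves everyone;
    with individual selection, the load scales with the user count.
    """
    return per_programme_mbps if shared else users * per_programme_mbps

print(aggregate_bandwidth_mbps(200, 5.0))               # 1000.0 Mbps, unicast
print(aggregate_bandwidth_mbps(200, 5.0, shared=True))  # 5.0 Mbps, broadcast
```

This is why broadcast-style MPEG-2 delivery (e.g. over DVB) scales so much better than per-user video on demand for popular content.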

Programming APIs for MPEG-2 Computer Cards


Applications may be written to use MPEG-2 on a range of platforms using Application Programming Interfaces (APIs) such as QuickTime (Mac, etc.) and others. APIs are also available for the PC platform and may be categorized as:
1. Proprietary APIs developed by individual MPEG-2 decoder card suppliers
2. Microsoft MCI, defined as a 16-bit interface providing support for overlay cards
3. Microsoft ActiveMovie-1
4. Microsoft DirectShow (ActiveMovie-2), part of the DirectX API, allowing bit-mapped access to the display and filters to be defined to process the video
At the time of writing, many developers are committed to producing DirectShow APIs, but few are able to supply them.

Broadband Multimedia Satellite Systems


What is Broadband Multimedia?
The term Broadband literally means large capacity, but it means much more than that in the communications industry. The most widely discussed broadband systems were those designed around the Asynchronous Transfer Mode


(ATM), or cell-based switching. These systems were designed to support multiple services and have become synonymous with multimedia capability. Multimedia is a term that may be applied to digital media-rich content, digital platforms, and networks. Multimedia networks enable a number of services to be sent simultaneously over the same network. Examples include support for different classes of Internet customers (home, business) and high-availability Video on Demand, Voice over IP, videoconferencing, etc. In the case of ATM, multimedia is linked to the ability to control the Quality of Service (QoS) of each class of service, and in some cases to specify individual Service Level Agreements. More recently, the full range of multimedia services has been demonstrated over next-generation IP networks, leading to questions about the merits of introducing a unified ATM network. Types of multimedia service:
- TV - well suited to satellite (likely to remain the killer application for at least the next few years)
- Video on Demand - the ability to download hours of TV programmes
- Electronic Programme Guide (EPG) - now widely used by many digital TV users
- E-commerce services (shopping, gaming, etc.)
- Internet proxied services (selected web, email)
- Internet access - offered only by some current systems
- Games - surprising take-up by some users
Most modern broadband systems also permit some level of interactivity. This is true of most current content (TV, radio, etc.), but especially true of the new digital services, where users often wish to sort, manipulate or participate based on the received content. Although it is possible to conceive of a one-way Internet service - one-way routed Internet (e.g. supplying supplementary capacity to network service providers requiring bandwidth exceeding their purchased terrestrial capacity) - most services will require some form of return channel to allow two-way packet flow.
Multi-platform delivery is also a key concept for many people thinking of broadband multimedia. This is the ability to deliver content (digital video, web pages, etc.) to a range of network devices: TV sets, personal PCs, multimedia devices, web-enabled telephones (e.g. WAP) and wireless devices. To date, the very different user interfaces, display capabilities and human-factors considerations have made multi-platform delivery difficult. Many proposed networks will instead restructure information for each type of display. Some suggest this is a short-term solution, and that future generations will have to converge on a common technology (presumably Internet-based for data applications). There are five key players in a typical broadband multimedia system:
- Content Owners / Providers
- Middleware developers (platform, user interface, portals, etc.)
- Service Providers (customer management, proxies, supporting services; lease networking resources from Network Operators)
- Network Operators (infrastructure & equipment to connect the various players)
- Customers (people / companies willing to pay Service Providers to have the content delivered to them)

Broadband Multimedia Content - Is Content King?


The key issue in bringing broadband multimedia networks to reality was once thought to be content. Finding the right content was seen as a key part of planning a broadband system. Early content that was well suited included news, sports, weather and financial data - mainly text-based. One way to generate additional content for a new format is to use content gateways (which rewrite content across HTML, OpenTV, WAP, etc.). Although such gateways may seem attractive, they are unlikely to provide an effective long-term means of access to rapidly evolving Internet content. Convergence is the term given to the perception that many platforms now have (or soon will have) common capabilities. Although specific platforms are optimized for specific types of content, some capabilities are now becoming common: the TV set can receive and send email (as can a wireless personal computer, mobile phone, etc.), networked PCs can receive broadcast TV, digital music can be downloaded from any digital appliance, and digital camera pictures uploaded. A multiservice network shows a similar convergence in network capability.


One advantage of convergence is the ability to access the same information from a TV set as from a PC. Particularly in Europe, many households do not have their own Internet PC (estimates place the proportion of European households with a PC at 50% less than in the USA). In contrast to what is seen as the complexity of a PC, most potential broadband customers perceive the TV as a less difficult device to use. Nevertheless, to evolve, the set-top box will need to acquire much more sophistication, and may never prove the ideal device for personal access. An advantage of such systems is that the service provider can leverage their existing (often loyal) customer base for subscriber TV. There is a growing perception that content alone is not the solution; there is a need for a sound commercial model for the new service. Traditionally, few Internet users have been willing to pay more than modest charges for their data services. There are also legal issues in the distribution (and redistribution) of content, both from the viewpoint of national laws and the ownership of copyright / royalties.

DVB Satellite Return Channel (DVB-RCS)


The DVB Return Channel System via Satellite (DVB-RCS) was specified by an ad-hoc ETSI technical group founded in 1999. This tracked developments by key satellite operators and followed a number of pilot projects organized by the European Space Agency (ESA). The DVB-RCS system is specified in ETSI EN 301 790, which specifies a satellite terminal (sometimes known as a Satellite Interactive Terminal (SIT) or Return Channel Satellite Terminal (RCST)) supporting a two-way DVB satellite system. The use of standard system components provides a simple approach and should reduce time to market. (There is also a related guideline document for use of EN 301 790.) The satellite user terminal receives a standard DVB-S transmission generated by a satellite hub station. Packet data may be sent over this forward link in the usual way (e.g. MPE, data streaming, etc.). In addition, DVB-RCS provides transmit capability from the user site via the same antenna. The transmit capability uses a Multi-Frequency Time Division Multiple Access (MF-TDMA) access scheme to share the capacity available for transmission by the user terminals. The return channel is coded using rate 1/2 convolutional FEC and Reed-Solomon coding. The standard is designed to be frequency-independent - it does not specify the frequency band (or bands) to be used - thereby allowing a wide variety of systems to be constructed. The data to be transported may be encapsulated in Asynchronous Transfer Mode (ATM) cells, using ATM Adaptation Layer 5 (AAL-5), or use a native IP encapsulation over MPEG-2 transport. The standard also includes a number of security mechanisms.

One arrangement of the User Terminal Protocol Stack

DVB-RCS terminals require a two-way feed arrangement / antenna system, able to transmit and receive in the appropriate satellite frequency bands. These are typically connected via a cable (or group of cables) to an indoor unit. This unit could be a Set-Top Box (STB) with a network interface, could be integrated in a PC peripheral (e.g. a USB or FireWire device), or may be integrated in a PC expansion card. A key goal of DVB equipment suppliers is to reduce equipment costs. Since cost is likely to be (at least initially) dependent on the terminal transmit capability (that is, the rated transmit power), a number of different classes of terminal are envisaged, supporting a range of transmit bit rates.


Terminal Operation
A Return Channel Satellite Terminal (RCST), once powered on, will start to receive general network information from the DVB-RCS Network Control Center (NCC). The NCC provides monitoring and control functions and generates the control and timing messages required for operation of the satellite network. All messages from the NCC are sent over the MPEG-2 TS using private data sections (DVB SI tables), transmitted over the forward link. In fact, the DVB-RCS specification calls for two forward links - one for interaction control, and another for data transmission. Both links can be provided using the same DVB-S transport multiplex. The term forward link refers to the link from the hub station which is received by the user terminal. DVB-RCS allows this communication to use the same transmission path as used for data (that is, the DVB-S receive path), or an alternate interaction path. Conversely, the return link is the link from the user terminal to the hub station using the DVB interaction channel. The control messages received over the forward link also provide the Network Clock Reference (NCR). The NCR contains a 27 MHz clock reference; reception of the NCR is used by each user terminal to adjust its transmit frequency, ensuring a common reference for the MF-TDMA transmissions. All transmissions by a user terminal are controlled by the NCC. Before a terminal can send data, it must first join the network by communicating (logging on) with the NCC, describing its configuration. The logon message is sent using a frequency channel also specified in the control messages. This channel is shared between user terminals wishing to join the network, using the slotted ALOHA access protocol.
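Assuming the NCR uses the same base-plus-extension layout as the MPEG-2 Program Clock Reference (a 33-bit base counting 90 kHz ticks plus a 9-bit extension counting the remaining 27 MHz ticks, 0-299), recovering a time value from a received clock sample might be sketched as follows. The function name is illustrative; EN 301 790 should be consulted for the exact field layout.

```python
def ncr_to_seconds(base: int, extension: int) -> float:
    """Convert a PCR/NCR-style 27 MHz clock sample to seconds.

    Assumes the MPEG-2 PCR layout: total 27 MHz ticks = base * 300 + extension,
    where the base counts 90 kHz ticks and the extension counts 0-299.
    """
    ticks_27mhz = base * 300 + extension
    return ticks_27mhz / 27_000_000.0

# One second after the clock epoch: base = 90000 ticks of 90 kHz, extension = 0
print(ncr_to_seconds(90_000, 0))   # 1.0
```

A terminal would compare successive NCR samples against its local oscillator to discipline its own 27 MHz reference before transmitting.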

Simplified Transponder Usage by a Group of DVB-RCS User Terminals

After receiving a logon message from a valid terminal, the NCC returns a series of tables including the Terminal Burst Time Plan (TBTP) for the user terminal. The MF-TDMA burst time plan allows the terminal to communicate at specific time intervals, using specific assigned carrier frequencies, at an assigned transmit power. The terminal transmits a group of ATM cells (or MPEG-TS packets). This block of information may be encoded in one of several ways (using convolutional coding, RS/convolutional coding, or turbo coding). The block is prefixed by a preamble (and optional control data) and followed by a postamble (to flush the convolutional encoder). The complete burst is sent using QPSK modulation. Before each terminal can use its allocated capacity, it must first achieve physical-layer synchronization (of time, power, and frequency), a process completed with the assistance of special synchronization messages sent over the satellite channel. A terminal normally logs off the system when it has completed its communication. Alternately, if there is a need, the NCC may force a terminal to log off. One of the strengths of the system is the extreme flexibility this provides in configuring individual transmission capabilities. Some also criticize this as a weakness: the current standard allows a range of implementations, and therefore does not promote equipment interoperability between different systems.
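As a rough illustration of what a TBTP grant conveys (not the wire format defined in EN 301 790 - all field names and values here are hypothetical), each entry can be thought of as "this terminal may burst in this timeslot, on this carrier, at this power":

```python
from dataclasses import dataclass

@dataclass
class BurstAssignment:
    """One hypothetical entry of a Terminal Burst Time Plan (TBTP):
    the terminal may transmit in this timeslot on this carrier."""
    terminal_id: int
    superframe: int
    timeslot: int          # slot index within the superframe
    carrier_hz: int        # assigned MF-TDMA carrier frequency
    power_dbm: float       # assigned transmit power

# The NCC might grant one terminal two slots on different carriers:
plan = [
    BurstAssignment(42, superframe=7, timeslot=3,
                    carrier_hz=29_750_000_000, power_dbm=33.0),
    BurstAssignment(42, superframe=7, timeslot=11,
                    carrier_hz=29_770_000_000, power_dbm=33.0),
]
print(len(plan))  # 2
```

The point of the MF-TDMA scheme is visible even in this toy model: capacity is granted in both time (slots) and frequency (carriers), so the NCC can reshape each terminal's share burst by burst.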


Broadband Interactive Service (BBI)


BBI is an example of a satellite system built according to the DVB-RCS standard. The system has been designed for Societe Europeene des Satellites (SES), the company operating the ASTRA satellite network. BBI uses ASTRA's existing Ku-Band system (14 GHz) for transmission from the hub station to the SIT, and Ka-Band capacity (29.5-30.0 GHz) for the return channel from the SIT to the hub station, using a 20 MHz MF-TDMA channel. The BBI system supports three classes of SIT, each with similar receive capabilities but different transmit capabilities:

                               SIT II     SIT III    SIT III
Antenna                        0.75 m     0.95 m     1.2 m
SIT Transmit EIRP (estimated)  40 dBW     45 dBW     50 dBW
Return Transmission Speed      144 kbps   384 kbps   2 Mbps
Forward Link Speed (DVB-S)     38 Mbps    38 Mbps    38 Mbps

TCP over Satellite


Introduction
There is a popular misunderstanding concerning the performance of TCP (used by most Internet applications) over satellite. This is fuelled by various misconceptions that have become widespread in recent years. The misunderstanding stems from three areas:

(i) Many people have experience with implementations that are now out of date. Research papers published in the late 1980s and early 1990s identified problems with TCP, most of which have now been resolved.

(ii) Many reported experiments have used misconfigured TCP protocol stacks. Until recently, few protocol stacks were configured suitably for satellite operation.

(iii) Unfortunately, many researchers still fail to understand the way in which TCP actually operates. This has led to a lot of bad advice. The IETF has recently published two documents to try to address these issues.

The result of all this is that many people are left with fear, uncertainty and doubt concerning whether TCP will actually operate over a satellite link. In fact, this doubt is mostly unfounded, as many satellite service users can testify. TCP was designed to work over satellites: the original goal of the research project that created TCP was to link a pilot satellite network (SATNET) with the terrestrial Internet (ARPANET). Although there is no practical limit to TCP's throughput (the maximum theoretical throughput is 1.5 Gbps - faster than any satellite link), there are a number of significant issues which directly impact the performance real users are likely to achieve. To understand things more clearly, it is first necessary to examine how TCP has evolved.

SPECIFIC ISSUES THAT IMPACT THE SATELLITE SERVICE


The main distinguishing characteristics of a satellite link, compared to a terrestrial network, which affect the performance of TCP are:

(i) Effect of Satellite Link Errors
(ii) Satellite Propagation Delay
(iii) Bandwidth and Path Asymmetry
(iv) Channel Access and Network Interactions

Internet protocols are not optimized for satellite conditions; consequently, the throughput over satellite networks is restricted to only a fraction of the available bandwidth. Data networking over satellites must overcome the large latency and high bit error rates typical of satellite communications, as well as the asymmetric bandwidth design of most satellite networks. TCP is optimized for short hops over low-loss cable or fiber. Satellite conditions adversely affect a number of elements of the TCP architecture, including its congestion avoidance algorithms, data acknowledgment mechanisms, and window size limitations, which combine to severely constrict the data throughput rate that can be achieved over satellite links.

Congestion Avoidance: In order to avoid the possibility of congestive network meltdown, TCP assumes that all data loss is caused by congestion and responds by reducing the transmission rate. However, over satellite links, TCP misinterprets the long round-trip time and bit errors as congestion and responds inappropriately. Similarly, the TCP "Slow Start" algorithm, which over the terrestrial infrastructure prevents new connections from flooding an already congested network, over satellite forces an excessively long ramp-up for each new connection. While these congestion avoidance mechanisms are vital in routed environments, they are ill suited to single-path satellite links.

Data Acknowledgements: The simple, heuristic data acknowledgment scheme used by TCP does not adapt well to long latency or highly asymmetric bandwidth conditions. To provide reliable data transmission, the TCP receiver constantly sends acknowledgments for the received data back to the sender. The sender does not assume any data is lost or corrupted until a multiple of the round-trip time has passed without receiving an acknowledgment. If a packet is lost or corrupted, TCP retransmits all of the data starting from the first missing packet.
This algorithm does not respond well over satellite networks, where the round-trip time is long and error rates are high. Further, the constant stream of acknowledgments wastes precious back-channel bandwidth. If the back-channel bandwidth is small, the return of the acknowledgments to the sender can become the system bottleneck.

Window Size: TCP utilizes a sliding window mechanism to limit the amount of data in flight. When the window becomes full, the sender stops transmitting until it receives new acknowledgments. Over satellite networks, where acknowledgments are slow to return, the TCP window size generally sets a hard limit on the maximum throughput rate. The minimum window size needed to fully utilize an error-free link, known as the "bandwidth-delay product", is 100 KB for a T1 satellite link and 675 KB for a 10 Mbps link. Bit errors increase the required window size further. However, most implementations of TCP are limited to a maximum window size of 64 KB, and many common operating systems use a default window size of only 8 KB, imposing a maximum throughput rate over a satellite link of only 128 kbps per connection, regardless of the bandwidth of the data pipe.
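The window-size figures quoted above follow directly from the bandwidth-delay product. A small sketch (illustrative function names; a geostationary round-trip time of roughly 520 ms is assumed) reproduces them approximately:

```python
def bandwidth_delay_product_bytes(bitrate_bps: float, rtt_s: float) -> int:
    """Window (bytes) needed to keep an error-free link fully utilised:
    the data 'in flight' during one round trip."""
    return int(bitrate_bps * rtt_s / 8)

def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed TCP window:
    at most one window of data can be sent per round trip."""
    return window_bytes * 8 / rtt_s

rtt = 0.52  # typical geostationary round-trip time, ~520 ms

print(bandwidth_delay_product_bytes(1_544_000, rtt))  # ~100 KB for a T1 link
print(max_throughput_bps(8 * 1024, rtt))              # ~126 kbps with an 8 KB window
```

This is why an 8 KB default window caps a connection near 128 kbps over satellite regardless of link speed, and why TCP window scaling (windows above 64 KB) matters for high-rate satellite paths.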

MPEG-2 and DVB Standards


MPEG-2 Standards
MPEG-2 Standards are published by the International Organization for Standardization (ISO) and are produced by the Moving Picture Experts Group (MPEG). The MPEG-2 standard is currently in 9 parts. The first three parts of MPEG-2 have reached International Standard status; other parts are at different levels of completion, and one has been withdrawn.


ISO/IEC DIS 13818-1 Information technology -- Generic coding of moving pictures and associated audio information: Systems
ISO/IEC DIS 13818-2 Information technology -- Generic coding of moving pictures and associated audio information: Video
ISO/IEC 13818-3:1995 Information technology -- Generic coding of moving pictures and associated audio information -- Part 3: Audio
ISO/IEC DIS 13818-4 Information technology -- Generic coding of moving pictures and associated audio information -- Part 4: Compliance testing
ISO/IEC DTR 13818-5 Information technology -- Generic coding of moving pictures and associated audio -- Part 5: Software simulation (a full software implementation; future TR)
ISO/IEC IS 13818-6 Information technology -- Generic coding of moving pictures and associated audio information -- Part 6: Extensions for DSM-CC
ISO/IEC IS 13818-9 Information technology -- Generic coding of moving pictures and associated audio information -- Part 9: Extension for real time interface for systems decoders

DVB Standards
DVB stands for Digital Video Broadcasting. DVB standards and related documents are developed under the DVB Project (founded in 1993) and are published by the European Telecommunications Standards Institute (ETSI). The standards are based on the ISO MPEG-2 standard, but extend it to cover system-specific details to ensure a fully specified system. DVB has been adopted by most countries worldwide, with the exception of countries based on the U.S. NTSC analogue TV system (these include the USA, Mexico, Canada, South Korea and Taiwan). These countries have chosen a similar, but incompatible, system also based on MPEG-2. This alternate system is specified by the US Advanced Television Systems Committee (ATSC). A related (almost identical) system is known as Digicipher-2 (DCII). Both DVB and ATSC/DC-II support normal and High Definition TV (HDTV). DVB standards may be grouped by the transmission method employed:

- DVB-C (Cable)
  - EN 300 800 Digital Video Broadcasting (DVB); DVB interaction channel for Cable TV distribution systems (CATV)
- DVB-DSNG (Digital Satellite News Gathering)
  - EN 301 210
  - TR 101 211 Guidelines
  - EN 201 222
- DVB-MC (Multipoint Microwave Video)
  - EN 300 749 Digital Video Broadcasting (DVB); DVB framing structure, channel coding and modulation for MMDS systems below 10 GHz
- DVB-MS (Multipoint Microwave Video)
  - EN 300 748 Digital Video Broadcasting (DVB); DVB framing structure, channel coding and modulation for MVDS at 10 GHz and above
- DVB-S (Satellite)
  - EN 301 421 Digital Video Broadcasting (DVB); Modulation and Coding for DBS satellite systems at 11/12 GHz
  - ETR 101 198 Implementation of BPSK modulation for DVB-S
- DVB-SFN (Single Frequency Networks)
  - TS 101 190 Megaframe for Single Frequency Networks
- DVB-SMATV (Single Master Antenna TV)
  - EN 300 473 Digital Video Broadcasting (DVB); DVB Satellite Master Antenna Television (SMATV) distribution systems
- DVB-T (Terrestrial TV)


  - EN 300 744 Digital Video Broadcasting (DVB); Framing structure, channel coding and modulation for digital terrestrial television (DVB-T)
  - TR 101 190 Implementation Guidelines for DVB-T Terrestrial Digital Communication Networks
  - ETS 300 813 Digital Video Broadcasting (DVB); DVB interfaces to Plesiochronous Digital Hierarchy (PDH) networks
  - ETS 300 814 Digital Video Broadcasting (DVB); DVB interfaces to Synchronous Digital Hierarchy (SDH) networks
  - ETS 300 818 Broadband Integrated Services Digital Network (B-ISDN); Asynchronous Transfer Mode (ATM); Retainability performance for B-ISDN switched connections

There are many other DVB documents defining parts of the complete system, including:

- ETR 101 200 DVB-Cookbook; A guide to how to use the DVB standards and specifications
- Service Details
  - EN 300 466 Specification of SI for DVB
  - ETR 211 Digital Video Broadcasting (DVB); DVB guidelines for implementation and usage of Service Information (SI)
  - ETR 162 Digital Video Broadcasting (DVB); Allocation of Service Information (SI) codes for DVB systems
- Teletext & Subtitles
  - EN 300 743 Digital Video Broadcasting (DVB); DVB Subtitling system
  - ETS 300 743 Subtitling Systems
  - ETR 154 Digital Video Broadcasting (DVB); DVB implementation guidelines for the use of MPEG-2 Systems, Video and Audio in satellite, cable and terrestrial broadcasting applications
  - ETR 289 Digital Video Broadcasting (DVB); Support for use of scrambling and Conditional Access (CA) within DVB systems
- DVB-RC/DVB-RCT/DVB-RCDECT/DVB-RCL/DVB-RCGSM/DVB-RCCS/DVB-RCS (Return Channel)
  - EN 300 802 Network Independent Protocols for DVB Interactive Services
  - TR 101 194 Guidelines for Network Independent Protocols
  - ETS 300 800 Digital Video Broadcasting (DVB); DVB interaction channel through Cable TV
  - TR 101 196 Guidelines for ETS 300 800
  - ETS 300 801 Digital Video Broadcasting (DVB); DVB interaction channel through PSTN/ISDN
  - EN 301 193 Digital Video Broadcasting (DVB); DVB interaction channel through DECT
  - EN 300 199 Digital Video Broadcasting (DVB); DVB interaction channel through LMDS
  - TR 101 205 Guidelines for EN 300 199
  - EN 301 195 Digital Video Broadcasting (DVB); DVB interaction channel through GSM
  - TR 101 201 Digital Video Broadcasting (DVB); DVB interaction channel through SMATV
- Data Applications
  - EN 301 192 Specifications for Data Broadcasting
  - TR 101 202 Implementation Guidelines for Data Broadcasting

ETSI Technical Reports are denoted by the prefix "ETR", whereas ETSI Standards are denoted by the prefix "ETS".

DAVIC Specifications
DAVIC Standards and related documents are published by the Digital Audio Visual Council (DAVIC). The DAVIC 1.0 standard includes:
- Description of DAVIC System functions
- System reference models and scenarios
- Service Provider System architecture and interfaces
- Delivery System architecture and interfaces
- Service Consumer System architecture and interfaces
- High-layer and Mid-layer protocols
- Lower-layer protocols and physical interfaces
- Information representation
- Security - not available
- Usage information protocols - not available
- Dynamics, reference points, and interfaces

Glossary
AC-3: Audio compression standard adopted by ATSC and owned by Dolby.
ADC: Analog to Digital Converter.
ASCII: American Standard Code for Information Interchange.
ASI: Asynchronous Serial Interface. A standard DVB interface for a transport stream.
ATM: Asynchronous Transfer Mode.
ATSC: Advanced Television Systems Committee. Digital broadcasting standard developed in North America.
ATV: Advanced Television. North American standard for digital broadcasting.
BAT: Bouquet Association Table. This DVB table describes a set of services grouped together by a broadcaster and sold as a single entity. It is always found on PID 0x0011.
BER: Bit Error Rate.
B-frames: Bidirectionally predicted pictures, or pictures created from references to past and future pictures.
Bitrate: The rate at which a bit stream arrives at the input of a decoder.
Block: A set of 8x8 pixels used during the Discrete Cosine Transform (DCT).
Bouquet: A set of services sold as a single entity.
Broadcaster: Someone who provides a sequence of scheduled events or programs to the viewer.
CA: Conditional Access. This system allows service providers to control subscriber access to programs and services via encryption.
CAT: Conditional Access Table. This table identifies EMM streams with a unique PID value. The CAT is always found on PID 0x0001.
CATV: Community Access Television, otherwise known as Cable TV.
Channel: A digital medium that stores or transports an MPEG-2 transport stream.
CIF: Common Intermediate Format, also named Full CIF, used in videoconferencing to specify a frame rate of 30 frames per second (fps) and a resolution of 352 x 288 pixels. CIF supports both PAL and NTSC.
COFDM: Coded Orthogonal Frequency-Division Modulation.
Compression: Reduction of the number of bits needed to represent an item of data.
Conditional Access: A system used to control viewer access to programming based on subscription.
CRC: Cyclic Redundancy Check. This 32-bit field is used to verify the correctness of table data before decoding.
CVCT: Cable Virtual Channel Table. This ATSC table describes a set of one or more channels using a number or name within a cable network. Information in the table includes major and minor numbers, carrier frequency, short channel name, and information for navigation and tuning. This table is located on PID 0x1FFB.
D/A: Digital to Analog Converter.
DAVIC: Digital Audio Visual Council.
DBS: Direct Broadcasting Satellite or System.
DCT: Discrete Cosine Transform. Spatial-to-frequency transform used during spatial encoding of MPEG video.
Decoding Time Stamp: This stamp is found in the PES packet header. It indicates the time at which a piece of audio or video will be decoded.
DigiTAG: Digital Television Action Group.


Downlink: Communication link from a satellite to earth.
DTV: Digital Television. A general term used to describe television that has been digitized. It can refer to standard-definition TV or high-definition TV.
DTS: Decoding Time Stamp.
DVB: Digital Video Broadcasting. The DVB Project is a European consortium that has standardized digital TV broadcasting in Europe and in other countries.
DVB ASI: Asynchronous Serial Interface. This is a standard DVB interface for a transport stream.
DVB-C: Digital Video Broadcasting - Cable. The DVB standard for broadcasting digital TV signals by cable. The RF spectrum in digital cable TV networks has a frequency range of approximately 46 MHz to 850 MHz.
DVB-S: Digital Video Broadcasting - Satellite. The DVB standard for broadcasting digital TV signals via satellite.
DVB SPI: Synchronous Parallel Interface. This is a standard DVB interface for a transport stream.
DVB-T: Digital Video Broadcasting - Terrestrial. The DVB standard for broadcasting digital terrestrial TV signals.
ECM: Entitlement Control Message. ECMs carry private conditional access information that allows receivers to decode encrypted information.
EIT (ATSC): Event Information Table. This table is part of the ATSC PSIP. It carries the TV guide information, including titles and start times for events on all the virtual channels within the transport stream. ATSC requires that each system contain at least 4 EIT tables, each representing a different 3-hour time block. The PIDs for these tables are identified in the MGT.
EIT (DVB): Event Information Table. This table is part of the DVB SI. It supplies the list of events corresponding to each service and identifies the characteristics of each of these events. Four types of EIT are defined by DVB:
1) The EIT Actual Present/Following supplies information for the present event and the next or following event of the transport stream currently being accessed. This table is mandatory and can be found on PID 0x0012.
2) The EIT Other Present/Following defines the present event and the next or following events of other transport streams in the system that are not currently being accessed by the viewer. This table is optional. 3) The EIT Actual Event Schedule supplies a detailed list of events, in the form of a schedule that goes beyond what is currently or next available, for the transport stream currently being accessed by the viewer. 4) The EIT Other Event Schedule supplies a detailed schedule of events that goes beyond what is currently or next available, for other transport streams in the system that are not currently being accessed by the viewer. The EIT Schedule tables are optional.
EMM - Entitlement Management Message. EMMs specify the authorization levels or services of specific decoders. They are used to update the subscription options or pay-per-view rights for an individual subscriber or for a group of subscribers.
EPG - Electronic Program Guide. This guide represents a broadcast data structure that describes all programs and events available to the viewer. It functions like an interactive TV guide that allows users to view a schedule of available programming and select what they want to watch.
ES - Elementary Stream. A bit stream that includes video, audio or data. It represents the preliminary stage to the Packetized Elementary Stream (PES).
ETR - ETSI Technical Report.
ETR 290 - ETSI recommendation regarding measurement of MPEG-2/DVB transport streams.
ETSI - European Telecommunications Standards Institute.
ETT - Extended Text Table. This table is part of the ATSC PSIP. It carries relatively long text messages for additional descriptions of events and channels. There are two types of ETTs: the Channel ETT, which describes a channel, and the Event ETT, which describes individual events in a channel. The PID for this table is identified in the MGT.
Event - A collection of elementary streams with a common time base and an associated start time and end time. An event is equivalent to the common industry usage of "television program".
FEC - Forward Error Correction. This method adds error control bits before RF modulation. With these bits, errors in the transport stream may be detected and corrected prior to decoding.
Frame - Lines of spatial information for a video signal.
GOP - See Group of Pictures.
Group of Pictures - A set of pictures, usually 12-15 frames long, used for temporal encoding of MPEG-2 video.


HDTV - High Definition Television. HDTV's resolution is approximately twice that of Standard Definition Television (SDTV) in both the horizontal and vertical dimensions. HDTV has an aspect ratio of 16:9, compared to the 4:3 aspect ratio of SDTV.
IEC - International Electrotechnical Commission.
IEEE - Institute of Electrical and Electronics Engineers.
I/F - Interface.
I-frame - Intra-coded frame, or a picture encoded without reference to any other picture. I-frames provide a reference for Predicted and Bidirectionally predicted frames in a compressed video stream.
IRD - Integrated Receiver Decoder. This is a receiver with an MPEG-2 decoder, also known as a set-top box.
ISO - International Organization for Standardization.
ITU - International Telecommunication Union (UIT).
LVDS - Low Voltage Differential Signal. An electrical specification used by some manufacturers, usually on a parallel interface. It is a balanced interface with a low signal voltage swing (about 300 mV).
Macroblock - A group of 16x16 pixels used for motion estimation in temporal encoding of MPEG-2 video.
MFN - Multiple Frequency Network (DVB-T).
MGT - Master Guide Table. This table is part of the ATSC PSIP. It defines sizes, types, PIDs, and version numbers for all of the relevant tables within the transport stream. The PID value for this table is 0x1FFB.
MHEG - Multimedia & Hypermedia Expert Group.
MIP - Megaframe Initialization Packet. This packet is used by DVB-T to synchronize the transmitters in a single frequency network.
MP@HL - Main Profile at High Level. MPEG-2 specifies different degrees of compression vs. quality. Of these, Main Profile at High Level is the most commonly used for HDTV.
MP@ML - Main Profile at Main Level. MPEG-2 specifies different degrees of compression vs. quality. Of these, Main Profile at Main Level is the most commonly used.
MPEG - Moving Picture Experts Group, also called Motion Picture Experts Group.
MPEG-2 - ISO/IEC 13818 standard defining motion video and audio compression.
It applies to all layers of transmission (video, audio and system).
MPTS - Multiple Program Transport Stream. An MPEG-2 transport stream containing several programs that have been multiplexed.
Multiplex - (n) A digital stream including one or more services in a single physical channel. (v) To sequentially incorporate several data streams into a single data stream in such a manner that each may later be recovered intact.
Network - The set of MPEG-2 transport streams transmitted via the same delivery system.
NIT - Network Information Table. This DVB table contains information about a network's orbit, transponders, etc. It is always located on PID 0x0010. DVB specifies two types of NITs, the NIT Actual and the NIT Other. The NIT Actual is a mandatory table containing information about the physical parameters of the network actually being accessed. The NIT Other contains information about the physical parameters of other networks and is optional.
NTSC - National Television System Committee color TV system (USA and 60 Hz countries).
NVoD - Near Video on Demand. This service allows a single TV program to be broadcast simultaneously with a few minutes of difference in starting time. For example, a movie could be transmitted at 9:00, 9:15 and 9:30.
Packet - See Transport Packet.
PAL - Phase Alternating Line.
PAT - Program Association Table. This MPEG-2 table lists all the programs contained in the transport stream and shows the PID value for the PMT associated with each program. The PAT is always found on PID 0x0000.
Payload - All the bytes in a packet that follow the packet header.
PCR - Program Clock Reference. A time stamp in the transport stream that sets the timing in the decoder. The PCR is transmitted at least every 0.1 seconds.
PES - Packetized Elementary Stream. This type of stream contains packets of variable length, comprising video or audio data and ancillary data.
PES Packet - The structure used to carry elementary stream data (audio and video).
It consists of a header and a payload.
PES Packet Header - The leading bytes of a PES packet, which contain ancillary data for the elementary stream.


PID - Packet Identifier. This unique integer value identifies elements in the transport stream, such as tables, data, or the audio for a specific program.
PLL - Phase-Locked Loop. This locks the decoder clock to the original system clock through the PCR.
PMT - Program Map Table. This MPEG-2 table specifies PID values for the components of programs. It also references the packets that contain the PCR.
P-frame - Predicted frame, or a picture coded using references to the nearest previous I- or P-picture.
Program - See Service.
PSI - Program Specific Information. PSI refers to MPEG-2 table data necessary for the demultiplexing of a transport stream and the regeneration of programs within the stream. PSI tables include the PAT, CAT, PMT and NIT.
PSIP - Program and System Information Protocol. The ATSC protocol for transmission of data tables in the transport stream. Mandatory PSIP tables include the MGT, STT, RRT, VCT and EIT.
PTS - Presentation Time Stamp. This stamp indicates the time at which an element in the transport stream must be presented to the viewer. PTSs for audio and video are transmitted at least every 0.7 seconds. The PTS is found in the PES header.
QAM - Quadrature Amplitude Modulation. This is a type of modulation for digital signals used in CATV transmission (DVB-C). The amplitude and phase of a carrier are modulated in order to carry information.
QPSK - Quadrature Phase Shift Keying. A type of modulation for digital signals used in satellite transmission (DVB-S).
RRT - Rating Region Table. An ATSC PSIP table that defines rating systems for different regions or countries. The table includes parental guidelines based on Content Advisory descriptors within the transport stream.
RS - Reed-Solomon protection code. This refers to the 16 bytes of error control code added to every transport packet before modulation.
RST - Running Status Table. A DVB SI table that indicates a change of scheduling information for one or more events. It saves broadcasters from having to retransmit the corresponding EIT.
This table is particularly useful if events are running late. It is located on PID 0x0013.
SDT - Service Description Table. This DVB SI table describes the characteristics of available services. It is located on PID 0x0011. Two types of SDTs are specified by DVB, the SDT Actual and the SDT Other. The SDT Actual is a mandatory table that describes the services within the transport stream currently being accessed. The SDT Other describes the services contained in other transport streams in the system.
SDTV - Standard Definition Television. SDTV refers to television that has a quality equivalent to NTSC or PAL.
Section - A syntactic structure used for mapping PSI/SI/PSIP tables into 188-byte transport packets.
Service - A collection of one or more events under the control of a single broadcaster. Also known as a Program.
SFN - Single Frequency Network (DVB-T).
SI - Service Information. This DVB protocol specifies all the data required by the receiver to demultiplex and decode the programs and services in the transport stream. Mandatory DVB SI tables include the TDT, NIT, SDT and EIT.
SMPTE - Society of Motion Picture and Television Engineers.
SNG - Satellite News Gathering. This refers to the retransmission of events using mobile equipment and satellite transmission.
SNMP - Simple Network Management Protocol. This is the standard protocol for network management.
SPI - Synchronous Parallel Interface. This is a standard DVB interface for a transport stream.
SPTS - Single Program Transport Stream. An MPEG-2 transport stream that contains one unique program.
ST - Stuffing Table. An optional DVB SI table that authorizes the replacement of complete tables due to invalidation at a delivery system boundary, such as a cable headend. This table is located on PID 0x0014.
STB - Set-top box. A digital TV receiver (IRD).
STD - See System Target Decoder.
STT - System Time Table. An ATSC PSIP table that carries the time information needed by any application requiring schedule synchronization.
It provides the current date and time of day and is located on PID 0x1FFB.
System Target Decoder (STD) - A hypothetical reference model of the decoding process defined by MPEG-2.
SIF - Source Input Format; term describing a resolution of 352 x 240 pixels (full screen).


Table - Service Information is transmitted in the form of tables, which are further divided into subtables, then into sections, before being transmitted. Several types of tables are specified by MPEG, DVB and ATSC. Refer to the Pocket Guide for more information on the different types of Service Information tables and their functions.
TDT - Time and Date Table. This mandatory DVB SI table supplies the time reference expressed in terms of UTC time/date. This enables joint management of the events corresponding to the services accessible from a single reception point. The PID for this table is 0x0014.
Time-stamp - An indication of the time at which a specific action must occur in order to ensure proper decoding and presentation.
TOT - Time Offset Table. This optional DVB SI table supplies the UTC time and date and shows the difference between UTC time and the local time for various geographical regions. The PID for this table is 0x0014.
Transponder - Trans(mitter) and (res)ponder. This refers to the equipment inside a satellite that receives and resends information.
Transport Packet - A 188-byte packet of information in a transport stream. Each packet contains a header and a payload.
Transport Stream - A stream of 188-byte transport packets that contains audio, video, and data belonging to one or several programs.
T-STD - See System Target Decoder.
TV - Television.
TVCT - Terrestrial Virtual Channel Table. This ATSC table describes a set of one or more channels or services using a number or name within a terrestrial broadcast. Information in the table includes major and minor numbers, short channel name, and information for navigation and tuning. This table is located on PID=0x1FFB.
Uplink - Communication link from earth to a satellite.
UTC - Universal Time, Coordinated.
VCT - Virtual Channel Table. This ATSC table describes a set of one or more channels or services. Information in the table includes major and minor numbers, short channel name, and information for navigation and tuning.
There are two types of VCTs, the TVCT for terrestrial systems and the CVCT for cable systems.
VLC - Variable Length Coding. This refers to a data compression method (Huffman coding).
VoD - Video on Demand.
VSB - Vestigial Sideband Modulation. This is the terrestrial modulation method used by ATSC. It can have either 8 (8-VSB) or 16 (16-VSB) discrete amplitude levels.
QCIF - Quarter Common Intermediate Format. QCIF has a resolution of 176 x 144 pixels, one-fourth the resolution of Full CIF. QCIF support is required by the ITU H.261 videoconferencing standard.
QSIF - term describing a resolution of 176 x 120 pixels (quarter screen).
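Several of the glossary terms above (Transport Packet, PID, Payload) meet in the 4-byte header of every 188-byte transport packet. As an illustrative sketch, the helper below (a hypothetical function name, not part of any standard toolkit) extracts those header fields:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-2 transport packet."""
    if len(packet) != 188 or packet[0] != 0x47:   # 0x47 is the sync byte
        raise ValueError("not a valid transport packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit Packet Identifier
    return {
        "transport_error":    bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid":                pid,
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "adaptation_field":   bool(packet[3] & 0x20),
        "payload_present":    bool(packet[3] & 0x10),
        "continuity_counter": packet[3] & 0x0F,
    }

# Example: a null packet (PID 0x1FFF) with only the header filled in.
pkt = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(hex(parse_ts_header(pkt)["pid"]))  # 0x1fff
```

Everything after these four bytes is the payload (or an adaptation field, when signalled).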

Appendix A
Digital Video
To understand digital video, you must first understand that there are fundamental differences between video for broadcast television and video for personal computers. For years, broadcast professionals have demanded high


quality video. Their efforts and requirements are responsible for many technological advancements. The definition of digital video for this group, however, varies markedly from the one that is meaningful to computer professionals.

First There Was Analog


Several methods exist for the transmission of video signals. The earliest of these was analog. In an analog video signal, each frame of the video is represented by a fluctuating voltage signal. This is known as an analog waveform. One of the earliest of these analog video formats was composite video. Composite analog video has all the video components (brightness, color, sync, etc.) combined into one signal. Due to the compositing (or combining) of the video components, the quality of composite video is marginal at best. The results are color bleeding, low clarity and high generational loss. Composite video quickly gave way to component video, which takes the different components of the video and breaks them into separate signals. Improvements to component video have led to numerous video formats, such as S-Video, RGB, YPbPr, etc. All of these, however, are still analog formats, susceptible to quality loss from one generation to another. Generational loss with video is similar to photocopying, in which a copy of a copy is never as crisp and sharp as the original.

Defining Digital Video


These limitations led to the birth of digital video. Think of digital video as a digital representation of the analog video signal. In the professional video world, there are quite a variety of digital video formats: D1, D2, Digital Betacam, etc. Unlike analog video, which degrades in quality from one generation to the next, digital video does not. Each generation of digital video is virtually identical to the parent. Even though the video data is digital in nature, virtually all the digital formats are still stored on sequential tape. As a result, most video professionals are more accustomed to working with tape media. Although tape holds considerably more data than a computer hard drive, there are two significant advantages in using computers for digital video: the ability to randomly access stored video, and the ability to compress the video you store. There are also a number of issues related to the migration of video from traditional video equipment to the computer desktop. These are discussed in detail later in this document. Considering these issues, digital video for computers requires a different definition and understanding than the digital video formats previously mentioned. Computer-based digital video is defined as a series of individual images and associated audio. These elements are stored in a format in which both elements (pixel or sound sample) are represented as a series of binary digits, or bits. Previous attempts were made to find the best procedure for capturing, storing, and playing back video from the computer desktop. Unfortunately, these early attempts were of a proprietary nature and resulted in various formats and incompatibilities. Subsequently, the International Organization for Standardization (ISO) worked to define internationally accepted formats for digital video capture, storage, and playback. These formats will be reviewed in detail later in this document.

Four Factors of DV
With digital video, we should keep in mind four major factors. These are Frame Rate, Spatial Resolution, Color Resolution and Image Quality.

Frame Rate
The standard for displaying any sort of non-film video is 30 frames per second (film is 24 frames per second). This simply means that the video is made up of 30 (or 24) pictures or "frames" for every second of video. Additionally, these frames are split in half (odd lines and even lines), to form what is called "fields". Here again, there is a major difference between the way computers and television handle video. When a television set displays its analog video signal, it displays the odd lines (the odd field) first. Then it displays the even lines (the


even field). Each pair forms a frame and there are 60 of these fields displayed every second (or 30 frames every second). This is referred to as "interlaced" video. A computer monitor, however, uses a process called "progressive scan" to update the screen. With this method, the screen is not broken into fields. Instead, the computer displays each line in sequence, from top to bottom. This entire frame is displayed 30 times every second. This is often referred to as "non-interlaced" video.
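The field/frame relationship described above can be sketched in a few lines. This hypothetical `weave` helper merges an odd field and an even field into one full frame, with the odd field displayed first as the text describes:

```python
def weave(odd_field, even_field):
    """Merge an odd field and an even field into one full interlaced frame."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)    # lines 1, 3, 5, ... (odd field)
        frame.append(even_line)   # lines 2, 4, 6, ... (even field)
    return frame

odd = ["line 1", "line 3", "line 5"]
even = ["line 2", "line 4", "line 6"]
print(weave(odd, even))
# ['line 1', 'line 2', 'line 3', 'line 4', 'line 5', 'line 6']
```

A progressive-scan monitor effectively displays the woven result top to bottom in one pass.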

Color Resolution
This second factor is a bit more complex. Color resolution refers to the number of colors displayed on the screen at one time. Computers deal with color in an "RGB" (red-green-blue) format, while video uses a variety of formats. One of the most common video formats is called "YUV". Although there is no direct correlation between RGB and YUV, they are similar in that they both have varying levels of color depth (number of maximum colors). Typical RGB color resolutions are 8 bits/pixel (256 colors), 16 bits/pixel (65,536 colors) and 24 bits/pixel (16.7 million colors). Typical YUV color resolutions are 7 bit, 4:1:1 or 4:2:2 (approximately 2 million colors), and 8 bit, 4:4:4 (approximately 16 million colors).
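For RGB, the color count is simply a power of two of the pixel depth. A quick illustrative sketch (the helper name is invented):

```python
def color_count(bits_per_pixel):
    """Number of distinct colors representable at a given RGB pixel depth."""
    return 2 ** bits_per_pixel

for bpp in (8, 16, 24):
    print(f"{bpp} bits/pixel -> {color_count(bpp):,} colors")
# 8 -> 256, 16 -> 65,536, 24 -> 16,777,216 (about 16.7 million)
```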

Spatial Resolution
The third factor is spatial resolution--or in other words, "How big is the picture?" Since PC and Macintosh monitors generally have resolutions of 640 x 480 pixels, most people assume that this resolution is the video standard. It isn't. As with RGB and YUV, there is no direct correlation between analog video resolutions and computer display resolutions. A standard analog video signal displays a full, overscanned image without the borders common to computer screens. The National Television System Committee (NTSC) standard used in North American and Japanese television uses a 768 x 484 display. The Phase Alternating Line (PAL) standard for European television is slightly larger at 768 x 576. Most countries endorse one or the other, but not both. Since the resolution between analog video and computers is different, conversion of analog video to digital video must at times take this into account. This can often result in the downsizing of the video and the loss of some of your resolution.

Image Quality
The last, and ultimately most important factor is video quality. The final objective is video that looks acceptable for your application. For some, this may be a 1/4 screen, 15 frame per second video, at 8 bits per pixel. Others require full screen (768 x 484), full frame rate video (24 or 30 frames per second), at 24 bits per pixel (16.7 million colors).

Need For Compression


Determining your compression needs is not difficult but does require an understanding of how the four factors mentioned previously (Frame Rate, Color Resolution, Spatial Resolution, and Image Quality) affect your selection. As with most things in this world, there is a price to pay for quality. The more colors, the higher the resolution, the faster the frame rate, and the better the quality, the more horsepower you will need and the more storage space your video will require. By adjusting these factors, you can dramatically change your digital video compression requirements. Doing some simple math shows that 24-bit color video, with a 640 x 480 resolution, at 30 frames per second, requires an astonishing 26 megabytes of data per second! Not only does this surpass the capabilities of the standard PC-AT's data bus, but it quickly overburdens existing storage systems!

640 (Horizontal Resolution)
X 480 (Vertical Resolution)
307,200 (Total Pixels Per Frame)
X 3 (Bytes Per Pixel)
921,600 (Total Bytes Per Frame)
X 30 (Frames Per Second)
27,648,000 (Total Bytes Per Second)


/ 1,048,576 (Convert Bytes To Megabytes)
26.36 (Total Megabytes Per Second!)

For some users, the way to reduce this amount of data to a manageable level is to compromise on one of the four factors described above. Certain applications like games, low-end training systems, low-end kiosks, and business presentations may not need a full 30 frames per second. Instead, the application may display its video in a window and not require that the entire frame be captured and stored digitally. As a quick exercise, let's reduce these factors and do our equation again.

320 (Horizontal Resolution)
X 240 (Vertical Resolution)
76,800 (Total Pixels Per Frame)
X 3 (Bytes Per Pixel)
230,400 (Total Bytes Per Frame)
X 15 (Frames Per Second)
3,456,000 (Total Bytes Per Second)
/ 1,048,576 (Convert Bytes To Megabytes)
3.3 (Total Megabytes Per Second!)

As you can see, reducing these parameters significantly reduces the data requirements. However, the standard PC ISA bus is only capable of sustained data transfer rates of approximately 600 kilobytes per second. Even if you seriously compromise the video by reducing the size and frame rate, you still have 6 times too much data! In addition, 3.3 megabytes is the amount needed for just one second. For a two-hour movie you would still require 23.73 gigabytes of disk storage! Reducing the window size even further, sacrificing video quality, and shifting from RGB to YUV 4:1:1 would reduce this data considerably, perhaps to as little as 1.5 megabytes per second. Even this is still too large. This is where digital video compression comes in.
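The worked arithmetic above can be wrapped in a small helper (the function name is invented for illustration):

```python
def video_data_rate_mb(width, height, bytes_per_pixel, fps):
    """Uncompressed video data rate in megabytes/second (1 MB = 1,048,576 bytes)."""
    return width * height * bytes_per_pixel * fps / 1_048_576

print(f"{video_data_rate_mb(640, 480, 3, 30):.2f} MB/s")  # about 26.4 MB/s
print(f"{video_data_rate_mb(320, 240, 3, 15):.2f} MB/s")  # about 3.3 MB/s
```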

Factors Affecting Compression


The goal of video compression is to massively reduce the amount of data required to store the digital video file, while retaining the quality of the original video. With this in mind, there are several factors that need to be taken into account when discussing desktop digital video compression:
- Real-Time Versus Non-Real-Time
- Symmetrical Versus Asymmetrical
- Compression Ratios
- Lossless Versus Lossy
- Interframe Versus Intraframe
- Bit Rate Control

Real-Time Versus Non-Real-Time


The term "Real-Time" has been badly abused. In the compression world it means exactly what it says: some compression systems capture, compress to disk, decompress and play back video (30 frames per second) all in real time, with no delays. Other systems are only capable of capturing some of the 30 frames per second and/or are only capable of playing back some of the frames. Insufficient frame rate is one of the most noticeable video deficiencies. Without a minimum of 24 frames per second, the video will be noticeably "jerky". In addition, the missing frames will contain extremely important lip


synchronization data. In other words, if the movement of a person's lips is missing due to dropped frames during capture or playback, it is impossible to match the audio correctly with the video.

Symmetrical versus Asymmetrical


This refers to how video images are compressed and decompressed. Symmetrical compression means that if you can play back a sequence of 640 x 480 video at 30 frames per second, then you can also capture, compress and store it at that rate. Asymmetrical compression means just the opposite. The degree of asymmetry is usually expressed as a ratio. A ratio of 150:1 means it takes approximately 150 minutes to compress one minute of video. Asymmetrical compression can sometimes be more elaborate and more efficient for quality and speed at playback because it uses so much more time to compress the video. The two big drawbacks to asymmetrical compression are that it takes a lot longer, and often you must send the source material out to a dedicated compression company for encoding (adding time and money to the project).

Compression Ratios
A second ratio is often referred to when working with compressed video: the compression ratio, which should not be confused with the asymmetry ratio. The compression ratio is a numerical comparison of the original video to the compressed video. For example, a 200:1 compression ratio means the original video is represented by the number 200 and the compressed video by the smaller number, in this case 1. The more the video is compressed, the higher the compression ratio, or the numerical difference between the two numbers. Generally, the higher the compression ratio, the poorer the video quality will be. With MPEG, compression ratios of 200:1 are common, with good image quality. Motion JPEG provides ratios ranging from 15:1 to 80:1, although 20:1 is about the maximum for maintaining a good quality image. Not only do compression ratios vary from one compression method to another, but hardware and software that perform well on a PC or Mac may be less efficient on a different machine.
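The compression ratio is just the size of the original divided by the size of the compressed result. A minimal sketch, using the one-second figure computed earlier and a hypothetical 200:1 MPEG output size:

```python
def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of original size to compressed size, e.g. 200.0 means 200:1."""
    return original_bytes / compressed_bytes

one_second = 640 * 480 * 3 * 30   # bytes for one second of raw 24-bit video
mpeg_like = one_second // 200     # hypothetical MPEG-compressed size
print(f"{compression_ratio(one_second, mpeg_like):.0f}:1")  # 200:1
```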

Lossless versus Lossy


The "loss" factor determines whether there is a loss of quality between the original image and the image after it has been compressed and played back (decompressed). The more compression, the more likely that quality will be affected. Virtually all compression methods lose some quality when you compress the data. Even if the quality difference is not noticeable, these are considered "lossy" compression methods. At this time, the only "lossless" algorithms are for still image compression. Lossless compression can usually only compress a photo-realistic image by a factor of 2:1.
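The lossless round trip can be demonstrated with a general-purpose lossless compressor such as zlib: decompression reconstructs the input bit-exactly, and the achievable ratio depends entirely on how redundant the data is. Random data, standing in here for photo-realistic detail, barely compresses at all:

```python
import os
import zlib

repetitive = bytes(range(256)) * 64    # highly redundant "image-like" data
noisy = os.urandom(16384)              # stands in for photo-realistic detail

for name, data in (("repetitive", repetitive), ("noisy", noisy)):
    packed = zlib.compress(data, 9)
    assert zlib.decompress(packed) == data   # lossless: bit-exact reconstruction
    print(f"{name}: {len(data) / len(packed):.1f}:1")
```

The repetitive input compresses far beyond 2:1, while the noisy input stays near 1:1, which is why lossy methods are needed for real pictures.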

Interframe Versus Intraframe


This is probably the most widely discussed and debated compression issue. The intraframe method compresses and stores each video frame as a discrete picture. Interframe compression, on the other hand, is based on the idea that although action is happening, the backgrounds in most video scenes remain stable--a great deal of the scene is redundant. Compression is started by creating a reference frame. Each subsequent frame of the video is compared to the previous frame and the next frame, and only the difference between the frames is stored. The amount of data saved is substantially reduced.
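The interframe idea, storing a reference frame and then only the per-pixel differences, can be sketched as follows (a toy one-dimensional model, not actual MPEG coding):

```python
def frame_delta(reference, current):
    """Interframe sketch: keep only the pixels that changed since the reference."""
    return {i: cur for i, (ref, cur) in enumerate(zip(reference, current)) if ref != cur}

def rebuild(reference, delta):
    """Reconstruct the current frame from the reference plus the stored differences."""
    return [delta.get(i, ref) for i, ref in enumerate(reference)]

ref = [10, 10, 10, 10, 10, 10]   # a tiny "frame" of pixel values
cur = [10, 10, 99, 10, 10, 42]   # only two pixels changed
d = frame_delta(ref, cur)
assert rebuild(ref, d) == cur
print(f"stored {len(d)} of {len(cur)} pixels")  # stored 2 of 6 pixels
```

When most of the scene is static, the delta is a small fraction of the full frame, which is exactly the redundancy interframe compression exploits.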

Bit Rate Control


The final factor to be aware of with video compression is bit-rate control, which is especially important if your system has limited bandwidth. A good compression system should allow the user to instruct the compression hardware and software as to which parameters are most important. In some applications, frame rate may be of paramount importance, while frame size is not. In other applications, you may not care if the frame rate drops below 15 frames per second, but those frames must be of impeccable quality.


The compression hardware and software should allow you to control these parameters to suit your specific application. When evaluating digital video compression systems, look for a system that gives you control. Not all compression systems allow you to change every parameter.

Selecting a Compression
Compression methods use mathematical algorithms to reduce (or compress) video data by eliminating, grouping and/or averaging similar data found in the video signal. Different algorithms are suited to different purposes. Although there is an alphabet soup of various compression methods, including Motion JPEG (Joint Photographic Experts Group), PLV, Compact Video, Indeo, RTV and AVC, only MPEG-1 and MPEG-2 are internationally recognized standards for the compression of moving pictures. With so many factors to consider and so many companies touting conflicting solutions, the decision process can be intimidating. The first rule of thumb is to stick to the standards. Standards don't guarantee that the solution is the best, but they are there for a reason. Years ago two large companies fought over the Beta versus VHS videotape formats. Beta was clearly better, but millions of dollars were lost when VHS was adopted as the de facto standard. The MPEG formats are not a de facto standard. They are the internationally accepted ISO standard. Some of the most brilliant minds in the video and computer industries have spent years looking at every possible compression solution for full-motion video and are responsible for this specification and standard. This is also not an attempt by one company to push a proprietary format on the computer and video industry. This is an open format available to all. The second thing to consider is the acceptance of the standard. As many people in the computer and video world are aware, the big joke of standards is that "you have so many to choose from." The unfortunate truth is that many very real standards have been clouded by conflicting pseudo-standards; in some cases the real standard simply wasn't followed, or multiple self-proclaimed "independent standards organizations" developed very different standards and pushed them on the industry.

MPEG Overview
The Moving Picture Experts Group (MPEG) is a joint committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). This group of experts meets about four times each year to generate standards for digital video and audio compression. The standards define a compressed bit stream, which in turn defines a specific decompression process for decoding the stream. Compression algorithms are left to each encoding manufacturer, and that is where a proprietary advantage can be obtained.

The Status of MPEG


As of January 1992, MPEG completed the Committee Draft of MPEG-1. This draft defines a bit stream of compressed video and audio to fit in a data rate of 1.5 megabits per second (Mbit/sec), although it is possible to generate MPEG-1 files as high as 4-5 Mbit/sec. The importance of the lower data rate is that it is the bandwidth of CD-ROMs, Video CDs and CD-i discs. There are three parts to the draft: video, audio and systems. The last part integrates the video and audio streams with the proper time stamping to enable synchronization. The final approval of MPEG-2 (also in three parts: MPEG-2 Systems, MPEG-2 Video and MPEG-2 Audio) as an International Standard (IS) was given by the 29th meeting of MPEG, held in Singapore in November 1994. This standard defines a bit stream of compressed video and audio for data rates of 2 to 10 Mbit/sec. The original application for MPEG-2 was all-digital transmission of broadcast-quality TV, but it now includes HDTV. HDTV applications were to be covered by MPEG-3, with sampling dimensions up to 1920 x 1080 x 30 Hz and coded bit rates between 20 and 40 Mbit/sec. It was discovered that, with some fine tuning, the MPEG-2 and MPEG-1 syntax worked very well for HDTV-rate video. Subsequently, MPEG-3 was dropped and HDTV applications are now encompassed by MPEG-2. MPEG has also started work on MPEG-4. This standard is targeted at very low bit rates for applications such as videophones, multimedia electronic mail, electronic newspapers, sign language captioning, etc. Work began officially at the MPEG meeting in Brussels in September 1993. This newest MPEG format is loosely defined at sampling dimensions up to 176 x 144 x 10 Hz and bit rates between 4,800 and 64,000 bits per second. Because of the extremely low bit rates, it is expected that a new coding technique will be developed to allow much higher compression of video and audio while achieving acceptable quality.
A draft specification is scheduled in 1997 with a November 1998 target date for the official sanction of this proposed standard.


Reference Frames and Redundancy


MPEG-1 and MPEG-2 use the interframe method of compression, as mentioned earlier. (See Factors Affecting Compression.) In most video scenes, the background remains relatively stable while the action takes place in the foreground. The background may move, but a great deal of the scene is redundant. MPEG starts its compression by creating a reference frame called an I, or Intra, frame. These I frames contain the entire frame of video and are placed every 10 to 15 frames. Since only a small portion of each intervening frame differs from the nearest reference frame, only those differences are captured, compressed and stored.

Inside an MPEG Stream


The three types of pictures in an MPEG stream are Intra (I), Predicted (P) and Bi-directional interpolated (B). Intra frames provide entry points into the file for random access, but can only be moderately compressed. Predicted frames are encoded with reference to a past frame (an Intra or previous Predicted frame) and, in general, are themselves used as references for future Predicted frames; they receive a fairly high amount of compression. Bi-directional pictures provide the highest amount of compression but require both a past and a future reference in order to be encoded. Bi-directional frames are never used as references.
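Because a B-picture needs a future reference, an encoder transmits frames in a different order than they are displayed: each anchor (I or P) frame is sent before the B-pictures that depend on it. A minimal sketch of that reordering (simplified; a real encoder also has to handle open/closed GOP boundaries):

```python
# Display order vs. coded (transmission) order for a simple GOP.
# B-pictures reference the nearest anchor (I/P) on each side, so the
# future anchor must be transmitted before the B-pictures between them.

def coded_order(display_order):
    """Reorder a display-order GOP string (e.g. 'IBBPBBP') so every
    B-picture follows both of its reference pictures."""
    coded = []
    pending_b = []
    for frame in display_order:
        if frame in ('I', 'P'):       # anchor: emit it, then the
            coded.append(frame)       # B-pictures that were waiting
            coded.extend(pending_b)   # for this future reference
            pending_b = []
        else:
            pending_b.append(frame)   # B-picture: defer until anchor
    coded.extend(pending_b)           # simplification for a GOP tail
    return ''.join(coded)

print(coded_order('IBBPBBP'))  # -> IPBBPBB
```

The decoder performs the inverse reordering before display, which is one reason B-pictures add latency.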

Step 1, Finding the Macro Block (Motion Compensation)


The reference picture is divided into a grid of 16 x 16 pixel squares called macroblocks. Each subsequent picture is divided into the same macroblocks. The encoder then searches for an exact, or near exact, match between each reference-picture macroblock and those in succeeding pictures. When a match is found, the encoder transmits only a motion vector plus the remaining difference. Blocks that experienced no change are ignored, so the amount of data that actually has to be compressed and stored is significantly reduced.
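The search described above can be sketched as an exhaustive block-matching loop that minimises the sum of absolute differences (SAD). This is only an illustration of the idea; real encoders use fast hierarchical searches and sub-pixel refinement, and the frame data and search range here are made up:

```python
import numpy as np

def best_match(ref, block, cy, cx, search=4):
    """Exhaustive block-matching motion search (a sketch, not the
    algorithm any particular encoder uses).  ref: reference frame;
    block: macroblock taken from the current frame at (cy, cx).
    Returns the motion vector (dy, dx) with the lowest SAD."""
    h, w = block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue                      # candidate falls off the frame
            sad = np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Toy frames: a bright square moves 2 pixels right between frames.
ref = np.zeros((64, 64), dtype=np.uint8)
ref[20:36, 20:36] = 200
cur = np.zeros_like(ref)
cur[20:36, 22:38] = 200
mv, sad = best_match(ref, cur[20:36, 22:38], 20, 22)
print(mv, sad)   # -> (0, -2) 0 : a perfect match 2 pixels to the left
```

Only the vector (0, -2) and the (here, zero) residual would need to be coded for this macroblock.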

Step 2, Tracking the Changes (Spatial Redundancy)


After finding the changes in location of macroblocks, the MPEG algorithm further reduces the data by describing the difference between corresponding macroblocks. This is accomplished through a mathematical process called the discrete cosine transform, or DCT. This process divides the macroblock into four 8 x 8 sub-blocks, seeking out changes in color and brightness. Human perception is more sensitive to brightness changes than to color changes. Armed with this knowledge, the MPEG committee specified the process so that the algorithm spends most of its effort reducing color information rather than brightness. Between the motion compensation and DCT processes, MPEG compression can yield compression ratios of more than 200:1.
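The DCT step can be illustrated on a single 8 x 8 block: the transform concentrates the block's energy into a few low-frequency coefficients, which is what makes the subsequent quantisation effective (with chroma then quantised more coarsely than luma). A sketch using the orthonormal DCT-II, not an optimised codec implementation:

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of an 8x8 sub-block (orthonormal form)."""
    n = block.shape[0]
    k = np.arange(n)
    # Basis matrix: C[u, x] = c(u) * cos(pi * (2x + 1) * u / (2n))
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

flat = np.full((8, 8), 128.0)            # a perfectly uniform block
coeffs = dct2(flat)
print(int(round(coeffs[0, 0])))          # -> 1024: all energy in the DC term
print(bool(np.abs(coeffs[1:, :]).max() < 1e-9))  # -> True: AC rows vanish
```

For a real image block most AC coefficients are merely small rather than zero, and quantisation then discards the ones the eye will not miss.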

Applications for MPEG-1


Video Kiosk
Video kiosks, or information kiosks, are a new opportunity for the use of video. Retail stores, auto dealerships and banks are all finding that automated information kiosks are effective tools for increasing sales. These kiosks come to life with the addition of the professional-quality video found in MPEG-1. Information that was laboriously displayed as slides and text can now be brought to life with video. Using MPEG-1 and a standard hard disk or CD-ROM, a developer can easily update kiosk information on a regular basis with hours of new video. Advanced


kiosk features become possible through the advent of friendly, personal help video tailored to explain each process to the user. New applications are limited only by the imagination of the kiosk developer.

Video on Demand
Video on Demand (VOD) encompasses nearly all video-based applications. However, the application most commonly meant by VOD is movies on demand. Initially in hotels and hospitals, and eventually in our homes, all of us will have an interactive television system from which we can order movies, on demand, at any given time. The technology for this application exists today; the missing ingredient for home use is a low-priced interactive decoder. Current estimates for upgrading cable-ready homes for this new capability run around 11 billion dollars, a price tag based on a $200 set-top TV decoder (a price point completely unrealistic today). Given that this application is also considering the MPEG-2 standard, VOD to the home appears years away from large-scale implementation. However, VOD in hotels is well underway in many areas of the country and the world.

Video Dial Tone
The telephone companies are preparing systems that will allow us to order movies through the existing telephone infrastructure. Given the limited bandwidth of today's telephone lines, MPEG-1 becomes the ideal choice. Numerous pilot programs are being installed throughout 1994. This application also has ramifications for the telecommunications industry and the corporate presentation market, since very high quality presentations can now be produced and distributed very affordably across standard telephone lines.

Training
The training market has historically used laser disc players to deliver high quality video. MPEG-1 is an ideal replacement for the analog laser disc player. The advantages of MPEG are lower costs, ease of delivery, ease of updating and networking capability.
The training market is a large user of video gear, and MPEG is considered a mainstream product for this application.

Corporate Presentations
The presentation market evolved from 35 mm slides, to overheads, to computer-generated slide shows. As presentation software packages evolve, they are beginning to support video. MPEG is a natural fit for the presentation market due to its small file size and extremely high quality, and it can easily be integrated with most presentation programs. Also, almost all conference rooms now include a VCR and a television as well as a computer. Another presentation medium that is growing fast in Europe and just beginning in the United States is Video-CD. Video-CD allows you to create a presentation of graphics with "hotspots" or buttons, and you can include MPEG video as well. Up to 28 minutes of MPEG video can be incorporated into a Video-CD presentation.

Video Library
Organizations storing massive quantities of videocassettes for occasional playback can benefit by encoding their existing and new materials. Storing the MPEG files on a digital video library server allows long-term storage and multiple playback without any quality degradation, fast random-access retrieval, and multi-point playback. Museums, large libraries, government agencies, and courts using video footage from trials are now converting to digital video. Advertising agencies also need storage of advertising video clips and quick-glance reference. When creating a new video clip, agencies normally sort through huge numbers of clips to draw ideas and research options; fast random-access retrieval increases efficiency dramatically.

Internet and Intranet
The advantage of MPEG-1 is its comparatively small file size and low transmission rate. MPEG-1 can be sent in real time over T1 telephone lines at VHS quality. MPEG-1 can also be sized down to QSIF (176x120) resolution and can therefore be transmitted at quarter-screen resolution over modems as slow as 28.8K.
This offers great possibilities for LAN- and WAN-based applications. MPEG-1 is becoming more and more popular for internet and especially intranet usage.


Applications for MPEG-2


DVD-ROM
DVD-ROM, with 4.7 GB per layer storage capacity and the ability to contain up to 4 layers (17 GB), will soon replace the CD-ROM. DVD-ROM upgrade kits including MPEG-2 DVD-compliant decoder boards are already entering the market at a low price point. Industry analysts expect 55 million DVD-ROM and DVD-Video devices by the year 2000.

DVD Video
Companies such as Sony, Panasonic, JVC, Toshiba and Philips have introduced a new living-room device called the DVD player. These devices are currently entering the market for under $500. DVD-Video players utilize MPEG-2 and MPEG-1. Many movies are currently being converted to the new DVD disc format. Most movies are encoded at 720x480 MPEG-2 resolution with AC-3 5.1-channel audio. DVD-Video can carry up to 8 audio streams and 32 subpicture streams for subtitles or interactive pushbuttons, and these devices have a large amount of interactivity built in. A huge market will soon open up for multimedia developers creating applications for DVD-Video and DVD-ROM.

CATV (Cable Television)
CATV currently uses MPEG-2 1/2 D1 (352x480) as the standard for compressing and decompressing video for distribution and for broadcasting. Cable operators want perfect-quality video and have the bandwidth needed to handle high data rates. Because of these requirements, the industry has settled on MPEG-2 video, although many are still using MPEG-1 in the interim.

DBS (Direct Broadcast Satellite)
The Hughes/USSB service uses MPEG-2 video and audio for direct broadcast. Thomson has exclusive rights to manufacture the decoding boxes for the first 18 months of operation; no doubt Thomson's STi-3500 MPEG-2 video decoder chip will be featured. Hughes/USSB DBS began service in North America in 1994. Two satellites at 101 degrees West share the power requirements of 120 Watts per 27 MHz transponder. Multi-source channel rate control methods are employed to optimally allocate bits between the several programs on one data carrier.
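The multi-source channel rate control just mentioned (often called statistical multiplexing) can be sketched as dividing a fixed transponder bit budget among programs in proportion to their momentary coding complexity. The allocation rule and all numbers below are hypothetical, purely to illustrate the idea:

```python
def allocate(total_kbps, complexities, floor_kbps=500):
    """Split one carrier's bit budget among programs in proportion
    to their scene complexity, with a minimum floor per program.
    (Illustrative statistical-multiplexing sketch; not the actual
    algorithm used by any DBS operator.)"""
    n = len(complexities)
    spare = total_kbps - n * floor_kbps      # budget left after floors
    total_c = sum(complexities)
    return [floor_kbps + spare * c / total_c for c in complexities]

# A 27,000 kbit/s payload shared by 6 programs; the first (say, a
# sports feed) is currently five times harder to code than the rest.
rates = allocate(27000, [5, 1, 1, 1, 1, 1])
print([round(r) for r in rates])   # -> [12500, 2900, 2900, 2900, 2900, 2900]
```

The point is that the bits freed by easy scenes in one program can be spent on hard scenes in another, which is why several programs share one carrier more efficiently than fixed-rate channels would.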
An average of 150 channels is planned.

HDTV (High-Definition Television)
The U.S. Grand Alliance, a consortium of companies that formerly competed for the U.S. terrestrial HDTV standard, has agreed to use the MPEG-2 Video and Systems syntax (including B-pictures). Both interlaced (1440 x 960 x 30 Hz) and progressive (1280 x 720 x 60 Hz) modes will be supported. The Alliance must still settle upon a modulation (QAM, VSB, OFDM), convolution (MS or Viterbi), and error correction (RSPC, RSFC) specification.

MPEG Playback System Configuration
MPEG delivers high quality digital video with CD-quality stereo sound. The advanced capabilities of MPEG are new to the PC environment and the PC application developer. It may help to think of MPEG as a high quality video source such as a laser disc or 3/4" tape deck. Any application using these high quality analog A/V devices will be enhanced, and made more portable, by converting from analog to MPEG digitally compressed audio and video. The following configurations are suggestions on how to configure a playback system.

MPEG Displayed on a Television Monitor
One simple viewing option is to use a TV monitor. In this configuration, the NTSC signal of the Optibase PCMotion board (MPEG-1) or VideoPlex (MPEG-2) is sent directly to the monitor while the audio portion is sent to stereo speakers. This is an ideal solution for video kiosks, corporate presentations or any application requiring a large screen display. MPEG-1 quality will be equivalent to VHS; MPEG-2 quality ranges from SVHS/Betacam SP to D1 quality, depending on bit rate.

MPEG Video in a Window (SVGA)

This configuration is common in personal multimedia use or in more compact video kiosk designs. Many low-cost MPEG-1 playback cards are now available that can produce MPEG video-in-a-window on a VGA monitor. IVidea recommends the Sigma RealMagic card because of its advanced MCI interface; drivers for this card are available for DOS, Windows, OS/2, Windows 95 and Windows NT. Sigma has also introduced an MPEG-2 decoder with NTSC output. An alternative to hardware playback is software decoding of MPEG. Microsoft is now shipping ActiveMovie, which decodes MPEG-1, AVI and QuickTime without the use of hardware. Software decoders typically drop frames when the CPU cannot keep up, and they demand a large amount of CPU power: the faster your computer, the better the quality of the MPEG playback. A software decoder cannot reach the same quality of playback as a hardware device can. If you intend to distribute MPEG-1 files to a large number of users, think of the lowest common denominator. IVidea suggests staying within the Video-CD data rate of no more than 1.4 megabits per second. This data rate is compatible with most 1x CD drives and can easily be decoded on a Pentium 90 at close to 30 frames per second.

Video-CD with MPEG Video
Video-CD is a special format of MPEG video on an audio-sized CD. This configuration is very simple and can be completely set up and ready to go in only a minute or two. Portable Video-CD players are ideal sales, training and presentation tools and a viable alternative to expensive notebook computers. They retail for approximately $495.00 and can hold up to 74 minutes of interactive MPEG-1 video and/or a large number of high-resolution (720x480) slides, making an ideal and inexpensive format for training applications and sales presentations. Video CD 2.0 applications are compatible with CD-i, many DVD players and computer-based software decoders.
There are several popular models available from Panasonic and Magnavox. You attach an RCA stereo cable and a video cable from the player to an NTSC or PAL television, or to a monitor with speakers. Then you insert the Video-CD you previously created using an MPEG-1 encoder capable of producing Video-CD-format MPEG and a Video-CD authoring software package. Video-CD creation tools and players are available from IVidea upon request.

Conclusion
The evidence is clear that MPEG-1 and MPEG-2 are the standard for full-motion digital video, except in non-linear digital video editing, where Motion JPEG currently prevails; as more encoders enter the market, MPEG-2 will slowly take over as the non-linear editing format as well. Expect to see many more announcements of products using MPEG video. From television set-top decoder boxes and game machines to kiosks and hotel video-on-demand systems, the MPEG format has won the compression standards race. DVD and Video-CD players, quickly dropping in price, will make these formats an inexpensive and interactive alternative to VHS tape for corporate presentations, Computer-Based Training (CBT) and other multimedia applications. With lower-quality software MPEG playback now a reality and hardware playback becoming affordable for the masses, you should expect to see hundreds of applications supporting MPEG. The MPEG standard is here to stay, just as the VHS standard has been for almost 20 years. Quality will vary, and prices will vary with your required quality level. MPEG formats will be incorporated into many consumer devices. MPEG-1 and MPEG-2 are not meant to compete; they are standards that coexist. By the end of 1997 we expect 70 million MPEG playback devices worldwide, due to the fact that Microsoft announced support for software playback of MPEG-1 with the next release of Windows 97.
A large installed base of MPEG players will stimulate the production of MPEG-1-based applications and increase the demand for MPEG encoding. With DVD and DVD-ROM shipping in the first quarter of 1997, demand for MPEG-2 will skyrocket as well.

Questions and Answers
What is the difference between MPEG-1 and MPEG-2? To simplify, MPEG-1 and MPEG-2 differ in the amount of information captured and processed. While MPEG-1 can handle CCIR-601 resolution (720 x 480), it does so with significantly less bandwidth and delivers lower quality video. In addition, MPEG formats support additional layers, such as a data layer and a transport layer. These layers allow other types of data to be carried along with the audio and video, and provide error correction to assist in the transmission of the MPEG-defined data stream.


MPEG-1 is generally associated with SIF-resolution video, 352 x 240 in size. This decreased size is scaled back up to full screen during playback, reducing the amount of data by about 75% compared with MPEG-2. SIF-resolution video is generally considered 30-frame-per-second video. MPEG-2 handles both the MPEG-1 sizes and full CCIR-601 resolution (720 x 480). The data produced by MPEG-2 is more than four times the size of MPEG-1's and is considered full 60-field-per-second video. MPEG-2 will be used most in applications where high bandwidth is not an issue and ultimate quality is paramount. In MPEG-1, the frames from one I-frame to the next are called a Group of Pictures (GOP). GOP headers are optional in an MPEG-2 stream, and direct access to the bitstream can be made at any repeated sequence header, even if there is no GOP header there. Other key features of MPEG-2 are the scalable extensions, which permit the division of a continuous video signal into two or more coded bit streams representing the video at different resolutions, picture qualities, or picture rates. This is important for applications such as HDTV, where it would allow a broadcaster to simulcast both the HDTV and the NTSC signals; the set-top box would display the appropriate signal depending on the receiver's television. The following table summarizes the differences in bandwidth, space and quality between MPEG-1 and MPEG-2.

Format           Digitizing Resolution   Hard disk Space   Bandwidth      Analog        Application
                                         (per minute)      (per second)   Equivalent
MPEG-1           352x240                 9-12 MB           1.15-3 Mb      VHS - SVHS    Video-CD, PC applications
MPEG-2 1/2 D1    352x480                 16-20 MB          2-8 Mb         Betacam SP    Video & cable transmission
MPEG-2 Full D1   704x480 / 720x480       18-30 MB          4-12 Mb        Broadcast     Video transmission
MPEG-2 VBR       same                    same              same           Broadcast     DVD
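The disk-space and bandwidth columns are linked by simple arithmetic: one minute of video at R Mbit/s occupies R x 60 / 8 megabytes (the table's ranges are rough, so the figures only approximately agree). A quick calculator:

```python
def mb_per_minute(mbit_per_s):
    """Disk space consumed by one minute of video at a given bit rate
    (8 bits per byte; 'MB' used loosely, as in the table above)."""
    return mbit_per_s * 60 / 8

# Sample points from the table's bandwidth column.
for label, rate in [('MPEG-1 @ 1.15 Mb/s', 1.15),
                    ('MPEG-2 1/2 D1 @ 2 Mb/s', 2.0),
                    ('MPEG-2 full D1 @ 4 Mb/s', 4.0)]:
    print(f'{label}: {mb_per_minute(rate):.1f} MB/min')
```

For example, 4 Mbit/s works out to 30 MB per minute, the top of the table's 18-30 MB range for full D1.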

Why Digital TV?
The digital revolution is upon us! Well, that's what the man at Currys told me when he tried to sell me a new all-in-one TV and satellite decoder, and very convincing he was too! But luckily I've seen most of the developments of digital TV from the inside. No, I haven't been to Parkhurst; I mean the inside of TV. Digital TV has been with us for many years, with the first domestic digits coming via D2-MAC, a system that never quite found favour with the Sun-reading masses, right the way up to the present day and the battle between Sky and On Digital. So what exactly is digital TV? And why bother switching? After all, my TV picture is fine, isn't it? Before you can understand digital TV you first have to get a grip on the existing system of analogue transmission. The TV pictures and sound are modulated onto a carrier wave (the TV signal); this wave is an analogue signal and can be disrupted by interference or bad weather. This disruption to the signal leads to the loss of information that makes up the picture you and I watch. However, digital TV works on the same basic principles as the domestic CD player or any other digital system: data


is sent as packets of 1s and 0s, which are also put on a carrier wave that is beamed to your home. The vital difference is how the information is encoded and the effect this has for ordinary punters like you and me. Imagine a man standing on a hill way off in the distance with 20 semaphore flags which he uses to send a message to you. This is fine on a clear day, but on a cloudy day it can be very difficult to tell if he is waving a red flag or a red flag with polka dots. This is the equivalent of the analogue system, with lots of information being sent by varying the carrier wave thousands of times per second. Now, if our chap on the hill had only one flag and held it up to represent 1 and down to represent 0, it would be very easy to see what he was sending, whatever the weather. This can be done hundreds of thousands of times per second, and so the same amount of information can be sent as in the analogue system. But the bit that matters to you and me is that the data sent using a digital signal is far more robust: it is generally either on or off, a bit like a Duracell battery, so if you can receive digital at all then it's going to be good quality. OK, so you now have a grasp of digits; what else makes it such a panacea for the viewing public? Well, the obvious difference is the number of channels available on digital platforms compared to the analogue system. This is achieved by the use of compression. A few years ago a well-known national broadcaster held a meeting to discuss the future of TV broadcasting in the UK. They already had 2 channels on air (guessed who yet?) and wanted to decide how best to use the space in the digital era. They had a choice of 4 options:
1. Transmit 2 high-definition, widescreen channels (without question the best quality available).
2. Transmit 4 widescreen channels with surround audio.
3. Transmit 8 channels of quality equal to their 2 existing channels.
4. Transmit 30 channels of compressed TV signals (SVHS quality).
OK, so which did they choose? Well, let's just say your auntie will be providing many more channels in years to come. So is this bad news? Not really: after all, you will have many more channels to choose from, but they will be of a poorer video quality than HDTV. So, given the fact that digital TV is here, what do you do as a consumer? The government has announced that analogue TV will be switched off by 2015, but the leading TV companies are pressing for this to be brought forward to 2010. Therefore you will have to migrate to digits sooner or later, and at the moment the choice is between Sky and On Digital. The decision between them is simple: with Sky you need a dish, and with On Digital you should be able to use your existing aerial. I say "should" because it is quite common for aerials to require an upgrade to get a good signal, and that means a hidden cost. I'm not going to tell you which way to jump, as that is a personal decision for you to take based on what channels you want to receive, but I do have some words of warning regarding set-top boxes. The box manufacturers have designed a system that is very robust and provides excellent features in a very compact


unit, but to achieve this they have used a very proprietary format that is very difficult to upgrade. The boxes all handle MPEG-2 data, which is the current broadcast format, but MPEG-4 is on its way and can be seen on the web already; then come MPEG-7 and also MPEG-21 (don't ask who numbered them!). Sadly the box you buy today is not re-programmable like a home PC: you can't just load up a new operating system or download a patch. I'm afraid it's back to Currys again in two years' time. The cynic in me says this was the plan all along, but I think they just hadn't appreciated how quickly the web would push video technology along. So what do you do? Well, you want a box, right? Don't fret: go and buy a nice Pace box and look on it as a two-year investment. Do NOT take the man in Currys' advice and buy an integrated decoder and TV, otherwise you will have to ditch your telly as well. Digital TV is a vast improvement over the analogue system, and like it or not the government will switch off the analogue signal in the next ten years, so don't wait too long and miss out on the wealth of TV that is now available. But the caveat "buyer beware" has never been more relevant. Think about the future and look at the pace of change within the PC market, because your humble telly is being sucked into the same technological whirlpool. Don't buy integrated systems, no matter how nice they look, as you will be upgrading them soon, and don't expect the greatest picture quality, as that decision was taken for you many years ago. But dip your toe in the water and spend time reading the reviews of TVs and decoders on DooYoo; they'll put you in good shape for the digital revolution.

MPEG-21
The general goal of MPEG-21 activities is to describe an open framework which allows the integration of all components of a delivery chain necessary to generate, use, manipulate, manage, and deliver multimedia content across a wide range of networks and devices. The MPEG-21 multimedia framework will identify and define the key elements needed to support the multimedia delivery chain, the relationships between them, and the operations they support. Within the parts of MPEG-21, MPEG will elaborate the elements by defining the syntax and semantics of their characteristics, such as interfaces to the elements. MPEG-21 will also address the necessary framework functionality, such as the protocols associated with the interfaces, and mechanisms to provide a repository, composition, conformance, etc. The seven key elements defined in MPEG-21 are:
- Digital Item Declaration (a uniform and flexible abstraction and interoperable schema for declaring Digital Items);
- Digital Item Identification and Description (a framework for identification and description of any entity regardless of its nature, type or granularity);
- Content Handling and Usage (interfaces and protocols that enable creation, manipulation, search, access, storage, delivery, and (re)use of content across the content distribution and consumption value chain);
- Intellectual Property Management and Protection (the means to enable content to be persistently and reliably managed and protected across a wide range of networks and devices);
- Terminals and Networks (the ability to provide interoperable and transparent access to content across networks and terminals);
- Content Representation (how the media resources are represented);
- Event Reporting (the metrics and interfaces that enable Users to understand precisely the performance of all reportable events within the framework).


MPEG-21 recommendations will be determined by interoperability requirements, and their level of detail may vary for each framework element. The actual instantiation and implementation of the framework elements below the abstraction level required to achieve interoperability, will not be specified.

MPEG-7
More and more audio-visual information is available in digital form, in various places around the world, and along with the information there are people who want to use it. Before anyone can use information, however, it has to be located, and the increasing availability of potentially interesting material makes that search harder. The question of finding content is not restricted to database retrieval applications; similar questions exist in other areas. For instance, there is an increasing number of (digital) broadcast channels available, and this makes it harder to select the broadcast channel (radio or TV) that is potentially interesting. In October 1996, MPEG (the Moving Picture Experts Group) started a new work item to provide a solution to the pressing problem of generally recognised descriptions for audio-visual content, which extend the limited capabilities of today's proprietary content-identification solutions. The new member of the MPEG family is called "Multimedia Content Description Interface", or MPEG-7 for short.

1. What is MPEG-7?
MPEG-7 will be a standardized description of various types of multimedia information. This description will be associated with the content itself, to allow fast and efficient searching for material that is of interest to the user. MPEG-7 is formally called the Multimedia Content Description Interface. The standard does not comprise the (automatic) extraction of descriptions/features, nor does it specify the search engine (or any other program) that can make use of the description.

2. From whom or where did the demand for MPEG-7 come?
The demand logically follows the increasing availability of digital audiovisual content. MPEG members recognized this demand and initiated a new work item. The work on the definition of MPEG-7 has already started to attract new people to MPEG.

3. Why is MPEG-7 needed?
Nowadays, more and more audiovisual information is available, from many sources around the world, and there are people who want to use this audiovisual information for various purposes. However, before the information can be used, it must be located, and the increasing availability of potentially interesting material makes that search more difficult. This challenging situation led to the need for a solution to the problem of quickly and efficiently searching for the various types of multimedia material that interest the user. MPEG-7 aims to provide that solution.

4. Who is currently participating in the development of the MPEG-7 standard?
The people taking part in defining MPEG-7 represent broadcasters, equipment manufacturers, digital content creators and managers, transmission providers, publishers and intellectual property rights managers, as well as university researchers.

5. Where are you in the process of specifying the MPEG-7 standard?
We are in the collaborative phase of the standardisation process. This means that we have passed the Call for Proposals and the evaluation of the submissions to that CfP. We are currently performing experiments (so-called Core Experiments) to continuously improve the technology on the table for standardization. This testing is carried out in a common environment, called the eXperimentation Model (XM). Experiments are carried out in well-defined test conditions and according to pre-defined criteria. The goal is to develop the best possible standard. The work plan is as follows:
Call for Proposals: October 1998
Working Draft: December 1999
Committee Draft: October 2000
Final Committee Draft: February 2001
Draft International Standard: July 2001


International Standard: September 2001

6. Will MPEG-7 include audio or video content recognition?
The standardization of audiovisual content recognition tools is beyond the scope of MPEG-7. Following its principle of specifying the minimum for maximum usability, MPEG-7 will concentrate on standardizing a representation that can be used for description. Development of audiovisual content recognition tools will be a task for the industries that will build and sell MPEG-7-enabled products. In developing the standard, however, MPEG might build some coding tools, just as it did with the predecessors of MPEG-7, namely MPEG-1, -2 and -4. For those standards too, coding tools were built for research purposes but did not become part of the standard itself.

7. Will MPEG-7 support audio or video content retrieval?
In the same way that MPEG will not standardize the tools used to generate the description, MPEG-7 will also not standardize the tools that use the description. It might, however, be necessary to address the interface between the description and the search engine.

8. What form will the "descriptions" of multimedia content in MPEG-7 take?
The words descriptions or features represent a rich concept that can relate to several levels of abstraction. Descriptions vary according to the types of data. Furthermore, different types of descriptions are necessary for different purposes of categorization.

9. Will the standard allow automatic extraction of descriptions as well as manual entry?
Descriptions that conform to the MPEG-7 standard could be entered by hand, but they could also be automatically extracted. Some features are best extracted automatically (color, texture), but for some other features (this scene contains three shoes; that music was recorded in 1995) this is very hard or even impossible.

10. A "Call for Proposals": how does that work?
A Call for Proposals (CfP) asks for technology for inclusion in the standard.
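To make the automatic-extraction idea in question 9 concrete, here is a toy sketch of description-based retrieval: a grey-level histogram stands in for an automatically extracted descriptor, and histogram intersection stands in for the (deliberately unstandardized) search engine. This is not MPEG-7 syntax; every name and number here is invented for illustration:

```python
import numpy as np

def grey_histogram(image, bins=8):
    """A toy automatically-extractable descriptor: a normalised
    grey-level histogram.  (Illustrates the *idea* behind content
    descriptions; real MPEG-7 descriptors are far richer.)"""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical descriptions."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
dark  = rng.integers(0, 96, (32, 32))      # a mostly dark clip
light = rng.integers(160, 256, (32, 32))   # a mostly bright clip
query = rng.integers(0, 96, (32, 32))      # another dark clip

# The 'database' stores only descriptions, not the content itself.
db = {'dark': grey_histogram(dark), 'light': grey_histogram(light)}
q = grey_histogram(query)
best = max(db, key=lambda k: similarity(q, db[k]))
print(best)   # -> dark
```

The split mirrors the standard's scope: only the description format would be standardized, while both the extractor and the matcher remain competitive territory.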
It is addressed at all interested parties, no matter whether they participate or have participated in MPEG. MPEG work is usually carried out in two stages, a competitive and a collaborative one. In the competitive stage, participants work on their technology by themselves. In answer to the CfP, people submit their technology to MPEG, after which MPEG makes a fair comparison between the submissions. In MPEG-2 and -4 this was done using subjective tests and additional expert evaluation. How such evaluations will be carried out for MPEG-7 is not yet known, but this will be described in the CfP when it is published in 1998. Based on the outcome of the evaluation, MPEG will decide which proposals to use for the collaborative stage. In this stage, members of the Experts Group work together on improving and expanding the standard under construction, building on the selected proposals. Before the final CfP in November 1998, preliminary versions may be published. This is comparable to what happened for MPEG-4. 11. What is the relationship between MPEG-7 and other MPEG activities? MPEG-7 can be used independently of the other MPEG standards - the description might even be attached to an analog movie. The representation that is defined within MPEG-4, i.e. the representation of audiovisual data in terms of objects, is however very well suited to what will be built on the MPEG-7 standard. This representation is basic to the process of categorization. In addition, MPEG-7 descriptions could be used to improve the functionalities of previous MPEG standards. 12. If I want to get involved in MPEG-7, what do I need to know about the other MPEG standards? In principle, knowledge about the other three MPEG standards is not required for taking part in the MPEG-7 work. However, since some of MPEG-7's tools may be close to those of MPEG-4, some knowledge about them could be useful. 13. If I want to know more about the other MPEG standards, where do I look? 
You can start by taking a look at MPEG's home page (http://www.cselt.it/mpeg/) which contains many useful references, including more lists with "Frequently Asked Questions" about MPEG activities. 14. So what happened to MPEG-5 and -6? (And how about 3?) MPEG-3 existed once upon a time, but its goal, enabling HDTV, could be accomplished using the tools of MPEG-2, and hence the work item was abandoned. So after 1,2 and 4, there was much speculation about the next number. Should it be 5 (the next) or 8 (creating an obvious binary pattern)? MPEG, 58 System Engineering Group

however, decided not to follow either logical expansion of the sequence and chose the number 7 instead. So MPEG-5 and MPEG-6 are, just like MPEG-3, not defined.

15. When will MPEG-7 replace the existing MPEG-1 and MPEG-2 standards?
MPEG-7 will not replace MPEG-1, MPEG-2 or, indeed, MPEG-4; it is intended to provide functionality complementary to these other MPEG standards: representing information about the content, not the content itself ("the bits about the bits"). This functionality is the standardization of multimedia content descriptions.
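Question 9 above notes that low-level features such as colour and texture can be extracted automatically, while high-level descriptions need human input. As a minimal illustration of automatic extraction, the sketch below computes a coarse colour histogram from raw pixel values; the quantization scheme and function name are illustrative choices only and are not taken from any MPEG-7 text.

```python
# Toy "automatic description extractor": a coarse colour histogram,
# the kind of low-level feature that can be computed without human input.
# The 2-bins-per-channel quantization is an arbitrary illustrative choice.

def colour_histogram(pixels, bins_per_channel=2):
    """Quantize each (r, g, b) pixel into a coarse bin and count occurrences."""
    step = 256 // bins_per_channel
    hist = {}
    for r, g, b in pixels:
        bin_key = (r // step, g // step, b // step)
        hist[bin_key] = hist.get(bin_key, 0) + 1
    # Normalize so the descriptor is independent of image size.
    total = len(pixels)
    return {k: count / total for k, count in hist.items()}

if __name__ == "__main__":
    # A synthetic 4-pixel "image": three reddish pixels and one blue one.
    image = [(200, 10, 10), (190, 20, 5), (210, 0, 0), (10, 10, 250)]
    print(colour_histogram(image))
```

A descriptor like this could be computed by an encoder or indexing tool with no manual entry, whereas a description such as "this scene contains three shoes" would still have to be authored by hand.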
