
Real-time Audio/Video Decoders for Digital

Multimedia Broadcasting
Victor H. S. Ha, Sung-Kyu Choi, Jong-Gu Jeon, Geon-Hyoung Lee, Won-Kap Jang, and Woo-Sung Shim
Samsung Electronics Co., Ltd.
Digital Media R&D Center, Suwon, Korea
Tel: +82-31-200-3028, Fax: +82-31-200-3147
jgjeon@samsung.com

Abstract— A new national standard for Digital Multimedia Broadcasting (DMB) has been drafted in Korea to provide high quality digital audio, video, and data broadcasting services to fixed, mobile, and portable receivers. We have developed the world's first DSP/FPGA implementation of the portable DMB receiver, complete with an RF receiver, a 6.4-inch LCD display, and audio/video/data decoders. In this paper, we present the design, implementation, and performance of this portable DMB receiver. First, we provide a brief overview of the DMB system and the audio/video coding tools supported by it, i.e., MPEG-4 BSAC and MPEG-4 Part 10 AVC/H.264. We discuss the low-power high-performance design of the DMB receiver, focusing particularly on the audio/video decoding parts. Finally, we illustrate the performance of the portable DMB receiver, which operates in real time at an overall frequency of 25 MHz.

I. INTRODUCTION

In Korea, a new national standard for terrestrial Digital Multimedia Broadcasting (DMB) has been developed [1]. The Korean DMB system adopts as its base the European Digital Audio Broadcasting (DAB) system known as Eureka-147 [2]. The DMB system adds to Eureka-147 various coding, networking, and error correcting tools to process multimedia contents at an overall bit error rate (BER) of 10^-9. The DMB service is expected to provide high-quality digital broadcasting services to fixed, mobile, and portable receivers with nationwide coverage.

In this paper, we introduce the portable receiver for terrestrial DMB with an emphasis on the audio and video decoder parts. The audio coding in DMB is performed using the MPEG-4 BSAC audio codec [3]. BSAC builds upon MPEG-4 AAC (Advanced Audio Coding) to provide fine grain bitstream scalability and error resilience. The video coding in DMB is based on the newest international standard, MPEG-4 Part 10 AVC/H.264 [4]. This new video coding standard improves coding efficiency over existing standards such as MPEG-2 and MPEG-4 Advanced Simple Profile (ASP). H.264 has a large set of applications ranging from low bit-rate conversational services to high definition (HD) digital video broadcasting. The implementation of BSAC and H.264 in the portable DMB receiver raises a number of design issues such as portability, low power consumption, and high performance. In this paper, we focus on these issues and present our solutions in the design and implementation of the DMB receiver.

The paper is organized as follows. In Section II, we give a brief overview of the Korean DMB system, the MPEG-4 BSAC audio coding standard, and the H.264 video coding standard. In Sections III and IV, we discuss the design and implementation of the audio and video decoders for DMB, respectively. In Section V, we present the performance of the DMB receiver based on our DSP/FPGA implementation. Section VI summarizes the paper.

II. AUDIO AND VIDEO CODING IN DMB

This section provides a brief overview of the Korean DMB system, the MPEG-4 BSAC audio coding standard, and the H.264 video coding standard. Readers are referred to [5] [6] [7] [8] and references therein for further information on BSAC, H.264, and DMB.

A. Digital Multimedia Broadcasting in Korea

DMB is the next generation digital broadcasting service for indoor and outdoor users. DMB users can enjoy CD-quality stereo audio services and real-time video/data streaming services anywhere in the nation while moving at speeds of up to 200 km/h. The Korean DMB system is based on Eureka-147, the European DAB system. To support multimedia contents, the DMB standard incorporates various networking and error correcting tools such as Reed-Solomon (RS) coding, MPEG-2 Transport Stream (TS) packets, MPEG-2 PES packets, and MPEG-4 SL packets. Multimedia contents are processed by the following set of codecs:

• Video: MPEG-4 AVC (ISO/IEC 14496-10) / H.264
• Audio: MPEG-4 (ISO/IEC 14496-3) BSAC
• Data: MPEG-4 (ISO/IEC 14496-1) Core2D @ Level 1

In this paper, we focus on the design and implementation of the audio and video decoders.

B. BSAC Audio Coding Standard

BSAC is a part of the newest audio coding standard from ISO/IEC, known as ISO/IEC 14496-3 or MPEG-4 Audio. BSAC is mainly based on MPEG-4 AAC and uses most of its tools. BSAC improves upon MPEG-4 AAC by offering fine grain bitstream scalability and error resilience. Its compression rate is comparable to the AAC Main profile.

Proceedings of the 4th IEEE International Workshop on System-on-Chip for Real-Time Applications (IWSOC’04)
0-7695-2182-7/04 $ 20.00 IEEE
Authorized licensed use limited to: Pusan National University Library. Downloaded on December 7, 2009 at 23:36 from IEEE Xplore. Restrictions apply.
Fig. 1. Bit-Slicing in BSAC

1) BSAC Features: MPEG-4 BSAC uses the same set of tools as MPEG-4 AAC, including the 1024/128 point MDCT (Modified Discrete Cosine Transform), noiseless coding, long term prediction, etc. The main features of BSAC are (a) fine grain bitstream scalability, (b) efficient audio compression, and (c) error resilience.

The scalable coder achieves audio coding at different bit-rates and qualities by processing the bitstream in an ordered set of layers. The base layer is the smallest sub-set of the bitstream that can be decoded independently to generate the audio output. The remaining bitstream is organized into a number of enhancement layers such that each enhancement layer improves the audio quality. BSAC supports a wide range of bit-rates from the low bit-rate stream of 16 kbps per channel (kbps/ch) at the base layer to the higher bit-rate stream of 64 kbps/ch at the top layer. BSAC offers fine grain bitstream scalability at 1 kbps/ch. This fine grain scalability of BSAC comes from the bit-sliced coding technique. In bit-sliced coding, the quantized spectral values are first grouped by frequency bands. Then, the bits in each group are processed in slices, i.e., "bit-sliced", in order from the MSB (most significant bit) to the LSB (least significant bit). See Figure 1. The most significant bits across the groups form the first bit-slice. This bit-slice is fed into the noiseless coding part and then transmitted in the base layer. The next significant bits form the second bit-slice, which is processed and transmitted in the first enhancement layer. This process continues until the least significant bits are processed and transmitted in the top enhancement layer. Each enhancement layer adds 1 kbps/ch of bitstream and thus provides fine grain scalability in BSAC.

The bit-slices that are fed into the noiseless coding part go through an entropy coding process. In BSAC, the entropy coding is carried out by an arithmetic coder. In AAC, a Huffman coder is used instead. The arithmetic coder in BSAC enhances the coding efficiency of the bit-sliced audio streams. The error resilience feature of BSAC is implemented by SBA (Segmented Binary Arithmetic coding). In SBA, multiple layers of audio streams are grouped again into segments. Any error propagation is constrained to a single segment in BSAC by re-initializing the arithmetic coder after every Nth enhancement layer.

2) BSAC in DMB: The audio service in DMB should support the standardized stereo audio broadcasting at the sampling rates of 24, 44.1, or 48 KHz. The service should provide CD-quality audio for the audio-only broadcasting and better than analog FM radio quality for the audio accompanying the video. The maximum bit-rate for the audio data in stereo is set to 128 Kbps.

DMB employs the MPEG-4 BSAC standard's LC profile, which is suitable for mobile applications. BSAC allows adaptive bit-rate control, a smaller initial buffer, and seamless play of digital audio. In BSAC for DMB, the prediction and long-term prediction tools are not used. The variable epConfig is set to zero in AudioSpecificConfig(). The variables frameLengthFlag and DependOnCoreCoder are set to zero in GASpecificConfig(). The error resilience tool is not supported, to lower the implementation complexity; i.e., sba_mode is set to zero in bsac_header(), and ltp_data_present is set to zero in general_header().

C. H.264 Video Coding Standard

MPEG-4 Part 10 AVC/H.264 is the state-of-the-art international video coding standard developed by the Joint Video Team (JVT), consisting of experts from ITU-T's Video Coding Experts Group (VCEG) and ISO/IEC's Moving Picture Experts Group (MPEG). The new standard has recently been approved by ITU-T as Recommendation H.264 and by ISO/IEC as International Standard 14496-10 (MPEG-4 Part 10) Advanced Video Coding (AVC). H.264 is composed of two main parts: (i) the Video Coding Layer (VCL), which efficiently represents the video contents, and (ii) the Network Abstraction Layer (NAL), which provides network friendliness. H.264 significantly improves coding efficiency over prior video coding standards. For example, it is established that H.264 improves the coding efficiency over MPEG-2 by a factor of 2 at the same video quality. The cost is an increased complexity.

1) H.264 Features: In H.264, the overall gain in coding efficiency results from a plurality of small improvements in the Video Coding Layer (VCL). The important improvements in the VCL are (a) enhanced motion prediction, (b) a 4×4 integer transform, (c) an adaptive deblocking filter, and (d) enhanced entropy coding. The H.264 video codec enhances motion prediction by employing a new set of techniques such as variable block-size motion compensation, quarter-sample-accurate motion vectors, multiple reference pictures, and spatial prediction for intra coded pictures. Due to these highly sophisticated prediction techniques, the transform coding in H.264 is simplified to a 4×4 integer transform. This new transform is closely related to the Discrete Cosine Transform (DCT) but is implemented with 16-bit integer arithmetic operations. The conditional application of a deblocking filter removes artifacts across block boundaries and improves the visual quality of decoded pictures. Finally, the entropy coding is improved by introducing new context adaptive entropy coding tools such as Context Adaptive Variable Length Coding (CAVLC) and Context Adaptive Binary Arithmetic Coding (CABAC).

The goal of the Network Abstraction Layer (NAL) is to provide "network friendliness" by formatting video contents and appending appropriate header information for the transport and storage of VCL processed video data. There are a number of highlighted features of NAL. The parameter set structure separates the handling of important but infrequently changing information, such as sequence parameter sets and picture parameter sets, for robust and efficient conveyance of header information. NAL units provide a generic format for use in both packet-oriented and bitstream-oriented transport systems. Flexible slice sizes allow customized packaging of compressed video data appropriate for each specific network. A slice in H.264 is the smallest unit that can be encoded or decoded independently. A picture is composed of one or more slices. Other NAL-related techniques that provide robustness to data errors and losses include Flexible Macroblock Ordering (FMO), Arbitrary Slice Ordering (ASO), Redundant Pictures (RP), Slice Data Partitioning, and SP/SI pictures.

Both the VCL and NAL of the H.264 video coding standard offer a wide selection of tools for the compression and transmission of video contents. Clearly, not all of these tools are needed in every implementation of H.264-based video coding systems. To deal with this issue, the notions of profiles and levels are introduced. Profiles specify a set of application-dependent algorithmic features to be supported by decoders. Levels set limits on performance-related parameter values (maximum picture size, frame rate, etc.) corresponding to the processing and memory capabilities of video coders and decoders.

2) H.264 in DMB: The DMB standard specifies the video service to be provided with a maximum display dimension of 352 × 288 pixels at a frame rate of 30 frames per second (fps). It should deliver VCD-quality video on a 7-inch LCD display, allow random access at 0.5-second intervals, and resume playing the video sequence without skipping after a 5-second long pause.

The DMB video decoder supports the H.264 Baseline profile at Level 1.3. However, the video decoder is not required to support the FMO, ASO, and RP capabilities. The DMB video decoder also respects the following set of constraints. First, the decoder supports a variety of display formats such as QCIF, QVGA, WDF, and CIF. In the picture parameter set, the number of slice groups is set to 1 (num_slice_groups_minus1 = 0) and the redundant picture count is set to zero (redundant_pic_cnt_present_flag = 0). In the sequence parameter set, the picture order count type is set to 2 (pic_order_cnt_type = 2) and the number of reference pictures is limited to 3 (num_ref_frames = 3). The range of the vertical component of motion vectors (maxVmvR) is set to [−64, 63.75]. The maximum size of the decoded picture buffer (maxDPB) is constrained to 445.5 Kbytes. To allow the random access interval of 0.5 seconds, an IDR (Instantaneous Decoding Refresh) picture is inserted into the sequence every 0.5 seconds.

Fig. 2. Block Diagram of BSAC decoder

III. AUDIO DECODER

In this section, we discuss the design and implementation of the real-time audio decoder based on MPEG-4 BSAC. We implemented the audio decoder using a Teaklite DSP and an FPGA. Teaklite is a 16-bit processor with a low power consumption level and is suitable for the development of mobile devices. To improve the decoding time, we implemented the IMDCT as a hardware module in an FPGA. The components of the BSAC decoder are shown in Figure 2. The Teaklite DSP core implements most of the audio decoder functions and is equipped with X-memory, Y-memory, and Program-memory connected to the core by the X-, Y-, and P-buses, respectively. The IMDCT module is implemented in an FPGA with Z-memory connected to the Teaklite core by the Z-bus. The system bus connects the audio decoder to the ARM processor, the SDRAMs, and the audio controller.

The audio decoding process starts with the ARM processor. The audio bitstream is stored into SDRAM by the ARM processor. When the Teaklite core sends a request for a new frame of the audio stream to the ARM using an interrupt, the requested audio stream is delivered to the core via Z-memory. The Teaklite core then starts processing the input audio stream in the noiseless decoding module and the other subsequent processing blocks. The processed stream is finally sent to the IMDCT module in the FPGA. The decoded audio data, the output of the IMDCT module, is stored in the SDRAM first and then transmitted to the D/A converter in a serial format by the audio controller. The audio controller includes a 64-tap sampling rate converter. Note that the IMDCT module and Z-memory exchange data with the SDRAM using DMA and do not take up processing time of the ARM processor.

Small Memory Size: To reduce the required memory size, the Teaklite code was manually programmed and optimized in assembly. The X-, Y-, and Program-memories are implemented as SRAMs inside the chip after the ASIC process. To minimize the size of these SRAMs, the tables are downloaded from the SDRAM each time the data is needed.

32-bit Processing: To improve the quality of the decoded audio signal, we maintained all spectral data at 32 bits/sample. The table entries were also kept at 32 bits, except the IMDCT table, which used 24-bit entries.

Pipelining: BSAC decoder operations are pipelined using three pipeline stages, as depicted in Fig. 3. The first pipeline stage F stands for Frequency domain processing (Noiseless

decoding, M/S, PNS, I/S, TNS), and the stage W stands for Time domain processing (Windowing and Overlapping with the previous frame). These two stages, F and W, are processed in Teaklite, while the IMDCT is processed independently in a hardware module. Teaklite processes the modules in the time order denoted by the small numbers, for example, 1F, 2W, 3F, 4W, and so on. Note that there is no overlap between the pipeline stages F and W at any time.

Fig. 3. Pipeline Stages in BSAC

IV. VIDEO DECODER

This section presents the real-time video decoder based on H.264. The section starts with a discussion of the design issues. Then, the hardware architecture is described.

A. Design Issues

There are a number of design issues that are specific to the DMB video decoder. These issues, listed below, are reflected in our FPGA implementation in the next section.

• Bandwidth: The DMB standard specifies the maximum bit-rate for transmitting an MPEG-2 TS packet at different protection levels. The portion of video streams in the overall bit-rate ranges roughly from 500 Kbps to 1.5 Mbps. Considering the portability and mobility of the DMB receivers, the broadcasting service providers favor protection level 2-A with a bit-rate of 572 Kbps allocated to the video service.
• Low-power: The DMB video decoder is a portable device and requires a low-power design. The amount of dynamic power consumption depends on the number of switching operations at the gate level. In our design of the DMB video decoder, we achieve a low-power design at the architectural level by reducing the number of memory accesses and employing a low system clock frequency.
• High-performance: Real-time operation at 30 frames per second requires that the video decoder process and display each picture within a maximum processing time of 33 msec. We minimize the number of bus accesses and use pipeline processing techniques to achieve real-time operation at the lowest system and bus clock frequency.

B. Architecture

The goal in the design of the DMB video decoder is to come up with a low-power high-performance architecture. This goal is achieved in our implementation by operating the decoder at a low clock frequency. The architecture presented in this section reflects this. In Figure 4, we show an overall block diagram of the DMB video decoder. The functional blocks are divided largely into 3 parts (the parser, decoder, and display parts). The first group of blocks, the parser part, consists of the channel, RISC processor, and parser units that connect the RF channel to SDRAM A on the A bus. The channel unit receives the RF transmission of the DMB video stream and stores it into SDRAM A connected to the A bus. The RISC processor unit then decodes the sequence parameter set, picture parameter set, and slice layer header information of the H.264 video stream while managing the system level control signals. The parser unit processes the remaining syntax elements at the slice data layer and below. The decoded syntax elements are written back to SDRAM A. The second group of blocks, the decoder part, decodes the video stream stored in SDRAM A and writes the decoded pictures to SDRAM B. First, the entropy unit reads the CAVLC syntax elements from SDRAM A, generates an array of transform coefficients, and sends them to the transform/quantization (T/Q) unit. The T/Q unit performs the inverse integer transform and dequantization to obtain the residue data. The prediction unit reconstructs the image block by computing intra/inter prediction values and adding them to the residue data from the T/Q unit. Finally, the deblock filter unit filters the reconstructed data. The reconstructed image blocks are stored into SDRAM B on the B bus. The third group of blocks, the display part, performs the functions related to displaying the decoded and reconstructed video images on the LCD screen. Each unit connected to the SDRAM is equipped with a DMA (Direct Memory Access) unit. The bus arbiter units control the bus access requests and the SDRAM controller units manage the SDRAMs.

Fig. 4. Block Diagram of H.264 Video Coder

The main features of the architecture are (i) a One Clock System, (ii) a Dual Bus System, and (iii) Two-Level Pipeline Processing. We discuss each of these features next.

One Clock System: The DMB video decoder is designed as a simple one-clock system. That is, the codec unit and the buses operate at the same clock frequency. This results in a dual-bus two-level pipeline architecture, as discussed later.

Dual Bus System: The DMB video decoder in Figure 4 is a dual-bus system. The two buses, the A bus and the B bus, are both 32 bits wide and operate at the same bus clock frequency. The parser and decoder parts are connected to the A bus while the decoder and display parts are connected to the B bus. During the decoding process, most of the bus access requests arise for the B bus. The traffic on the A bus is thus relatively lighter. Connecting the RISC processor to the A bus therefore allows stable operation of the processor running other system applications. Also, both buses can operate at a lower clock frequency because the bus traffic is split across two separate buses. In our implementation, the parser part is separated from the rest of the decoder parts. The parser part communicates with the decoder part through SDRAM A connected to the A bus. This helps the decoder part to perform independently of the parser part, which is heavily affected by the bit-rate of the incoming video sequence. Since the A bus is not used as heavily as the B bus, the accesses to the A bus by the parser part are accommodated easily.

Two-Level Pipeline Processing: We employ a two-level pipeline architecture in the DMB video decoder. First, the parser part and the decoder part form a slice-level pipeline structure. Then, the decoder part operates under a macroblock-level pipeline structure. The pipeline architecture reduces the overall decoding time by removing the idle intervals in the decoder part. As a result, the video decoder can operate at a lower system clock frequency. In the slice-level pipeline process, each of the parser and decoder parts becomes a pipeline stage. In the parser stage, the parser part decodes the incoming slice data and writes the resulting syntax elements to SDRAM A. In the decoder stage, the decoder part starts processing the syntax elements from SDRAM A. The two pipeline stages are executed concurrently, and SDRAM A maintains two slices of video data, one for each pipeline stage, during the decoding process. Each of the pipeline stages has a variable execution time. Thus, the pipeline processing stalls whenever the previous stage is not completed on time. The second-level pipelining is carried out on a macroblock level in the decoder part. The tool units of the decoder part do not all have the same processing time. If the macroblock-level pipeline is not used, some tool units must stay idle, waiting for the previous tool unit to send the processed data. We reduce these idle intervals by allowing each tool unit to start processing the next macroblock while transmitting the already-processed macroblock to the subsequent tool unit. Each tool unit of the decoder part has internal buffering for two macroblocks. While one macroblock is being processed and written to an internal buffer, the macroblock data in the other buffer (which has already been processed) is transmitted to the subsequent tool unit.

V. PERFORMANCE

In this section, we illustrate the performance of the portable DMB receiver. The audio and video decoders have been incorporated into the portable DMB receiver, which has been successfully demonstrated.

Fig. 5. Proportion of Decoding Time

A. Audio

The DMB audio decoder has been built and tested using the Teaklite DSP and an FPGA board. To verify the error-free operation of the decoder, we tested it with the MPEG-4 ER BSAC conformance bitstreams and compared the results with the 24-bit references [9]. The decoder output was confirmed in every test with sba_mode = 0, where more than 17 bits matched precisely with the references.

The SDRAMs used in the decoder include 6KW (Kwords) of program memory, 4KW of X-memory, 10KW of Y-memory, 1KW of Z-memory, and 2KW for the IMDCT module. Each word was 16 bits long. In addition, 4 KBytes of ROM was assigned to store the tables used by the IMDCT module. The total size of the memory used by the BSAC decoder was thus 50 KBytes.

To decode audio streams at 44.1 KHz, 2 channels, and 96 Kbps, a total of 32 MIPS was required by the Teaklite DSP. Of this, 17 MIPS was devoted to the BSAC noiseless coding part (arithmetic coding). This is about 10 MIPS higher than its MPEG-4 AAC counterpart, Huffman coding, which takes only 6 MIPS on the Teaklite DSP.

B. Video

The DMB video decoder has been built and tested on an FPGA with the ARM920T processor and a display interface to a 6.4-inch LCD screen. The video decoding process is divided into 3 steps: (1) software parsing, (2) hardware parsing, and (3) hardware decoding. In the software parsing step, the sequence parameter set, picture parameter set, and slice header information are parsed and entropy decoded by the ARM processor. In the hardware parsing step, the slice data layers, including the macroblock and sub-macroblock layers, are parsed and entropy decoded by hardware. The resulting syntax element values are written to SDRAM A. Finally, in the hardware decoding step, the syntax elements are processed by the CAVLC decoder, Transform & Quantization, Prediction, and Deblocking filter blocks. The resulting video pictures are stored in SDRAM B for display.

Figure 5 shows the proportions of decoding time needed to process one slice in the three steps. The hardware decoding step takes longer than the S/W and H/W parsing steps summed together. Therefore, the total decoding time is computed only from the decoding time of the hardware decoding step.

The real-time operation of the video decoder was verified using different sequences at bit-rates ranging from 572 Kbps to 1.5 Mbps. In this paper, we illustrate the performance

of the video decoder using two sets of test sequences at different bit-rates. The first set of sequences was generated at the bit-rate of 572 Kbps, corresponding to protection level 2-A. The second set of sequences was generated at the bit-rate of 1.5 Mbps, corresponding roughly to the highest bit-rate that the DMB standard allocates for the video stream. The first four of the five test sequences, Stefan, Foreman, Coastguard, and Hall, are MPEG standard test sequences, while the last sequence consists of scenes extracted from the movie "Fly Away Home" (Columbia Pictures, 1996). Each sequence contained 300 frames with an IDR-frame inserted every 0.5 seconds. The results show that the low bit-rate (572 Kbps) sequences are decoded in real time at an operating clock frequency of 14.5 MHz. The higher bit-rate (1.5 Mbps) sequences are decoded at a clock frequency of 15.5 MHz. We conclude that our FPGA implementation of the DMB video decoder operates at a low clock frequency of 14.5 ∼ 15.5 MHz.

C. System

The bus clock and the processing clock are derived from a single clock at the same frequency in the DMB receiver. The operating frequency of the overall system was set at 25 MHz. This frequency is the typical operating frequency required by the 6.4-inch 640 × 480 LCD display attached to the receiver. Since the other parts of the receiver operate well under this frequency, the overall system frequency was set at this value.

The gate count of our DMB video decoder was estimated using Samsung Semiconductor's Standard Cell Library (STDL130) implemented in the 0.18 µm L18L process technology. The total count was estimated to be 530,000 gates. The proportions for each tool are shown in Table I.

TABLE I
GATE COUNT PROPORTIONS PER TOOL (TOTAL = 530K)

Tool Name        Gate Count (%)
Parser           17
Entropy          9
Transform        6
Prediction       33
Deblock Filter   26
IMDCT            1
TS DEMUX         9

The DMB receiver presented here has been verified by a series of conformance tests. The tests were conducted using test broadcasts from the participating national broadcasting corporations in Korea. The portable DMB receiver performed successfully in real time both in a laboratory environment and during field tests carried out on a moving vehicle.

VI. CONCLUSION

We presented the real-time portable receiver for Digital Multimedia Broadcasting in Korea. The receiver was designed using DSP and FPGA boards. The audio decoder was based on MPEG-4 BSAC while the video decoder was based on the Baseline profile of MPEG-4 Part 10 AVC/H.264 at Level 1.3. Our DSP/FPGA implementation of the portable DMB receiver consisted of about 530K gates and operated at an overall clock frequency of 25 MHz. We believe that the DMB service will promote and enhance the digital lifestyle of its users by providing various kinds of information and entertainment services at any time and any place.

VII. ACKNOWLEDGEMENT

The authors would like to thank the members of the Mobile Solution Lab in the Digital Media R&D Center, Samsung Electronics, and the members of ETRI (Electronics and Telecommunications Research Institute) in Korea for participating in the development of the DMB standard and the portable DMB receiver.

REFERENCES

[1] "Digital Multimedia Broadcasting," Telecommunications Technology Association, 2003SG05.02-046, 2003.
[2] "Radio Broadcasting Systems: Digital Audio Broadcasting (DAB) to mobile, portable and fixed receivers," ETSI EN 300 401 v1.3.3, May 2001.
[3] "Coding of audio-visual objects — Part 3: Audio," ISO/IEC 14496-3:1999.
[4] "Draft ITU-T recommendation and final draft international standard of joint video specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC)," Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, JVT-G050, 2003.
[5] S. W. Kim, S. H. Park, and Y. B. Kim, "Fine grain scalability in MPEG-4 audio," in The 111th Audio Engineering Society Convention, September 2001.
[6] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, "Overview of the H.264/AVC video coding standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560–576, 2003.
[7] S.-G. Chang, V. H. S. Ha, Z.-M. Zhang, and Y.-J. Kim, "Performance evaluation of Eureka-147 with RS(204,188) code for mobile multimedia broadcasting," in SPIE Visual Communications and Image Processing, July 2003, pp. 934–940.
[8] S.-G. Chang, G.-H. Ryu, V. H. S. Ha, and Y.-J. Kim, "Standardization and implementation of DMB (Digital Multimedia Broadcasting) system in Korea," in International Technical Conference on Circuits/Systems, Computers and Communications, July 2003, vol. 2, pp. 933–936.
[9] "Conformance testing," ISO/IEC JTC1/SC29/WG11 N2204, February 1998.

